Leading and following roles can emerge naturally in human teams. However, such roles are usually predefined in human-robot teams because it is difficult to learn and adapt to each agent's role at scale. Our goal is to enable a robot to learn how ...
ACAS sXu is a collision-avoidance protocol for small drones. In this project we aim to perform a formal robustness analysis of a specific implementation of sXu used by GE. This implementation uses a deep neural network (DNN). The goal is to show ...
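As one illustration of what a formal robustness analysis of a DNN can involve, the sketch below uses interval bound propagation on a toy ReLU network to check whether the advisory (argmax output) can change anywhere within a small input box. The architecture, weights, and perturbation radius are placeholders for illustration only, not the GE sXu implementation or the techniques used in this project.

```python
# Minimal sketch: interval bound propagation (IBP) for local robustness of a
# small ReLU network. All weights and sizes below are illustrative placeholders.
import numpy as np

def ibp_bounds(layers, lower, upper):
    """Propagate an input box [lower, upper] through affine + ReLU layers.

    `layers` is a list of (W, b) pairs; no activation after the last layer.
    """
    for i, (W, b) in enumerate(layers):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        center = W @ center + b
        radius = np.abs(W) @ radius          # worst-case growth of the box
        lower, upper = center - radius, center + radius
        if i < len(layers) - 1:              # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Toy query: is the advisory provably unchanged for all inputs within an
# L-infinity ball of radius 0.01 around x?
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 3)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
x = np.array([0.1, 0.2, 0.3])
lo, hi = ibp_bounds(layers, x - 0.01, x + 0.01)
robust = any(lo[i] > np.max(np.delete(hi, i)) for i in range(len(lo)))
print("provably robust on this box:", robust)
```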
The control of unmanned aircraft systems must be rigorously tested and verified to ensure their correct functioning and airworthiness. Incorporating components that use novel techniques such as deep learning can pose a significant challenge ...
Adaptive stress testing (AST) is an approach for validating safety-critical autonomous control systems such as self-driving cars and unmanned aircraft. The approach involves searching for the most likely failure scenarios according to some measure ...
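To make the search concrete, here is a minimal, illustrative sketch of the idea on a toy one-dimensional system: disturbance sequences are sampled, a rollout counts as a failure when the state leaves a safe region, and among failing rollouts we keep the one with the highest likelihood under the disturbance model. The simulator, disturbance model, and plain random search are stand-ins; AST implementations typically search with reinforcement learning or Monte Carlo tree search against a black-box simulator.

```python
# Minimal sketch of adaptive stress testing (AST) on a toy 1-D system.
# Everything here is illustrative, not the Center's or any deployed code.
import numpy as np

def simulate(disturbances, x0=0.0, threshold=5.0):
    """Roll out x_{t+1} = x_t + d_t; failure if |x| ever reaches the threshold."""
    x = x0
    for d in disturbances:
        x += d
        if abs(x) >= threshold:
            return True
    return False

def log_likelihood(disturbances, sigma=1.0):
    """Log-probability (up to a constant) of the disturbances under a zero-mean Gaussian."""
    d = np.asarray(disturbances)
    return float(-0.5 * np.sum((d / sigma) ** 2))

def ast_random_search(horizon=20, n_samples=5000, sigma=1.0, seed=0):
    """Search for the most likely disturbance sequence that drives the system to failure."""
    rng = np.random.default_rng(seed)
    best_seq, best_ll = None, -np.inf
    for _ in range(n_samples):
        seq = rng.normal(0.0, sigma, size=horizon)
        if simulate(seq):
            ll = log_likelihood(seq, sigma)
            if ll > best_ll:
                best_seq, best_ll = seq, ll
    return best_seq, best_ll

if __name__ == "__main__":
    seq, ll = ast_random_search()
    print("most likely failure found has log-likelihood", ll)
```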
Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.
Designing systems that intelligently balance learning under uncertainty and acting safely.
Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.
Anthony Corso / Spring 2022
Mykel Kochenderfer / Winter 2023
Dorsa Sadigh / Fall 2019
Steve Luby
Professor of Medicine (Infectious Diseases & Geographic Medicine), Co-Director, Stanford Existential Risks Initiative
Mac Schwager
Assistant Professor of Aeronautics and Astronautics, Director of the Multi-robot Systems Lab
Our corporate members are a vital and integral part of the Center for AI Safety. They provide insight on real-world use cases, valuable financial support for research, and a path to large-scale impact.
Benefits | Core | Associate |
---|---|---|
Sponsorship level per year | $300k | $100k |
Commitment | 3 years | Yearly |
Opportunity to contribute to the definition of a flagship research project involving multiple faculty and students | Yes | No |
Visiting scholar positions | Yes | Additional fee |
Faculty visits* | Yes | No |
Participation on the board of advisors | Yes | No |
Semiannual research retreat | Yes | Yes |
Slack channel invitation | Yes | Yes |
Research seminars | Yes | Yes |
Student resume book | Yes | Yes |
Stanford Center for AI Safety researchers will use and develop open-source software, and the Center intends that any software it releases will be made available under an open-source license such as BSD. Companies may provide additional funding above the membership fee to support an area of ongoing research with participating faculty. All research results arising from the use of this additional funding will be shared with all program members and the general public.
* Site presentations and all information, data, and results arising from such visits will be shared with all members and the public.
For more information, please view our Corporate Membership Document.
To view Stanford policies, please see the Stanford University Policies for Industrial Affiliate Programs.
Anthony Corso
Executive Director - Center for AI Safety
email: acorso@stanford.edu