Leading and following can emerge naturally in human teams. However, such roles are usually predefined in human-robot teams due to the difficulty of scalably learning and adapting to each agent's role. Our goal is to enable a robot to learn how ...
ACAS sXu is a collision avoidance protocol for small drones. In this project, we aim to perform a formal robustness analysis of a specific implementation of sXu used by GE. This implementation uses a deep neural network (DNN). The goal is to show ...
The control of unmanned aircraft systems must be rigorously tested and verified to ensure their correct functioning and airworthiness. Incorporating components that use novel techniques such as deep learning can pose a significant challenge ...
Adaptive stress testing (AST) is an approach for validating safety-critical autonomous control systems such as self-driving cars and unmanned aircraft. The approach involves searching for the most likely failure scenarios according to some measure ...
Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.
Designing systems that intelligently balance learning under uncertainty and acting safely.
Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.
Dorsa Sadigh / Spring 2020
Mykel Kochenderfer / Winter 2023
Dorsa Sadigh / Fall 2019
Mykel J. Kochenderfer / Fall 2020
Dorsa Sadigh / Spring 2020
Aleksandar Zeljić / Fall 2019
Grace X. Gao
Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering
Associate Professor of Aeronautics and Astronautics and, by courtesy, Computer Science, Co-Director
Associate Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, of Computational and Mathematical Engineering, and of Information Systems
Assistant Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering
Automated Reasoning, Formal Methods, Verification of Neural Networks
Reliability, security, and robustness through the lens of human-AI interaction and collaboration
Our corporate members are a vital and integral part of the Center for AI Safety. They provide insight on real-world use cases, valuable financial support for research, and a path to large-scale impact.
| Benefit | $300k/year | $100k/year |
| --- | --- | --- |
| Opportunity to contribute to the definition of a flagship research project involving multiple faculty and students | | |
| Visiting scholar positions | Yes | Additional fee |
| Participation on the board of advisors | Yes | No |
| Semiannual research retreat | Yes | Yes |
| Slack channel invitation | Yes | Yes |
| Student resume book | Yes | Yes |
Stanford Center for AI Safety researchers will use and develop open-source software, and it is the intention of all Center for AI Safety researchers that any software released will be released under an open-source license, such as BSD. Companies may provide additional funding above the membership fee to support an area of ongoing research with faculty participating in the program. All research results arising from the use of this additional funding will be shared with all program members and the general public.
* The site presentations and all information, data, and results arising from such site visits will be shared with all members and the public.
For more information, please view our Corporate Membership Document
To view Stanford policies, please see Stanford University Policies for Industrial Affiliate Programs
Executive Director - Center for AI Safety
Sign up HERE to receive announcements and updates via the Stanford Center for AI Safety's mailing list