Leading and following can emerge naturally in human teams. However, such roles are usually predefined in human-robot teams due to the difficulty of scalably learning and adapting to each agent's roles. Our goal is to enable a robot to learn how …
ACAS sXu is a protocol for collision avoidance for small drones. In this project we aim to do a formal robustness analysis of a specific implementation of sXu being used by GE. This implementation uses a deep neural network (DNN). The goal is to show …
The control of unmanned aircraft systems must be rigorously tested and verified to ensure their correct functioning and airworthiness. Incorporating components that use novel techniques such as deep learning can pose a significant challenge …
Adaptive stress testing (AST) is an approach for validating safety-critical autonomous control systems such as self-driving cars and unmanned aircraft. The approach involves searching for the most likely failure scenarios according to some measure …
Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.
Designing systems that intelligently balance learning under uncertainty and acting safely.
Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.
Dorsa Sadigh / Spring 2020
Mykel J. Kochenderfer / Winter 2020
Dorsa Sadigh / Fall 2019
Mykel J. Kochenderfer / Fall 2019
Dorsa Sadigh / Spring 2020
Aleksandar Zeljić / Fall 2019
Assistant Professor of Aeronautics and Astronautics and, by courtesy, Computer Science, Co-Director
Assistant Professor of Aeronautics and Astronautics, Director of the Multi-robot Systems Lab
Automated Reasoning, Formal Methods, and Verification of Neural Networks
Our corporate members are a vital and integral part of the Center for AI Safety. They provide insight on real-world use cases, valuable financial support for research, and a path to large-scale impact.
| Sponsorship level per year | $300k | $100k |
| --- | --- | --- |
| Opportunity to contribute to the definition of a flagship research project involving multiple faculty and students | | |
| Visiting scholar positions | Yes | No |
| Participation on the board of advisors | Yes | No |
| Semiannual research retreat | Yes | Yes |
| Slack channel invitation | Yes | Yes |
| Student resume book | Yes | Yes |
Stanford Center for AI Safety researchers will use and develop open-source software, and any software released by Center for AI Safety researchers is intended to be released under an open-source license such as BSD.
For more information, please view our Corporate Membership Document.
To view Stanford policies, please see the Stanford University Policies for Industrial Affiliate Programs.
Dr. Erika Strandberg
Executive Director of Strategic Research Initiatives