Center for AI Safety

The mission of the Stanford Center for AI Safety is to develop rigorous techniques for building safe and trustworthy AI systems and for establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society.

Read more in the white paper

FLAGSHIP PROJECTS

OUR FOCUS

FORMAL METHODS
Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.

LEARNING AND CONTROL
Designing systems that intelligently balance learning under uncertainty and acting safely.

TRANSPARENCY
Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.

COURSES

FEATURED PUBLICATIONS

PEOPLE

Clark Barrett
Associate Professor of Computer Science, Co-Director

Emma Brunskill
Assistant Professor of Computer Science

Mariano-Florentino Cuéllar
Herman Phleger Visiting Professor of Law

David Dill
Donald E. Knuth Professor in the School of Engineering

Grace X. Gao
Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering

Mykel Kochenderfer
Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Computer Science, Co-Director

Marco Pavone
Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, of Computational and Mathematical Engineering, and of Information Systems

Dorsa Sadigh
Assistant Professor of Computer Science and of Electrical Engineering, Co-Director

Mac Schwager
Assistant Professor of Aeronautics and Astronautics, Director of the Multi-Robot Systems Lab

Erika Strandberg
Executive Director

James Zou
Assistant Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering

POSTDOCTORAL SCHOLARS

Ahmed Irfan
Postdoctoral Scholar
Automated reasoning, formal methods, and verification of neural networks

Dylan Losey
Postdoctoral Scholar
Human-robot interaction, learning from humans, and control theory

Aleksandar Zeljić
Postdoctoral Scholar
Automated reasoning, formal methods, and verification of neural networks

Ransalu Senanayake
Postdoctoral Scholar
Safe interactions of autonomous systems

GRADUATE STUDENTS

Edward Balaban
Ph.D. Candidate
Health-aware decision making

Raunak Bhattacharyya
Ph.D. Candidate
Automotive driving models

Erdem Bıyık
Ph.D. Candidate
Active preference learning

Maxime Bouton
Ph.D. Candidate
Monte Carlo tree search and driving

Kyle Brown
Ph.D. Candidate
Behavior prediction for autonomous driving

Anthony Corso
Ph.D. Candidate
Adaptive stress testing

Zhiyang He
Graduate Student
Robotic simulation

Masha Itkina
Ph.D. Candidate
Probabilistic perception models for automated driving

Kyle Julian
Ph.D. Candidate
Aircraft collision avoidance

Mark Koren
Graduate Student
Adaptive stress testing of automated vehicles

Minae Kwon
Ph.D. Candidate
Human-robot interaction

Nick Landolfi
Ph.D. Candidate

Chelsea Sidrane
Ph.D. Candidate
Disaster response optimization

Chris Lazarus
Graduate Student
Formal methods for validating deep neural networks

Sheng Li
Graduate Student
Aircraft collision avoidance in ultra-dense airspace

Xiaobai Ma
Ph.D. Candidate
Automated driving simulation and control

Andy Palaniappan
Ph.D. Candidate
Cooperative learning

Haoze Wu
Ph.D. Candidate
Formal verification

Sydney Katz
Graduate Student
Urban air mobility

Tomer Arnon
Graduate Student
Neural network verification

Soyeon Jung
Graduate Student
Terminal airspace modeling

Eric Cristofalo
Ph.D. Candidate
Vision-based multi-robot systems

Ravi Haksar
Ph.D. Candidate
Multi-robot disaster response

Mingyu Wang
Ph.D. Candidate
Autonomous vehicle planning

Kunal Shah
Ph.D. Candidate
Multi-robot motion planning

UNDERGRADUATE STUDENTS

Gleb Shevchuk
Undergraduate
Human-centered robotics

MEMBERSHIP

Our corporate members are an integral part of the Center for AI Safety. They provide insight into real-world use cases, valuable financial support for research, and a path to large-scale impact.



Benefits                                      Core      Associate
Sponsorship level per year                    $300k     $100k
Commitment                                    3 years   Yearly
Opportunity to contribute to the definition
of a flagship research project involving
multiple faculty and students                 Yes       No
Visiting scholar positions                    Yes       No
Faculty visits                                Yes       No
Participation on the board of advisors        Yes       No
Semiannual research retreat                   Yes       Yes
Slack channel invitation                      Yes       Yes
Research seminars                             Yes       Yes
Student resume book                           Yes       Yes


Stanford Center for AI Safety researchers use and develop open-source software, and they intend that any software released by the Center be distributed under an open-source license, such as BSD.


For more information, please view our Corporate Membership Document

To view Stanford policies, please see Stanford University Policies for Industrial Affiliate Programs

CONTACT


Dr. Erika Strandberg

Executive Director of Strategic Research Initiatives

tel: 650-497-2790

email: estrandb@stanford.edu

We are grateful to our affiliates program members for their generous support:

Core Members:

We are grateful to the following organizations for their generous support:

Federal Sponsors: