Stanford Center for AI Safety
The mission of the Stanford Center for AI Safety is to develop rigorous techniques for building safe and trustworthy AI systems and establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society.

Read more in the white paper

FLAGSHIP PROJECTS




OUR FOCUS


FORMAL METHODS


Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.



LEARNING AND CONTROL


Designing systems that intelligently balance learning under uncertainty and acting safely.



TRANSPARENCY


Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.

COURSES
FEATURED PUBLICATIONS

PEOPLE

Clark Barrett

Professor (Research) of Computer Science, Co-Director

Emma Brunskill

Associate Professor of Computer Science

Mariano Florentino Cuéllar

Visiting Professor of Law, Herman Phleger Visiting Professor

David Dill

Donald E. Knuth Professor in the School of Engineering, Emeritus

Grace X. Gao

Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering

Paul Edwards

Director, Program on Science, Technology & Society; Co-Director, Stanford Existential Risks Initiative; Professor of Information and History (Emeritus), University of Michigan

Carlos Guestrin

Associate Professor of Computer Science

Mykel Kochenderfer

Associate Professor of Aeronautics and Astronautics and, by courtesy, of Computer Science, Co-Director

Steve Luby

Professor of Medicine (Infectious Diseases & Geographic Medicine); Co-Director, Stanford Existential Risks Initiative

Marco Pavone

Associate Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, of Computational and Mathematical Engineering, and of Information Systems

Dorsa Sadigh

Assistant Professor of Computer Science and of Electrical Engineering, Co-Director

Mac Schwager

Assistant Professor of Aeronautics and Astronautics, Director of the Multi-robot Systems Lab

James Zou

Assistant Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering

Anthony Corso

Executive Director


POSTDOCTORAL SCHOLARS

Aleksandar Zeljić

Postdoctoral Scholar

Automated reasoning, formal methods, verification of neural networks

Ransalu Senanayake

Postdoctoral Scholar

Safe interactions of autonomous systems


GRADUATE STUDENTS

Tomer Arnon

Ph.D. Candidate

Neural network verification

Suneel Belkhale

Ph.D. Candidate

Robot perception and decision making

Erdem Bıyık

Ph.D. Candidate

Active preference learning

Zhangjie Cao

Ph.D. Candidate

Transfer and imitation learning in interactive settings

Eric Cristofalo

Ph.D. Candidate

Vision-based multi-robot systems

Shubh Gupta

Ph.D. Candidate

Localization safety and integrity monitoring

Ravi Haksar

Ph.D. Candidate

Multi-robot disaster response

Masha Itkina

Ph.D. Candidate

Probabilistic perception models for automated driving

Soyeon Jung

Ph.D. Candidate

Terminal airspace modeling

Sydney Katz

Ph.D. Candidate

Verification of autonomous systems

Minae Kwon

Ph.D. Candidate

Human-robot interaction

Chris Lazarus

Ph.D. Candidate

Formal methods for validating deep neural networks

Sheng Li

Ph.D. Candidate

Aircraft collision avoidance in ultra-dense airspace

Xiaobai Ma

Ph.D. Candidate

Automated driving simulation and control

Robert J. Moss

Ph.D. Candidate

Risk assessment of autonomous systems

Kunal Shah

Ph.D. Candidate

Multi-robot motion planning

Andy Shih

Ph.D. Candidate

Representation learning in multi-agent games

Chelsea Sidrane

Ph.D. Candidate

Neural network verification

Megha Srivastava

Ph.D. Candidate

Reliability, security, and robustness through the lens of human-AI interaction and collaboration

Christopher A. Strong

Master's Student

Neural network verification and safe control

Mingyu Wang

Ph.D. Candidate

Autonomous vehicle planning

Haoze Wu

Ph.D. Candidate

Formal verification

MEMBERSHIP

Our corporate members are a vital and integral part of the Center for AI Safety. They provide insight on real-world use cases, valuable financial support for research, and a path to large-scale impact.



Benefits                                                   Core       Associate
Sponsorship level per year                                 $300k      $100k
Commitment                                                 3 years    yearly
Opportunity to help define a flagship research project
  involving multiple faculty and students                  Yes        No
Visiting scholar positions                                 Yes        Additional fee
Faculty visits*                                            Yes        No
Participation on the board of advisors                     Yes        No
Semiannual research retreat                                Yes        Yes
Slack channel invitation                                   Yes        Yes
Research seminars                                          Yes        Yes
Student resume book                                        Yes        Yes


Stanford Center for AI Safety researchers will use and develop open-source software, and it is the intention of all Center for AI Safety researchers that any software released will be released under an open-source license, such as BSD. Companies may provide additional funding above the membership fee to support an area of ongoing research with participating faculty within the program. All research results arising from the use of the additional funding will be shared with all program members and the general public.

* The site presentations and all information, data, and results arising from such visits will be shared with all members and the public.


For more information, please view our Corporate Membership Document

To view Stanford policies, please see Stanford University Policies for Industrial Affiliate Programs


CONTACT


Anthony Corso

Executive Director - Center for AI Safety

email: acorso@stanford.edu

MAILING LIST


Sign up here to receive announcements and updates via the Stanford Center for AI Safety's mailing list.

We are grateful to our affiliates program members for their generous support:

Core Members:



Associate Members:



Federal Sponsors:


Corporate Research Sponsors:


Affiliated Organizations:


Stanford Existential Risks Initiative (SERI)