Stanford Center for AI Safety
The mission of the Stanford Center for AI Safety is to develop rigorous techniques for building safe and trustworthy AI systems and for establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society.

Read more in the white paper

FLAGSHIP PROJECTS




OUR FOCUS



FORMAL METHODS


Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.




LEARNING AND CONTROL


Designing systems that intelligently balance learning under uncertainty and acting safely.




TRANSPARENCY


Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.

EVENTS

2023 AI Safety Annual Meeting Recap


FEATURED PUBLICATIONS

PEOPLE

Clark Barrett

Professor (Research) of Computer Science, Co-Director

Emma Brunskill

Associate Professor of Computer Science

Mariano Florentino Cuéllar

Visiting Professor of Law, Herman Phleger Visiting Professor

David Dill

Donald E. Knuth Professor in the School of Engineering, Emeritus

Grace X. Gao

Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, Co-Director

Paul Edwards

Director, Program on Science, Technology & Society; Co-Director, Stanford Existential Risks Initiative; Professor of Information and History (Emeritus), University of Michigan

Carlos Guestrin

Associate Professor of Computer Science

Mykel Kochenderfer

Associate Professor of Aeronautics and Astronautics and, by courtesy, of Computer Science, Co-Director

Sanmi Koyejo

Assistant Professor of Computer Science

Steve Luby

Professor of Medicine (Infectious Diseases & Geographic Medicine); Co-Director, Stanford Existential Risks Initiative

Azalia Mirhoseini

Assistant Professor of Computer Science

Marco Pavone

Associate Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, of Computational and Mathematical Engineering, and of Information Systems

Dorsa Sadigh

Assistant Professor of Computer Science and of Electrical Engineering

Mac Schwager

Assistant Professor of Aeronautics and Astronautics, Director of the Multi-robot Systems Lab

Diyi Yang

Assistant Professor of Computer Science

James Zou

Assistant Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering

Duncan Eddy

Executive Director

POSTDOCTORAL SCHOLARS AND VISITORS

Pei Huang

Postdoctoral Scholar

Automated reasoning and trustworthy AI

Sydney Katz

Postdoctoral Scholar

Verification of autonomous systems

Max Lamparth

Postdoctoral Scholar

Interpretability, robustness, and safety of language models

Min Wu

Postdoctoral Scholar

AI safety: robustness, explainability, and fairness


GRADUATE STUDENTS

Samuel Akinwande

Ph.D. Candidate

Formal verification of nonlinear systems

Suneel Belkhale

Ph.D. Candidate

Robot perception and decision making

Polo Contreras

Ph.D. Candidate

Multi-robot systems

Tim Chen

Ph.D. Candidate

Safety in learned environments

Kaila Coimbra

Ph.D. Candidate

Lunar surface rover positioning

Marta Cortinovis

Ph.D. Candidate

Lunar navigation ephemeris modeling

Adam Dai

Ph.D. Candidate

Provably safe trajectory planning under uncertainty

Harrison Delecki

Ph.D. Candidate

Autonomous subterranean exploration

Aaron Feldman

Ph.D. Candidate

Statistical performance guarantees

Amelia Hardy

M.S. Candidate

Constrained decision problems

Keidai Iiyama

Ph.D. Candidate

Navigation for the Moon

Liam Kruse

Ph.D. Candidate

Probabilistic safe sets

Bernard Lange

Ph.D. Candidate

Deep learning for automotive perception

Udayan Mandal

M.S. Candidate

Formal verification

Robert J. Moss

Ph.D. Candidate

Risk assessment of autonomous systems

Daniel Neamati

Ph.D. Candidate

Urban GNSS-based navigation using environmental features

Mira Partha

Ph.D. Candidate

Neural Radiance Field (NeRF) maps of urban environments

Ann-Katrin Reuel

Ph.D. Candidate

Ethics of automated driving

Marc Schlichting

Ph.D. Candidate

Machine learning for safety validation

Megha Srivastava

Ph.D. Candidate

Reliability, security, and robustness through the lens of human-AI interaction and collaboration

Jiankai Sun

Ph.D. Candidate

Reliable and efficient AI systems

Romeo Valentin

Ph.D. Candidate

Uncertainty calibration of neural networks

Guillem Casadesus Vila

Ph.D. Candidate

Lunar positioning, navigation, and timing

Joe Vincent

Ph.D. Candidate

Analysis of learned systems

Scott Viteri

Ph.D. Candidate

AI alignment via ontology maps

Asta Wu

Ph.D. Candidate

GNSS-based relative localization for tandem drifting cars

Haoze Wu

Ph.D. Candidate

Formal verification

Alan Yang

Ph.D. Candidate

GNSS code design using AI and machine learning

Yibo Zhang

Ph.D. Candidate

AI safety from a fundamental perspective

MEMBERSHIP

Our corporate members are a vital and integral part of the Center for AI Safety. They provide insight into real-world use cases, valuable financial support for research, and a path to large-scale impact.



Benefits                                             Core       Associate
Sponsorship level per year                           $300k      $100k
Commitment                                           3 years    yearly
Opportunity to contribute to the definition of a
  flagship research project involving multiple
  faculty and students                               Yes        No
Visiting scholar positions                           Yes        Additional fee
Faculty visits*                                      Yes        No
Participation on the board of advisors               Yes        No
Semiannual research retreat                          Yes        Yes
Slack channel invitation                             Yes        Yes
Research seminars                                    Yes        Yes
Student resume book                                  Yes        Yes


Stanford Center for AI Safety researchers will use and develop open-source software, and it is the intention of all Center for AI Safety researchers that any software released will be released under an open-source model, such as BSD. Companies may provide additional funding above the membership fee to support an area of ongoing research with faculty participating in the program. All research results arising from the use of such additional funding will be shared with all program members and the general public.

* Site presentations and all information, data, and results arising from such visits will be shared with all members and the public.


For more information, please view our Corporate Membership Document

To view Stanford policies, please see Stanford University Policies for Industrial Affiliate Programs


CONTACT


Duncan Eddy

Executive Director - Center for AI Safety

email: deddy@stanford.edu

MAILING LIST


Sign up HERE to receive announcements and updates via the Stanford Center for AI Safety mailing list.

We are grateful to our affiliates program members for their generous support:

Core Members:



Associate Members:



Federal Sponsors:


Additional Sponsors:


Affiliated Organizations:


Stanford Existential Risks Initiative (SERI)