Stanford Center for AI Safety

The mission of the Stanford Center for AI Safety is to develop rigorous techniques for building safe and trustworthy AI systems and for establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society.

Read more in the white paper

FLAGSHIP PROJECTS

OUR FOCUS

FORMAL METHODS

Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems. (An illustrative sketch follows the three focus areas below.)

LEARNING AND CONTROL

Designing systems that intelligently balance learning under uncertainty and acting safely.

TRANSPARENCY

Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.
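The focus areas above are stated at a high level. As a purely illustrative sketch (not code from the Center; all weights and names here are hypothetical), the following shows one elementary formal-methods technique, interval bound propagation, which computes sound output bounds for a toy ReLU network over every input in a small perturbation ball. If a safety property holds for the certified bounds, it holds for all inputs in the ball.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map x -> W @ x + b.
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread of the box
    return new_center - new_radius, new_center + new_radius

def certify_output_bounds(layers, x, eps):
    # Sound (over-approximate) bounds on the network output for every
    # input within an L-infinity ball of radius eps around x.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            # ReLU is monotone, so it maps interval bounds elementwise.
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy two-layer ReLU network with arbitrary (hypothetical) weights.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
lo, hi = certify_output_bounds(layers, x=np.ones(3), eps=0.1)
print("Certified output bounds:", lo, hi)
```

Because interval arithmetic over-approximates the reachable set, any property proved on these bounds holds for the true network; tighter abstractions trade computational cost for precision.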

EVENTS

FEATURED PUBLICATIONS

PEOPLE
Clark Barrett
Professor (Research) of Computer Science, Co-Director

Emma Brunskill
Associate Professor of Computer Science

Mariano-Florentino Cuéllar
Visiting Professor of Law, Herman Phleger Visiting Professor

David Dill
Donald E. Knuth Professor in the School of Engineering, Emeritus

Charles Eesley
Associate Professor and W.M. Keck Foundation Faculty Scholar in the Department of Management Science and Engineering

Paul Edwards
Director, Program on Science, Technology & Society; Co-Director, Stanford Existential Risks Initiative; Professor of Information and History (Emeritus), University of Michigan

Grace X. Gao
Assistant Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, Co-Director

Carlos Guestrin
Associate Professor of Computer Science

Mykel Kochenderfer
Associate Professor of Aeronautics and Astronautics and, by courtesy, of Computer Science, Co-Director

Sanmi Koyejo
Assistant Professor of Computer Science

Steve Luby
Professor of Medicine (Infectious Diseases & Geographic Medicine); Co-Director, Stanford Existential Risks Initiative

Azalia Mirhoseini
Assistant Professor of Computer Science

Marco Pavone
Associate Professor of Aeronautics and Astronautics and, by courtesy, of Electrical Engineering, of Computational and Mathematical Engineering, and of Information Systems

Dorsa Sadigh
Assistant Professor of Computer Science and of Electrical Engineering

Mac Schwager
Assistant Professor of Aeronautics and Astronautics, Director of the Multi-Robot Systems Lab

Diyi Yang
Assistant Professor of Computer Science

James Zou
Assistant Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering

Duncan Eddy
Executive Director


POSTDOCTORAL SCHOLARS AND VISITORS
Pei Huang
Postdoctoral Scholar
Automated reasoning and trustworthy AI

Sydney Katz
Postdoctoral Scholar
Verification of autonomous systems

Max Lamparth
Postdoctoral Scholar
Interpretability, robustness, and safety of language models

Min Wu
Postdoctoral Scholar
AI safety: robustness, explainability, and fairness


GRADUATE STUDENTS
Chris Agia
Ph.D. Candidate
Robot learning, integrated task & motion planning

Wajeeha Ahmad
Ph.D. Candidate
Impact of AI on misinformation and web content quality

Samuel Akinwande
Ph.D. Candidate
Formal verification of nonlinear systems

Suneel Belkhale
Ph.D. Candidate
Robot perception and decision making

Polo Contreras
Ph.D. Candidate
Multi-robot systems

Tim Chen
Ph.D. Candidate
Safety in learned environments

Kaila Coimbra
Ph.D. Candidate
Lunar surface rover positioning

Marta Cortinovis
Ph.D. Candidate
Lunar navigation ephemeris modeling

Adam Dai
Ph.D. Candidate
Provably safe trajectory planning under uncertainty

Harrison Delecki
Ph.D. Candidate
Autonomous subterranean exploration

Amine Elhafsi
Ph.D. Candidate
Planning and control for safe robotic navigation

Aaron Feldman
Ph.D. Candidate
Statistical performance guarantees

Matt Foutter
Ph.D. Candidate
Safe and reliable autonomy

Amelia Hardy
M.S. Candidate
Constrained decision problems

Keidai Iiyama
Ph.D. Candidate
Navigation for the Moon

Liam Kruse
Ph.D. Candidate
Probabilistic safe sets

Bernard Lange
Ph.D. Candidate
Deep learning for automotive perception

Udayan Mandal
M.S. Candidate
Formal verification

Robert J. Moss
Ph.D. Candidate
Risk assessment of autonomous systems

Daniel Neamati
Ph.D. Candidate
Urban GNSS-based navigation using environmental features

Mira Partha
Ph.D. Candidate
Neural Radiance Field (NeRF) maps of urban environments

Ann-Katrin Reuel
Ph.D. Candidate
Ethics of automated driving

Marc Schlichting
Ph.D. Candidate
Machine learning for safety validation

Rohan Sinha
Ph.D. Candidate
Trustworthy autonomy and out-of-distribution generalization

Megha Srivastava
Ph.D. Candidate
Reliability, security, and robustness through the lens of human-AI interaction and collaboration

Jiankai Sun
Ph.D. Candidate
Reliable and efficient AI systems

Romeo Valentin
Ph.D. Candidate
Uncertainty calibration of neural networks

Guillem Casadesus Vila
Ph.D. Candidate
Lunar positioning, navigation, and timing

Joe Vincent
Ph.D. Candidate
Analysis of learned systems in AI and control theory applications

Scott Viteri
Ph.D. Candidate
AI alignment via ontology maps

Asta Wu
Ph.D. Candidate
GNSS-based relative localization for cars

Haoze Wu
Ph.D. Candidate
Formal verification

Alan Yang
Ph.D. Candidate
GNSS code design using AI and machine learning

Yibo Zhang
Ph.D. Candidate
AI safety from a fundamental perspective

MEMBERSHIP

Our corporate members are a vital part of the Center for AI Safety. They provide insight into real-world use cases, valuable financial support for research, and a path to large-scale impact.



Benefits                                                        Core     Associate
Sponsorship level per year                                      $300k    $100k
Commitment                                                      Yearly*  Yearly
Sponsorship of a single research project collaboratively
  defined with Center researchers                               Yes      Yes
Access to student recruiting opportunities                      Yes      Yes
Annual research symposium invitations                           Yes      Yes
Research seminar participation                                  Yes      Yes
Discounted employee enrollments in select professional and
  technical courses from the Stanford Engineering Center
  for Global and Online Education                               Yes      Yes
Visiting scholar positions**                                    Yes      Additional fee
Opportunity to contribute to the definition of a flagship
  research project involving multiple faculty and students      Yes      No
Faculty visits***                                               Yes      No


Stanford Center for AI Safety researchers will use and develop open-source software, and they intend to release any resulting software under an open-source model.

Affiliate Program members may provide additional funding. All research results arising from the use of the additional funding will be shared with all program members and the general public. Affiliate Program members may request that the additional funding be used to support a particular area of program research identified on the program's website, or the program research of a named faculty member, as long as the faculty member is identified on the program website as participating in the Affiliate Program. In either instance, the director of the Affiliate Program will determine how the additional funding will be used in the program's research.

* The desired commitment is three years, to ensure continuity of funding and successful completion of research. Membership is renewed annually.

** For additional information, please see the Visiting Scholar Policy.

*** The site presentations and all information, data and results arising from such visitation interactions will be shared with all members and the public.


For more information, please view our Corporate Membership Document

To view Stanford policies, please see Stanford University Policies for Industrial Affiliate Programs


CONTACT


Duncan Eddy

Executive Director, Center for AI Safety

MAILING LIST

Subscribe to the Stanford Center for AI Safety Newsletter and Announcements


View previous newsletters

We are grateful to our affiliate program members for their generous support:

Core Members:



Associate Members:



Federal Sponsors:


Additional Sponsors:


Affiliated Organizations:


Stanford Existential Risks Initiative (SERI)