We aim to develop a Responsible AI (RAI) assessment framework consisting of evaluation metrics and mitigation strategies. This research combines industry and academic perspectives to create a comprehensive guide for building RAI. ...
AI practitioners often need to perform model comparison to select the most suitable models for integration into larger pipelines, or to cross-check fine-tuned models against their predecessors for desired improvements. ...
Deep reinforcement learning (DRL) is a powerful machine learning paradigm for generating agents that control autonomous systems. However, the black-box nature of DRL agents limits their deployment in real-world safety-critical applications. A promising approach ...
Recently, 3D neural maps learned from sensor data have shown astounding results in visual realism and detail, as well as greater compactness and efficiency than traditional map representations. Despite these strengths, however, neural maps are still limited by ...
Using precise mathematical modeling to ensure the safety, security, and robustness of conventional software and hardware systems.
Designing systems that intelligently balance learning under uncertainty and acting safely.
Understanding safety in the context of fairness, accountability, and explainability for autonomous and intelligent systems.
Grace Gao
Winter 2025
Max Lamparth
Fall 2024
Carlos Guestrin
Spring 2024
Sanmi Koyejo
Spring 2024
Mac Schwager
Spring 2024
Our corporate members are a vital and integral part of the Center for AI Safety. They provide insight on real-world use cases, valuable financial support for research, and a path to large-scale impact.
| Benefits | Core | Associate |
| --- | --- | --- |
| Sponsorship level per year | $300k | $100k |
| Commitment | yearly* | yearly |
| Sponsorship of a single research project collaboratively defined with Center researchers | Yes | Yes |
| Access to student recruiting opportunities | Yes | Yes |
| Annual research symposium invitations | Yes | Yes |
| Research seminar participation | Yes | Yes |
| Discounted employee enrollments in select professional and technical courses from the Stanford Engineering Center for Global and Online Education | Yes | Yes |
| Visiting scholar positions** | Yes | Additional fee |
| Opportunity to contribute to the definition of a flagship research project involving multiple faculty and students | Yes | No |
| Faculty visits*** | Yes | No |
Stanford Center for AI Safety researchers will use and develop open-source software, and they intend to release any resulting software under an open-source model.
Affiliate Program members may provide additional funding. All research results arising from the use of the additional funding will be shared with all program members and the general public. Affiliate Program members may request that the additional funding be used to support a particular area of program research identified on the program’s website, or the program research of a named faculty member, as long as that faculty member is identified on the program website as participating in the Affiliate Program. In either instance, the director of the Affiliate Program will determine how the additional funding will be used in the program’s research.
* The desired commitment is for 3 years to ensure continuity of funding and successful completion of research. Membership is renewed annually.
** For additional information, please see the Visiting Scholar Policy.
*** The site presentations and all information, data, and results arising from such visits will be shared with all members and the public.
For more information, please view our Corporate Membership Document.
To view Stanford policies, please see the Stanford University Policies for Industrial Affiliate Programs.
Duncan Eddy
Executive Director - Center for AI Safety