AI Red-Teaming Bootcamp

The Berkeley Risk and Security Lab hosted early-career professionals with expertise in CBRN, cybersecurity, machine learning, disinformation studies, and other fields relevant to AI safety for the 2025 UC Berkeley AI Red-Teaming Bootcamp.

As commercial artificial intelligence models have become increasingly sophisticated, leading AI companies have turned to red teaming as a tool for improving model safety and identifying potential cases of misuse. As such, AI red teaming is emerging as its own discipline within AI safety and security. To grow this discipline and help train the next generation of AI safety experts, the Berkeley Risk and Security Lab created the AI Red-Teaming Bootcamp with support from Open Philanthropy.

The bootcamp was hosted on UC Berkeley’s campus and featured instructors and lecturers from the private sector, government (including the national labs), academia, and civil society.

Participants had the opportunity to engage with these experts and understand their different approaches and priorities with regard to AI safety. Participants also learned about the history of red teaming and the current red-teaming landscape, and worked through a series of red-teaming exercises. By the end of the week, participants had a better understanding of how AI systems work, how they fail, and how those failures can be detected.

We thank Open Philanthropy for their support of this program.