AI Frontiers Initiative

Answering strategic questions through technical experiments.

The AI Frontiers Initiative is a research program at the Berkeley Risk and Security Lab dedicated to exploring and evaluating AI models, datasets, and hardware. Its goal is to provide in-depth insights into the rapidly evolving field of artificial intelligence and assess its geopolitical implications. By rigorously evaluating the performance, applications, and limitations of cutting-edge AI systems, the initiative aims to track emerging trends, disruptions, and breakthroughs in the AI ecosystem, while linking them to broader strategic questions. This effort not only advances technical knowledge but also seeks to inform the decision-making of AI practitioners, industry leaders, and policymakers.

Publications

Whack-a-Chip: The Futility of Hardware-Centric Export Controls 

Ritwik Gupta, Leah Walker, Andrew W. Reddie

U.S. export controls on semiconductors are widely known to be permeable, with the People’s Republic of China (PRC) steadily creating state-of-the-art artificial intelligence (AI) models with exfiltrated chips. This paper presents the first concrete, public evidence of how leading PRC AI labs evade and circumvent U.S. export controls. We examine how Chinese companies, notably Tencent, are not only using chips that are restricted under U.S. export controls but are also circumventing these regulations by using software and modeling techniques that maximize the performance of less capable hardware. Specifically, we argue that Tencent’s ability to power its Hunyuan-Large model with non-export-controlled NVIDIA H20s exemplifies broader gains in machine learning efficiency that have eroded the moat the United States initially built via its existing export controls. Finally, we examine the implications of this finding for the future of the United States’ export control strategy. Read on arXiv

Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies

Ritwik Gupta, Leah Walker, Rodolfo Corona, Stephanie Fu, Suzanne Petryk, Janet Napolitano, Trevor Darrell, Andrew W. Reddie

The authors argue that today’s AI governance efforts mistakenly focus on a small set of model-based thresholds, particularly model size or the amount of computation required for training. Machine learning capabilities depend on both the model and the data provided to it. Through experiments, the paper shows that models alone are not harmful; rather, the combination of a model, the specific datasets it is exposed to, and the specific purposes for which it is used should be the focus of concern about risk to public safety. Finally, the authors note that a narrow, model-centric approach overlooks significant risks related to the data these models are trained on and may therefore fail to provide effective oversight. Read on arXiv


Open-Source Assessments of AI Capabilities: The Proliferation of AI Analysis Tools, Replicating Competitor Models, and the Zhousidun Dataset

Ritwik Gupta, Leah Walker, Eli Glickman, Raine Koizumi, Sarthak Bhatnagar, Andrew W. Reddie

The integration of artificial intelligence (AI) into military capabilities has become the norm for major military powers across the globe. Understanding how these AI models operate is essential for maintaining strategic advantages and ensuring security. This paper demonstrates an open-source methodology for analyzing military AI models through a detailed examination of the Zhousidun dataset, a Chinese-originated dataset that exhaustively labels critical components on American and Allied destroyers. By replicating a state-of-the-art computer vision model on this dataset, we illustrate how open-source tools can be leveraged to assess and understand key military AI capabilities. This methodology offers a robust framework for evaluating the performance and potential of AI-enabled military capabilities, thus enhancing the accuracy and reliability of strategic assessments. Read on arXiv



Please join us!

If you’d like to be added to our mailing list, please fill out the form below:

Email Us

Or reach out to us via email at brsl@berkeley.edu if you have any questions.