The U.S. Artificial Intelligence Safety Institute, housed within the Department of Commerce’s National Institute of Standards and Technology (NIST), has announced formal agreements with two of the most prominent AI startups, OpenAI and Anthropic. The collaboration marks a significant step in the government’s effort to strengthen safety and ethics practices in the rapidly advancing AI industry.
Under these agreements, the U.S. AI Safety Institute will have access to major new AI models from both OpenAI and Anthropic before and after their public release. This access is part of a broader initiative to conduct rigorous testing and evaluation of the capabilities and safety risks associated with these advanced AI systems. The goal is to identify and mitigate potential risks, ensuring that AI development proceeds responsibly.
The agreements follow the establishment of the U.S. AI Safety Institute, created in response to the Biden-Harris administration’s first-ever executive order on artificial intelligence, issued in October 2023. That executive order called for new safety assessments, equity and civil rights guidance, and research into AI’s impact on the labor market. The collaboration between the government and these leading AI companies is viewed as a crucial step in advancing the science of AI safety.
Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized the importance of safety in driving technological innovation. She highlighted that these agreements represent an important milestone in the Institute’s mission to responsibly steward the future of AI. In addition to pre-release testing, the U.S. AI Safety Institute will provide feedback to both companies on potential safety improvements, working closely with its partners at the U.K. AI Safety Institute.
OpenAI and Anthropic, both at the forefront of AI development, have agreed to these measures as concerns about AI safety and ethics continue to grow within the industry. The collaboration builds on NIST’s long-standing legacy of advancing measurement science, technology, and standards. Through close collaboration and exploratory research on advanced AI systems, the agreements are expected to contribute significantly to the safe, secure, and trustworthy development and use of AI.
As the AI industry evolves, government involvement in pre-release testing and safety evaluations, through bodies like the U.S. AI Safety Institute, is likely to become a critical component of ensuring that AI technologies align with societal values and safety standards. The partnership sets a precedent for future collaborations between the government and AI developers, reinforcing the importance of oversight and safety in this rapidly changing field.