Artificial intelligence startups OpenAI and Anthropic have signed agreements with the U.S. government to research, test and evaluate their AI models, the U.S. AI Safety Institute announced Thursday.
These first-of-their-kind agreements come at a time when companies are facing increased regulatory scrutiny over the safe and ethical use of AI technologies.
California lawmakers will vote this week on a bill that would comprehensively regulate the development and use of AI in the state.
"Safe and reliable AI is the foundation for the technology's positive impact. Through our collaboration with the US AI Safety Institute, we can leverage the Institute's extensive expertise to thoroughly test our models before widespread deployment," said Jack Clark, co-founder and policy director of Anthropic, which is backed by Amazon and Alphabet.
Under the agreements, the AI Safety Institute will have access to key new models from OpenAI and Anthropic, both before and after their release.
Enhanced safety
The agreements also enable joint research to assess the capabilities of AI models and the associated risks.
"We believe the Institute has a critical role to play in defining U.S. leadership in the responsible development of artificial intelligence, and we hope our joint work will provide a framework for the rest of the world to build upon," said Jason Kwon, chief strategy officer at OpenAI, the maker of ChatGPT.
"These agreements are just the beginning, but they are an important milestone on the path to a responsible approach to the future of AI," said Elizabeth Kelly, Director of the AI Safety Institute.
The Institute, which is part of the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), will also work with the UK's AI Safety Institute to provide feedback to companies on potential safety improvements.
The U.S. AI Safety Institute was established last year as part of an executive order by President Joe Biden to assess known and emerging risks of AI models.