Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole information about the design of the company's artificial intelligence technologies. The attacker extracted details from discussions in an online forum where employees talked about OpenAI's latest technologies but did not enter the systems where the company hosts and develops its AI, according to two people familiar with the incident.

OpenAI's Response

OpenAI executives revealed the incident to employees during a company-wide meeting at the company's San Francisco offices in April 2023 and informed the board of directors. However, they decided not to make the news public because no customer or partner information had been stolen. The executives did not consider the incident a national security threat, believing the hacker to be a private individual with no known ties to foreign governments, so they did not inform the FBI or other authorities.

The news prompted fears among some employees that foreign adversaries, such as China, could steal AI technology that might eventually jeopardize U.S. national security. It also raised questions about how seriously OpenAI was treating security and exposed divisions within the company over the risks of artificial intelligence.

After the incident, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the board arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets. Aschenbrenner was fired in the spring for leaking other information, a dismissal he argued was politically motivated. He alluded to the breach in a recent podcast, but the details had not previously been reported. He said that OpenAI's security was not strong enough to protect against the theft of key secrets if foreign actors infiltrated the company.

"We appreciate the concerns Leopold raised while he was at OpenAI, and this did not lead to his separation," said OpenAI spokesperson Liz Bourgeois. Referring to the company's efforts to build general artificial intelligence, she added, "While we share his commitment to building safe AI, we disagree with many of the claims he has made since about our work. This includes his characterizations of our security, particularly this incident, which we addressed and shared with our board before he joined the company."

Fears that a hack could have links to China are not unfounded. Last month, Brad Smith, president of Microsoft, testified on Capitol Hill about how Chinese hackers used Microsoft's systems to launch an attack on federal government networks. However, under federal and California laws, OpenAI cannot prevent people from working at the company based on their nationality. Banning foreign talent could significantly hinder AI progress in the United States. "We need the best and brightest minds working on this technology," said Matt Knight, OpenAI's head of security. "It carries some risks, and we have to solve them."

OpenAI and competitors such as Anthropic and Google add security barriers to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the applications to spread misinformation or cause other problems. But there is little evidence that today's AI technologies pose a significant national security risk. Studies conducted by OpenAI, Anthropic, and others have shown that AI is not significantly more dangerous than search engines.

Chinese companies are building their own systems that are nearly as powerful as the leading U.S. systems. By some metrics, China has eclipsed the United States as the largest producer of AI talent, generating nearly half of the world's top AI researchers. "It's not far-fetched to think that China will soon be ahead of the United States," said Clément Delangue, CEO of Hugging Face.