OpenAI suffered a security breach last year, it has been revealed. The intruder infiltrated the company's internal messaging systems and stole details about the design of OpenAI's AI technologies.
What Happened: The hacker gained access to an online forum where OpenAI employees discussed the company's latest technologies, The New York Times reported on Friday. The intruder lifted details from these discussions but did not reach the systems where the company builds its artificial intelligence, according to two individuals familiar with the incident.
The incident was disclosed to OpenAI employees and its board of directors during an all-hands meeting at the company's San Francisco offices in April 2023. The executives decided not to share the news publicly as no customer or partner information was stolen, and they did not consider the incident a threat to national security.
However, the breach did raise concerns among some OpenAI employees about the potential for foreign adversaries, such as China, to steal AI technology that could eventually pose a threat to U.S. national security.
Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager, sent a memo to OpenAI's board of directors, arguing that the company was not doing enough to prevent foreign adversaries from stealing its secrets.
Aschenbrenner was later fired for leaking information outside the company. He alluded to the breach on a recent podcast, stating that OpenAI's security wasn't strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
OpenAI spokeswoman Liz Bourgeois disputed Aschenbrenner's characterization of the company's security, stating that the incident had been addressed and shared with the board before he joined the company.
Despite the concerns, Matt Knight, OpenAI's head of security, emphasized the importance of having the best minds working on AI technology, acknowledging that this comes with some risks.
Why It Matters: This incident comes on the heels of OpenAI's decision to disband a team dedicated to ensuring the safety of potentially ultra-capable AI systems. The company also restricted access to its tools and software in China amid rising tensions.
These actions were seen as an attempt to block users in countries where its services are not available. The breach also coincides with a report that China has surged ahead in the global race for generative AI patents, despite U.S. sanctions.
The report also follows another security flaw reported in OpenAI's macOS app, where conversations were being stored in plain text, making them vulnerable to unauthorized access. That issue has since been resolved.