
The Untold Story: OpenAI’s Failure to Report a Major Data Breach in 2023

A previously unreported security breach at OpenAI, the developer of ChatGPT, has raised alarms over the potential for foreign adversaries, such as China, to access sensitive AI technologies.

While the hacker did not access the company's core code, the 2023 incident has sparked fears that foreign adversaries, particularly China, could steal sensitive AI secrets, The New York Times reported.

The breach occurred early last year, when a hacker infiltrated OpenAI's internal messaging system and gained access to employee discussions about the company's latest AI advancements, the newspaper said, citing two anonymous sources.

OpenAI, headquartered in San Francisco, confirmed the breach to employees and the board of directors in an all-hands call in April 2023 but opted not to make it public. The company reasoned that no customer or partner data had been compromised and that it believed the hacker was an individual, not a state-sponsored actor, the report added.

Executives judged that the stolen information did not pose an immediate threat and that the hacker was likely acting alone. The company also chose not to notify law enforcement agencies, including the FBI, according to two sources cited by the newspaper.

Still, the incident raised concerns among OpenAI employees that foreign adversaries such as China could steal the company's AI technology, posing a threat to US national security, the newspaper reported.

Following the breach, Leopold Aschenbrenner, a former technical program manager at OpenAI, criticized the company's security measures in a memo to the board, arguing that OpenAI was not taking sufficient steps to guard against threats from foreign adversaries.

Aschenbrenner was subsequently terminated from his position, according to the newspaper.

OpenAI disputed Aschenbrenner's account, maintaining that his departure from the company was unrelated to his security criticisms.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” OpenAI spokesperson Liz Bourgeois told The New York Times. “While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

OpenAI did not immediately respond to a request for comment.

The incident highlights the growing debate surrounding the national security implications of powerful AI technology. Industry watchers also fear AI could be weaponized in the future. This concern is compounded by China's rapid growth in the AI field, with some estimates suggesting it may soon surpass the US in AI research talent.

Beyond this incident, the company has faced persistent criticism and scrutiny over its safety and security practices. In the past few months, several senior staff have quit, either to join competitors that position themselves as more responsible AI firms or to start ventures of their own.

Most notably, the company's chief scientist, Ilya Sutskever, quit citing safety concerns. “I’m confident that OpenAI will build AGI that is both safe and beneficial,” Sutskever said on leaving the company in May. A month later, he founded his own company, Safe Superintelligence Inc.

Similarly, Jan Leike, a prominent AI researcher at OpenAI, quit the ChatGPT maker over safety concerns to join Anthropic.

To address these concerns, OpenAI has since formed a Safety and Security Committee, including former National Security Agency director General Paul Nakasone. Additionally, calls for government regulations on AI development are growing louder. However, experts caution that AI’s most serious potential threats are likely still years or even decades away.

“The recent security breach at OpenAI highlights the pressing need for robust cybersecurity measures within AI companies,” said Neil Shah, VP for research and partner at Counterpoint Research. “As AI technologies become more advanced and integral to various sectors, safeguarding sensitive information against potential threats, whether from individual hackers or state actors, is crucial to maintaining trust and ensuring long-term innovation.”
