Ex-OpenAI employee on why he was fired: 'Did something Sam Altman does'
Leopold Aschenbrenner worked on OpenAI's superalignment team before he was "fired for leaking" in April.
A former OpenAI researcher has reflected on why he was fired, saying he "ruffled some feathers" by writing and sharing documents related to safety at Sam Altman's company. Leopold Aschenbrenner worked on OpenAI's superalignment team until his dismissal in April.

The researcher said that after a "major security incident" he wrote a memo and shared it with some OpenAI board members. In the memo, he wrote that OpenAI's security was "egregiously insufficient" in protecting against the theft of "key algorithmic secrets from foreign actors".
Following the memo, he said, the company warned him, telling him it was "racist" and "unconstructive" to worry about Chinese Communist Party espionage. Aschenbrenner claimed the company then went through his OpenAI digital artifacts and fired him soon after, alleging that he had leaked confidential information.
The leak in question, he said, referred to a "brainstorming document on preparedness, on safety, and security measures" needed for artificial general intelligence. He said he had reviewed the document for sensitive information before sharing it, and that sharing such feedback was "totally normal" at the company.
OpenAI's superalignment team aimed to solve the "core technical challenges of superintelligence alignment in four years", the company said.
"I didn't think that planning horizon was sensitive. You know it's the sort of thing Sam says publicly all the time," he said.
Responding to Aschenbrenner's claims, an OpenAI spokesperson told Business Insider, "While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work."