Ilya Sutskever, the Russian-born Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, June 5, 2023.
OpenAI co-founder Ilya Sutskever, who left the artificial intelligence startup in May, has raised $1 billion from investors for his new AI company, Safe Superintelligence (SSI).
The company announced in a post on X that investors included Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, as well as NFDG, an investment partnership co-run by SSI executive Daniel Gross.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” Sutskever wrote on X in June when announcing the new venture.
Sutskever was OpenAI’s chief scientist and co-led the company’s Superalignment team with Jan Leike, who also left in May to join rival AI firm Anthropic. Shortly after their departures, OpenAI disbanded the team, just one year after it announced the group. Some of the team members were reassigned to other teams within the company, a source familiar with the situation told CNBC at the time.
Leike wrote in a post on X at the time that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Sutskever started SSI with Daniel Gross, who previously oversaw Apple’s AI and search efforts, and former OpenAI employee Daniel Levy. The company has offices in Palo Alto, California, and Tel Aviv, Israel.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the company posted on X. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever was one of the OpenAI board members involved in the temporary ouster of co-founder and CEO Sam Altman in November 2023.
At the time, OpenAI’s board said in a statement that Altman had not been “consistently candid in his communications with the board.” The issue quickly came to look more complex: The Wall Street Journal and other media outlets reported that Sutskever had focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.
Almost all of OpenAI’s employees signed an open letter saying they would leave in response to the board’s action. Days later, Altman was back at the company.
Following Altman’s sudden ouster and before his quick reinstatement, Sutskever publicly apologized for his role in the ordeal.
“I deeply regret my participation in the board’s actions,” Sutskever wrote in a post on X on Nov. 20. “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”