
More OpenAI drama: Exec quits over concerns about focus on profit over safety

By Clare Duffy, CNN

New York (CNN) — A departing OpenAI executive focused on safety is raising concerns about the company on his way out the door.

Jan Leike, who resigned from his role leading the company’s “superalignment” team this week, said in a thread on X Friday that he disagreed with OpenAI leadership’s “core priorities” and had “reached a breaking point.”

“Alignment” and “superalignment” are terms used in the artificial intelligence field to refer to work on training AI systems to operate within human needs and priorities. Leike joined OpenAI in 2021, and last summer the company announced that he would co-lead the Superalignment team, focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

However, Leike said Friday that in recent months the team had been under-resourced and “sailing against the wind.”

“Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” he said on X, adding that Thursday was his last day at the startup. “Building smarter-than-human machines is an inherently dangerous endeavor … But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike’s exit, which he announced Wednesday, comes amid a broader leadership shuffle at OpenAI. His resignation followed Tuesday’s announcement that OpenAI co-founder and Chief Scientist Ilya Sutskever, who also helped lead the superalignment team, would leave the company.

Sutskever said he was leaving to work on a “project that is very personally meaningful to me.” But his exit was notable given the central role he played in the dramatic firing — and return — of OpenAI CEO Sam Altman last year, when he voted to remove Altman as chief executive and chairman of the board.

CNN contributor Kara Swisher previously reported that Sutskever had been concerned that Altman was pushing AI technology “too far, too fast.” But days after Altman’s ouster, Sutskever had a change of heart: He signed an employee letter calling for the entire board to resign and for Altman to return.

Still, questions about how — and how quickly — to develop and publicly release AI technology may have continued to cause tension within the company in the months after Altman regained control of the firm. The executive exits come after OpenAI announced this week that it would make its most powerful AI model yet, GPT-4o, available for free to the public through ChatGPT. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations.

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike wrote in his X thread on Friday. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

Asked for comment on Leike’s claims, OpenAI directed CNN to an X post from Altman saying the company is committed to safety.

“i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said. “he’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”

–CNN’s Samantha Delouya contributed to this report.

