OpenAI Shakeups Breed Growing Concerns Over The Tech’s Safety

“I’m sorry, Dave. I’m afraid I can’t do that.”

That famous line, spoken by HAL 9000, the rogue computer in 2001: A Space Odyssey, has become an uncomfortable reminder for many that AI needs a firm hand. And recent developments along those lines are not encouraging.


A shake-up at OpenAI that has seen the departure of two key executives and the dissolution of its safety team has observers worried about the corporate turmoil.

OpenAI chief scientist Ilya Sutskever announced on X on Tuesday that he was leaving the company. Later that same day, his colleague Jan Leike also departed.

Sutskever and Leike led OpenAI’s Superalignment team, which focused on developing AI systems compatible with human interests.

“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote on X on Friday.

In posts on X, cofounder Sam Altman called Sutskever “one of the greatest minds of our generation” and said he was “super appreciative” of Leike’s contributions. He also conceded that Leike had a point: “We have a lot more to do; we are committed to doing it.”

After news of the departures circulated, many observers, already nervous about AI’s potential hazards, began to question whether things will spin out of control with the safety team gutted.

CEO Altman and President Greg Brockman moved today to quell the alarms.

In a lengthy post on X signed by both men, Brockman laid out how OpenAI will approach safety and risk moving forward, noting the measures the company has already taken to ensure the technology’s safe development and deployment.
