OpenAI board responds to former members' warnings about self-governance, the Economist reports

FILE PHOTO: Illustration shows OpenAI logo

(Reuters) - OpenAI's board on Thursday pushed back against allegations from its former members that concerns over artificial intelligence safety at the startup necessitated Sam Altman's shocking ouster last year.

In an article published in the Economist, OpenAI's board members said the review of the events found that the previous board's decision did not arise from concerns over the pace of AI development or from statements made to the startup's investors, customers or business partners, among others.

"In six months of nearly daily contact with the company, we have found Altman highly forthcoming on all relevant issues and consistently collegial with his management team," it said.

Helen Toner and Tasha McCauley, who left the board in November when Altman returned as CEO, had told the Economist in an opinion piece on Sunday that they stood by the decision to dismiss Altman, given the board's duty to "provide independent oversight and protect the company's public-interest mission."

They also said that developments since their departure bode ill for OpenAI's experiment in self-governance, pointing to Altman's return to the Microsoft-backed startup's board, as well as the departure of senior safety-focused talent.

OpenAI's board, chaired by former Salesforce co-CEO Bret Taylor, said it agreed with Toner and McCauley's view that AI requires effective regulation and added that the ChatGPT maker has held talks with government officials on various issues surrounding generative AI.

OpenAI said on Tuesday it formed a safety and security committee that will be led by board members as it begins training its next AI model.

(Reporting by Zaheer Kachwala in Bengaluru; Editing by Maju Samuel)