AI Risk Levels Being Suppressed, Claims Letter From OpenAI And Google DeepMind Employees

Former OpenAI and Google DeepMind employees have published an open letter that asks AI companies to allow employees to raise concerns about the technology without fear of retaliation.

The website Ars Technica reports that the letter, titled “A Right to Warn about Advanced Artificial Intelligence,” has been signed by 13 individuals, including some anonymous people fearing potential repercussions.

The signatories argue that AI’s risks range from the “further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

More concerning is the assertion that AI companies possess substantial non-public information about their systems’ capabilities, limitations, and risk levels. Currently, they have minimal obligations to share this information with governments, and none with civil society, Ars Technica reports.

The non-anonymous signers are former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright, and Daniel Ziegler, as well as former Google DeepMind employees Ramana Kumar and Neel Nanda.

The group requests four key principles: that companies not enforce agreements prohibiting criticism over risk-related concerns, facilitate an anonymous process for employees to raise such concerns, support a culture of open criticism, and not retaliate against employees who share confidential information after other processes have failed.
