UPDATE: President Joe Biden signed an executive order on AI, billing the U.S. as being out front of other countries in establishing guardrails around the fast-emerging technology.
But Biden told those gathered in the East Room that other steps will require congressional action. That will be a much more complicated process, as lawmakers have been in a stalemate for years when it comes to any meaningful action on tech giants.
Speaking to reporters after the White House ceremony, Senate Majority Leader Chuck Schumer (D-NY) said of his effort to craft a bill: “This is about the hardest thing I have attempted to undertake legislatively because a) it is so complicated, and b) it affects every aspect of society.” Schumer said that he and the rest of a bipartisan group of lawmakers will meet with Biden on Tuesday to talk about the legislation.
Biden specifically mentioned the need for legislation to stop tech firms from collecting minors’ data online, as well as a bill to ban the targeting of online advertisements to children. But similar bills proposed in the past have failed to advance in Congress.
The executive order also calls on the federal government to come up with best practices for employers to “mitigate the harms and maximize the benefits” of AI, though it’s unclear whether that would be in any way enforceable. It’s especially relevant to the current labor unrest in Hollywood, as AI remains a significant issue in SAG-AFTRA’s talks with the studios, which want flexibility in their use of the technology.
In his remarks before the East Room crowd — which included podcast hosts Kara Swisher and Scott Galloway — Biden joked a bit about AI deepfakes. “I watched one of me. I said, ‘When the hell did I say that?’ But all kidding aside, a three-second recording of your voice to generate an impersonation good enough to fool your family — or you. I swear to God. Take a look at it. It’s mind blowing. And they can use it to scam loved ones into sending money because they think you are in trouble.”
The executive order does not require AI companies to label AI-generated content, but directs the Department of Commerce to develop standards for authentication and watermarking.
PREVIOUSLY: AI companies will be required to share their safety test results with the U.S. government as part of President Biden’s new executive order, designed to mitigate the risks of the emerging technology.
The White House unveiled a series of steps that Biden is taking amid fears that unchecked AI systems will pose dangers to safety and security, as well as fuel misinformation.
A big concern is that AI will unleash a wave of “deepfakes,” video and audio that can spread widely on social media even though they are not real.
The executive order does not require that AI generated content be labeled as such, but it does direct the Department of Commerce to develop standards for authentication and watermarking. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from the government are authentic — and set an example for the private sector and governments around the world,” the White House said.
Biden and Vice President Kamala Harris will appear at a ceremony today at the White House to outline the executive order.
Bruce Reed, White House deputy chief of staff, said in a statement that the actions in the EO are the strongest “any government in the world has ever taken on AI safety, security, and trust. It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks.”
Other aspects of the executive order:
Test results. The administration, invoking the Defense Production Act, said that the order “will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.”
Safety standards. The National Institute of Standards and Technology will set standards for “red-team testing” before an AI system is released. Federal agencies will apply those standards to infrastructure and national security. New standards also will be developed for biological synthesis screening.
Privacy. The order requires prioritizing federal support for the development of “privacy-preserving techniques,” such as those that enable AI systems to be trained yet “preserve the privacy of the training data.” The order calls for funding a Research Coordination Network to develop cryptographic tools. Federal agencies also will look to strengthen their privacy protections in light of the risks of AI.
Civil rights. The order requires guidance to landlords, federal benefits programs and federal contractors to keep AI algorithms “from being used to exacerbate discrimination.” The Justice Department and federal civil rights offices also will establish best practices for the use of AI in the criminal justice system.
Labor. The order mandates the creation of principles and best practices to “mitigate the harms and maximize the benefits” of AI for workers. This will include guidance to try to prevent employers from “undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.” The order also requires a report on the potential labor market impacts of AI. AI’s impact on labor has been an issue in the recent WGA strike and ongoing SAG-AFTRA strike.
Competition. Small developers will be provided access to technical assistance and resources, while the Federal Trade Commission will be encouraged to “exercise its authorities” as concerns are raised about antitrust and competition.
Biden also is expected to call on Congress to take further action, including in areas like data privacy. Lawmakers have for years held hearings and proposed data privacy legislation, but nothing has advanced. Senate Majority Leader Chuck Schumer (D-NY), though, has recently been convening a series of “AI Insight Forums,” with an eye toward sweeping legislation.
The executive order does not address copyright, even as a number of authors and publishers have sued OpenAI and Meta for infringement over the use of protected works to train models. That leaves open the possibility that a judge or jury will ultimately decide the parameters of “fair use” of copyrighted material.
Last summer, Biden gathered a number of AI executives at the White House to sign a series of voluntary pledges, including a promise to create watermarking tools and another to allow independent experts to review their systems. Because the pledges were voluntary, it’s not entirely clear that the federal government, through the FTC, would be able to enforce them. Among the companies signing were OpenAI, Amazon, Microsoft, Google and Meta.