Before lawmakers regulate AI, they must define it—and that isn’t easy

OpenAI CEO Sam Altman testifies before Congress.

Regulators around the world are hammering out rules for artificial intelligence. The fast-moving, nebulous technology is hard to define, and that presents a hurdle for lawmakers.

What is AI?

AI is a catch-all term for applications that perform complex tasks humans have traditionally done. Machine learning is an important subset of AI that focuses on building systems that learn, or improve their performance, based on the data they consume. “I can see a lot of ambiguity in there, and vagueness that would play in different actions,” Aleksander Mądry, a professor at MIT, told Quartz.

The AI field is changing rapidly. It has moved from linear regression models used to make financial forecasts to large language models that can generate new content, and breakthroughs are coming quickly, raising concerns that laws will constrain the development of AI, Mądry said. Policymakers are not necessarily in the best position to decide which parts of AI need to be regulated, he added.
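
To make the contrast concrete: the linear-regression end of that spectrum can be just a few lines of code that fit a straight line to historical data. Here is a minimal, hypothetical sketch in Python using scikit-learn; the revenue figures are invented for illustration:

```python
# A minimal sketch of the "classic" machine learning the article mentions:
# fitting a linear regression to past data to make a financial forecast.
# All numbers below are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

quarters = np.arange(8).reshape(-1, 1)  # quarter index: 0, 1, ..., 7
revenue = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.8, 13.5, 14.1])  # $M

model = LinearRegression().fit(quarters, revenue)

# Forecast the next quarter by extrapolating the fitted line.
print(f"Next-quarter forecast: ${model.predict([[8]])[0]:.1f}M")
```

A large language model sits at the other end of that spectrum, with billions of learned parameters and no comparably inspectable logic; both count as "AI," which is part of the definitional problem.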

But that’s not to say policymakers need to define AI in order to regulate it, said Mądry, who testified before a House subcommittee in March. Even if you handed him an algorithm, he added, he couldn’t tell you how much of it was AI-generated and how much was human-written, reflecting the ambiguous nature of the technology. That makes writing a definition of AI into law a risky business. “We don’t want systems to lead to bad consequences, but that’s hard to do,” he said. “It’s really hard to be prescriptive about anything AI.”

How US and European regulators are approaching AI

The European Union’s AI Act (pdf), expected to be the world’s first comprehensive set of rules on AI, focuses on understanding what goes into AI models, including defining the data sources, the intended purpose of the AI systems, and the logic of the models. That’s tricky given the ambiguous nature of AI.

The US and Europe diverge on how to regulate AI: Europe is taking a more preventative approach, with government as the arbiter, while the US is leaning on the tech industry to come up with its own safeguards.

In May, US vice president Kamala Harris invited the CEOs of four major AI companies to discuss the responsible development of AI and to commit to having their AI systems evaluated in a way consistent with responsible-disclosure principles. Even so, Biden administration officials appear divided over how to regulate AI tools: some support guidance modeled on EU proposals, while others argue aggressive regulation would put the US at a competitive disadvantage, sources involved in the discussions told Bloomberg.

AI regulation should focus on outputs rather than inputs

But Mądry argues there is a way through: focus on the outcomes AI systems produce rather than on what goes into the algorithms. If an employer uses AI-driven hiring tools to help assess job candidates, for instance, and evidence emerges that those tools discriminated against a candidate, the law could focus on that outcome and on who is responsible for it. Regulators will also have to tackle new questions, such as disclosures around the use of AI, which have not been regulated before.
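
To make “focusing on outcomes” concrete, here is a minimal, hypothetical sketch of the kind of black-box audit a regulator could require for a hiring tool: it ignores the model’s internals and only compares selection rates across applicant groups, using the four-fifths rule of thumb from US employment guidelines. The decisions, group labels, and threshold here are illustrative assumptions, not anything described in the article:

```python
# Hypothetical outcome-focused audit of an AI hiring tool.
# The model is treated as a black box: only its decisions are checked,
# using the "four-fifths rule" heuristic from US employment guidelines.
# Group labels and decisions are invented for illustration.
from collections import Counter

# (group, was_selected) pairs recorded from the tool's decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = Counter(group for group, ok in decisions if ok)
total = Counter(group for group, _ in decisions)
rates = {group: selected[group] / total[group] for group in total}

# Adverse-impact ratio: lowest selection rate divided by the highest.
ratio = min(rates.values()) / max(rates.values())
verdict = "flags for review" if ratio < 0.8 else "passes"
print(f"Selection rates: {rates}")
print(f"Adverse-impact ratio: {ratio:.2f} ({verdict} under the 4/5 rule)")
```

The point, in Mądry’s framing, is that such a check needs no access to the model’s code or training data at all; responsibility attaches to the outcome.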

“We are worried about outcomes, so let’s talk about outcomes and not all the ways how we should go to avoid these outcomes,” said Mądry. “Because this is not the area of the expertise of the policymakers, and even engineers at the top companies don’t know either. It’s still a developing field of both the capabilities involved, and the ways to keep things in check.”
