
"Take anti-tank missile as much as you need" — Amazon researchers find that a massive amount of the open web is just AI-produced, machine-translated nonsense


Researchers at the AI lab of Amazon Web Services (AWS) have discovered that a large amount of online content comes from machine-translated (MT) sources.

This content, which is translated across many different languages, is frequently of low quality, which the team says highlights the critical need for data quality and source consideration when training large language models (LLMs).

The researchers also found that machine-translated content is especially common in lower-resource languages, where it makes up a significant portion of all web content.

Selection bias

“We actually got interested in this topic because several colleagues who work in MT and are native speakers of low resource languages noted that much of the internet in their native language appeared to be MT generated,” Mehak Dhaliwal, a former applied science intern at AWS and current PhD student at the University of California, Santa Barbara, told Motherboard.

“So the insight really came from the low-resource language speakers, and we did the study to understand the issue better and see how widespread it was.”

The team developed a vast resource known as the Multi-Way ccMatrix (MWccMatrix) to better understand the features of content translated by machines. This resource contains 6.4 billion unique sentences in 90 different languages and includes translation tuples, which are sets of sentences in various languages that are translations of one another.
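The translation tuples described above can be pictured as groups of mutually translated sentences keyed by language. The sketch below is purely illustrative, not the AWS team's actual code; the record layout and function names are assumptions made for the example.

```python
# Illustrative sketch of multi-way translation tuples, the structure the
# MWccMatrix resource is described as containing: each tuple groups
# sentences in different languages that are translations of one another.
# The data and function names here are hypothetical, not from the paper.

from collections import defaultdict

# Toy parallel records: (language code, sentence, tuple id). In the real
# corpus, tuples are mined by aligning sentences across crawled web pages.
records = [
    ("en", "The weather is nice today.", 0),
    ("fr", "Il fait beau aujourd'hui.", 0),
    ("de", "Das Wetter ist heute schoen.", 0),
    ("en", "Where is the station?", 1),
    ("es", "Donde esta la estacion?", 1),
]

def build_tuples(records):
    """Group sentences by tuple id into {language: sentence} mappings."""
    tuples = defaultdict(dict)
    for lang, sentence, tuple_id in records:
        tuples[tuple_id][lang] = sentence
    return dict(tuples)

tuples = build_tuples(records)
# The more languages a sentence appears in, the larger its tuple — the
# study reports that highly multi-way content skews toward MT output.
print(len(tuples[0]))  # → 3
```

Counting tuple sizes this way is what lets researchers measure how "multi-way" a piece of content is, which the study links to machine translation and low quality.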

The study, which was submitted to the pre-print server arXiv, found that vast amounts of web content are translated into numerous languages, mostly by machine translation. This content is not only prevalent in translations into lower-resource languages but also makes up a significant portion of all web content in those languages.

The researchers additionally noticed a selection bias in the kind of content that's translated into multiple languages, likely for the purpose of generating ad revenue.

The paper concludes that “MT technology has improved dramatically over the last decade, but still falls short of human quality. MT content has been added to the web over many years using MT systems available at the time, so much of the MT on the web is likely very low quality by modern standards. This could produce less fluent LLM models with more hallucinations, and the selection bias indicates the data may be of lower quality, even before considering MT errors. Data quality is crucial in LLM training, where high quality corpora like books and Wikipedia articles are typically upsampled several times.”
