
Dangerous online content runs wild in the EU—and private chats aren't the culprit

Kids on Phones.

More than 1,700 websites in the EU may contain unreported child sexual abuse material (CSAM), a new report found.

These worrying findings come from a recent study conducted by experts at Surfshark. Researchers looked at the issue on a global scale, too, recording an increase in CSAM reports filed to authorities. There were about 83 million between 2020 and 2022, with EU countries accounting for 3.1 million reports.

This investigation comes a few days after a group of tech companies—Surfshark included—wrote an open letter urging EU ministers to withdraw a proposed anti-CSAM regulation that could enable authorities to scan all citizens' private communications for dangerous content. By looking at currently available tech solutions, the VPN service provider seeks to raise critical questions about how this growing problem can be addressed without infringing on people's privacy.

Children's online security at risk

"There may be thousands of unreported websites containing CSAM at any given moment. Our study estimates as many as 1,720 websites in the EU alone. It’s scary to think about how many websites containing CSAM are live at this moment in the rest of the world and have not yet been reported," Lina Survila, Surfshark spokesperson, told me when commenting on the findings.

As we mentioned, researchers at Surfshark examined the extent of the child exploitation issue online across the EU and around the world.

In Europe, Poland seems to have the biggest CSAM problem, as the country may account for 16% of EU cases (269 unreported local harmful websites). France follows with 260 potentially dangerous websites, Germany with 158, Hungary with 152, and Italy with 110.

Around the world, Asia leads concerns over children's online safety, accounting for two-thirds of the 83 million CSAM reports filed between 2020 and 2022. According to researchers, India accounts for almost 16% of these reports (over 13 million), followed by the Philippines with 7.1 million reports, Pakistan with 5.4 million, Indonesia and Bangladesh with 4.7 million each.

To compile this worrying set of data, researchers used 2020-2022 open-source information from the National Center for Missing and Exploited Children (NCMEC)—the US body that Big Tech companies are legally required to contact in these instances. These resources were then compared with data reported by the Communications Regulatory Authority of Lithuania (RRT). You can check more details of Surfshark's methodology here.

Tech innovation for privacy-preserving solutions

Perhaps the most important part of Surfshark's study lies in the RRT findings. In 2022, the national regulator carried out an experiment in partnership with proxy service provider Oxylabs to show how new tech can help fight CSAM in a privacy-preserving way.

The company developed a new AI-powered tool that scrapes the web to identify illegal content. It analyzes image metadata and checks for matches in the police database; images are then passed through a machine-learning model capable of detecting pornographic material.
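The two-stage pipeline described above—a lookup against a database of known material, followed by a machine-learning classifier—can be sketched roughly as follows. This is purely illustrative: the hash set, the stub model, and the use of SHA-256 are assumptions for the sketch, not details of Oxylabs' actual tool (real systems typically use perceptual hashes such as PhotoDNA rather than cryptographic ones):

```python
import hashlib

# Hypothetical stand-in for a police database of known-illegal image hashes.
# The entry below is a dummy value, not a real hash.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def classify_image(data: bytes, ml_model) -> str:
    """Two-stage check: (1) hash the image and look it up in the known-hash
    database; (2) if no match, fall back to an ML classifier."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return "known-match"  # would trigger an immediate report
    # Stage 2: hypothetical model returning a probability of abusive content
    score = ml_model(data)
    return "flagged" if score > 0.9 else "clear"

# Usage with a stub model that never flags anything:
result = classify_image(b"example image bytes", ml_model=lambda d: 0.0)
print(result)  # clear
```

The design mirrors why hash matching comes first: a database hit is near-certain evidence and needs no model inference, while the classifier only handles previously unseen material.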

The pro bono project lasted two months and scanned about 300,000 Lithuanian websites. The tool managed to identify 19 local websites violating national or EU laws, which led to eight police reports and two pre-trial investigations.

According to Survila, Oxylabs' experiment should serve as an example of how tech innovation can support authorities' efforts to halt child sexual abuse online. She told me: "While there is no one-size-fits-all solution, the proactive steps taken by some governments can serve as a guiding model for others in addressing these complex challenges."

Did you know?

In October last year, the EU Parliament reached a historic agreement, calling for the removal of the Chat Control clause from the EU Child Sexual Abuse Material (CSAM) Scanning Proposal. Reiterating that privacy is a fundamental right, the decision aims to safeguard online security and encryption. However, each EU Member State must now agree on its own position. Ministers expect to reach an agreement by March.

Currently being debated in the EU Parliament, the so-called Chat Control proposal appears to take a very different direction, one that experts warn could be detrimental to citizens' security.

They emphasized that client-side scanning of chats isn't just an attack on encryption that infringes on people's privacy; it may also open up a backdoor that criminals can exploit.

While this invasive approach may be intended to address wider dangers online, "an individual’s right to privacy should be non-negotiable, and such laws should not even be considered before employing every other possible tool to fight abusive material online," said Survila.

In fact, she believes the first step for governments should be trying out less invasive tools, like web scraping, to identify and fight publicly available dangerous material.

Denas Grybauskas, Head of Legal at Oxylabs, believes that the European Commission (EC) understands that such intrusion into citizens' privacy should be allowed only as a last resort. Still, he thinks it's crucial to discuss tech-powered alternatives in more detail.

"I hope that examples like [Oxylabs’ pro bono project] and a wider array of technological options will be openly discussed by the EC coming up with the regulation that might bring potential damage to the privacy of all EU citizens," he told me.

In the meantime, he said, the team at Oxylabs keeps working with RRT to improve its existing AI-powered web scraping tool. The company is also pursuing further initiatives with organizations, students, and researchers to develop more software solutions to today's online threats.

On this point, Grybauskas said: "We are always open to new partnerships with researchers, academia, and public organizations that want to solve critical research questions and missions using public web data."