Disinformation probes - too late to make a difference?

Meta’s Facebook and Instagram last week (30 April) became the subject of an investigation under the EU’s online platform rules over seemingly inadequate measures to counter Russian disinformation, the latest in a series of probes into platforms’ compliance with the Digital Services Act (DSA).

Baltic and Nordic NGOs and researchers have painted a bleak picture of big tech's efforts to stop pro-Kremlin trolls, and doubt whether these probes will halt ongoing disinformation campaigns aimed at undermining NATO and the EU while bolstering Russia’s credibility.

Faktabaari, a Finnish fact-checking and digital literacy service, has been tackling disinformation since the 2014 EU elections. Its founder Mikko Salo told Euronews that the EU has done a lot of pioneering work in this field, but said that the DSA probes come “a bit late” to make a difference ahead of the European elections in June.

“I believe these proceedings will take time and elections are taking place in a month. They are, however, useful to enforce current policies and build even better ones for the 2025-2029 mandate and to safeguard the upcoming US elections,” Salo said, assuming that the US would look to the EU for some best practices on election integrity.

Lukas Andriukaitis, one of the co-founders of the Civic Resilience Initiative (CRI), a Lithuanian NGO focused on countering disinformation, echoed these comments.

“It’s a bit of a double-edged sword; we have been supporting the DSA from the very start, as I think this is a very effective, if not the only, way to motivate the social media platforms to take action,” Andriukaitis said.

He added that the NGO community has been ringing alarm bells about what is happening on online platforms for a while. “Despite the platforms putting in some effort, it was not enough, and they were definitely focused on showing themselves in a positive light. We have seen instances in Lithuania where the government held harsh semi-public hearings with Meta, after which some positive changes took place,” he said.

EU efforts

Figures published last year by polling agency Ipsos show that levels of disinformation from Russia have risen steeply in recent years, in particular within the Baltic states – Estonia, Lithuania and Latvia – where one in four citizens says they are heavily exposed to disinformation.

Finland, for example, saw a lot of fake news around its NATO membership – prior to joining the Alliance in April 2023 – mostly trying to convince users that Russia does not pose a threat to Finland, Disinfo Lab said in a report.

Solvita Denisa-Liepniece, a Latvian researcher in the field of cognitive security, told Euronews that it “took a decade to understand how to deal with Big Tech”, and while there is progress in regulating “Western” tech giants, social media consumption has meanwhile changed significantly.

“In the Baltic States, for example, we observe an increased use of TikTok and Telegram, across different ages and audience profiles,” she said.

During the mandate of the von der Leyen Commission, there have been several efforts to clamp down on disinformation campaigns. One example is the strengthened Code of Practice on Disinformation, drawn up in 2022 and signed by 34 companies – including Google and TikTok – that committed to tackling fake information online.

Another attempt to strengthen the basic conditions for free and independent media – the European Media Freedom Act – was approved by EU lawmakers in March. The Act will oblige EU governments to better protect media against malign interference and limit the use of spyware against journalists. Lithuania’s Culture Minister Simonas Kairys stressed back in November the need for such rules, as the European information space is under “intense attack every day and every hour by Russian war propaganda and disinformation.”

DSA

The Digital Services Act (DSA) appears to offer a remedy: the Commission has already launched six investigations into platforms’ suspected non-compliance since the rules started applying in August.

Under the DSA, companies designated as a Very Large Online Platform (VLOP) – those with more than 45 million average monthly users in the EU – must abide by strict rules, such as transparency requirements and the protection of minors online. Besides the Meta probes, X is also subject to an investigation over its handling of content related to the Israel-Hamas war.

Faktabaari’s Salo said that while these EU actions have raised awareness of disinformation, the fake news challenge itself has grown through rapid technological developments such as generative AI.

“Russia's full-scale invasion of Ukraine has 'unveiled the masks' in many countries, enabling better and more direct discussions on relevant issues, including the weaponization of social media platforms by foreign and domestic actors. We should keep close track of this, also independently of state actors,” Salo said.

He added that the proof of the pudding will be in the implementation. “It’s important to keep up the pressure, raise citizen awareness and keep the platforms accountable for their actions and words,” he said.

Latvian researcher Denisa-Liepniece said that mitigation should also be comprehensive. “We should not only limit disinformation efforts, but also go beyond fact-checking and focus on strengthening people’s understanding of information processing,” she said.

Content moderators

In a bid to counter fake news, Meta already works with independent fact-checking organisations across Europe – in Finland, for example, with press agency AFP, and in Estonia, Latvia and Lithuania with Delfi and Re:Baltica.

Ahead of the June vote, the tech giant said that it was setting up its own operations centre for the elections “to identify potential threats and put mitigations in place in real time”. In a separate statement, Facebook’s parent company said that it planned to start labelling AI-generated content in May 2024.

However, the number of content moderators for Baltic and Nordic languages remains low, despite the threat of Russian disinformation. Facebook has three employees reviewing content in Estonian, two in Latvian, six in Lithuanian and 15 in Finnish; the company says much of the process is automated. By comparison, the platform has about 226 people reviewing French content, 54 Dutch and 242 German.

A recent (1 May) report published by the European Digital Media Observatory, an independent network of fact-checkers and researchers, indicates that Russia’s disinformation campaigns are still very much alive.

It alleged a wide-ranging Russian disinformation campaign run under the name of the former news outlet Pravda, using websites in local EU languages that cite Russian state-owned media, such as Tass and RIA, and often quote pro-Russian Telegram accounts. The websites post hundreds of articles an hour using AI-based models, according to the report.

Meta said in its EU election preparation statement that it had signed an industry-wide tech accord, alongside companies such as Google, Amazon and Snapchat, to combat the spread of deceptive AI content in the 2024 elections. “This work is bigger than any one company and will require a huge effort across industry, government, and civil society,” Meta said.