Two of the biggest game publishers are teaming up to tackle in-game toxicity, leveraging artificial intelligence to do so.
Ubisoft, the developer of Assassin’s Creed, Rainbow Six Siege, Brawlhalla and more, and Riot Games, the developer of League of Legends, VALORANT, and other titles, announced their collaboration on “Zero Harm in Comms”, a research project to prevent harmful player interactions, on Thursday (17 November).
The two companies said Zero Harm in Comms was built to develop a "cross-industry shared database and labelling ecosystem" with real-time in-game data gathering, which should train AIs to better identify and prevent disruptive online behaviour.
"We agreed that the solutions that we can use today are not sufficient for the kind of player safety we have in mind for our players," said Yves Jacquier, Executive Director, Ubisoft La Forge in a media statement.
Both Riot Games and Ubisoft are active members of the Fair Play Alliance, a "global coalition of gaming professionals committed to developing quality games".
This alliance aims to make gamers feel safe and free from discrimination, harassment, and abuse.
"At Ubisoft, we have been working on concrete measures to ensure safe and enjoyable experiences, but we believe that, by coming together as an industry, we will be able to tackle this issue more effectively," said Jacquier.
He also said this partnership is being explored "to better prevent in-game toxicity".
Rainbow Six Siege, like most games, has a Code of Conduct that prohibits the use of "any language or content deemed illegal, dangerous, threatening, abusive, obscene, vulgar, defamatory, hateful, racist, sexist, ethically offensive, or constituting harassment".
However, many games do little to enforce these rules, allowing toxic behaviour to fester in their communities.
Still, Ubisoft has tried to implement measures to mitigate this ongoing problem.
In 2018, Ubisoft began automatically banning Rainbow Six Siege players who used toxic language. Despite the developers’ efforts to moderate in-game interactions, posts describing toxic community behaviour continue to pour in.
How Zero Harm in Comms will combat toxicity
The Zero Harm in Comms database is intended to cover a wide range of player types and in-game behaviours, drawing on Ubisoft's diverse portfolio and Riot's highly competitive titles.
This breadth should make the AI more effective at finding negative patterns and, ultimately, help minimise or eliminate such behaviour altogether.
The data, which consists of strings of text, is collected from chat logs in games developed by Ubisoft and Riot, then cleaned of personally identifiable information and other sensitive details.
Each string is then labelled by category (completely neutral, racist, sexist, and so on) to help train AI models to spot and understand potentially harmful behaviour on first encounter.
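The pipeline described above can be sketched in code. This is an illustrative example only: the actual Zero Harm in Comms tooling, label taxonomy, and anonymisation methods have not been made public, so the label set, function names, and regex-based PII scrubbing here are all assumptions.

```python
import re
from dataclasses import dataclass

# Hypothetical label set; the real Zero Harm in Comms taxonomy is not public.
LABELS = {"neutral", "racism", "sexism", "harassment"}

@dataclass
class LabeledSample:
    text: str   # chat string with PII removed
    label: str  # behaviour category for AI training

def scrub_pii(message: str) -> str:
    """Remove common personally identifiable details from a chat string.

    Illustrative only: real anonymisation pipelines are far more thorough.
    """
    # Replace email addresses with a placeholder token
    message = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "<EMAIL>", message)
    # Replace @player-handle mentions with a placeholder token
    message = re.sub(r"@\w+", "<PLAYER>", message)
    return message

def label_sample(message: str, label: str) -> LabeledSample:
    """Attach a behaviour label to a scrubbed chat string."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    return LabeledSample(text=scrub_pii(message), label=label)

# Example: a raw chat line becomes an anonymised, labelled training sample.
sample = label_sample("gg @xXSniperXx, email me at foo@bar.com", "neutral")
print(sample.text)   # gg <PLAYER>, email me at <EMAIL>
print(sample.label)  # neutral
```

Labelled samples like these would then feed a text classifier, which is the "training AI to spot harmful behaviour" step the companies describe.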
Wesley Kerr, Head of Technology Research at Riot Games, said that disruptive behaviour is common for any company with an online social platform, and that Riot recognises this is "a bigger problem than only one company can solve".
"That is why we’re committed to working with industry partners like Ubisoft who believe in creating safe communities and fostering positive experiences in online spaces," he added.
Kerr said that this effort is just one part of Riot's larger effort to build infrastructure that promotes positive, welcoming player experiences across all of their titles.
Riot's toxicity problem
League of Legends and VALORANT are notorious for having some of the most toxic communities in gaming.
LoL’s community was so toxic that Riot decided not to add voice chat for solo queue players.
Jordan 'BarackProbama' Checkman, Senior User Experience Design Manager at Riot Games, addressed the question on Reddit a year ago.
"Unfortunately, what we've seen time and time again is that the rich[er] the communication tool, the more powerful it is as a means for disruptive behaviour."
He added in that post that voice can be "more damaging and disruptive to marginalized groups," revealing more information on a player based on gender, accent, and dialect that can increase the severity of the harassment.
However, Riot Games launched VALORANT with in-game voice chat enabled for all players, sparking debate across the community and eventually leading the developer to monitor in-game voice chat.
VALORANT was ranked the most toxic online game for two consecutive years (2020-2021) in a study conducted by the Anti-Defamation League (ADL) in the US, with 80 per cent of players reporting harassment in 2020.
This was closely followed by Valve’s Dota 2. On the other hand, LoL ranked 11th on the survey, with a downward trend since 2019.
Riot has actively worked on AI systems that automatically detect negative behaviour, and has published a blog post on its approach to player dynamics, discussing the challenges of the task and how the company is addressing them.
"Making online communities more inclusive is an ongoing mission that will never be fully completed," Riot’s announcement on the partnership read.
"With that being said, by working together, we can make meaningful improvements. We are committed to sharing our learnings from the first phase of this initiative with the entire industry next year."
In 2020, ADL and the Fair Play Alliance released the Disruption and Harms in Online Gaming Framework to address harassment in online games.
Zero Harm in Comms is still in its early stages, having been in development for about six months. Both companies plan to share their findings and next steps with the rest of the gaming industry in 2023.
Anna is a freelance writer and photographer. She is a gamer who loves RPGs and platformers, and is a League of Legends geek. She's also a food enthusiast who loves a good cup of black coffee.