Ubisoft and Riot are teaming up to tackle toxic online chat with AI, but big questions remain over how it’s going to work

“We know this problem cannot be solved in a vacuum,” Riot tell us

Several operators from Rainbow Six Siege stare dramatically at the camera.

Do you ever feel like multiplayer games would be better if other players were less abusive? Ubisoft and Riot Games are looking into training artificial intelligence to tackle bad behaviours within in-game chat, a research collaboration they’re calling Zero Harm In Comms. Ahead of their announcement today, I put some questions to Ubisoft La Forge’s executive director Yves Jacquier and Riot’s head of technology research Wesley Kerr to get some more insight on the joint project, and ask them exactly how their proposal will work.

Rainbow Six Siege: Ghosteyes Squad Teaser. Rainbow Six Siege is one of Ubisoft’s core multiplayer games.

After reading that, you’re probably wondering: “These companies are tackling toxicity? Really?” Ubisoft and Riot have their own histories of alleged inappropriate behaviour within their company cultures. Although both companies have said they’re committed to behavioural change, it could prove tough to win over players who are aware of their history. Though still in its early stages, Zero Harm In Comms is a co-operative attempt to answer a thorny issue that’s relevant across the industry, but it’s just one possible response to the problem of disruptive behaviour in chat.

Ubisoft and Riot are already both members of the Fair Play Alliance, with an existing shared commitment to creating fair, safe, and inclusive spaces amid the wilderness of online gaming, and Zero Harm In Comms is the way they’re choosing to try to handle the issue of toxicity in chat. The companies didn’t specify whether their research will cover text or voice chat, or both, but they say they’re aiming to “guarantee the ethics and privacy” of the initiative.

Ubisoft and Riot are hoping their findings can be used to create a shared database for the whole games industry to gather data from, and use that to train AI moderation tools to pre-emptively detect and respond to dodgy behaviour. To train the AI that’s central to the Zero Harm In Comms project, Ubisoft and Riot are drawing on chat logs from their respective diverse and online-focused games. This means their database should have broad coverage of the types of players and behaviours it’s possible to encounter when fragging and yeeting online. AI training isn’t infallible, of course; we all remember Microsoft’s AI chatbot, which Twitter turned into a bigot within a day, though that’s admittedly an extreme example.

Riot Games' Head Of Technology Research Wesley Kerr (left) and Ubisoft La Forge’s Executive Director Yves Jacquier (right)

Riot feel working with Ubisoft broadens what they can hope to achieve through the research, Kerr tells me. “Ubisoft has a large collection of players that differ from the Riot player base,” he says, “so being able to pull these different data sets would potentially allow us to detect the really hard and edge cases of disruptive behaviour and build more robust models.” Ubisoft and Riot haven’t approached any other companies to join in so far, but might in the future. “R&D is difficult and for two competitors to share data and expertise on an R&D project you need a lot of trust and a manageable space to be able to iterate,” Jacquier says.

As stated, the project revolves around AI, and improving its ability to interpret human language. “Traditional methods offer full precision but are not scalable,” Jacquier tells me. “AI is way more scalable, but at the expense of precision.” Kerr adds that, in the past, teams have based their efforts on using AI to target specific keywords, but that’s always going to miss some disruptive behaviour. “With the advancements in natural language processing and specifically some of the more recent large language models,” he says, “we are seeing them be able to understand more context and nuance in the language used rather than simply looking for keywords.”
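To make that contrast concrete, here’s a minimal Python sketch of the two approaches Kerr describes: a naive keyword filter sat next to a transformer-based toxicity classifier. The toy word list, the example messages, and the choice of the publicly available unitary/toxic-bert model are my own illustrative assumptions, not anything Ubisoft or Riot have confirmed they use.

```python
# Sketch only: a keyword filter versus a contextual classifier.
# The banned-word list and model choice are illustrative assumptions.
from transformers import pipeline

BANNED_WORDS = {"trash", "noob"}  # toy keyword list

def keyword_flag(message: str) -> bool:
    """Flags a message only if it contains an exact banned token."""
    return any(word in message.lower().split() for word in BANNED_WORDS)

# A transformer scores the whole utterance in context, so obfuscated
# spellings or insults that dodge the keyword list can still score high.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

for msg in ["you absolute tr4sh, uninstall", "gg, well played"]:
    result = toxicity(msg)[0]
    print(f"{msg!r}: keyword={keyword_flag(msg)}, "
          f"model={result['label']} ({result['score']:.2f})")
```

The keyword check misses the obfuscated spelling entirely, while the contextual model scores the full sentence and can still flag it.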

Project U is an upcoming session-based co-op shooter from Ubisoft.

Kerr breaks down the process of gathering and labelling data to train these NLP algorithms a bit more for me. “The data consists of player chat logs, additional game data, as well as labels indicating what type of disruptive behaviour is present if any,” he says. “Many of the labels are manually annotated internally and we leverage semi-supervised methods to add labels to examples where our models are quite confident that disruptive behaviour has occurred.” To pick up disruptive behaviour as successfully as possible, the NLP algorithm training will involve “hundreds or thousands of examples”, learning to spot patterns among them.
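The semi-supervised step Kerr mentions is commonly implemented as pseudo-labelling: train on the hand-annotated examples, then promote unlabelled lines the model scores with high confidence into the training set. Here’s a hedged sketch of that loop; the toy chat lines, TF-IDF features, and 0.95 confidence threshold are all assumptions for illustration, not details from the project.

```python
# Pseudo-labelling sketch: hand-annotated chat lines seed a model, and
# unlabelled lines it is very confident about join the training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled = [
    ("report mid, total garbage human", 1),   # 1 = disruptive
    ("nice ult, we've got this", 0),          # 0 = fine
    ("go back to the tutorial, idiot", 1),
    ("gl hf everyone", 0),
]
unlabelled = ["actual garbage human tbh", "good luck in the next round"]

texts = [t for t, _ in labelled]
labels = [y for _, y in labelled]

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Promote only the examples the model is "quite confident" about.
CONFIDENCE = 0.95
for line, probs in zip(unlabelled,
                       model.predict_proba(vec.transform(unlabelled))):
    if probs.max() >= CONFIDENCE:
        texts.append(line)
        labels.append(int(probs.argmax()))  # pseudo-label

# Retrain on the enlarged set; in practice the loop repeats until no
# new examples clear the confidence bar.
model = LogisticRegression().fit(vec.transform(texts), labels)
```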

Of course, another elephant in the room here is the player. Any time we go online, we open ourselves up to the risk of bad interactions with other people, anonymous or otherwise. I asked Jacquier and Kerr how they thought players would react to AI judging their in-game convos. Jacquier acknowledged that it’s just a first step to tackling toxic spaces in the industry. “Our hope is that our players will gradually notice a meaningful positive shift in online gaming communities where they see less disruptive behaviour,” he said. Kerr added that he hoped players can understand that it takes time for projects such as Zero Harm In Comms to change behaviour in a meaningful way. Maybe players could just try being nice to each other, as former Overwatch director Jeff Kaplan once suggested?

Online games such as League Of Legends are Riot Games' bread and butter.

Although neither Jacquier nor Kerr discussed what will actually happen to players once their AI-based tools have detected disruptive behaviour, the eventual results of the Zero Harm project “won’t be something that players see overnight”. The research is only in the early data-gathering phase, and a way off from entering its second phase of actually using that data to better detect disruptive behaviour. “We’ll ship it to players as soon as we can,” Kerr tells me. Zero Harm In Comms is still in its infancy, but both Ubisoft and Riot hope the research will eventually have far-reaching, and positive, results to share with the entire games industry and beyond. “We know this problem cannot be solved in a vacuum,” Kerr says, and Jacquier agrees: “It’s 2022, everyone is online and everyone should feel safe.”

That said, it’s not yet certain whether the research project will even have anything meaningful to report, Jacquier points out. “It is too soon to decide how we will share the results because it depends on the outcomes of this first phase,” he says. “Will we have a successful framework to enable cross industry data sharing? Will we have a working prototype?” Regardless of how the project turns out, the companies say they’ll be sharing their findings next year.