Ubisoft and Riot Games are teaming up on a new research project that's intended to reduce toxicity in in-game chat.
The new project, called “Zero Harm in Comms,” will be broken up into two main phases. For the first phase, Ubisoft and Riot will try to create a framework that lets them share, collect, and tag data in a privacy-protecting way. It’s a critical first step to ensure that the companies aren’t keeping data that contains personally identifiable information, and if Ubisoft and Riot find they can’t do it, “the project stops,” Yves Jacquier, executive director at Ubisoft La Forge, said in an interview with The Verge.
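Neither company has detailed how that framework will work, but conceptually it means scrubbing identifying details from chat logs before anything is shared or stored. Here's a minimal Python sketch of what rule-based redaction might look like; the patterns and function names are hypothetical, and a production system would almost certainly pair rules like these with machine learning-based entity detection for things like player names:

```python
import re

# Hypothetical patterns; neither company has published its actual approach.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact_pii(message: str) -> str:
    """Replace anything matching a known PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label.upper()}]", message)
    return message

print(redact_pii("add me: player_one@example.com or call 555-123-4567"))
# -> "add me: [EMAIL] or call [PHONE]"
```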
Once that privacy-protecting framework is established, Ubisoft and Riot plan to build tools that use AI trained on those datasets to detect and mitigate "disruptive behaviors," according to a press release.
Traditionally, detecting harmful intent has relied on "dictionary-based technologies," which check messages against lists of flagged words and their variant spellings to decide whether a message might be harmful, according to Jacquier. With this partnership, Ubisoft and Riot are trying to use natural language processing to extract the general meaning of a sentence while also taking the context of the discussion into account, he said.
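To illustrate the gap Jacquier is describing: a dictionary matcher flags any message containing a listed word, no matter how it's used, while a model scores the sentence as a whole. Below is a rough Python sketch of the two approaches side by side. The word list is made up, and the open source unitary/toxic-bert model (loaded through Hugging Face's transformers pipeline) is just a public stand-in; neither Ubisoft nor Riot has said what models they're building:

```python
import re
from transformers import pipeline

# Dictionary-based approach: flag any message containing a listed word
# or variant spelling, with no sense of how the word is being used.
WORD_LIST = ["sick", "trash", "tr4sh"]  # illustrative entries only
DICTIONARY = re.compile(
    r"\b(?:" + "|".join(map(re.escape, WORD_LIST)) + r")\b", re.IGNORECASE
)

def dictionary_flag(message: str) -> bool:
    return DICTIONARY.search(message) is not None

# Contextual approach: a trained model scores the whole sentence.
# unitary/toxic-bert is a public stand-in, not what either company uses.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def model_flag(message: str, threshold: float = 0.5) -> bool:
    top = classifier(message)[0]  # top-scoring label and its confidence
    return top["label"] == "toxic" and top["score"] >= threshold

msg = "that new map is sick, I love it!"
print(dictionary_flag(msg))  # True: "sick" is on the list, a false positive
print(model_flag(msg))       # likely False: the model reads the full sentence
```

Even this only looks at one message at a time; the harder problem the research is chasing is factoring in the surrounding conversation, where the same sentence can read as friendly banter or harassment.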
The goal, if everything works well, is that players see fewer toxic messages in chats. Both companies operate huge multiplayer games, so they stand to gain a lot from reducing harmful messages in chat — if people feel safe playing their games, then they’re probably going to play more of them. And Riot already monitors voice comms as part of its efforts to combat disruptive behaviors.
But Jacquier stressed that this work is research, and “it’s not like a project that will be delivered at some point… it’s way more complex than that.” And as we’ve seen before, AI so far hasn’t proved to be the silver bullet for content moderation.
Ubisoft and Riot will share “the learnings of the initial phase of the experiment” sometime next year, “no matter the outcome,” according to the press release.