Ubisoft and Riot Games are working together to combat toxic chats

The two gaming giants are collaborating on a research project with a goal of using AI to detect disruptive behaviors.

By Jay Peters / @jaypeters

Ubisoft and Riot Games are teaming up on a new research project that’s intended to reduce toxic in-game chats.

The new project, called “Zero Harm in Comms,” will be broken up into two main phases. For the first phase, Ubisoft and Riot will try to create a framework that lets them share, collect, and tag data in a privacy-protecting way. It’s a critical first step to ensure that the companies aren’t keeping data that contains personally identifiable information, and if Ubisoft and Riot find they can’t do it, “the project stops,” Yves Jacquier, executive director at Ubisoft La Forge, said in an interview with The Verge.
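Neither company has said how that data-sharing framework will work under the hood. As a rough, hypothetical sketch of the kind of scrubbing a shared chat-log dataset might need before tagging (the patterns and function names here are illustrative, not anything Ubisoft or Riot has described), in Python:

import re

# Illustrative only: a minimal pass that replaces obvious personally
# identifiable information in a chat message with placeholders before the
# message enters a shared, taggable dataset. The patterns are examples, not
# a complete definition of PII.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scrub_message(text: str) -> str:
    """Replace matched PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(scrub_message("add me, my email is player@example.com"))
# -> "add me, my email is <email>"

A real pipeline would also have to catch usernames, deliberate misspellings, and anything else that could identify a player, which hints at why this phase can stop the whole project.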

Once that privacy-protecting framework is established, Ubisoft and Riot plan to build tools that use AI trained on those datasets to try to detect and mitigate “disruptive behaviors,” according to a press release.

Traditionally, detecting harmful intent has relied on “dictionary-based technologies,” where a list of words, including their various alternate spellings, is used to determine whether a message might be harmful, according to Jacquier. With this partnership, Ubisoft and Riot are trying to use natural language processing to extract the general meaning of a sentence while also taking the context of the discussion into account, he said.
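Neither company has said which models it will use, but as a minimal sketch of the difference Jacquier describes: the first function below flags a message whenever a listed word (or known misspelling) appears, regardless of context, while the second hands the whole sentence to a pretrained classifier. The word list, the Hugging Face transformers usage, and the unitary/toxic-bert model are assumptions made purely for illustration, not anything Ubisoft or Riot has announced:

import re
from transformers import pipeline

# Dictionary-based approach: flag a message if any listed word or known
# misspelling appears, with no notion of context. The word list is hypothetical.
BLOCKLIST = {"trash", "tr4sh", "noob"}

def dictionary_flag(message: str) -> bool:
    tokens = re.findall(r"\w+", message.lower())
    return any(token in BLOCKLIST for token in tokens)

# Context-aware approach: a trained model scores the whole sentence, so
# "you're trash, uninstall" and "my aim was trash that round" can come out
# differently. The model choice and its label names are example assumptions.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def model_flag(message: str, threshold: float = 0.5) -> bool:
    result = classifier(message)[0]
    return result["label"].lower() == "toxic" and result["score"] >= threshold

print(dictionary_flag("my aim was trash that round"))  # True: word match, context ignored
print(model_flag("my aim was trash that round"))       # typically False: self-deprecating

The harder step, which a sentence-level sketch like this doesn’t capture, is folding in the surrounding discussion rather than judging each message in isolation.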

The goal, if everything works well, is that players see fewer toxic messages in chats. Both companies operate huge multiplayer games, so they stand to gain a lot from reducing harmful messages in chat — if people feel safe playing their games, then they’re probably going to play more of them. And Riot already monitors voice comms as part of its efforts to combat disruptive behaviors.

But Jacquier stressed that this work is research, and “it’s not like a project that will be delivered at some point... it’s way more complex than that.” And as we’ve seen before, AI so far hasn’t proved to be the silver bullet for content moderation.

Ubisoft and Riot will share “the learnings of the initial phase of the experiment” sometime next year, “no matter the outcome,” according to the press release.
