
Call of Duty spies on its users with AI in search of 'white supremacists'

Activision has partnered with tech firm Modulate to create an artificial intelligence tool that scans players' conversations for "toxicity."

Image from the video game 'Call of Duty: Modern Warfare' (Flickr)


The video game industry has joined others in using artificial intelligence to monitor users. Activision, the company behind the Call of Duty saga, has partnered with technology firm Modulate, which uses artificial intelligence to monitor players' conversations in the popular franchise.

As revealed by PC Gamer, Activision and Modulate developed ToxMod. The tool, which entered testing on North American servers last week, is able to "identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more."

To do this, the company explains, the program analyzes both text messages and voice chats in video games such as Rec Room and now the Call of Duty saga. Specifically, ToxMod will be used in Call of Duty: Warzone and Call of Duty: Modern Warfare II. It is also expected to be included in Call of Duty: Modern Warfare III, the new installment that goes on sale in November worldwide, except in Asia.

Call of Duty restricts more than 1 million accounts

The tool has been running for almost a week. Meanwhile, Activision says in a blog post that, since the launch of Modern Warfare II, more than a million accounts have been restricted by Call of Duty's existing "anti-toxicity moderation":

Since the launch of Modern Warfare II, Call of Duty’s existing anti-toxicity moderation has restricted voice and/or text chat to over 1 million accounts detected to have violated the Call of Duty Code of Conduct. Consistently updated text and username filtering technology has established better real-time rejection of harmful language. In examining the data focused on previously announced enforcement, 20% of players did not reoffend after receiving a first warning. Those who did reoffend were met with account penalties, which include but are not limited to feature restrictions (such as voice and text chat bans) and temporary account restrictions. This positive impact aligns with our strategy to work with players in providing clear feedback for their behavior.
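To make the escalation described in the blog post concrete, here is a minimal sketch of how a tiered enforcement flow of that kind might look. The function names, tiers, and thresholds are illustrative assumptions, not Activision's actual system.

```python
# Hypothetical sketch of an escalating enforcement flow like the one
# Activision describes: warning first, then feature restrictions, then
# temporary account restrictions. Names and tiers are assumptions.

from dataclasses import dataclass, field

@dataclass
class Account:
    account_id: str
    violations: int = 0
    penalties: list[str] = field(default_factory=list)

def enforce(account: Account) -> str:
    """Apply the next penalty tier after a confirmed Code of Conduct violation."""
    account.violations += 1
    if account.violations == 1:
        penalty = "warning"                        # first offense: clear feedback only
    elif account.violations == 2:
        penalty = "voice_and_text_chat_ban"        # reoffense: feature restriction
    else:
        penalty = "temporary_account_restriction"  # repeated reoffense
    account.penalties.append(penalty)
    return penalty

player = Account("player-123")
print(enforce(player))  # -> "warning"
print(enforce(player))  # -> "voice_and_text_chat_ban"
```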

In addition, ToxMod is able to make complex distinctions thanks to the use of AI. According to the tool's developer, the program can "listen to conversational cues to determine how others in the conversation are reacting to the use of [certain] terms":

While the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities. If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation.
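The idea of weighing the same term differently depending on how listeners react can be sketched as a toy scoring function. The weighting scheme, score ranges, and clipping below are invented purely for illustration and do not reflect Modulate's actual model.

```python
# Toy illustration of context-sensitive severity scoring: the same flagged
# term gets a higher score when listeners react negatively and a lower one
# when the chat treats it as in-group usage. All numbers are assumptions.

def severity(base_score: float, listener_reactions: list[float]) -> float:
    """
    base_score: how severe the flagged term is in isolation (0..1).
    listener_reactions: per-listener scores; negative = offended,
                        positive = friendly or unbothered.
    """
    if not listener_reactions:
        return base_score
    avg_reaction = sum(listener_reactions) / len(listener_reactions)
    # Offended listeners (negative average) push severity up;
    # an unbothered or friendly chat (positive average) pulls it down.
    adjusted = base_score * (1.0 - avg_reaction)
    return max(0.0, min(1.0, adjusted))

# Same term, two contexts:
print(severity(0.8, [-0.9, -0.7]))  # listeners clearly offended -> clipped to 1.0
print(severity(0.8, [0.6, 0.8]))    # apparent reclaimed usage -> 0.24
```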

Collaboration with the Anti-Defamation League

Determining the context in which the n-word and other terms are used is not easy. That's why Modulate partnered with the Anti-Defamation League (ADL) in building ToxMod to recognize "white supremacists" and "alt-right extremists" who, according to the company's website, fall under the "violent radicalization" category:

Using research from groups like ADL, studies like the one conducted by NYU, current thought leadership, and conversations with folks in the gaming industry, we’ve developed the category to identify signals that have a high correlation with extremist movements, even if the language itself isn’t violent. (For example, “let’s take this to Discord” could be innocent, or it could be a recruiting tactic).
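The quote describes combining weak signals, none of which is conclusive on its own, before anything is flagged. A minimal sketch of that aggregation idea follows; the signal names, weights, and threshold are hypothetical stand-ins, not Modulate's real categories or numbers.

```python
# Illustrative sketch of aggregating weak "violent radicalization" signals:
# a single phrase like an off-platform invite stays below the review
# threshold, but correlated signals together get escalated.
# Weights and threshold are assumptions for illustration only.

SIGNAL_WEIGHTS = {
    "extremist_codeword": 0.6,
    "off_platform_invite": 0.2,   # e.g. "let's take this to Discord"
    "targeted_harassment": 0.5,
}

def radicalization_score(observed_signals: list[str]) -> float:
    """Sum the weights of observed signals, capped at 1.0."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)
    return min(score, 1.0)

REVIEW_THRESHOLD = 0.7  # assumed: below this, nothing is escalated

# An off-platform invite alone could be innocent...
print(radicalization_score(["off_platform_invite"]))                         # 0.2
# ...but combined with another correlated signal it crosses the threshold.
print(radicalization_score(["off_platform_invite", "extremist_codeword"]))   # 0.8
```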
