Feature Spotlight: Contextual Chat

What do we talk about when we talk about Chat Moderation?

Picture this: the year is 1997, and you’ve just logged into Ultima Online, one of the first massively multiplayer online role-playing games (MMORPGs). Suddenly, you’re in a sprawling virtual world, brimming with knights, sorcerers, and dragons, but most excitingly — other players, just like you, hailing from every corner of the world.

This was the dawning age of multiplayer gaming with the ability to chat, and it was nothing short of revolutionary. For the first time, we could form parties with our friends, trade tales (and gear) with strangers, and really inhabit these virtual realms. The feeling was electric. Gaming was no longer a solitary pursuit; it was a social, shared experience.

But with the uncharted territory came unforeseen challenges. How long do you think it took for someone to write “penis” in chat? If you guessed less than two minutes, you were right: it took one minute and 23 seconds. And the chat environment devolved into trolling, scamming, and trash-talking from there.

When chat-enabled multiplayer gaming exploded in the late 1990s, it brought the promise of more fun, more connection, and genuine friendships forged between strangers who might never otherwise have met. And while all of that certainly happened, it also brought trolling, mean-spirited behavior, scamming, and all the other toxicity we associate with the anonymous internet, along with gaming-specific triggers, like losing a competitive match, that made players lose their cool.

Games introduced technologies to handle this toxicity, including player reports, message filtering, keyword-based moderation, and eventually phrase-level toxicity detection. While these helped, they were easy to circumvent and slow to implement, which meant that the small minority of toxic players (under 5% of most communities) could poison the experience for the majority who were just trying to have fun. Without the context a human has when reviewing a chat log, these systems simply couldn’t catch most problems, and yet they remain the main approach used today.

While filtering and human chat moderation certainly help, the problems with toxicity in chat, especially for large games, are truly beyond human scale. Among our customers, anywhere between 1% and 10% of chat in a game can be toxic, depending on genre, gameplay characteristics, and the community that game has fostered.

The basic “filter and human review” approach just can’t solve a problem that size. At GGWP, we believe that the best way to catch toxicity is the way a human moderator would – by looking at the context of the conversation.

The way GGWP tackles the problem with chat is five-fold: we look at each message, the conversation it’s in, the game context surrounding it, the reputation of the player who sent it, and the relationships between the players involved. This makes it possible both to act in the moment, muting players who are becoming toxic, and to help systems and moderators accurately and efficiently sanction the right players.
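To make that concrete, here is a minimal sketch in Python of how the five layers might feed one moderation decision. All field names, event names, and threshold values are illustrative assumptions for this post, not GGWP’s actual schema or parameters.

```python
from dataclasses import dataclass

# Illustrative only: the five context layers described below, gathered into
# one structure. Names and thresholds are assumptions, not GGWP's real schema.

@dataclass
class ModerationContext:
    message: str                  # 1. the message itself
    conversation: list[str]       # 2. the surrounding messages
    game_events: list[str]        # 3. recent in-game events, e.g. "match_lost"
    sender_reputation: float      # 4. long-term reputation score, 0.0 to 1.0
    relationship: str             # 5. "friend", "acquaintance", "stranger", ...

def should_flag(ctx: ModerationContext, base_score: float) -> bool:
    """Combine a per-message toxicity score (0.0-1.0) with the wider context
    by raising or lowering the effective flagging threshold."""
    threshold = 0.8
    if ctx.relationship == "friend":
        threshold += 0.10         # banter between friends gets more leeway
    if ctx.sender_reputation < 0.3:
        threshold -= 0.15         # repeat offenders are flagged sooner
    if "match_lost" in ctx.game_events:
        threshold -= 0.05         # tilt-prone moments warrant closer attention
    return base_score >= threshold
```

In practice each layer would be a learned model rather than a hand-set offset; the point is simply that the same message can warrant different actions in different contexts. The sections below look at each layer in turn.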

1. Message level

At the most basic level, GGWP’s system scans each individual message sent in chat. Our AI uses a range of technologies, including natural language processing, age-specific wordlists, sentiment analysis, and sophisticated transformer models, to identify potentially harmful content in a single message and assess its severity.

This includes not only direct insults or explicit language but also coded language, slang, and even veiled threats or harassment. We aim to catch gamer-specific language and subtle insults that traditional tools might miss, understanding that toxicity can take many forms beyond just explicit language.
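As a rough illustration of message-level scoring, the sketch below pairs a simple wordlist check with an off-the-shelf transformer classifier. GGWP’s models and wordlists are proprietary; the public unitary/toxic-bert model and the placeholder wordlist here are stand-ins, not what GGWP actually runs.

```python
from transformers import pipeline

# Stand-in components: a placeholder wordlist and a public toxicity model.
AGE_RESTRICTED_TERMS = {"example_banned_term"}  # placeholder wordlist entry
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def score_message(text: str) -> dict:
    """Score a single chat message and bucket its severity."""
    wordlist_hit = any(term in text.lower() for term in AGE_RESTRICTED_TERMS)
    # The pipeline returns the top label with its score; for this model all
    # labels are toxicity-related, so the top score works as a rough estimate.
    model_score = classifier(text)[0]["score"]
    score = 1.0 if wordlist_hit else model_score
    severity = "high" if score > 0.9 else "medium" if score > 0.6 else "low"
    return {"score": score, "severity": severity}
```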


2. Conversation level

Looking at individual messages is just the starting point. To fully comprehend the meaning and tone of a message, it’s crucial to understand the context in which it’s sent. GGWP’s system takes into account the entire conversation surrounding a potentially toxic message.

Conversation-level context includes a variety of factors: what each player has said so far, the mood of each player and of the conversation as a whole, and any turning points that mark a shift toward toxicity, which may need to be de-escalated with a mute or a warning to one or more players.

By assessing past and subsequent messages, the AI is better able to detect sarcasm, innuendo, or heated arguments that might escalate into toxicity, and to take action before they do.
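One simple way to operationalize this, sketched below under our own assumptions, is to keep a rolling window of per-message toxicity scores for each conversation and watch for a sustained upward drift (a turning point), rather than judging any single line in isolation. The window size and escalation threshold are illustrative.

```python
from collections import deque

WINDOW = 10        # messages of context to retain (illustrative)
ESCALATION = 0.25  # rise in average toxicity that marks a turning point

class ConversationTracker:
    """Track recent per-message toxicity scores for one conversation."""

    def __init__(self) -> None:
        self.scores: deque[float] = deque(maxlen=WINDOW)

    def observe(self, message_score: float) -> bool:
        """Return True when the conversation appears to be turning toxic."""
        self.scores.append(message_score)
        if len(self.scores) < 4:
            return False              # not enough context yet
        window = list(self.scores)
        half = len(window) // 2
        earlier, recent = window[:half], window[half:]
        drift = sum(recent) / len(recent) - sum(earlier) / len(earlier)
        return drift > ESCALATION     # a shift worth de-escalating
```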

3. Game event level

When determining the meaning of an ambiguous comment about gameplay, or gauging the severity of an incident, what’s happening within the game can be at least as informative as the chat messages themselves.

For instance, a player is more likely to lash out after a tough loss, and is especially likely to target the player they blame for it. A premade group that queued together is more likely to gang up on a stranger placed in their match. And a player on the cusp of dropping in the rankings may be more on edge and quicker to tilt when a game is going poorly.

GGWP’s AI system considers this type of in-game context when moderating chat. By tying chat messages to in-game events, the system can make more informed decisions. Doing this during the game, rather than while reviewing chat logs after the fact, means action can be taken before a tilted player gets out of control and before competitive spirit curdles into a hostile environment.
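A toy version of the idea, with invented event names and weights: telemetry about the match nudges the score of an otherwise ambiguous message up or down.

```python
# Hypothetical event names and weights; in practice these would come from
# the game's telemetry feed and a learned model, not hand-set constants.
TILT_SIGNALS = {
    "match_lost": 0.10,           # tough loss: players more likely to lash out
    "premade_vs_solo": 0.05,      # premade group chatting at a solo stranger
    "rank_demotion_risk": 0.08,   # on the cusp of dropping in the rankings
}

def contextual_score(base_score: float, events: list[str]) -> float:
    """Adjust a message's toxicity score using concurrent in-game events."""
    boost = sum(TILT_SIGNALS.get(event, 0.0) for event in events)
    return min(1.0, base_score + boost)
```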

4. Player level

We think of the player level as the heart of our system – ultimately, it is players rather than messages who chat, take actions, and are potentially sanctioned or rewarded. It’s therefore essential to consider the history and reputation of the player sending a message, especially when the message is ambiguous and the intent is not immediately clear.

To do this, GGWP’s system takes a player’s complete long-term chat and behavior history into account and distills it into comprehensive reputation scores. Players with a history of toxicity may be flagged more quickly, while a generally well-behaved player might get the benefit of the doubt in borderline cases. This allows for a more nuanced and fair moderation process, holding consistent offenders accountable while ensuring occasional missteps do not overly penalize good community members.
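One common way to build such a score, shown here purely as a sketch, is an exponentially decayed running average: old incidents fade over time, while repeat offenses keep the score low. The decay constant and threshold adjustment are assumptions, not GGWP’s actual parameters.

```python
DECAY = 0.95  # weight retained by past behavior per observation (illustrative)

def update_reputation(reputation: float, observation: float) -> float:
    """Blend a new observation (0.0 = toxic, 1.0 = positive) into the score."""
    return DECAY * reputation + (1 - DECAY) * observation

def effective_threshold(base_threshold: float, reputation: float) -> float:
    """Give well-behaved players the benefit of the doubt in borderline cases,
    and flag players with a history of toxicity more quickly."""
    return base_threshold + 0.1 * (reputation - 0.5)
```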

5. Relationship level

Finally, understanding the relationship between players can significantly enhance the accuracy of toxicity detection in chat. Relationships can change the entire context of a conversation, including whether a message reads as toxic or as playful banter – our “criticizing gameplay” detections, for example, would feel very different coming from a friend versus a stranger.

Because friends often engage in playful teasing, our system accounts for this to prevent unnecessary sanctions on players who are merely joking around. Conversely, it recognizes that a stranger using the same language could be highly toxic, especially if it’s unwanted and persistent, and even more so if it has a large radius of impact (e.g. it is seen by many players in a group and degrades the overall quality of the group conversation).

By determining the relationships between players — whether they are friends, acquaintances, strangers, or antagonists — the system gets an extra layer of context, allowing it to accurately discern toxicity and assess its severity and impact on the community.
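A toy version of that extra layer, under our own assumed signals: infer the relationship from interaction history, then scale severity by both the relationship and the message’s radius of impact. The cutoffs and multipliers are illustrative, not GGWP’s real values.

```python
# Assumed signals: shared match history and mutual friend lists.

def infer_relationship(games_together: int, mutual_friends: bool) -> str:
    """Classify a pair of players from their interaction history."""
    if mutual_friends or games_together > 20:
        return "friend"
    if games_together > 3:
        return "acquaintance"
    return "stranger"

def severity_multiplier(relationship: str, audience_size: int) -> float:
    """Scale severity by relationship and by how many players saw the message."""
    base = {"friend": 0.5, "acquaintance": 0.8, "stranger": 1.0}[relationship]
    radius = 1.0 + 0.05 * max(0, audience_size - 2)  # more viewers, more impact
    return base * radius
```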

Benefits

There are many benefits to AI-driven, context-aware moderation. Adding context beyond each line of chat leads to more accurate identification of toxic behavior, resulting in a more enjoyable environment for all players. The system can also adapt to evolving language and behavior patterns, keeping pace as new memes and expressions enter the gaming lexicon.

It also extends the impact of your team by allowing human moderators to focus on the most serious cases, improving their efficiency. With quicker response times to emerging toxic patterns, the system can prevent escalation and improve overall community health.

In the end, our aim is to restore the original promise of multiplayer gaming: fun, connection, and a welcoming community. GGWP’s AI moderation system, with its focus on context, is a powerful tool in the battle against toxicity, ensuring that every player can enjoy the game as it was intended.

Click here to learn more about GGWP’s chat solution.