Toxic content — like spam, misinformation and posts from trolls — can be damaging to media companies for a host of reasons. As a result, publishers are gradually beginning to recognize the importance of moderating their digital properties.
Google even has an API, Perspective, that's now widely used by moderation providers to assess toxic content.
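For context, a moderation pipeline might call that API along the lines of the sketch below. Treat it as a rough illustration only: the endpoint, request fields, response structure and the PERSPECTIVE_API_KEY variable are assumptions to verify against Google's current documentation.

```python
# A minimal sketch of scoring a single comment for toxicity with Google's
# Perspective API. The endpoint, request shape, and response fields below are
# assumptions based on the public docs; verify them before relying on this.
import os
import requests

API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # hypothetical env var holding your key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment: str) -> float:
    """Return the TOXICITY summary probability (0 to 1) for a comment."""
    payload = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))  # typically a low score
```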
But here’s the problem: just assessing a platform for toxicity isn’t enough. Not when over 40% of people claim they’ve directly experienced online harassment.
Detecting incivility in your digital community is undoubtedly a necessary step in the right direction. However, every publisher that hopes to have a civil and profitable online community requires a moderation system that can also accomplish the following tasks:
Reinforce Community Guidelines
Truly effective moderation systems should be trained to support a media company's community guidelines. There are different kinds of communities, after all. While some are designed to spark heated debates, like in sports or gaming, others are geared toward maintaining a peaceful environment.
Be sure to check whether your company's automatic moderation platform can mold itself around the nuances of your community, because what works for one media company may not work for yours.
“Rather than moderating the notion of toxicity, you can moderate around the guidelines that the publisher has actually set for their communities,” says Dan Seaman, VP of product at Viafoura.
At Viafoura, our moderation experts take an existing algorithm that best represents a publisher's audience and then adapt it to fit their community standards.
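To picture what "moderating around the guidelines" can mean in practice, here is a purely hypothetical sketch of per-community settings layered on top of a shared toxicity score. The class, community names and thresholds are illustrative only; they are not Viafoura's actual configuration.

```python
# A hypothetical sketch of per-community moderation settings applied on top of
# a shared toxicity model. All names and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class CommunityGuidelines:
    name: str
    toxicity_threshold: float               # score above which a comment is held for review
    banned_phrases: set[str] = field(default_factory=set)

SPORTS_FORUM = CommunityGuidelines(name="sports", toxicity_threshold=0.85)     # debate-friendly
PARENTING_FORUM = CommunityGuidelines(name="parenting", toxicity_threshold=0.6)  # stricter bar

def should_hold(comment: str, score: float, guidelines: CommunityGuidelines) -> bool:
    """Apply the community's own bar instead of a one-size-fits-all toxicity cutoff."""
    if any(phrase in comment.lower() for phrase in guidelines.banned_phrases):
        return True
    return score >= guidelines.toxicity_threshold

score = 0.7  # toxicity score from whatever model the publisher uses
print(should_hold("great game last night", score, SPORTS_FORUM))     # False
print(should_hold("great game last night", score, PARENTING_FORUM))  # True
```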
Detect All Offensive Words, No Matter Their Form
Trolls are intelligent and will do everything in their power to outsmart a moderation system. While some may write offensive words with spaces between the letters, others may disguise their words with numbers or symbols.
Like Google’s API, most moderation systems focus on finding common patterns in toxic posts rather than the variations of jumbled words. Google even advises against using its API for automated moderation.
“The problem with using a basic toxicity rating is that it isn’t going to detect specific terminology,” Seaman explains. “If you can obfuscate words efficiently, you can get around toxic ratings.”
And each word can be obfuscated in roughly 6.5 million different ways. So no matter which automatic moderation system you use, make sure it's capable of recognizing all of those variations.
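As a rough illustration, a normalization step like the one below is one way a system might collapse spacing and symbol tricks back into a recognizable word before checking it. The substitution map and blocklist here are placeholders, and a production system would need far more nuance to avoid false positives.

```python
# A naive sketch of de-obfuscating a comment before matching against a blocklist.
# The substitution map and blocklist entries are placeholders for illustration.
import re

# Common number/symbol stand-ins for letters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s", "!": "i"})

BLOCKLIST = {"badword"}  # placeholder term

def normalize(text: str) -> str:
    """Lowercase, undo common character substitutions, and strip separators."""
    text = text.lower().translate(LEET_MAP)
    # Remove spaces, dots, dashes, underscores and asterisks inserted between letters.
    return re.sub(r"[\s.\-_*]", "", text)

def contains_blocked_term(comment: str) -> bool:
    collapsed = normalize(comment)
    return any(term in collapsed for term in BLOCKLIST)

print(contains_blocked_term("b 4 d w o r d"))          # True: spacing and substitutions collapsed
print(contains_blocked_term("totally fine comment"))   # False
```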
Publishers with proper moderation systems in place experience thriving communities, resulting in 62% more likes and 35% more comments from users.
At the end of the day, analyzing root words in user comments can make the difference between a successful and unsuccessful moderation system.
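For example, a root-word check might look something like this minimal sketch, which assumes the NLTK library's Porter stemmer and uses placeholder blocklist terms, so that suffixed variants map back to the same root.

```python
# A minimal sketch of root-word matching with a stemmer, so suffixed variants
# of a blocked term still resolve to the same root. Blocklist terms are placeholders.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
BLOCKED_ROOTS = {stemmer.stem(word) for word in {"troll", "harass"}}

def has_blocked_root(comment: str) -> bool:
    tokens = comment.lower().split()
    return any(stemmer.stem(token) in BLOCKED_ROOTS for token in tokens)

print(has_blocked_root("Stop harassing people"))  # True: "harassing" stems to "harass"
print(has_blocked_root("Great article, thanks"))  # False
```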
Manage Evolving Language in a Community
Using a general toxicity rating or detection system isn't effective enough to enforce civil conversation within each unique community, especially when the trolls in that community begin developing new ways to spread offensive messages.
This was the case for one publisher when Viafoura’s moderators noticed that trolls were posting a recurring phrase in community social spaces: “BOB crime.” Our moderators quickly realized that this phrase was being used in offensive contexts, and after investigating, found out that it stood for “Black-on-Black crime,” which challenges the Black Lives Matter movement.
The moderation algorithm was quickly adjusted to block comments containing the phrase within that publisher's community. However, this is just one example of many where new phrases are created within a community to maneuver around basic moderation systems.
The bottom line is that language evolves.
Companies can reinforce their community guidelines by ensuring their moderation strategies detect toxicity as language evolves. That also means updating algorithms quickly as new, offensive language is discovered.
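One way to picture that kind of fast update is a phrase rule set that moderators can extend the moment a new coded term, like the "BOB crime" example above, surfaces. The class below is a hypothetical illustration, not a real product API.

```python
# A hypothetical, runtime-updatable set of phrase rules for coded language.
import re

class PhraseRules:
    def __init__(self) -> None:
        self._patterns = []

    def add_phrase(self, phrase: str) -> None:
        """Compile a case-insensitive pattern tolerant of extra whitespace."""
        escaped = r"\s+".join(re.escape(word) for word in phrase.split())
        self._patterns.append(re.compile(escaped, re.IGNORECASE))

    def matches(self, comment: str) -> bool:
        return any(pattern.search(comment) for pattern in self._patterns)

rules = PhraseRules()
rules.add_phrase("BOB crime")  # added as soon as moderators flag the new coded term

print(rules.matches("more   bob  CRIME talk"))  # True: spacing and case variations caught
```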
Unfortunately, not all moderation companies can provide this service successfully. This is because they focus mainly on disabling patterns or character sets that are toxic — not context or changing language.
To support a publisher’s online environment, moderation must go beyond addressing toxicity.
Although assessing incivility is an essential part of moderation, the nuances of each community and word must be addressed, and guidelines need to be enforced. The overall health and engagement of your digital community depend on it.