All the Different Types of Moderation for Your Digital Properties

Updated June 22, 2020.

With trolls and toxicity running wild all over the internet, moderation has become practically mandatory for any publisher hoping to build and engage a profitable community.

Nearly 50% of Americans who have experienced incivility online completely remove themselves from the situation. In other words, if you allow toxicity to go unchecked on your properties, you stand to lose half of the audience members who encounter something offensive.

Not to mention that Google’s starting to ban media companies from its Ads platform over toxic comments around their content. ZeroHedge, for instance, was recently banned for allowing offensive and false information to exist on its website. The Federalist also received a warning that it would be banned for the same reason if it did not take protective action.

In a joint statement with other big tech companies, Google explained that it will be focusing on “helping millions of people stay connected while also jointly combating fraud and misinformation about [COVID-19], elevating authoritative content… around the world.” 

Google’s global move to tighten moderation restrictions on media content comes after several countries, like France and Australia, began challenging the tech giant’s dominance over media.

If you want to keep your environment protected, moderating user interactions will help your visitors feel safe enough to engage in conversations and build relationships on your platform.

But while many engagement tool vendors claim to use fancy moderation algorithms, most of them are nothing more than ban-word lists. To protect the environment on your platform so users actually want to return, you’ll need to go beyond a simple ban-word list.

Instead, sift through the different forms of moderation so you can select a tool that will best support your community guidelines.

From more traditional user self-policing and manual moderation, to fully automated and full-service approaches, we’ve broken down the different types of moderation to help you understand what will work best for your business.

User-to-User Moderation

You know how on social media, you have the option to flag or report anything that distresses you so moderators can review it? Well, that’s what user-to-user moderation is — users monitoring and reporting other users for bad behavior.

As an extra layer of protection, consider giving your community the power to flag offensive or concerning posts that may slip through the cracks. Facebook’s native moderation has flaws and gaps, which is why you need a complete, specialized platform like Viafoura.

To minimize the amount of user-to-user moderation needed on your property, it’s important to have strong community guidelines. Read the best practices for creating community guidelines here.

Human Moderation

Moderation is an extremely complex topic that can be broken down into different forms. But here’s the problem: many publishers think that human moderation, where people manually go through users’ posts and block any offensive ones, is the only kind of moderation.

Human moderation is incredibly time-consuming and expensive on its own. Publishers who can’t afford to hire external human moderators and don’t have the time for it in-house tend to give up on building an engaged community altogether.

Sound familiar?

Consider this: when paired with an automatic solution, the need for human moderation is minimized, reducing associated time and cost investments.

The great part about human moderation is that humans can catch harassment or incivility that algorithms can’t always pick up while they’re still learning. So instead of having humans do all of the work, reduce their moderation scope to focus on what technology can’t catch or understand on its own.

Read more about human vs machine moderation here: Human vs. Machine: The Moderation Wars

Automatic Moderation

Forget simple ban-word lists. A truly effective automated moderation solution should block the majority of toxic posts the very second a user submits a comment.

Intelligent automatic moderation lightens the workload for human moderators by minimizing the number of comments pushed to humans, allowing them to focus on truly questionable posts, or those that are flagged by other users.

Quick tip: Many service providers claim to offer AI or automatic moderation but don’t actually leverage natural language processing or machine learning to understand variations of words and sentence structures. Check with your moderation provider to make sure your tool learns as moderators approve or block comments, continually training an algorithm that’s customized to your guidelines.
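
To make that concrete, here’s a minimal sketch of such a feedback loop, assuming Python and scikit-learn; the vectorizer, model and function names are illustrative, not any particular vendor’s implementation:

```python
# Minimal sketch of a moderation model that keeps learning from moderator
# decisions. Assumes scikit-learn; all names and settings are illustrative.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(loss="log_loss")  # log loss enables predict_proba

def train_on_decision(comment_text: str, moderator_blocked: bool) -> None:
    """Fold a single approve/block decision back into the model."""
    X = vectorizer.transform([comment_text])
    y = [1 if moderator_blocked else 0]
    model.partial_fit(X, y, classes=[0, 1])  # incremental (online) update

def toxicity_score(comment_text: str) -> float:
    """Estimated probability that a new comment should be blocked.
    Requires at least one prior call to train_on_decision()."""
    X = vectorizer.transform([comment_text])
    return model.predict_proba(X)[0][1]
```

In a setup like this, every approve/block decision a moderator makes becomes a new training example, so the classifier’s scores drift toward your community’s specific guidelines over time.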

Full-Service Moderation

What’s a publisher to do when they just don’t have enough time on their hands to worry about moderation?

Answer: Outsource the complete solution to a service provider.

Choose a vendor that can bundle a cost-effective package for you, one that includes sophisticated automatic, human and user-to-user moderation services, so you get the best bang for your buck.

Remember, folks: a protected community is a happy community. And that translates to revenue growth.

Hungry for more knowledge on moderation? Check out seven tips that will help your moderation team survive a national election.

Week of Oct. 19th-25th: Your Media News Update

The last week has seen a flurry of activity around topics related to automated content moderation and product design as well as ways to develop communities and keep them engaged. The many news stories and reports that have been discussed are a treasure trove of best practices we can learn from, including:

  • Twitch’s decision to amp up its live-streaming moderation and establish a “three strikes” rule before suspending a streamer’s channel
  • Microsoft’s testing of new content filters for its Xbox Live messaging system as a way to reduce the amount of toxicity on its platform
  • The Telegraph’s ability to achieve a 49% growth in subscriptions by optimizing its homepage

To continue learning and staying up-to-date with the latest and greatest industry news from the past week, read on.

Content Filtering and Moderation Reaches a New Level (for Some)

Content filtering and moderating live-streamed content have significantly evolved over the years. Twitch, one of the most popular video live-streaming services in the world, is leading the charge by providing a space for online communities to develop in a positive way. CEO Emmett Shear has been a big proponent of stream moderation as a way to empower streamers in creating the type of community they want. 

Twitch isn’t an “anything goes” type of platform. It’s very explicitly not a free speech platform, which sets it apart from its competitors. Shear says, “We’re a community. And communities have standards for how you have to behave inside that community. And so we think that it’s not anything goes.”

When it comes to the digital world, community guidelines need to be set and enforced to keep platforms safe and productive.

More and more companies are also taking steps to adopt automated content moderation systems as a way to create safer online communities for the masses. For example, Microsoft is now testing content filters for its Xbox Live messaging system to reduce toxicity on its platform.

Microsoft has managed moderation on Xbox Live for almost 20 years, including the ability to report messages, usernames and photos. The new content filters empower players to control which kinds of messages are instantly hidden. The company also aims to protect live audio calls with real-time bleeps, similar to broadcast TV. And Microsoft is now trying to be more open and transparent about how it moderates Xbox Live and the choices it makes to enforce these filters across the community.

Facebook, on the other hand, still falls short in its commitment to content moderation, judging by its current automated filtering tools. The company has now publicly admitted these shortcomings in an attempt to challenge a European Court of Justice ruling.

When the top EU court decided earlier in the month that Facebook must use automated content moderation to detect “defamatory content,” the company responded by saying its tech was simply not good enough. Facebook described its own moderation tools as a “blunt instrument,” unable to properly understand context and therefore make correct decisions.

Publishers Tap Into Creating Better Product Designs to Increase Subscriptions

Many publishers are trying to move away from relying on advertising as their main source of revenue. To be financially sustainable in the long run, more companies like The Telegraph are moving toward a subscriber-based revenue model. Consequently, new thinking is emerging about how products should reflect and align with this type of model.

All of The Telegraph’s products, for example, now revolve around subscribers. Mathias Douchet, Director of Product at The Telegraph, says that “we had to move away from an ad strategy to a more user-engagement, user-centric strategy with our own products.”  

To do this, The Telegraph rebuilt its website’s homepage. Its old homepage offered hundreds of stories but very little in the way of editorial curation. It was also difficult to group content around a theme and make content stand out.

Since The Telegraph’s homepage is a key product that many subscribers turn to daily, it was revamped to offer a top-notch user experience and ongoing engagement.

The homepage redesign has produced very positive outcomes for the publisher. All consumption and engagement KPIs are up, subscriptions have increased by 49%, and advertising revenue has grown even with fewer ads.

The Guardian has also released a daily app for paying subscribers as part of its quest to reach two million financial supporters by 2022. The appeal of the new app is that it won’t carry ads and will offer news in a streamlined way.

The new app lets users scroll horizontally through different news sections in depth. It also lets users read the previous week’s worth of papers. 

Publisher app users are typically highly valuable because they consume more content more regularly and for longer periods of time. For these reasons, publishers are beefing up their apps for subscribers. This includes The Economist, which last year launched an app to help drive retention. The app design takes its cue from the digital user experience of music streaming apps like Spotify. Similar to publishers, music streaming apps also face the challenge of displaying massive amounts of content grouped by genre in an intuitive way.

Successful businesses are no longer making products and decisions that revolve completely around advertising; they revolve around subscribers.

Seven Tips That Will Help Your Moderation Team Survive a National Election

One of the biggest challenges for publishers during a national election is, without a doubt, keeping conversations around their content civil and preventing misinformation from tarnishing their platforms.

Whether your company plans to run live updates or craft a few blog posts during a significant political event — such as an election in Canada, the U.S. or anywhere else in the world — your moderation team will have their hands full with an extraordinary volume of opinionated comments.

A recent study conducted by the Center for Media Engagement found that moderators who focused on preventing uncivil comments were affected “on a very personal level, leading to emotional exhaustion and less positive work experience.” This means that an effective moderation team needs to protect more than just the domains they’re assigned to monitor — they also need to protect themselves.

And yet, comments are still essential to your brand’s success.

To help your moderation team maintain civility and accuracy on your platform while keeping their cool, it’s important to empower them as much as possible well in advance of an election.

We spoke with Leigh Adams, the product manager of Viafoura’s moderation services, to help arm your moderation team with the best practices and tips to make it through an election period. Adams has more than 10 years of experience moderating and developing guidelines for news commenting forums. Read on to discover her must-know election survival tips.

1. Predict probable misinformation

Before moderators can begin battling misinformation, they first need to have a clear, consistent understanding of the kinds of misinformation that are likely to come up. Moderators can then brainstorm different types of rumors and topics that should not be spread on the domains they’re protecting.

For example, according to Canadian Prime Minister Justin Trudeau, you can expect to see misinformation that generates “fear, intolerance and misinformation about immigration across Canada” during the upcoming 2019 election period. 

“Create a shared document that everyone can print featuring keywords or names to watch for within those categories of misinformation you’ve identified,” Adams says.

This will make it easier for moderators to scan through comments and quickly identify problematic statements.

Viafoura’s moderation team also uses a unique search tool that allows moderators to search comments by specific keywords.

“The quicker you can find and shut down those conversations, the better,” she states.

2. Identify your biases

When was the last time you spoke to a human that was truly neutral towards the political landscape? Everyone has their biases, which can influence their day-to-day actions. Not even moderators are immune.

A study from Carnegie Mellon University in Pennsylvania found that “users who consistently express minority viewpoints are more likely to be moderated than users who consistently express majority viewpoints.”

To ensure your moderation team isn’t enforcing any political bias unintentionally, each moderator must understand what their own biases look like in order to avoid censoring opposing viewpoints.

3. Don’t be afraid to ban users

In the digital world, the general belief is that the more eyeballs a piece of content can get, the better. The end goal for media executives is typically to gain and engage more site visitors in order to maximize subscriptions; however, visitor quantity isn’t always better than quality.

“Don’t be afraid to ban users,” says Adams. She goes on to explain that “a lot of newspapers are afraid to ban users because they want the audience, but when you allow trolls and other toxic users to take over, you’re actually scaring away more valuable visitors.”

A few quality commenters offer more value to a brand than many commenters who destroy the safety and trust between an organization and its loyal followers.

4. Leverage user account history as a moderation resource

User account history is an extremely useful resource for moderators. Access to information like past comments posted by users and account registration date can help moderators prevent spam and make decisions on what to do with questionable comments.

Adams explains that “if a user posts a couple hundred comments within a few days, chances are, they aren’t posting valuable comments.”
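
As a rough illustration, that heuristic could be expressed in code like this; the sketch below is hypothetical Python, and the 200-comment threshold and three-day window are stand-ins rather than Viafoura’s actual values:

```python
# Hypothetical posting-rate heuristic: accounts that post hundreds of
# comments within a few days are likely spammers. Thresholds are stand-ins.
from datetime import datetime, timedelta

def is_probable_spammer(comment_timestamps: list[datetime],
                        window: timedelta = timedelta(days=3),
                        max_comments: int = 200) -> bool:
    """True if the account exceeded max_comments within any sliding window."""
    stamps = sorted(comment_timestamps)
    start = 0
    for end in range(len(stamps)):
        # Shrink the window until it spans at most `window` of time.
        while stamps[end] - stamps[start] > window:
            start += 1
        if end - start + 1 > max_comments:
            return True
    return False
```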

5. Create a thorough emergency procedure

Make sure your moderation team has thoroughly outlined a procedure for comment-related emergencies.

“Let’s say someone threatens to be an active shooter at your headquarters. How do you deal with that type of threat?” Adams asks.

There are a few crucial questions you can ask your team to help them prepare for these types of threats: 

  • Is there a clear chain of command in an emergency? 
  • When do you alert the police versus the organization you’re protecting?

Adams recommends distinguishing between different types of non-urgent, semi-urgent, general and specific threats, and outlining how moderators should react to each of them. 

6. Keep team communication open

Whether your moderation team prefers Slack, Google Hangouts or any other communication tool, it’s best to have a shared chatroom where they can ask each other questions or flag any important information instantly.

“To ensure sanity and consistency, create a shared space where your team can feel supported enough to ask for help,” Adams suggests. “When someone needs to make a judgment call on a comment, having open communication with the rest of the team is very empowering.”

7. Take breaks

If you need a break as a moderator, you need to ask for one. Don’t feel like you need to power through the rush of comments until the end of your shift. Maintain visibility over everyone’s workload as well so team members can assist one another when needed. That way, your moderation team will be well-equipped to prevent the volume of comments from getting out of control.

Adams elaborates on how this can be accomplished: “We use a moderation tool that was created in-house, which lets other moderators on the team see one another’s workload. It can also alert others when you’re away from the keyboard so that someone else can take over.”

A moderator’s role can be mentally draining, so if you need a break for the sake of your mental health, you owe it to yourself to take one. After all, you need to protect yourself before you can effectively protect others.

5 Ways to Decrease Trolling and Improve the Quality of Your Comments

Last updated October 28th, 2019

Highlights:

  1. Reward users to encourage desired conversations
  2. Offer moderation tools to your users
  3. Use artificial intelligence in conjunction with your human efforts
  4. Quiz your users to weed out those who haven’t read the full story
  5. Stop anonymous comments

With the prevalence of online trolls, some organizations have put up their hands and given up on the comment section altogether. But doing so, even temporarily, has major drawbacks for organizations and their users.

As Carrie Lysenko, Head of Digital for The Weather Network, pointed out in an RTNDA Canada panel on engagement, turning off comments can result in a significant drop in pageviews and attention time. This echoes Viafoura’s own findings that brands with commenting can increase pageviews by 248% and attention time by 364%. This increased engagement leads to higher registrations and subscriptions, since engaged users are more likely to pay for premium services.

And while managing online communities has traditionally been cumbersome and expensive, today there are many cost-effective ways to reduce or eliminate trolling. For media companies, these new tools allow you to not only keep your comment section open, but also to capitalize on your user-generated content.

Reward Users to Promote Civil Comments

Trusted-user badge

Encourage users to submit thoughtful comments by rewarding your best commenters with a trusted-user badge. With this status, an icon will appear beside the user’s name for others to see. These trusted users are also able to publish their comments in real time without being moderated.

Editor’s pick

Another way to reward users is by giving their comment the editor’s pick status. These comments can be featured in prominent positions on your website to model the types of comments you want to receive.

This is also beneficial for SEO, because comments that are placed higher on your webpage will get indexed by Google, and the keywords in those comments may be a closer match to users’ own search terms than those used by a journalist.

Create articles from users’ comments

Many organizations today including The New York Times, The Washington Post, and the Canadian Broadcasting Corporation (CBC) are creating stories entirely from their users’ comments. These stories not only reward commenters for their insightful posts, but are cost-effective, quick to publish and receive a surprisingly high amount of attention time and comments. Some even attract more comments than the original piece from which they were taken.

To see the impact of these articles, we tracked the number of comments for eight user-generated blog posts in CBC’s Revenge of the Comment Section, comparing those to the number of comments for their original articles.

[Chart: number of comments on each of the eight user-generated articles vs. their original CBC articles]

It’s significant to note that while almost all of the original stories received more comments, the user-generated articles often weren’t far behind. And in one instance, for Story 2, there were more comments for the user-generated article (601,000) than for its original article (343,000). Readers also spent approximately 2.3x more time on the former page.

That’s pretty fascinating since these articles can be created at a fraction of the time and cost it takes a journalist to create a new article from scratch.

Offer Content Moderation Tools to Your Users and Managers

Flagging

Allow users to easily flag comments that they find offensive, using a noticeable red flag icon. When a comment receives a predetermined number of flags, it enters a queue for review by a moderator, who decides on the appropriate action.
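
As a sketch, that threshold rule might look like the following; this is hypothetical Python, and the three-flag threshold is a stand-in for whatever your guidelines call for:

```python
# Hypothetical flag-threshold rule: once a comment collects enough unique
# flags, it is routed to a moderation queue. The threshold is a stand-in.
FLAG_THRESHOLD = 3

class Comment:
    def __init__(self, comment_id: str, text: str):
        self.id = comment_id
        self.text = text
        self.flagged_by: set[str] = set()  # one flag per user, no double-counting
        self.in_review_queue = False

def flag_comment(comment: Comment, flagging_user_id: str,
                 review_queue: list) -> None:
    comment.flagged_by.add(flagging_user_id)
    if (not comment.in_review_queue
            and len(comment.flagged_by) >= FLAG_THRESHOLD):
        comment.in_review_queue = True
        review_queue.append(comment)  # a moderator takes it from here
```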

Timed user banning

Give short “timeouts” of as little as a few hours, or as long as days or months, and notify users as to why they’re being banned to help them improve the quality of their comments. Alternatively, users can be permanently banned for repeated offenses.
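
One simple way to model these timed bans is sketched in Python below; the in-memory dictionary is a hypothetical stand-in for a real user store:

```python
# Hypothetical timed-ban bookkeeping: store a ban-expiry timestamp per
# user and check it on every post attempt. Storage is illustrative.
from datetime import datetime, timedelta

ban_expiry: dict[str, datetime] = {}  # user_id -> when the ban lifts
PERMANENT = datetime.max              # sentinel for permanent bans

def ban_user(user_id: str, duration: timedelta | None = None) -> None:
    """Ban for `duration` (hours, days, months); None means permanent."""
    ban_expiry[user_id] = (PERMANENT if duration is None
                           else datetime.now() + duration)

def can_post(user_id: str) -> bool:
    """A user may post once their ban has expired or never existed."""
    return datetime.now() >= ban_expiry.get(user_id, datetime.min)
```

Here, ban_user("troll42", timedelta(hours=6)) would issue a six-hour timeout, while ban_user("troll42") would ban the account outright.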

Dislike button

The dislike button allows users to express their dislike for a comment without having to flag it (which requires a moderator’s time and resources). We found that this button can reduce flagging by 50% within as little as two weeks of implementation.

Gamification

Both The New York Times and The Guardian have created games that allow readers to try moderating content. Users are tasked with approving or rejecting comments and providing reasoning for their decisions. This is not only enjoyable for users, but eases some of the burden on moderators.

Use AI Moderation to Eliminate Online Harassment

Whether your organization employs dedicated moderators or tasks other employees with removing the “trash,” you could be saving countless hours and dollars with automated moderation.

Automated moderation uses natural-language processing and artificial intelligence to automatically categorize and eliminate trolling, spam and online harassment.

Viafoura’s Automated Moderation is programmed with over six million variations of problematic words or phrases. This means that it’s able to determine both the subject matter and the sentiment behind users’ comments, detecting and eliminating spam, foul language, abuse, personal attacks and other uncivil comments before other users can even see them.

If the system encounters a new word or sentence that it’s unsure of, it flags the instance for a moderator to review. As a moderator approves or rejects new words, through the power of machine learning, the algorithm will learn the new rules and get smarter over time.
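
The routing logic that behavior implies can be sketched as follows; this is a hypothetical Python illustration, and the thresholds and score function are stand-ins, not Viafoura’s actual parameters:

```python
# Hypothetical confidence-based routing: automate confident decisions and
# queue the uncertain middle band for a human. Thresholds are stand-ins.
BLOCK_ABOVE = 0.90    # almost certainly violates the guidelines
APPROVE_BELOW = 0.10  # almost certainly fine

def route_comment(text: str, toxicity_score) -> str:
    """Return 'blocked', 'published' or 'queued' for human review."""
    p = toxicity_score(text)  # any model returning P(violation), 0.0 to 1.0
    if p >= BLOCK_ABOVE:
        return "blocked"
    if p <= APPROVE_BELOW:
        return "published"
    return "queued"  # the moderator's eventual ruling can retrain the model
```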

On average, our studies have found that automated moderation has a higher accuracy rate (92%) than human moderation (81%), and reduces 90% of the time and cost it takes to moderate a community manually.

Quiz Your Users

The Norwegian tech news website NRKbeta encourages thoughtful comments by asking readers to prove they’ve read the whole story by taking a quiz. The organization believes the quiz can weed out users who haven’t read the story, while also giving readers time to reflect on how they’ll comment instead of just typing a response to a shocking headline.

NRKbeta reporter Ståle Grut comments, “When a lot of journalists hit ‘publish’ I think that they see themselves finished with a story. But we see that you’re only halfway through with the article when you’ve published it.” The goal is to improve articles through collaboration.
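
A toy version of such a quiz gate might look like this; the Python below is purely illustrative, and the questions and matching rule are invented for the example, not NRKbeta’s implementation:

```python
# Hypothetical quiz gate: the comment form unlocks only after the reader
# answers article-specific questions correctly. Questions are invented.
article_quiz = {
    "Which newsroom runs this comment quiz?": "nrkbeta",
    "How many questions must you answer?": "2",
}

def may_comment(submitted_answers: dict[str, str]) -> bool:
    """Unlock commenting only on a perfect score (case-insensitive)."""
    return all(
        submitted_answers.get(question, "").strip().lower() == answer
        for question, answer in article_quiz.items()
    )
```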

Many commenters agreed that this tactic would promote insightful comments. Here’s what they had to say:

“It WILL raise the discourse, and it will improve the journalism too. And why should some poor intern have to sit and delete all the trash? Let a computer do it.”
—Moira

“I would not object to that if it reduced the uninformed and off-topic as well as useless comments”
—Annette

End Anonymous Commenting

When users register for your website through one of their social media accounts via social login, they are less likely to post harassing comments because they can easily be identified.

The social login button also generally increases conversion rates by 20% to 40%, while giving you access to user information that can be used to create targeted messaging.

Increased Engagement = Higher Revenue

If you’re committed to improving the quality of interactions on your website, you may find that using moderators alone can be expensive and time-consuming. Luckily, today we can count on technology to encourage quality comments and reduce the number of personal attacks. And by improving the quality of interactions on your site, you can look forward to increased engagement, improved brand loyalty and enhanced lifetime value from your users.

Need more help?

If you’re looking to drive engagement and leverage user-generated content, let’s connect.

Viafoura Automated Moderation Changes the Game for Community Moderation

Last updated June 14th, 2018

Don't sacrifice the flowers for the weeds

Have you ever had the pleasure of digging through the comments that pollute the web? If you have, then you are no stranger to the spam and hostility that overwhelm comment boxes and demand a huge effort from teams to manage.

While spamming and trolling are challenges faced by many organizations, top media companies and brands know that community is everything, and that it’s crucial to be able to listen to and engage with customers online in real time. Unfortunately, that means constantly sifting through the many hateful comments in order to nurture a healthy online community.

Community Growth

It’s not just frontline digital teams that want to foster a healthy online environment – it’s important to their audiences and customers as well. In fact, when the quality of conversations increases, so does their audience’s engagement.

  • 35% increase in comments per user
  • 34% increase in replies per user
  • 62% increase in likes per user
  • 22% increase in likes per comment

*Analyzed data gathered from 600+ media organizations, compiled both before and after the introduction of Viafoura Automated Moderation.

Cutting Through the Noise

With smart technologies like Viafoura Automated Moderation, content producers can manage, moderate and listen to their communities, with the protection of pre-moderation in real time.

Automated Moderation automatically eliminates up to 90% of the time and effort spent moderating communities, analyzing comments and responding to customers.

How does it work? Our team of linguists teamed up with our engineers to build an engine that automatically looks for patterns in language. It determines a comment’s topic, how the person felt when they wrote it, and its context. They did this by programming 6.5 million variations of words in English, Spanish, Portuguese and French, with more languages on the horizon.

This engine is then used to moderate and listen across all owned and third-party social networks to manage engagement, provide insights into urgent customer complaints, and display data and insights in one dashboard. It immediately removes comments that fall outside your community guidelines and sends suspect comments to a queue for resolution in real time.

That means community managers don’t need to spend their resources looking over each comment or manually monitoring social networks. When a moderator logs in, they can easily review what needs their attention, focusing quickly on the issues that really matter. By cutting through the clutter and surfacing the most important comments in real time, the engine helps you deliver the best customer experience.

Creating Meaningful Relationships

By flagging and removing inappropriate comments, Viafoura Automated Moderation allows authors, community managers and social media managers to spend their time addressing important inquiries quickly and creating meaningful conversations with their audiences.

And when your teams are empowered to engage with audiences in a timely and meaningful way, it leads to the best customer experience, higher engagement and, ultimately, a higher lifetime value for each customer.

Interested in learning more?

Connect with us today to learn how Viafoura can help you build, manage and monetize your audience.
