Human Vs. Machine: The Moderation Wars

Moderation is top of mind as media organizations aim to drive engagement, increase civility and create a positive user experience.

In a study done by Pew Research Center in 2010, 37 per cent of internet users in the U.S. participated in the creation of news by commenting, social sharing and emailing. For perspective, that's 85 million people — more than twice the population of Canada! This number has been rising ever since, as social networking and messaging apps have gained popularity. Research by The Engaging News Project also shows that a large portion of an audience goes online specifically to engage in dialogue and participate in a community.

For a media organization, this is great news: an audience interested in spending time on your properties and sharing your content is essential for growth. But this increased engagement comes with a caveat. As more people frequent your comments section and website, the likelihood of a troll infestation rises. Trolls are those bad commenters attracted to the scent of a thriving community, whose main goal is to sow discord and unrest in your digital kingdom. Once introduced, these pests can not only derail conversations and drive away loyal subjects, but also expose you to libel claims if their poison spreads elsewhere.

How do you stop them? Moderation.

So far, media organizations have relied on human moderation to weed out negative comments on their sites, so audiences can participate in valuable conversations. This type of moderation has inherent benefits, since humans can analyze context to make decisions, but it's also very expensive and inefficient. That's where Smart automated algorithmic Moderation (SaaM) comes in. Let's take a look at both below, and see which one triumphs in the war on trolls: SaaM or full human moderation.

Beleaguered Knights: Human Moderators

Human moderation can occur either pre- or post-publication of a comment. Pre-moderation involves putting all reader comments in a queue, to be reviewed by a human before being published. This can be helpful on stories or in environments that incite a lot of heated opinions, where abusive comments are more likely. As Gulf News, UAE told WAN-IFRA, "The environment compels you to make sure there is not offensive content because we are in the Middle East. We don't want content that is offensive or inflammable." Post-moderation, on the other hand, involves allowing all comments to publish to your site immediately, then using humans to sift through and remove any inappropriate posts. This supports real-time conversations around time-sensitive stories, and as a result users are inclined to spend longer on your page.
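To make the distinction concrete, here's a minimal sketch of the two workflows in Python. The queue, function and helper names are illustrative assumptions, not part of any particular commenting platform:

```python
from collections import deque

review_queue = deque()   # comments awaiting a human decision
published = []           # comments visible on the site

def pre_moderate(comment):
    """Pre-moderation: nothing goes live until a human approves it."""
    review_queue.append(comment)

def post_moderate(comment):
    """Post-moderation: publish immediately, review afterwards."""
    published.append(comment)
    review_queue.append(comment)  # humans sift through published comments later

def human_review(is_acceptable):
    """A moderator works through the queue, removing anything inappropriate."""
    while review_queue:
        comment = review_queue.popleft()
        if is_acceptable(comment):
            if comment not in published:
                published.append(comment)   # approve a pre-moderated comment
        elif comment in published:
            published.remove(comment)       # take down a bad published comment
```

The trade-off is latency versus risk: pre-moderation delays every comment until a human gets to it, while post-moderation briefly exposes readers to anything a troll submits.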

The main problems with human moderation are cost and moderator bias.

If you're a publication with a large community, you'll need a lot of moderators to sift through all the user-generated content and fight off the negative commenters. If you choose to moderate selectively, you open yourself up to having potentially libellous material on your website. Additionally, while human moderators can account for context when moderating comments, their intrinsic biases may affect their decisions, leading to audience frustration. As a result, human moderation is neither sustainable nor scalable as your publication and community grow.

Steadfast Sentinel: Smart automated algorithmic Moderation (SaaM)

With over 600 media brands leveraging our Audience Development Platform, we have heard repeatedly from our clients about the need for a better means of moderation in the war against trolls. That's why we jumped into the fray with SaaM.

SaaM moderates all comments in real time as they're submitted, and learns from post-moderation changes. It does this through automated moderation that parses comments as they're made, publishing or flagging them based on predefined criteria (such as word filters and fingerprint technology). Once flagged, these comments can either be deleted automatically or reviewed and approved or deleted by an in-house moderator. As a result, you can support real-time dialogue, since the automated system monitors every post to ensure that no hostility or vulgarity is published.
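As a rough illustration of how such a pipeline might behave (the word list, the hash-based notion of a "fingerprint" and the action names below are simplified assumptions, not SaaM's actual implementation):

```python
import hashlib

BLOCKED_WORDS = {"spamword", "slur"}   # placeholder word-filter entries
seen_fingerprints = set()              # fingerprints of content already flagged

def fingerprint(text):
    """One simple notion of a 'fingerprint': a hash of the normalized text,
    so near-identical spam posted repeatedly is caught after the first hit."""
    normalized = "".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def moderate(comment):
    """Return 'publish', 'flag' (hold for human review) or 'delete'."""
    fp = fingerprint(comment)
    if fp in seen_fingerprints:
        return "delete"                # known bad content: remove automatically
    if any(word in comment.lower() for word in BLOCKED_WORDS):
        seen_fingerprints.add(fp)
        return "flag"                  # hold for an in-house moderator
    return "publish"                   # clean comment goes live in real time
```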

SaaM also learns from the post-moderation actions taken by your team, with self-learning algorithms re-deployed every day. Because of this, your comment-monitoring capabilities improve as your community generates more posts: each comment provides a new chance to learn the nuances of your community guidelines and adapt to fit your publication. Additionally, our recent research has shown that SaaM has an average accuracy rate of 92 per cent, compared to a human accuracy of 81 per cent. This is supported by highly adaptable algorithms that can be set to err on the side of caution or openness based on your publication's needs.
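One way to picture this feedback loop is a text classifier retrained each day on the decisions your moderators make, with a probability threshold controlling whether the system errs toward caution or openness. This is a generic sketch using scikit-learn, not the actual SaaM algorithm; the toy data and threshold value are assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each post-moderation action becomes a labelled example: the comment text
# plus the moderator's decision (1 = removed, 0 = kept). Toy data below.
comments = ["great reporting, thanks", "you morons know nothing",
            "interesting take on the budget", "go back where you came from"]
decisions = [0, 1, 0, 1]

# Retrained (re-deployed) daily, so each day's moderator actions
# refine the model's sense of this community's guidelines.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, decisions)

# The threshold sets the caution/openness trade-off: a low threshold
# flags aggressively (cautious); a high one lets more through (open).
FLAG_THRESHOLD = 0.4

def should_flag(comment):
    p_abusive = model.predict_proba([comment])[0][1]
    return p_abusive >= FLAG_THRESHOLD
```

Lowering FLAG_THRESHOLD trades more false positives for fewer trolls slipping through; raising it does the opposite, which is the kind of tuning a publication would choose based on its tolerance for risk.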

If you do have in-house moderators, SaaM lets them focus on comments matching criteria you've set to flag rather than delete. This vastly reduces the time they spend wading through spam, greatly increases the quality of your publication (without incurring large staffing costs), and gives you reliable 24/7 coverage of your stories. Want to emerge the victor in the war against trolls? Find out more about SaaM by downloading our information sheet now, or contact us to discover how you can increase civility, reduce costs and provide a positive user experience with SaaM.