Top Eight Best Practices for Setting Effective Community Guidelines

For media companies, comment sections offer users a place to participate and engage with journalists and each other. However, comment sections can easily turn toxic without moderation tools in place to rein in spam and abuse…

Last updated October 28th, 2019

Highlights:

  • There are many tools out there to help with moderation
  • Remember to keep personal information personal
  • Have an escalation plan
  • Set expectations with your users through your community guidelines, and don’t be afraid to enforce them

For many organizations, opening up a comment section offers their users a place to participate and engage with content and one another. 

However, commenting sections can easily turn toxic. We found that 68% of audiences spend more than 15% of their time on-site reading comments. The more toxic the comments*, the less likely people are to engage with your content, turning away potential subscribers.

Keep your community guidelines easily visible to visitors!

There are a number of moderation tools and moderation services out there to rein in spam and abuse. When creating your own community guidelines, we recommend following these tried and true best practices for building a safe, productive community:

*At Viafoura, all our tools come with automated content moderation, creating a safe space where users can interact.

Keep Your Users’ Personal Information Safe

With the proliferation of social media, one of the fundamentals of online safety, not giving out personal information, has been lost. Emphasize staying safe online in your community guidelines, and remind your users not to post personal information about themselves or others. Commit to removing any posts that include this information on their behalf.
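
As a sketch of how that removal mandate might be automated, the snippet below scans comments for two common patterns of personal information before publishing. The patterns are illustrative assumptions, not a substitute for a production-grade detector:

```python
import re

# Illustrative patterns for two common kinds of personal information.
# Real moderation tools use far more robust detection than this sketch.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),    # phone-number-like digit runs
]

def contains_personal_info(comment: str) -> bool:
    """Return True if the comment appears to include personal information."""
    return any(pattern.search(comment) for pattern in PII_PATTERNS)

# Hold a comment for review instead of publishing it.
if contains_personal_info("Call me at 555-123-4567 tonight"):
    print("Comment held: possible personal information.")
```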

Don’t F@!*% with the Law

This should go without saying, but your guidelines should make it clear that comments that appear legally objectionable, or that encourage or condone a criminal offense or any form of violence or harassment, will NOT be tolerated.*

*Note: If you do see something on your site that could potentially violate the law, make sure you have an escalation policy.

Proactively Whitelist and Blacklist Websites

Once your community has been commenting for a while, it becomes easier to recognize which sites are safe and which are spam. Let users know that anything that looks/acts/quacks like spam will be removed and blacklisted.

Whitelisting the safe sites and blacklisting the spam will save time in your moderation efforts.
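
A minimal sketch of this idea, assuming hypothetical domain lists you would populate from your own moderation history, might look like this:

```python
from urllib.parse import urlparse

# Hypothetical example lists; populate these from your own moderation history.
WHITELIST = {"example-news.com", "wikipedia.org"}
BLACKLIST = {"spam-site.example"}

def classify_link(url: str) -> str:
    """Classify a linked site as 'safe', 'spam', or 'review'."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in BLACKLIST:
        return "spam"    # remove the comment; the domain stays blacklisted
    if domain in WHITELIST:
        return "safe"    # allow the comment through automatically
    return "review"      # unknown domains go to a human moderator

print(classify_link("https://www.spam-site.example/win-big"))  # -> "spam"
```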

Enforce Community Guidelines: Banning Users

Community guideline violations should be enforceable through user bans. Don’t make this an empty threat; ban users who violate your guidelines.

You could make a tiered system, with a first infraction resulting in a short ban and each subsequent infraction resulting in a longer one. For example:

  • A first ban for serial flagging might be one hour.
  • The next ban for personal attacks could be one day. 
  • And the following ban for repetitive posting could be three days. 

Make sure the messaging accompanying the ban explains the violation, with a link to your guidelines for more information and a concrete amount of time before the user’s account reactivates. This ensures nothing gets lost in translation, sets expectations and provides additional resources for the banned user.
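
Here is a minimal sketch of what that escalation ladder might look like in practice; the tier durations mirror the example above, while the guidelines URL and message wording are placeholders:

```python
from datetime import datetime, timedelta

# Escalating ban durations mirroring the example tiers above.
BAN_TIERS = [timedelta(hours=1), timedelta(days=1), timedelta(days=3)]

def ban_message(username: str, prior_infractions: int, violation: str) -> str:
    """Build a ban notice that names the violation, links to the guidelines,
    and gives a concrete reactivation time."""
    # Cap at the longest tier once the user exhausts the ladder.
    duration = BAN_TIERS[min(prior_infractions, len(BAN_TIERS) - 1)]
    reactivates = datetime.now() + duration
    return (
        f"{username}, your account is suspended for: {violation}. "
        f"See our community guidelines at example.com/guidelines. "
        f"Your account reactivates at {reactivates:%Y-%m-%d %H:%M}."
    )

print(ban_message("commenter42", prior_infractions=0, violation="serial flagging"))
```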

Delete Repetitive Posts

Just as you blacklist sites that appear to be spam, delete comments that appear to be spam (i.e., repetitive posts). Automatically identify word-for-word duplicate posts, hide them from view, and then decide whether to ban the user for a preset amount of time.
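
One simple way to catch word-for-word posts is to hash a normalized copy of each comment and check for repeats. The sketch below assumes an in-memory store, where a real system would use a database:

```python
import hashlib

seen_hashes: set[str] = set()  # in a real system, a database table

def normalize(comment: str) -> str:
    """Collapse case and whitespace so trivial edits don't evade the check."""
    return " ".join(comment.lower().split())

def is_repetitive(comment: str) -> bool:
    """Return True when this comment text has already been posted."""
    digest = hashlib.sha256(normalize(comment).encode()).hexdigest()
    if digest in seen_hashes:
        return True  # hide the duplicate; optionally ban the poster
    seen_hashes.add(digest)
    return False

print(is_repetitive("Buy cheap followers now"))   # False: first occurrence
print(is_repetitive("Buy  CHEAP followers now"))  # True: duplicate once normalized
```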

Abuse Is More Than Name-Calling

Users can abuse one another and the platform in ways that go beyond name-calling in the comments. One example is serial flagging: when a user flags another user’s content as a violation when it clearly is not. If serial flagging is a violation on your site, you might ban users when more than 50% of the content they flag does not violate your community guidelines. Leverage a user’s historical information to make the judgment call.
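
As a sketch, the 50% rule above could be checked against a user’s flag history like this; the history data and the minimum-flag threshold are illustrative assumptions:

```python
# Each entry records whether a moderator upheld the flag as a real violation.
flag_history = {
    "flagger7": [("c101", False), ("c102", False), ("c103", True), ("c104", False)],
}

def is_serial_flagger(username: str, min_flags: int = 3) -> bool:
    """Return True when more than half of a user's flags were unfounded."""
    flags = flag_history.get(username, [])
    if len(flags) < min_flags:
        return False  # not enough history to make a judgment call
    unfounded = sum(1 for _, upheld in flags if not upheld)
    return unfounded / len(flags) > 0.5

print(is_serial_flagger("flagger7"))  # True: 3 of 4 flags were unfounded
```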

Make Unacceptable Content Crystal Clear

Most of us have a rough sense of what is or isn’t allowed in comment content. But a clear, unassailable description in your community guidelines helps prevent violations in the first place and gives your moderators a reference point that clearly defines unacceptable content. Examples of content to explicitly define as unacceptable include:

  • Personal attacks
  • Vulgar or obscene content
  • Libelous or defamatory statements
  • Anything that can be described as threatening, abusive, pornographic, profane, indecent or otherwise objectionable

You Reserve the Right to Review and Moderate All Comment Content

Ultimately, you are in control of your online community. Remind users in your community guidelines that you reserve the right to remove or edit comments and to permanently block any user in violation of your terms and conditions. This umbrella statement gives you full control over the content your community produces, helping ensure discourse remains positive and productive.

Click here for more information on Viafoura Content Moderation, which provides you with all the tools you need to ensure conversations remain civil in even the largest online communities.

Interested in learning more about content moderation?

Contact us today to learn how Viafoura Automated Moderation is empowering media companies to manage their communities in real-time.

Connect Now