Ready your content moderators because 2.3 million active users — many of whom are eager to encourage violence, racism, antisemitism, antifeminism and conspiracy theories — have lost their home base on Parler.
As a social platform that encourages free speech with practically no moderation or fact-checking, Parler has gained a massive user base of people with radical views.
At least, that was the case until Apple and Google booted Parler from their app stores in response to the platform's role in organizing the January 6th attack on the U.S. Capitol. Even Amazon Web Services (AWS), which hosted Parler, has abandoned the company, pushing the platform mostly offline.
“We’ve seen a steady increase in this violent content on [Parler’s] website, all of which violates our terms,” reads a letter that AWS sent to Parler’s chief policy officer. “It’s clear that Parler does not have an effective process to comply with the AWS terms of service.”
Though a bare-bones version of Parler has recently popped up on a Russian-hosted site, the platform will likely continue to be banned from the mobile app stores that account for most of its users.
With Parler practically scrubbed from the internet, its extreme users will be searching for other media platforms they can use to amplify their radical perspectives. Digital media companies and online publishers will need to prepare for a possible frenzy of visitors with loud, destructive voices, who believe content moderation is a threat to free speech.
Leaving your digital properties vulnerable to these toxic commenters can scare away your loyal community members and damage positive conversations.
Instead, here’s what you can do to prevent ex-Parler users, or any other radical and offensive voices, from wreaking havoc on your digital social spaces:
Make Sure You Have Clear, Easy-to-Access Community Guidelines
Your team may already have a general sense of what is and isn't allowed in comments. But spelling it out in a clear, unambiguous description within your community guidelines can help prevent initial violations and give your moderators a reference point that explicitly defines unacceptable content.
Examples of content to explicitly define as unacceptable include:
- Personal attacks
- Vulgar or obscene content
- Libelous or defamatory statements
- Anything that can be described as threatening, abusive, pornographic, profane, indecent or otherwise objectionable
Be sure to post your guidelines in a visible area of your website so that your digital visitors can access them with ease.
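As a rough illustration, here is one way those categories could live in a single structured place so that the public guidelines page and your moderation tooling draw from the same source. The category names mirror the list above; the structure and function names are hypothetical placeholders, not a description of any particular platform's setup.

```python
# A minimal, hypothetical sketch: keep community guidelines in one structured
# place so the public guidelines page and moderator tooling share a single source.
# Category names mirror the list above; everything else is illustrative.

COMMUNITY_GUIDELINES = {
    "personal_attacks": "Personal attacks on other community members or staff.",
    "vulgar_obscene": "Vulgar or obscene content.",
    "defamation": "Libelous or defamatory statements.",
    "otherwise_objectionable": (
        "Anything threatening, abusive, pornographic, profane, "
        "indecent or otherwise objectionable."
    ),
}

def render_guidelines() -> str:
    """Render the guidelines as plain text for a visible page on the site."""
    lines = ["Our community does not allow:"]
    lines += [f"- {description}" for description in COMMUNITY_GUIDELINES.values()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_guidelines())
```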
Have an Escalation Plan
In the case of an emergency — like the threat of an active shooter at your headquarters — your team must have a clear procedure in place. There are a few crucial questions you can ask your team to help them prepare for these types of threats:
- Is there a clear chain of command in an emergency?
- When do you alert the police versus the organization you’re protecting?
Distinguish between the different types of threats — non-urgent, semi-urgent, general and specific — and outline how moderators should respond to each of them.
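For illustration only, here is one way a moderation team might encode that kind of escalation matrix. The threat levels, contacts and actions below are hypothetical placeholders; a real plan should reflect your own chain of command and local authorities.

```python
# A hypothetical escalation matrix: the levels, contacts and actions are
# placeholders to adapt to your own chain of command and local authorities.

ESCALATION_PLAN = {
    # (urgency, specificity) -> ordered list of steps for moderators
    ("non-urgent", "general"): ["Remove the comment", "Log it for weekly review"],
    ("non-urgent", "specific"): ["Remove the comment", "Notify the moderation lead"],
    ("semi-urgent", "general"): ["Remove the comment", "Notify the moderation lead"],
    ("semi-urgent", "specific"): ["Remove the comment", "Notify the moderation lead",
                                  "Alert the organization you're protecting"],
    ("urgent", "specific"): ["Remove the comment",
                             "Alert the organization you're protecting",
                             "Contact the police"],
}

def escalation_steps(urgency: str, specificity: str) -> list[str]:
    """Return the ordered escalation steps for a reported threat."""
    return ESCALATION_PLAN.get(
        (urgency, specificity), ["Notify the moderation lead"]
    )

# Example: a specific, urgent threat triggers the full chain of escalation.
print(escalation_steps("urgent", "specific"))
```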
Update Your Banned Word List/Moderation Algorithm
Did you know that users within a community can develop new phrases to spread offensive and dangerous messages?
This was the case for one publisher when Viafoura’s moderators noticed that trolls were posting a recurring phrase in community social spaces: “BOB crime.” Our moderators quickly realized that this phrase was being used in offensive contexts and, after investigating, found out that it stood for “Black-on-Black crime,” a phrase commonly used to undermine the Black Lives Matter movement.
The moderation algorithm was quickly adjusted to prevent relevant comments from being posted within that publisher’s community. However, this is just a single example of many where new phrases are created within a community to maneuver around basic moderation systems.
The bottom line is that language evolves.
To reinforce community standards successfully, it’s essential that moderation algorithms and ban word lists are updated quickly as new, offensive language is discovered.
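As a minimal sketch of that workflow, the snippet below shows a simple, updatable banned-phrase filter. The function names are hypothetical, and a production moderation system would layer in context analysis and human review, but it illustrates how quickly a newly discovered phrase like “BOB crime” can be added to the block list.

```python
import re

# A minimal sketch of a keyword-based moderation filter with an updatable
# banned-phrase list. A production system would add context analysis and
# human review; this only shows how fast a new phrase can be blocked.

BANNED_PHRASES: set[str] = set()

def add_banned_phrase(phrase: str) -> None:
    """Add a newly discovered coded phrase so it is blocked going forward."""
    BANNED_PHRASES.add(phrase.lower())

def violates_ban_list(comment: str) -> bool:
    """Return True if the comment contains any banned phrase (case-insensitive)."""
    text = comment.lower()
    return any(
        re.search(r"\b" + re.escape(phrase) + r"\b", text)
        for phrase in BANNED_PHRASES
    )

# Example: once moderators flag the coded phrase, future comments using it are caught.
add_banned_phrase("BOB crime")
print(violates_ban_list("Why won't anyone talk about BOB crime?"))  # True
```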
Be Prepared to Block IP Addresses
In the digital world, the general belief is that the more eyeballs a piece of content can get, the better. The end goal for media executives is typically to gain and engage more site visitors to maximize subscriptions. However, when it comes to visitors, quantity isn’t always better than quality.
“Don’t be afraid to ban users,” says Leigh Adams, director of moderation solutions at Viafoura. “A lot of newspapers are afraid to ban users because they want the audience, but when you allow trolls and other toxic users to take over, you’re actually scaring away more valuable visitors.”
A small number of quality commenters offers a brand more value than a large crowd of commenters who destroy the safety and trust between an organization and its loyal followers.
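If it helps to picture it, here is a rough sketch of checking incoming comments against user and IP blocklists before they ever reach your community. The data structures and function names are illustrative assumptions, not a description of any specific platform; a real system would persist the lists and handle shared or rotating IP addresses carefully.

```python
# A rough, illustrative sketch of user and IP blocklists; a real platform
# would persist these and account for shared or rotating IP addresses.

BANNED_USER_IDS: set[str] = set()
BANNED_IP_ADDRESSES: set[str] = set()

def ban_user(user_id: str, ip_address: str | None = None) -> None:
    """Permanently ban an account and, optionally, the IP address it posted from."""
    BANNED_USER_IDS.add(user_id)
    if ip_address:
        BANNED_IP_ADDRESSES.add(ip_address)

def is_blocked(user_id: str, ip_address: str) -> bool:
    """Reject submissions from banned accounts or banned IP addresses."""
    return user_id in BANNED_USER_IDS or ip_address in BANNED_IP_ADDRESSES

# Example: after repeated violations, a troll's account and IP are both banned.
ban_user("troll_account_42", "203.0.113.7")
print(is_blocked("troll_account_42", "198.51.100.9"))  # True (account is banned)
print(is_blocked("new_visitor", "203.0.113.7"))        # True (IP is banned)
```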
Ultimately, you are in control of your online community.
Just remind users in your community guidelines that you reserve the right to remove or edit comments and to permanently block any user who violates your terms and conditions. This umbrella statement gives you full control over the content your community produces and helps keep discourse positive and productive.
At the moment, we are living in a time of unpredictable change and misinformation. Whether or not any of Parler’s users make their way onto your website or app, it’s important to be prepared to handle and discourage any toxic behavior. Maintaining positive and productive social spaces will help to strengthen engagement around your brand while protecting its reputation.
Need help identifying and stopping trolls? Check out our guide written by our head of moderation services on troll hunting.