
Finally: Facebook is Going to Ban White Nationalist and Far-Right Pages

Facebook has announced that it will block “praise, support, and representation of white nationalism and separatism” on Facebook and Instagram starting next week.

The company has also pledged to improve its moderation strategy to identify and delete material published by terrorist groups. In addition, Facebook users found searching for terms associated with terrorism and the far-right will be directed towards Life After Hate, a charity that combats extremism.

Previously, Facebook had allowed some white nationalist pages it categorized as not overtly racist. However, some unmoderated groups included extremist content such as calls for the creation of a white ethnostate. When challenged on this policy, the social media giant stated that this kind of rhetoric was similar to “things like American pride and Basque separatism, which are an important part of people’s identity”.

Despite its leniency in the past, three months of consultation with “members of civil society and academics” led the social network to conclude that white nationalism could not be meaningfully distinguished from white supremacy and other forms of far-right extremism. In the statement, Facebook said:

“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism.”

Social networks have come under fire after a live stream of the recent New Zealand mosque attack spread rapidly across several platforms. Facebook has admitted that the video was viewed over 4,000 times before it was removed. Within 24 hours, the company had to block 1.2 million copies at the upload stage and delete another 300,000 that made it past moderators.

To manage this problem, Facebook has implemented new machine learning tools to filter out far-right content. In the statement, Facebook acknowledged that it needs to get better and faster at removing violent or hateful content, and that although it is making progress, there is still a lot of work to do.

In the wake of the shootings earlier this month, several leaders and activists have called on social media companies to take responsibility for the content shared on their platforms. The Prime Minister of New Zealand, Jacinda Ardern, said social media networks were “the publisher, not just the postman”, highlighting their potential liability.

Meanwhile, other tech companies are attempting to crack down on the spread of far-right content. Reddit banned a disturbing subreddit titled “watchpeopledie” after clips of the New Zealand attack were shared on the forum, and the Steam gaming network removed “tributes” to the killer.
