EXPOSED: Internal Documents Uncover Facebook’s Actual Guidelines for Censorship

Free Thought Project – by Claire Bernish

Facebook’s reasons for deleting posts have long seemed an irrational morass to its millions of frustrated users, since the social media platform doesn’t explain itself. Internal documents disclosed in a new report, however, finally reveal why posts are taken down.

German outlet SZ-Magazin obtained internal Facebook documents mapping how the platform determines which posts deserve the hate-speech ban-hammer. While certainly enlightening, the formulas will be no less infuriating to advocates for free speech.

“We remove speech that targets people based on a longstanding trait that shapes their identity and ties them to a category that has been persistently discriminated against, oppressed, or exploited,” one document states.

“We do not allow hate speech on our platforms because it creates an environment of intimidation and exclusion that limits people’s willingness to share freely and represent themselves authentically.”

But as SZ keenly points out, “This goes against Facebook’s business model: the company wants to offer people a space in which they will share contents and interact with one another as much as possible – all the while looking at a lot of advertising.”

The discrepancies don’t end there, either: content moderators, who attend internal training workshops on the finer points of, essentially, censoring people’s speech, are taught that not all hate speech deserves deletion.

Crowning itself arbiter of hate speech, Facebook took it upon itself to determine which criteria merit the status of “protected category” (PC): currently sex, religious affiliation, gender identity, national origin, race, ethnicity, sexual orientation, disability, and serious illness. According to SZ, Facebook will delete any post in which one of these protected categories comes under attack.

But it’s more complex than just that — there are subcategories, as well.

  • Age: youth, senior citizens, teenagers, etc.
  • Employment status and occupation: unemployed, doctor, teacher, etc.
  • Continent of origin: European, South American, etc.; American and Australian origins are considered special risks, while Asian is protected under the race category.
  • Social status: rich, poor, middle class, etc.
  • Appearance: hair color, size, height, etc.
  • Political affiliation: Republican, Democrat, Communist, Socialist, etc.
  • Religion: Muslim, Catholic, Jewish, etc.

In these subcategories, condemning someone individually based on their religion or national origin would be grounds for post deletion — but condemning a religion or a country would not be.

In practice, this system of categorization becomes yet more complicated, as one protected category combined with a second makes a third. SZ explains:

“A protected category (PC for short) combined with another protected category results in yet another protected category. Take Irish women, for instance. Here, the ‘national origins’ and ‘sex’ categories apply. So if someone were to write ‘Irish women are dumb,’ they would be breaking the rules and their post would be deleted.

“However, combining a protected category with an unprotected category results in an unprotected category. Irish teenagers are the example. While they are protected under the national origin category, the term teenager does not enjoy special protection (the same applies to terms such as ‘retiree’ or ‘youth,’ for instance). For this reason, the sentence ‘Irish teenagers are dumb’ does not need to be deleted.”
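
In other words, the combination rule behaves like a logical AND over the groups a post targets: the result is protected only if every term in the combination is itself a protected category. A minimal Python sketch of that logic follows; the category names come from the report, but the function and data structures are hypothetical, since Facebook’s actual tooling is not public.

    # A minimal sketch of the combination rule SZ describes. The category
    # names come from the report, but the function and data are hypothetical;
    # Facebook's real moderation tooling is not public.
    PROTECTED = {
        "sex", "religious affiliation", "gender identity", "national origin",
        "race", "ethnicity", "sexual orientation", "disability", "serious illness",
    }

    def target_is_protected(categories):
        """PC + PC yields a PC, while PC + non-PC yields a non-PC, so a
        target is protected only if every category describing it is protected."""
        return all(c in PROTECTED for c in categories)

    # "Irish women are dumb": national origin + sex -> protected -> delete.
    print(target_is_protected({"national origin", "sex"}))   # True
    # "Irish teenagers are dumb": national origin + age -> unprotected -> keep.
    print(target_is_protected({"national origin", "age"}))   # False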

In addition to hate speech, Facebook has a strict policy against bullying — but not just bullying of individuals. Posting an image of several women in bikinis would be kosher — but asking people to rate them based on physical appearance could cause the post to be removed. SZ notes, for example, President-Elect Donald Trump’s statement, “a small-chested woman has a hard time getting ten out of ten points,” would be censored.

Oddly located in the bullying section of the manual are bodily functions: pictures of people urinating, defecating, or menstruating can be allowed if the person depicted isn’t being humiliated and the image isn’t accompanied by a bullying caption.

Self-destructive behavior would not be censored as long as certain vaguely defined conditions apply: if a person self-injures, the post stays up as long as the wound isn’t too gory, apparently, so other Facebook users can see the ‘cry for help.’ Content moderators must then offer the distressed person the appropriate hotline numbers.

A section of the document dealing with migrants seems rather fraught, as that category hasn’t been decisively defined. As HeatStreet explains, “writing ‘fucking Muslims’ is prohibited because religious affiliation is a protected category. However, writing ‘fucking migrants’ is allowed, as ‘migrant’ isn’t a category Facebook wants to protect.

“The rule claims that advocating hate against migrants is allowed under certain circumstances: posts saying ‘migrants are dirty’ are allowed, while ‘migrants are dirt’ aren’t.”
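
Under the leaked rules, then, ‘migrant’ effectively occupies a middle tier: generic insults against migrants pass, while dehumanizing statements are deleted. A hypothetical sketch of that three-tier decision follows, with illustrative group lists; none of this is Facebook’s actual code.

    # Hypothetical tiers inferred from the report; the real lists are not public.
    PROTECTED_GROUPS = {"Muslims", "Catholics", "Jews"}   # religion is a PC
    QUASI_PROTECTED_GROUPS = {"migrants"}                 # special migrant rules

    def should_delete(group, dehumanizing):
        """Attacks on protected groups are deleted outright; for quasi-protected
        groups only dehumanizing speech is deleted, per SZ's examples."""
        if group in PROTECTED_GROUPS:
            return True                # "fucking Muslims" -> delete
        if group in QUASI_PROTECTED_GROUPS:
            return dehumanizing        # "migrants are dirt" -> delete
        return False                   # unprotected target -> keep

    print(should_delete("Muslims", dehumanizing=False))   # True
    print(should_delete("migrants", dehumanizing=True))   # True
    print(should_delete("migrants", dehumanizing=False))  # False ("migrants are dirty")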

Interestingly, Facebook has a more lax set of standards for public figures, including those “elected to public office,” “people with more than 100,000 followers on social media,” those “employed by broadcast or news media outlets and make public statements,” or people “who have been mentioned in news reports five times or more in the past two years.”
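
Those criteria amount to a simple disjunction: meeting any one of them qualifies someone as a public figure under the leaked rules. A hypothetical sketch follows, with field names invented for illustration, since how Facebook actually encodes this is unknown.

    from dataclasses import dataclass

    @dataclass
    class Person:
        # Field names are invented to mirror the leaked criteria.
        holds_elected_office: bool
        social_media_followers: int
        media_employee_making_public_statements: bool
        news_mentions_past_two_years: int

    def is_public_figure(p):
        """Any single criterion suffices, per the leaked guidelines."""
        return (p.holds_elected_office
                or p.social_media_followers > 100_000
                or p.media_employee_making_public_statements
                or p.news_mentions_past_two_years >= 5)

    print(is_public_figure(Person(False, 250_000, False, 0)))  # True
    print(is_public_figure(Person(False, 1_000, False, 2)))    # False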

Facebook’s notorious censorship has expanded greatly over the years, and the content moderators tasked with applying its intricate and confusing rules have a particularly stressful job under pressure from the company. According to SZ, some 600 moderators in Berlin are paid only minimum wage and have anonymously complained they are ill-prepared, under-trained, and overloaded as they try to comprehend the conflicting mix of strict and lax guidelines for policing posts.

Some reported being traumatized by having to face graphic pictures of torture, child pornography, and even murder.

However, while removing posts depicting murder and torture could generally be justified, it’s necessary to remember that context is imperative. Consider the recent assassination of Russia’s ambassador to Turkey, Andrei Karlov, and the soon-to-be-iconic image taken by an Associated Press photographer of the killer pointing into the air while standing feet from Karlov’s prone body. Granted, Karlov died later at the hospital, but the photograph unmistakably depicted the result of a shooting.

That image, thus far, hasn’t been censored by Facebook’s content moderators; but another iconic image, the Vietnam War photograph of naked nine-year-old Phan Thi Kim Phuc fleeing a napalm attack on her village, was deleted repeatedly, to the world’s consternation and condemnation. Bizarrely, the famous picture of Rosa Parks being fingerprinted after her arrest for refusing to give up her bus seat was also removed for failing to meet Facebook’s Community Standards.

Advocates against all censorship wisely see that, once a single item is banned from public scrutiny, the door opens for more. Maybe, just maybe, it would behoove CEO Mark Zuckerberg to heed the advice of free speech advocates and shut the lid on this censorship Pandora’s box once and for all.
Read more at http://thefreethoughtproject.com/facebook-censorship-guidelines/

