Leaks “expose peculiar Facebook moderation policy” (bbc.co.uk).
“It said the manuals revealed the criteria used to judge if posts were too violent, sexual, racist, hateful or supported terrorism. The Guardian said Facebook’s moderators were ‘overwhelmed’ and had only seconds to decide if posts should stay. The leak comes soon after British MPs said social media giants were ‘failing’ to tackle toxic content.”
To cut a long story short, the more than 100 manuals used internally by F’book cover a vast array of sensitive subjects, including hate speech, revenge porn, self-harm, suicide, cannibalism and threats of violence, yet moderators’ decisions on whether content should “stay or go” are “inconsistent” and “peculiar”, or perhaps just a damn cop-out.
Basically, moderation seems to come down to this: if it’s a name that may cause a fuss, like a post suggesting America’s Presidential Orange Don should be un-alt shot, or else something that may get them sued or earn them bad press, then the couple of seconds of attention a moderator gets to give it is spent taking it down; for us less important mortals, the post is ignored and swept under the virtual rug unless someone with enough voice complains.
“The decision-making process for judging whether content about sexual topics should stay or go were among the most ‘confusing’, they said.”
Which may have a lot to do with them not wishing their remaining millennial demographic to decide Facebook isn’t hip ’n’ sexy any more and move on to whatever the next big thing turns out to be, whenever it finally arrives.
“As well as human moderators that look over possibly contentious posts, Facebook is also known to use AI-derived algorithms to review images and other information before they are posted. It also encourages users to report pages, profiles and content they feel is abusive.”
“Community moderation”, which more often than not is used to push an agenda (techcrunch.com, Oct. 2014), or indeed just to report a post before someone else gets more likes than you.
“In early May, the UK parliament’s influential Home Affairs Select Committee strongly criticised Facebook and other social media companies as being ‘shamefully far’ from tackling the spread of hate speech and other illegal and dangerous content. … Soon after, Facebook revealed it had set out to hire more than 3,000 more people to review content.”
Recent/related stories
- Facebook admits flaw in image moderation after BBC report (Latest Picks 15th March 2017)
- Extremist AdSense? Extremists and terrorists profit off ads on YouTube, and so does Google (thisisnocave.blogspot.co.uk, 23rd March 2017)
- Facebook’s annus horribilis: 2016—the year Facebook became the bad guy (thisisnocave.blogspot.co.uk, 18th December 2016)
- Facebook can hear you, uses people’s phones to listen to what they say (Latest Picks 1st June 2016)