Following last week’s terrorist attack and Facebook’s struggle to keep a viral video of the event off its platform, we revisit this story from 7 August 2018, which took a look at the lives of those who moderate content on the social media site.

An undercover reporter has lifted the lid on how Facebook decides what you see, secretly training as a moderator for the social network and being told to leave offensive content on the site.

The documentary, aired on ABC’s Four Corners, revealed that graphic videos of child abuse, school bullying, self-harm and hate speech were being left on the site by moderators who were seemingly encouraged to mark the toxic content as “disturbing” rather than delete it entirely.

Inside Facebook: Secrets of the Social Network saw a British reporter pose as an employee of Dublin-based CPL Resources, undergoing training for the Facebook contractor.

The moderators review content reported for possible breaches of Facebook’s community standards, and are given three options: ignore, delete, or mark as disturbing.

Via interviews and secretly recorded footage, he revealed “serious problems” with how Facebook’s guidelines were applied to the content published on its platform.

A new documentary sheds light on the darker side of Facebook. Photo/Getty Images.

And he found some pages were subject to “shielding”, which allowed offensive content to remain on the site and be subject to another level of review, even if guidelines had already been breached.

When the documentary aired in the UK, Facebook released a statement saying the report revealed practices “that do not reflect Facebook’s policies or values and fall short of its high standards”.

“We take these mistakes incredibly seriously and are grateful to the journalists who brought them to our attention.”

Among the “mistakes” highlighted in the documentary was the undercover reporter being shown video of a man kicking and beating a boy, and told it should be marked as “disturbing”, not deleted.


That video had been reported several years ago by child abuse campaigner Nicci Astin, but Facebook told her the video did not breach its guidelines, so the video remained on the site.

“Initially you see a little tiny boy, must be about two or three in the video, with a man talking to him and shouting at him and then he was hitting him and punching him,” Ms Astin told Four Corners.

Facebook’s steps to protect against the distribution of hate speech and misinformation still let some content sneak through. Photo/Getty Images.

“He was throwing him about and then he was stamping and kicking on him and then obviously the video cut. So, you’re left with knowing absolutely nothing apart from a sickening feeling that you’ve just … you’ve seen some man beating up a tiny little boy.

“You know yourself from watching that video that that child’s not just got up and skipped off out to play. You know he’s hurt.”

But she said when she reported the video in 2012 “we received a message back saying while it was disturbing, it did not have a celebratory caption, so it was not removed.”

Richard Allan, Facebook’s vice-president of public policy, told Four Corners it “should have been taken down”.

Asked why Facebook allowed content like that on the site, Allan said “in order to aid in the possible identification and rescue of victims of physical child abuse, we may not immediately remove this content from Facebook.”

He said companies like CPL were “frontline reviewers, but behind them sits a team of child safety experts, they’re actually Facebook full-time staff – they will make an assessment of whether the child is at risk, they will make a decision about what to do with the content, including referring it to law enforcement agencies where that’s appropriate”.

Back at CPL, when the journalist asked colleagues and trainers why graphic violence would be marked as disturbing rather than removed, replies ranged from “we’d just mark it as disturbing so you can still share them” to deleting being “too much like censorship” and, in the case of teens fighting, “if a young kid sees another kid getting the shit kicked out of them, it’s for their safety”.


Facebook says it treads a fine line between unacceptable content and freedom of speech.

But Roger McNamee, a former mentor to Facebook founder Mark Zuckerberg, said toxic content was the “crack cocaine” of Facebook.

“When they say freedom of speech, what they’re really saying is: ‘We really want to permit people to do whatever they want on this platform, and we will do the bare minimum to make that socially acceptable’,” he said.

“From Facebook’s point of view this is, essentially … the crack cocaine of their product.

“It’s the really extreme, really dangerous form of content that … attracts the most highly-engaged people on the platform.

“If you’re going to have an advertising-based business, you need them to see the ads, so you want them to spend more time on the site.

“And what Facebook has learned is that the people on the extremes are the really valuable ones, because one person on either extreme can often provoke 50 or 100 other people, and so they want as much extreme content as they can get.”

Richard Allan, Facebook’s vice-president of policy, denied harmful content was left on the platform to make money. Photo/Getty Images.

One CPL staffer told the reporter violent content was left on Facebook because “if you start censoring too much, then people lose interest”.

But Allan said shocking content “does not make us more money — that’s just a misunderstanding of how the system works”.

“That’s not our experience of the people who use our service round the world,” he said.

“There is a minority who are prepared to abuse our systems and other internet platforms to share the most offensive kind of material.

“But I just don’t agree that that is the experience that most people want and that’s not the experience we’re trying to deliver.”


Video of two teenage girls fighting, shared more than 1000 times, wasn’t removed because it had a caption condemning the violence.

“The other girl gets up and basically just goes to town on my daughter, and just repeatedly knees and kicks her in the head,” the mother, who tried to have the video removed, told Four Corners.

“She looks like a wild animal. To wake up the next day and find out that literally the whole world is watching … it was humiliating for her.”

“You see the images and it’s horrible, it’s disgusting. Why was it a discussion whether to take that down? I don’t get it. You know, that’s someone’s child being battered in the park. It’s not Facebook entertainment.”

“There are other ways to spread awareness without putting a video out there with someone’s daughter being battered. If they were watching a video of their own daughter, what decision would they make about that video?”

Allan said if a parent or guardian saw a video of their child in circumstances that they object to, “they do have the right to insist that we take it down and we do take it down where we’re made aware”.

“If the content is shared in a way that praises or encourages that violence, it’s going to come down. But where people are highlighting an issue and condemning the issue, even if the issue is painful, there are a lot of circumstances where people will say to us, ‘Look, Facebook, you should not interfere with my ability to highlight a problem that’s occurred’.”


The documentary also covered the issues of moderating images of self-harm, underage users, and alleged “shielding” of far-right pages with large followings.

Asked why any such images of self harm would be left on the site, Allan said “There’s actually a very strong valid interest from that person, if they’re expressing distress, to be able to express their distress to their family and friends through Facebook and then get help”.

“If we took it down, the family and friends would not know that that individual was at risk.”

Facebook is facing fierce criticism over allowing hate speech to be published on the platform. Photo/123RF.

The undercover reporter was also told racially abusive comments against ethnic or religious communities were allowed provided the targets were described as immigrants.

“Shielded” pages with large followings received special protection, it was claimed: deletions of content or complaints were referred upwards to another moderator for review rather than taking effect immediately.

The far-right Britain First Facebook page had more than two million followers when it was deleted in March after its leaders were convicted of racially aggravated harassment.

“They had 8 or 9 violations and you’re only allowed five, but they had a lot of followers so were obviously making a lot of money for Facebook,” one CPL worker said.

“To reach the actual violation you have to jump through a lot of hoops to get there,” was one trainer’s explanation.

Allan said it was not a “discussion about money, this is a discussion about political speech. And I think people would expect us to be careful and cautious before we take down their political speech.”

Both Facebook and CPL Resources said they were reviewing policies and retraining staff in the wake of the report.

– News.com.au



