Earlier this month Facebook released a report that answers the question: “How does Facebook measure its efforts to keep bad content off Facebook?”

The report looks at six categories of content that breach its community guidelines:

  1. Graphic Violence
  2. Adult Nudity and Sexual Activity
  3. Terrorist Propaganda
  4. Hate Speech
  5. Spam
  6. Fake Accounts

According to Facebook, the primary measure it uses to determine whether content is abusive or harmful is impact.

The formula to achieve this is:

Total impact = views of violating content × impact of violating content per view

Because Facebook can’t actually gauge impact, the formula is theoretical, and the company instead uses it within a triage setting.

This was their example:

Suppose someone posts a naked image on Facebook. That’s against FB’s policies, and they would work to remove it.

But suppose that image was posted by a man seeking revenge on the woman who broke up with him? We would consider that to have a greater negative impact and we would escalate it in our removals queue – somewhat like triage in an emergency room.

Every patient matters. But the most critical cases go first.
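To make the formula concrete, here is a minimal sketch of how a triage queue scored on total impact might work. This is my own illustration in Python, with invented post names, view counts and per-view impact values; Facebook does not publish its actual scoring code.

```python
# A minimal sketch of triage by "total impact", using the formula as
# stated in the report. All posts, views and per-view impact values
# are invented for illustration.

posts = [
    # (post_id, views, estimated impact per view)
    ("spam_link",     100_000, 0.001),
    ("graphic_photo",     500, 0.5),
    ("revenge_image",      30, 10.0),
]

def total_impact(views, impact_per_view):
    """Total impact = views of violating content x impact per view."""
    return views * impact_per_view

# Triage: the most "critical" cases go to the front of the removals queue.
queue = sorted(posts, key=lambda p: total_impact(p[1], p[2]), reverse=True)

for post_id, views, ipv in queue:
    print(f"{post_id}: total impact = {total_impact(views, ipv):,.0f}")
```

Note that the ranking depends entirely on the per-view impact guesses, and that dependence is where the problems below come in.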

But I spot two very big problems.

First, it uses impact (which Facebook admits is hard to determine) as part of the equation to calculate total impact.

Facebook admits there is an element of subjectivity in how it categorizes and prioritizes content, hence the triage method.

Second, based on this equation, a revenge post targeting a victim with a few hundred friends, where the image is seen by only, say, 10-40 of those friends, would still not be classified as having a large negative impact.

If the number of views is what really determines the negative impact score, and therefore the triage priority, Facebook could simply leave the offending post up until total views escalate enough to hit a “critical” setting.

Since Facebook still refuses to measure how long it takes to remove an offending post, and the impact naturally grows the longer a post stays up, leaving length of time out of this total-impact equation seems, well, stupid af.
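To put numbers on that, here is a hypothetical extension of the same sketch. The first function is the formula as stated; the second is my own variant, not Facebook’s, that also weights how long the post has been online. Under the stated formula, the score of a low-view revenge image never moves; with a time term, it escalates the longer Facebook leaves it up.

```python
# Hypothetical comparison, not Facebook's code: the stated formula vs.
# a variant that also weights hours online. All numbers are invented.

def stated_formula(views, impact_per_view):
    # Score only grows if views grow.
    return views * impact_per_view

def with_time(views, impact_per_view, hours_online):
    # My own extension: harm accumulates the longer the post stays up.
    return views * impact_per_view * (1 + hours_online)

views, impact_per_view = 30, 1.0  # seen by ~30 of the victim's friends

for hours in (1, 24, 72):
    stated = stated_formula(views, impact_per_view)
    timed = with_time(views, impact_per_view, hours)
    print(f"after {hours:>2}h: stated = {stated:>4.0f}, with time = {timed:>6.0f}")
# after  1h: stated =   30, with time =     60
# after 24h: stated =   30, with time =    750
# after 72h: stated =   30, with time =   2190
```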

Fake Accounts

Fake accounts are a personal interest of mine, not because of the latest propaganda campaigns fuelled by nation states, but because of their use in scamming the elderly.

Facebook said in the report that fake accounts are created automatically and in large volumes using scripts or bots, with the intent of spreading spam or conducting illicit activities such as scams.

In Q1 2018, Facebook disabled 583 million fake accounts, down from 694 million in Q4 2017.

Odd, considering that online scams are increasing.

Despite FB’s flagging system, which allows users to flag a fake account, FB said that 99% of fake accounts were caught by its own systems before being flagged.

In the report’s words, Facebook shows “the percentage of accounts we took action on that we found and flagged before users reported them.”

And that those whose accounts were flagged were then forced to go through a rigorous process to prove their identity before regaining access.

This is where I call bullshit.

I call bullshit on the metric, I call bullshit on finding fake accounts before people flag them, and I call bullshit on the ‘rigorous’ process.

Below is a report I made on an account purporting to be Stephen Townsend, a United States Army four-star general who apparently was no longer married and was looking for love on Facebook.

He had sent multiple requests to many women, and many fell for the illusion of the highly decorated, highly influential and extremely high-profile general looking for his special someone.

Of course, this is a fake account, and fake military profiles are among the most prolific, but Facebook’s response had me wondering wtf.

Fake accounts used for scams tend not to post much beyond photos, and because the account technically met ‘community guidelines’, Facebook actually did nothing.

Nothing… over a very obvious, very fake account.

My problem with this entire report is that it feels disingenuous.

A combination of ‘there is just so much bad stuff’ and ‘pat me on the back for trying’.

And while I appreciate the volume of content that Facebook now has to go through, I find this report reactionary and lacking in sincere effort to combat harmful content, with a large part of it devoted to lip service.

The report in full can be viewed here; take a look and tell us what you think.

Charis McAwesome, co-founder and manager of hashtagme.co.nz – Charis has worked in digital for over 10 years, specialising in digital management, analytics, digital advertising, social media, digital pathways, and CRM services.

She’s also most likely to call bullshit and get Hashtag in the shit.
