Published: Wed, May 16, 2018
Business | By Kate Woods

Nudity, hate speech and spam: Facebook reveals how much content it kills


Among the most noteworthy numbers: Facebook said it took down 583 million fake accounts during the first quarter of 2018, down from 694 million in the fourth quarter of 2017.

The social media platform revealed the figures in its updated transparency report, which for the first time detailed how much content it removed from Facebook and what types of content it was taking down. The figures are expressed as a share of views rather than a share of posts: a prevalence of 0.22% for graphic violence does not mean that 0.22% of the content posted on Facebook was violent, only that graphic content accounted for 0.22% of total views.
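The distinction between share of posts and share of views can be made concrete with a small sketch. The function name and the sample figures below are illustrative, not Facebook's; the point is only that the metric weights each piece of content by how often it was seen:

```python
def prevalence_by_views(violating_views: int, total_views: int) -> float:
    """Share of all content views that landed on violating content.

    This mirrors the view-based metric described in the report:
    each piece of content counts once per view, not once per post.
    """
    return violating_views / total_views

# Illustrative numbers only. A handful of violating posts that are
# each viewed many times can still produce a small view-based share.
total_views = 1_000_000
violating_views = 2_200

print(f"{prevalence_by_views(violating_views, total_views):.2%}")
```

A single widely shared violating post would therefore raise this number far more than many violating posts that almost nobody saw.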

The amount of content moderated by Facebook is influenced both by the company's ability to find and act on infringing material and by the sheer quantity of items posted by users.

The company previously enforced community standards by relying on users to report violations, which trained staff would then review.


"We have a lot of work still to do to prevent abuse", Facebook Product Management vice president Guy Rosen said.

The response to extreme content on Facebook is particularly important given that it has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda.

In four of the six categories, deletions increased over the previous quarter: spam (up 15%), violent content (up 65%), hate speech (up 56%), and terrorist content (up 73%). Deletions of fake accounts were down 16%, while deletions of nudity and sexual activity saw no change. The report comes amid increasing criticism of how Facebook controls the content it shows to users, though the company was careful to note that its new methods are evolving and not set in stone.

In the area of adult nudity and sexual activity, between 0.07-0.09% of views during the first quarter were of content that violated standards.


Facebook took action on 1.9 million pieces of content for terrorist propaganda.

Facebook also disclosed that it disabled almost 1.3 billion fake accounts in the six months ending in March.

While artificial intelligence is able to sort through nearly all spam and content glorifying al-Qaeda and ISIS, and most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes like race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report. The report indicates Facebook is having particular trouble detecting hate speech, becoming aware of the majority of it only when users report the problem. Facebook hopes to continue publishing reports about its content removal every quarter.

The company has been using artificial intelligence to help pinpoint bad content, but Rosen said the technology still struggles to understand the difference between a Facebook post pushing hate and one simply recounting a personal experience. A recent report from the Washington Post found that Facebook's facial recognition technology may be limited in how effectively it can catch fake accounts, as the tool doesn't yet scan a photo against all of the images posted by all 2.2 billion of the site's users. In this case, 86% of the content was flagged by Facebook's detection technology.

