Facebook reports spike in takedowns of hate speech, terrorism

Facebook Inc reported a sharp increase in the number of posts it removed for promoting violence and hate speech across its apps. File picture: IANS

Published May 13, 2020

San Francisco - Facebook Inc on Tuesday reported a sharp increase in the number of posts it removed for promoting violence and hate speech across its apps, which it attributed to technology improvements for automatically identifying text and images.

The world's biggest social media company removed about 4.7 million posts connected to hate organizations on its flagship app in the first quarter, up from 1.6 million in the 2019 fourth quarter. It also deleted 9.6 million posts containing hate speech, compared with 5.7 million in the prior period.

That marks a six-fold increase in hateful content removals since the second half of 2017, the earliest period for which Facebook discloses data.

The company also said it put warning labels on about 50 million pieces of content related to COVID-19, after taking the unusually aggressive step of banning harmful misinformation about the new coronavirus at the start of the pandemic.

"We have a good sense that these warning labels work. Ninety-five percent of the time that someone sees content with a label, they don't click through to view that content," Chief Executive Mark Zuckerberg told reporters on a press call.

Facebook released the data as part of its fifth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent content rules in response to a backlash over its lax approach to policing material on its platforms, which include its Messenger and WhatsApp mobile apps.

It expanded the report last year to include information about how it enforces rules on photo-sharing app Instagram and said on Tuesday it would begin releasing the data on a quarterly basis.

In a blog post announcing the report, Facebook highlighted improvements to its "proactive detection technology," which uses artificial intelligence to detect violating content as it is posted and remove it before other users can see it.

"We're now able to detect text embedded in images and videos in order to understand its full context, and we've built media matching technology to find content that's identical or near-identical to photos, videos, text and even audio that we've already removed," the statement said.

Improvements to that technology also enabled the proactive removal of more drug-related and sexually exploitative content, the company said.

With fewer moderators available during the pandemic, Facebook has relied more on automated tools to police content as conspiracy theories about the coronavirus have spread online.

On the call, Zuckerberg said contractors in some parts of the world were starting to return to their offices, but cautioned that the coronavirus adjustments were likely to have a heavier impact on the data in the second quarter.

Cindy Otis, a disinformation researcher and former CIA analyst, noted that coronavirus-related abuse likewise spiked in April, after the period covered in the report.

She urged Facebook to disclose how quickly it removes posts, a key indicator of the effectiveness of its systems, as it often appears to act only after content has gone viral and spread to other platforms.

"The pandemic has been the largest event in disinformation and misinformation history," she said, "and that does not appear to show in their numbers they provide."

Reuters
