INTERNATIONAL - Google, Twitter and Facebook have taken significant steps to expunge Islamic State propaganda and other terrorist content from their platforms.

But taking no chances, the EU is set to propose a tough new law anyway - threatening internet platforms, big and small, with fines if they fail to take down terrorist material, according to people familiar with the proposals that could be unveiled as soon as September.

While the details of the measures are still being thrashed out, they would likely be based on the EU guidance from earlier this year, said the people, who asked not to be identified because the details aren’t yet public.

The EU in March issued guidelines giving internet companies an hour from notification by authorities to wipe material such as gruesome beheading videos and other terror content from their services, or face possible legislation if they fail to do so.

“It’s true that the positive role that some of the big companies are playing today is incomparable to the situation three years ago,” said Gilles de Kerchove, the EU’s anti-terrorism czar. “But so is the scale, breadth and complexity of the problem.” An additional step in the response is “essential,” he said, given the diverse online aspects of the recent attacks in Europe.

Big strides

Large tech firms have been making big strides in the fight to wipe terror propaganda, videos and other messages from their sites, partly thanks to automated tools that in some cases can detect such content before users even see it.

“We haven’t had any major incidents to rush legislation,” said Siada El Ramly, head of Edima, a European trade association representing online platforms including Google, Facebook and Twitter.

Online services take the fight against terrorist content extremely seriously, said Maud Sacquet, senior manager for public policy at the Computer & Communications Industry Association, an industry group that includes Google and Facebook as members.

“This proposal seems rushed, and its publication in the autumn much too early to take into account the outcomes of already ongoing EU initiatives,” she said.

A commission spokesperson declined to provide more details on the proposals.

In April, Google said more than half of the YouTube videos it removes for violent extremism have fewer than 10 views. Facebook said the same month that in the first quarter of this year it either removed, or in a small number of cases flagged for informational purposes, a total of 1.9 million pieces of Islamic State and al-Qaeda content. Twitter says it has suspended a total of more than one million accounts, with 74 percent of accounts suspended before their first tweet.

Some EU member states have been vocal about the dangers of online radicalisation and the spread of terror propaganda, particularly in the wake of deadly terror attacks in European capitals in recent years. In a speech in April, French President Emmanuel Macron called on internet giants to speed up their processes for removing terror content.

Germany didn’t wait around: last year it pushed ahead with new rules that threaten social networks with fines of as much as €50 million if they fail to give users the option to complain about hate speech and fake news, or refuse to remove illegal content.

For companies, detecting harmful content is a constant battle, as some groups keep trying to game their systems to spread their messages online as widely as possible. One tool that has helped: a shared industry database of known terrorist videos and images, maintained by Google, Twitter, Facebook and other companies.

- BLOOMBERG