Tech firms face hefty fines under new EU terror rules

The Facebook Inc. application icon is seen on an Apple Inc. iPad Air in this arranged photograph in Washington, D.C., U.S. Photographer: Andrew Harrer/Bloomberg

Published Sep 12, 2018

INTERNATIONAL - Alphabet Inc.'s Google, Twitter Inc., Facebook Inc. and other tech firms could be slapped with fines as high as 4 percent of annual revenue if they fail to remove terror propaganda from their sites quickly enough under new European Union legislative proposals unveiled Wednesday.

The European Commission, the bloc’s executive body, proposed new legislation forcing internet companies to wipe Islamic State videos and other terror content from their services within an hour of notification for removal by national authorities. Companies would be fined by national governments in the event of systematic failures to remove content.

Wednesday’s proposal follows similar guidelines the EU issued in March. The EU at the time threatened to issue the regulation should the tech firms fall short of expectations.

While large tech platforms have made rapid improvements in their efforts to tackle terror content, partly thanks to automated tools, the EU says some of the platforms have failed to meet the one-hour deadline and need to do more.

"While we have made progress on removing terrorist content online through voluntary efforts, it has not been enough," said Julian King, European commissioner for security policy. "We need to prevent it from being uploaded and, where it does appear, ensure it is taken down as quickly as possible -- before it can do serious damage."

A 4 percent fine would apply only in the event of "systematic failures" to remove content. For Google parent Alphabet, that would amount to more than $4.4 billion, and for Facebook more than $1.6 billion, based on 2017 revenues.
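As a rough sanity check on those figures, assuming full-year 2017 revenues of roughly $110.9 billion for Alphabet and $40.7 billion for Facebook (approximate reported totals, not stated in this article):

```python
# Rough check of the maximum potential fines: 4 percent of annual revenue.
# Revenue figures are approximate 2017 totals, assumed for illustration.
alphabet_revenue = 110.9e9  # ~ $110.9 billion
facebook_revenue = 40.7e9   # ~ $40.7 billion
fine_rate = 0.04            # 4 percent cap under the proposal

print(f"Alphabet: ${alphabet_revenue * fine_rate / 1e9:.1f} billion")
print(f"Facebook: ${facebook_revenue * fine_rate / 1e9:.1f} billion")
```

Both results land just above the $4.4 billion and $1.6 billion figures cited above.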

Automated and machine-learning tools can help tech firms catch malicious posts. But web firms typically also use human reviewers to check posts and avoid removing terror-related material when it appears in a neutral context, such as reporting by news outlets, whistle-blowers or non-governmental organizations. That can make tight turnaround times a challenge for companies that want to avoid over-censoring users.

The EU on Wednesday also called on the companies, member states and Europol to increase their cooperation, including by ensuring that a point of contact at each company and each national authority is reachable 24/7. The tech firms and the EU’s member states will also be required to report back to the commission regularly on the removals of terror content.

The commission proposal still needs approval from the EU’s member states and the European Parliament before it becomes law. EU member states said in late June they welcomed the commission’s intentions to present a legislative proposal in the area.

The move is part of a wider shift by legislators in Europe and the U.S. to hand more legal responsibility to tech firms for the content that appears on their sites. Also on Wednesday, the European Parliament is set to vote on new rules that could require web platforms to actively prevent copyrighted content from appearing on their platforms if rights holders don’t grant them a license. In the U.S., President Donald Trump signed a law in April making websites liable if they knowingly facilitate sex trafficking.

- BLOOMBERG 