Twitter has a digital bouncer

Twitter CEO Dick Costolo. Picture: AP

Published Mar 26, 2015

London - Twitter has hired a digital bouncer.

It's the equivalent of the guy who stands outside your party, keeping out the nasty people who've turned up to call you a waste of oxygen and throw peanuts at you.

Technology entrepreneur Anil Dash noticed it lurking in his Twitter app on Monday: a "quality filter" which "aims to remove all tweets from your notifications timeline that contain threats, offensive or abusive language… or are sent from suspicious accounts".

Only a privileged few have, thus far, been granted access to the filter, which can be activated and deactivated with the flick of a switch. The rest of us must either remain on our guard, blocking abuse manually, or say "sod this" and stop using Twitter altogether.

The latter is what many believe to be the solution to online abuse: if you're too feeble to handle it, walk away. But in recent months Twitter has been forced to recognise that bullied people abandoning the service isn't good for its reputation.

In December it simplified the reporting process and allowed third parties to flag cases of abuse. In February, following an admission from Twitter's CEO, Dick Costolo, that "we suck at dealing with abuse… and we've sucked at it for years", it introduced a way of reporting "doxing" (where bullying spills offline via the posting of personal information). It also began demanding phone numbers from people suspected of harassment. This month has seen the banning of revenge porn and non-consensual nude images, along with new measures to make it easier to report abuse to law enforcement officials.

All of these measures have been criticised for going too far, for not going far enough, or for being misguided in their intent. Twitter's problem might be simple - human beings can behave abominably - but there's no perfect solution. Over 250 million people ladle their unfettered thoughts onto Twitter every day, and no support team will ever be capable of handholding them through the evidently tricky mental process of not being a scumbag.

The quality filter is the result: an automated tool which aims to remove abusive posts from your line of sight. But it's hard enough for humans to detect tone and nuance, so what chance for an algorithm? The filter will, again, be deemed to go too far (because people will be shut out of conversations by accident) and not far enough (because keyword filters can easily be circumvented with imaginative, florid or misspelled death threats).
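To see why circumvention is so easy, consider the crudest possible version of such a filter. Twitter has not published how its quality filter works, so the Python sketch below is purely illustrative, with an invented blocklist and a hypothetical is_abusive function; it shows how a single swapped character defeats exact matching.

    # Purely illustrative: a naive substring blocklist, not Twitter's method.
    BLOCKLIST = {"waste of oxygen", "kill yourself"}  # hypothetical phrases

    def is_abusive(tweet: str) -> bool:
        # Flag the tweet if any blocklisted phrase appears verbatim.
        text = tweet.lower()
        return any(phrase in text for phrase in BLOCKLIST)

    print(is_abusive("You're a waste of oxygen"))   # True: exact match is caught
    print(is_abusive("You're a w4ste of oxygen"))   # False: one character slips past
    print(is_abusive("You're a waste of 0xygen"))   # False: same trick, endless variants

Real systems presumably lean on extra signals - account age, posting patterns, report history - which would explain why Twitter's description mentions "suspicious accounts" as well as language. But every added signal is another way to be wrong in both directions.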

Some will also say that it's misguided, that it's attempting to control free speech. Automated blocking tools, created by Twitter users to try to screen out abuse, have been criticised for the same reason: by determining what users can and can't see, the argument goes, they're inherently evil. But does free speech actually equal the right to be heard?

Of course, bullies don't like being ignored; if someone chooses not to look at their tweets, there's not a lot they can do, but if someone has built a tool to facilitate that, it's a different story. They'll start to worry that their abuse is being hurled into a void, and will demand to know precisely what the filter deems "abusive" or "suspicious". Twitter, meanwhile, will continue to make increasingly frenetic but ultimately unsatisfying attempts to keep everyone happy. Who'd run a social media service?

The Independent
