Washington - When someone calls you an unprintable, gender-based slur on Twitter and invites you to “get raped,” you presume — or at least, I presumed — that the network would do something about it.
Twitter's policies on this type of thing are pretty clear. The site has long forbidden its 284 million users from “targeted abuse” — the repeat, often one-sided sending of harassing or threatening messages to a particular user. And since a spate of ugly, high-profile harassment cases earlier this year, Twitter has vowed to enforce its policies more effectively — even promising, in a series of heralded changes made earlier this month, that a “safer Twitter” was on the horizon.
The issues of cyberbullying and harassment are, of course, nothing new; nor are they unique to Twitter and its users. But our awareness of these problems, and their wide-ranging social and psychological impact, is very recent.
Per an October study by the Pew Research Centre, four in 10 Internet users have experienced online harassment, most of them through a social network. And despite the efforts of Del Harvey, Twitter's secretive Head of Trust & Safety (and her attendant moderation team), Twitter remains one of the highest-profile — and most mainstream — social networks for harassment.
“Because of Twitter's open nature — any user can send a message to any other user, in public — it's especially vulnerable to mass harassment,” the tech writer Robinson Meyer explained.
In other words, Twitter's progress on this issue is more than an isolated case study; it is an early battleground, and a crucial weathervane for whether the war for a safe, inclusive social Web can be won.
I'll be honest: I've followed this issue very closely, and not only because it's part of my beat. Like virtually every woman with any kind of public Internet profile, I regularly receive threats, slurs and other typed invective in the course of doing my job. Sometimes they're fairly benign: “get raped,” while definitely not the first thing you want to see on a Friday morning, doesn't prompt a serious chat between me, my editor and building security.
But there have been other messages, too, messages that had me leaving work early, or consulting with The Washington Post's lawyers, or calling my dad out of a business meeting in New York to explain what he and my mom should do if someone calls a bomb squad on them, as someone on Twitter promised.
It is very, very difficult to explain to a parent why people who don't know you hate you so fiercely, all because of something you wrote on a blog. It is extra-difficult to explain that these people also hate them, my parents, by association.
“More of those Internet loonies?” my dad asked — which I guess approaches understanding.
But it also, alas, underestimates the importance of the issue — an importance that we, as a society, are still only beginning to recognise on a mass scale.
When Pacific Standard ran Amanda Hess' seminal story on the online harassment of women last January, it called the problem the “civil rights issue of our time.” That's not because women are “crybabies,” to quote a common argument, or because there exists some new, modern interest in “legislating feels.” It's because many women, simply as a consequence of being women, face constant, systemic intimidation and aggression every time they go online.
This year has only proved Hess' thesis. In May, after a college student named Elliot Rodger killed six people in Isla Vista, California — and published a deranged, misogynistic manifesto to explain his spree — Twitter was rocked by waves of backlash, first from women sharing stories of sexual violence on the #YesAllWomen hashtag ... and later by men disputing them, sometimes violently.
Less than three months later, following the death of comedian Robin Williams, his daughter, Zelda Williams, was driven off Twitter by a network of trolls who claimed she was somehow responsible for his suicide. (“Deleting this from my devices for a good long time, maybe forever,” she wrote. “Time will tell. Goodbye.”)
Even that incident would soon be upstaged by the antics of #Gamergate, a seething, vitriolic pseudo-movement that, within a span of months, used Twitter to drive at least three women in the gaming industry from their homes. Shortly thereafter, Monica Lewinsky joined Twitter as part of her campaign against cyberbullying — and she was met with a wall of, you guessed it, sexist cyberbullies.
Those, notably, are just the high-profile names — the big, extreme cases that made the news. Women tend not to talk about the steady, inevitable trickle of lesser threats, the things that are “just wallpaper to me now,” as one feminist writer told The Post's Alyssa Rosenberg in August. Few achieve the notoriety that Zelda Williams or Anita Sarkeesian or Monica Lewinsky do.
“Zelda has become this poster child,” Jennifer Pozner, the head of the advocacy group Women in Media and News, told The Post in August, “but what that overlooks is that Twitter, in particular, has become a place for abuse, and for women and people of colour in particular. The company knows it and has done precious little.”
Is that true or fair to say of Twitter? I honestly couldn't say, myself, which only confounds the issue further. A Twitter spokesman declined to comment specifically for this story; on previous occasions, the company has insisted that it does everything currently in its power to protect harassment victims — and that, in the future, it will do even more. (It's telling, perhaps, that when Take Back the Tech slammed Twitter's handling of women's issues in September, it cited “transparency around reporting redress” as one of the areas where Twitter needed to improve.)
The company did partner in November with the non-profit Women, Action and the Media on a project to research the harassment of women on Twitter and escalate their reports. Weeks later, in early December, Twitter announced some small changes to its abuse-reporting policies, including the ability to report on behalf of other users.
And yet, dozens of people have told me that they don't even bother reporting abusive tweets anymore, because it seems to them, at least, that Twitter never takes action. Instead, they're forced to turn outside the network: BlockTogether, a Web app that automatically hides messages from new or sparsely followed accounts, is an oft-invoked tool; others sign over their accounts to partners or friends until a particularly bad wave of abuse washes itself out.
I still report accounts, although my personal experience has also been uneven. I've become accustomed to seeing those automated form emails, always sweetly condescending, telling me that Twitter has ruled the account in question A-OK, or that the site would like me to “review (its) abusive behaviour policy” and reply that I have done so before it reviews my claims.
Last Friday, for instance, I reported four accounts — including the one that sent the aforementioned gem “get raped, (expletive)” — and got three matching, boilerplate rejections back.
“We understand that you might come across content on Twitter that you dislike or you find offensive,” the message reads, in part. “However, Twitter is a global platform that lets us participate in broader conversations and connect with people from many corners of the world.”
I am not naive on these issues: I understand that Twitter's walking a very fine line, trying to provide a constructive, useful service to its users while also upholding the all-important virtues of free speech. Since both those things are critical to Twitter's success, and since they often appear to act in opposition to each other, Twitter's basically damned either way: Whatever it does, whoever it privileges, somebody will be unhappy. It's really not an enviable position to be in.
And yet, there still seem to be so many holes Twitter could fill without controversy: There is still no way for victims to report multiple people at once; no way to stop an account, once suspended, from simply starting up again elsewhere; no way to prevent someone from making a whole bunch of fake accounts for the sole purpose of attacking someone else. Then there's the issue of Twitter's moderation team, which has reportedly not scaled at the same pace as the network. It's telling, for instance, that important research around gender-based abuse was outsourced to Women, Action & the Media.
“I don't think we should have to do this work,” a “frustrated” Jaclyn Friedman told the Atlantic in November. “It's a scandal that a tiny, under-resourced nonprofit with two staff members is having to do free labor for them.” - The Washington Post
* Dewey writes The Post's The Intersect web channel covering digital and Internet culture. http://www.washingtonpost.com/news/the-intersect/