Are Twitter and Facebook warning labels enough to save democracy?

The Twitter and Facebook logos, along with binary code, are seen in this illustration. File picture: Reuters/Dado Ruvic

Published Nov 9, 2020

By Geoffrey A. Fowler

It was the equivalent of Big Tech slapping the "PARENTAL ADVISORY" labels from album covers on the president of the United States.

President Donald Trump tweeted that America's election was being stolen, and Twitter put labels over his lies more than a dozen times and counting. "This tweet is disputed and might be misleading," it warned.

Facebook joined in, flagging Trump posts with the line: "Ballot counting will continue for days or weeks."

Were the labels a win for democracy? They were a win for Twitter and Facebook public relations, which got to look just responsive enough to avoid being blamed for botching another election.

But as tech products, the labels were too little, too late. There's scant evidence that labels make a lick of difference to viewers. Moreover, they didn't stop the flow of toxic election content on social media. That's because social media's business model is toxic content.

Silicon Valley can't be blamed entirely for disinformation tearing at our democracy - TV and newspapers are part of the problem, too. But when we look back on the 2020 election, we'll remember it for the domestic disinformation campaigns and alternate-reality bubbles that grew, in part, because of technology designed to amplify them. This was the year in which some 70 candidates for office embraced at least parts of the wacky QAnon online conspiracy theory, and one of them - Marjorie Taylor Greene of Georgia - got elected to Congress.

Online, election week devolved into a mess of false claims that the results were fraudulent. As traditional news networks stepped in to correct the president's misstatements, his allies turned to a network of new and existing Facebook pages, groups and events to rally people and spark real-world intimidation of poll workers. (Facebook shut down some of the activity but only after damage was done.)

Sure, social media companies have grown marginally more responsible since 2016. Back then, Facebook chief executive Mark Zuckerberg refused to acknowledge that fake news on Facebook could even affect elections. The companies now coordinate to identify foreign groups seeding discord. And though the effort has been haphazard, this summer social media companies started taking a harder line on hate speech and misinformation clearly linked to physical harm, including banning the debunked "Plandemic" video about the origins of the coronavirus pandemic.

Silicon Valley positioned election week as a kind of test of its commitment to democracy. America was already a powder keg: Citizens are deeply polarized, and the coronavirus pandemic shifted people's political lives online. Facebook, Twitter and YouTube's owner, Google, made a number of accommodations, including limiting some political ads and encouraging voter registration.

Turns out, it was very hard to contain the spread of misinformation because of the way these companies designed social media.

- - -

Big Tech's most visible shift in 2020 is that it began to directly counter misinformation coming from the White House. Twitter and Facebook tried to thread a needle: Loath to fuel even more complaints about censorship from Republicans, they turned to labeling Trump posts rather than shutting his accounts.

On these narrow terms, Twitter appeared to do the most. It set the most stringent rules and quickly hid posts from the misinformer in chief behind warning labels. It was the first to label a Trump tweet, containing misinformation about the coronavirus, in May.

Facebook labeled some election week posts with a milquetoast notice along the bottom that you could X out to dismiss.

YouTube had labels, too, but curiously put them on all videos mentioning the election - conspiracy theories and legitimate news outlets alike.

Organizations that have been fighting online misinformation and hate for years called the labels barely adequate. "The addition of disclaimers to several of Trump's posts on Facebook and Twitter is the lowest bar possible for these companies, although it does represent progress that has taken years," said Arisha Hatch, executive director of the civil rights group Color Of Change PAC.

Largely missing from Big Tech's response was an answer to a basic product design question: Do labels actually work?

Seeing Trump's online profile filled with so many warnings certainly makes a visceral impact. The argument for labels is that they at least add a bit of external context into a medium that otherwise strips it away.

But flagging falsehoods could also make matters worse. Labels can draw people in out of curiosity. Researchers have also found fact-check labels on misinformation can erroneously imply that posts without labels have been vetted as true.

And ideologues have a tendency to just attack the people doing the labeling. Trump on Thursday posted "Twitter is out of control." (That one didn't get flagged.)

Kate Starbird, an associate professor at the University of Washington who studies the art of disinformation, says only the companies themselves know how effective labeling election misinformation was at making sure it was seen by fewer people.

The companies, believe it or not, say they also don't know. Facebook pointed me to a 2019 study that found flagging fake news on Facebook did reduce people's intention to share it, though in a very different context. YouTube spokeswoman Ivy Choi said, "It's still too early to say; we will disclose at some point in time."

There is one way labels could definitely be effective, disinformation experts agree: by making it physically harder to share misinformation - adding speed bumps to the information superhighway.

Facebook said Friday it had added a mini speed bump: forcing people to look at an additional message before they could share a flagged post.

Twitter was the only one that made a significant speed bump effort on election night. Trump's tweets covered by warning labels had to be clicked on to be seen, and didn't show retweet and like counts. And they couldn't be shared without adding your own context on top.

Twitter spokesman Nicholas Pacilio said the company didn't know how effective these steps had been. He said there was evidence from the Election Integrity Partnership that a label and preventing retweets on one of Trump's prior attacks on mail-in voting reduced some of the reach of the misinformation.

But on Saturday, after Trump had been projected to lose the election, the president was back on Twitter claiming election fraud. And this time Twitter labeled his claim as "disputed" without any of the prior speed bumps to limit its spread.

"We will no longer apply warnings on tweets commenting on the election outcome," Twitter spokesman Brandon Borrman said. "We will continue to apply labels to provide additional context on tweets regarding the integrity of the process and next steps where necessary."

In other words, Silicon Valley did very little that might imperil its most potent product feature: how information goes viral.

- - -

Let's be clear. Silicon Valley did make an important shift between 2016 and 2020: It finally acknowledged there is such a thing as truth.

But election week also revealed that social media companies still haven't figured out how to build truth into their business.

Even though they had years to prepare - and clear signals that Trump intended to misinform - their responses during election week were largely tactical. They made up rules over the past few months to deal with whatever might help them escape that day's negative news cycle. Even moves to limit political ads around election week, while likely helpful, were temporary.

In leaked remarks to employees at an internal town hall in October, Zuckerberg suggested that Facebook would be back to business as usual after the U.S. election spotlight is over, with fewer bans and less content moderation. "Once we're past these events, and we've resolved them peacefully, I wouldn't expect that we continue to adopt a lot more policies that are restricting of a lot more content," he said, according to BuzzFeed News.

(Asked about Zuckerberg's comments, Facebook spokesman Andy Stone said: "Meaningful events in the world have led us to change some of our policies, but not our principles.")

"There is a lack of recognition that the problem is their incentivization structure, the engagement structure that is algorithmically driven in order to keep more eyeballs on more content for a longer period of time," says Nina Jankowicz, author of the book "How to Lose the Information War."

Facebook, Twitter and YouTube's most valuable asset is our attention, and generating controversy helps them keep it. The more time we spend engaging with their apps and websites, the more ads they can show us. This season of lies has been good for business. (Literally: Facebook's ad revenue was up 22% in the most recent quarter.)

Among the tech companies, Twitter seems to be the most willing to at least experiment with product changes to slow the spread of information, even if it cuts our engagement with its service. But it hasn't committed to keeping any of the speed bumps to sharing that it put in place in October.

YouTube's Choi said the company has worked to "raise up" authoritative news publishers in search results and recommendations. "We are limiting recommendations on a broad range of content with misleading claims, for example baseless claims of voter fraud or premature calls of victory," she said. But authentic content still seamlessly mingles on YouTube with videos from conspiracy theorists and other nonauthoritative sources.

Instead of labeling lie after lie, perhaps social media should focus on figuring out how to reduce the audience for liars.

How about ending the policy of infinity strikes and you're out? Laura Gómez, a former Twitter employee and founder of diversity advocacy group Project Include, has been advocating for more than a decade for a more severe policy of removing accounts for repeat violations - "especially if you're the president of the United States," she says.

Or, how about stopping promoting liars? Boston University business professor Marshall Van Alstyne has a modest proposal: Tech companies should announce that people caught lying will have their social networks trimmed and their messages delayed. If you've got 500,000 followers today, you'd be cut to 250,000 and have your messages go out next week. Lying again would cut your audience further.

"You're still on the platform, and you're still actually allowed to say anything you want, including lies. The difference, however, is that it can't be promoted," Van Alstyne said. "In order to be exposed to disinformation, people would have to go looking for it as opposed to having it pushed into their news feed."

There are lots of ideas like this, if the tech companies are willing to listen. Misinformation will continue to fly online until Silicon Valley tackles its next big engineering problem: how to make truth travel faster than lies.

The Washington Post