A look down the data rabbit hole

Published Jul 5, 2014

Washington - Facebook's explosive study on user feelings and the newsfeed reads like something straight out of a bad sci-fi movie. There's “emotional contagion.” Dire ethical qualms. The cold, thoughtless manipulation of hundreds of thousands of vulnerable humans by the oppressive Facebook machine.

It's terrifying, right? Overwhelming? Confusing, even? But fortunately, rather than let you stew in your manipulated feels, we have synthesised a quick explainer. It's not entirely exhaustive, by any means - but it does come in the form of nine vaguely conversational questions that you may or may not have been embarrassed to ask. (The Post has not done the A/B-testing to back this up, but we hear people like that.) Here goes.

 

1. What is Facebook?

The world's largest social network is many things: a technological revolution, an advertising mammoth, the star of a hit movie scripted by Aaron Sorkin. But many casual users don't realise that Facebook is also a profound source of data for academic researchers, both inside and outside the company. After all, think about all the information that millions of people freely give to the site: not only their names and schools and locations, but also their political affiliations, emotional states, and detailed maps of their social and information networks. Taken in bulk, this data represents some of the best information social scientists have about human behaviour.

Facebook knows all that, of course. Since at least 2008, the network has actively run its own research hub and employed a number of top social scientists to harvest and analyse user data. While much of that analysis goes toward improving the network's bottom line, Facebook has also used its data to fuel a range of fascinating academic studies on important topics - like how social networks aid in information diffusion.

 

2. How do you run an experiment on Facebook? What does that even mean?

Well, there are several different ways that Facebook's data team (and other independent researchers) can study Facebook data. Perhaps the most obvious - and least controversial - way is to collect anonymised data on a specific population and parse it for patterns. For instance, when Facebook wanted to research how visually impaired people use the site, it collected anonymised status updates, comments and other information from a sample of the 285 million people who access the site with screen readers. From that vast pool of data, researchers were able to pluck a few nuggets of insight: people with visual impairments are, for instance, more likely to be friends with each other.

In that type of experiment, researchers never manipulate what users see or how they experience the site. That makes it a good way to establish correlations - an apparent relationship between two things that doesn't necessarily show which caused the other. But what if researchers want to study a cause-and-effect relationship? To study that kind of direct change, they need to cause the change themselves by manipulating what users see on the site. This is actually not terribly uncommon. To test design or interface updates, for instance, Facebook first introduces those changes to a small group of users to see how they react. Likewise, since algorithms determine what posts show up in your newsfeed, Facebook can change those algorithms and change what you see.
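To make the distinction concrete, here is a minimal Python sketch of what a randomised feed experiment looks like in principle: users are assigned to a condition by chance, and only the treated group's feed gets filtered. The function names, sample rate, drop rate and sentiment labels are illustrative assumptions, not Facebook's actual code.

```python
import random

# Illustrative sketch of a randomised feed experiment. Because assignment is
# random, differences in outcomes between arms can be read causally.

def assign_condition(user_id: int, sample_rate: float = 0.0004) -> str:
    """Place a small fraction of users (roughly 0.04%) into one of two
    treatment arms; everyone else sees the normal feed."""
    rng = random.Random(user_id)  # seed on the user so assignment is stable
    if rng.random() > sample_rate:
        return "control"
    return "fewer_negative" if rng.random() < 0.5 else "fewer_positive"

def filter_feed(posts: list[dict], condition: str, drop_rate: float = 0.1) -> list[dict]:
    """Drop a fraction of posts whose (pre-computed) sentiment matches the
    condition being suppressed; control users see everything."""
    rng = random.Random(42)
    target = {"fewer_negative": "negative", "fewer_positive": "positive"}.get(condition)
    if target is None:
        return posts
    return [p for p in posts
            if not (p["sentiment"] == target and rng.random() < drop_rate)]

# Usage: a toy feed for one user who landed in the "fewer_negative" arm.
feed = [{"text": "great day!", "sentiment": "positive"},
        {"text": "awful news", "sentiment": "negative"}]
print(filter_feed(feed, "fewer_negative"))
```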

 

3. So what was the big “emotional contagion” experiment that everyone's talking about?

That experiment was conducted by Facebook data scientists, in conjunction with a Cornell University professor, and it belonged - controversially! - to research type #2. In other words, it involved manipulating what users see when they use Facebook.

Basically, researchers at Facebook wanted to know whether the stuff you see in your newsfeed affects the way you feel. If your newsfeed is all doom and gloom, will you start feeling gloomy too? And if you see happier posts, will you cheer up?

To test that, Facebook data scientists tweaked the newsfeed algorithms of roughly 0.04 percent of Facebook users, or 689,003 people, for one week in January 2012. During the experiment, half of those subjects saw fewer positive posts than usual, while the other half saw fewer negative ones. To evaluate how that change affected mood, researchers also tracked the number of positive and negative words in subjects' status updates during that week-long period. Put it all together, and you have a pretty clear causal relationship. The results were published last month in the Proceedings of the National Academy of Sciences, which, for what it's worth, is a pretty prestigious journal.
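For a sense of how the outcome side of such a study can be measured, here is a rough Python sketch that counts emotion words in status updates. The real study relied on the LIWC word lists; the tiny word sets below are placeholder assumptions for illustration only.

```python
# Placeholder word lists standing in for the much larger LIWC dictionaries.
POSITIVE = {"happy", "great", "love", "awesome"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotion_rates(status_updates: list[str]) -> tuple[float, float]:
    """Return (positive, negative) emotion words as a share of all words posted."""
    words = [w.strip(".,!?").lower() for s in status_updates for w in s.split()]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

# Usage: one user's updates for the week -> (0.1, 0.2) with these toy lists.
print(emotion_rates(["Had a great day", "Feeling sad about the awful news"]))
```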

 

4. What did the study end up finding?

When you see more positive things, you post more positive things. When you see more negative things, you post more negative things. It's actually pretty intuitive.

 

5. So did Facebook manipulate my feelings?

In all likelihood, Facebook didn't manipulate your feelings personally. First off, the experiment only affected a tiny fraction of users. And even that tiny fraction didn't really feel the change: the study documents only small changes in user behaviour - as small as one-tenth of a percent. Here's how Adam D.I. Kramer, one of the study's authors, defended his methodology in a public Facebook post Sunday:

“At the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it - the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.”
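To put that figure in perspective, here is a back-of-the-envelope calculation. The weekly word count is an assumed number for illustration, not something reported in the study.

```python
# Reading Kramer's figure: a shift of one emotional word per 1,000 words written.
effect_per_word = 1 / 1000
weekly_words_written = 500          # assumed typical output for one user, for illustration
expected_shift = effect_per_word * weekly_words_written
print(f"Expected change: about {expected_shift:.1f} emotional words per user per week")
# -> about 0.5 emotional words, i.e. roughly one word every two weeks
```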

 

6. Did I opt into all this?

Technically, yes - by logging on. Since May 2012, the site's small print has included a little section that explicitly gives Facebook permission to use your personal information for “internal operations,” including “data analysis,” “testing” and “research.” Prior to that (i.e., when this study actually took place), Facebook's terms of service didn't explicitly include the word research. But it's hard to see, frankly, why that would make any difference, since it did include the line about data analysis and testing.

 

7. But no one reads the terms of service! Is that even legal?!

Thus far, there have been only the vaguest suggestions that it is not - most of them premised on the (untrue) idea that Facebook inflicted some kind of psychological harm on its research subjects.

Whether the study was ethical, on the other hand, is definitely up for debate. Some critics have argued that one word in a lengthy TOS doesn't constitute informed consent. Others have argued that, even if it does, Facebook should have to make that case to an independent institutional review board, just as researchers from academia do. (While it was initially reported that Cornell's IRB reviewed this study, Cornell has since said that isn't true.)

Ultimately, of course, ethics are a matter of social opinion. What looks ethical to an IRB may not look ethical to the public at large. That discrepancy rarely comes up when Facebook runs experiments for its own internal use, but when the network publishes research - as in this case - other standards may have to apply. Some researchers have already called for reforms to the way tech and academia work together, with most of the outrage centring on this issue of IRBs.

“There's not an absolute answer,” the study's editor, Susan Fiske, told The Atlantic. “And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done. . . I'm still thinking about it and I'm a little creeped out, too.”

 

8. What's stopping Facebook from doing this all again?

Well, nothing, for all the aforementioned reasons: it's legal, it benefits Facebook, and users do (technically!) opt in. But even if Facebook's experimentation “creeps you out,” a la Susan Fiske, you as a user probably don't want it to stop.

Why? It makes Facebook better for you. In the two hours it's taken me to write this, Facebook has probably run dozens of “experiments” on new features and design changes that will make the site more user-friendly, by showing different versions of the site to different users and seeing how they react. This is called A/B testing, in industry parlance. Framed another way, it looks a whole lot like psychological research.

“These tests are pervasive, active and on-going across every conceivable online and offline environment from couponing to product recommendations,” Brian Keegan - a computational social science researcher at Northeastern - wrote on his blog. “. . . every A/B test is a psych experiment.”
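Here is a generic Python sketch of the kind of A/B test Keegan is describing: show two variants at random, record how users respond, and compare the rates. The variants, click probabilities and sample size are made-up assumptions, not drawn from Facebook's tooling.

```python
import random

# Generic A/B test sketch: randomly split users between two variants of a
# feature and compare click-through rates.
def run_ab_test(n_users: int = 10_000) -> dict[str, float]:
    clicks = {"A": 0, "B": 0}
    shown = {"A": 0, "B": 0}
    for _ in range(n_users):
        variant = random.choice(["A", "B"])
        shown[variant] += 1
        # Simulated behaviour: assume variant B is slightly more clickable.
        click_prob = 0.10 if variant == "A" else 0.12
        clicks[variant] += random.random() < click_prob
    return {v: clicks[v] / shown[v] for v in ("A", "B")}

print(run_ab_test())   # e.g. {'A': 0.101, 'B': 0.119}
```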

 

 

9. I'm still mildly horrified. I should delete my Facebook ASAP, right?

No, that's a total overreaction. You don't need to delete your Facebook, but you do need to consider this a wake-up call on a number of pretty critical fronts.

First: Facebook owns your data, and your data is valuable. It's tempting to see Facebook as some kind of confection or distraction - a quick thing to scroll through on your phone while you're waiting in line for coffee, or a place to share a #hilarious inside joke. But in reality, everything you do on Facebook is being recorded and, potentially, observed.

Second: Facebook can, and does, manipulate what you see in your newsfeed. That isn't necessarily malicious, and it can improve your experience on the site. (Have you noticed fewer Buzzfeed quizzes lately?) That said, Facebook's control over the newsfeed gives it a profound amount of influence over the information you consume, particularly if you use Facebook as your main source of news. Recently, an article in the New Republic theorised that Facebook could swing entire national elections through the newsfeed, if it wanted to.

Third: Facebook is not the only one doing all this. Almost every Web site you visit collects data on you, and many of the sites you use every day - Google, Amazon, Netflix, online dating - operate according to impenetrable algorithms that can be, and are, changed at any time. Frequently, those algorithms are intentionally deployed to manipulate your behaviour, whether that's clicking an ad, watching a movie, or “liking” a particular post.

Of course, that reality is easy, even pleasant, to ignore. But to paraphrase The Matrix, this whole controversy also represents a bit of a red pill/blue pill moment. You can certainly shrug the experiment off as a bunch of technophobic hysteria - which would be fair! But if you pay attention to some of the bigger issues at play here - big data, big algorithms, the profound influence of corporations over ordinary people - this is an important case. The rabbit-hole, it turns out, goes pretty deep.

Washington Post
