The Net’s bubble-burster

Published Jul 12, 2011

You say the era of web personalisation began on December 4, 2009. Can you tell me what web personalisation is and why December 4 is a significant date?

Eli Pariser: That was the date that Google abandoned any notion of a standard or objective Google.

On December 4, they announced that even if you were logged out of Google services, you were going to get your own personalised search results. Two people googling the same thing might get very different results. Because Google is a central tool for navigating the internet these days, it seems that was the tipping point.

Why is it time to be worried?

The technology that was used to target ads is now being used to target content. It’s one thing being shown products you might be interested in, but when that’s shaping what information you see, it has a much broader effect. My main concerns are that it gives individuals a distorted view of the world because they don’t know this filtering is happening or on what basis it’s happening, and therefore they don’t know what they’re not seeing.

It’s a problem, more broadly, for democracy because it’s like auto-propaganda. It indoctrinates us with our own beliefs and makes it less likely that we’ll run into opposing viewpoints or ideas.

You identify Google and Facebook as being the chief culprits. Are there other firms that use personalisation or have plans to do so?

Nearly every major company is racing to incorporate this into their software. Bing, Microsoft’s search engine, just signed a deal integrating Facebook into its results, so now Bing results are personalised based on your Facebook data. Yahoo News shows different people different news.

Even more venerable news institutions such as The New York Times are investing in start-ups that are taking this approach.

Do you think that all websites, including the likes of Wikipedia, Twitter and the BBC, will embrace content personalisation or is there a limit to what can be personalised?

It’s true that if you carefully seek out sources that are not personalised, like Wikipedia, you can still get content that isn’t filtered for you, but the trend is moving very quickly. Since I wrote the book, Twitter started integrating a personalised recommendation service into what it does, so now when you search for tweets, you are not getting the top tweets, you are getting the tweets recommended for you.

Isn’t it paternalistic to tell people “you should be reading about the war in Afghanistan not a piece about Justin Bieber”, even if their click history shows that that’s what they’re interested in?

I think the paternalism argument is a canard because these companies are already making value judgements about what information to show you.

Every time you’re providing a ranked set of information in response to something like (Barack) Obama’s birth certificate, you’re making value judgements about what is valuable and true and useful. I think paternalism is inherent in any system that is controlling what you see. It’s impossible to do that without making decisions on your behalf.

Another downside to personalisation you talk about is advertising. But surely targeted ads – even though they’re sometimes a bit wide of the mark – are better than non-targeted ads?

The targeting of ads is less of a concern for me than the targeting of information. But in a way it’s a similar question. To what degree are we in control of the experience that results? In the most pernicious cases of targeted ads, you have some 14-year-old girl who looks up “obese” at Dictionary.com and is suddenly hounded all over the internet: “Are you feeling fat? Do our crazy weight-loss programme.”

Isn’t the solution quite simple: use an anonymous proxy, such as Tor, so that these companies can’t trace your searches?

I don’t think so. It’s a partial solution but: (a) these companies are getting very good at targeting based on actual machine fingerprinting, and (b) why should we have to make an either/or choice between the benefits of using these tools and being able to have some measure of transparency and control?

It’s a bad trade-off. The technology exists to find some middle ground where people have some control.

In your view, what’s the worst-case scenario if the filter bubble continues unchallenged?

It would be the Brave New World scenario in which we are increasingly captured in these bubbles full of entertaining, compulsive content and lose sight of the big, common problems we all face. Things like climate change, global poverty or Aids are going to come back to bite us because we’re too busy watching cat videos recommended by Facebook.

Do you feel confident that people will listen to your message and put pressure on companies to be more open about how they personalise?

I haven’t run into many people who weren’t shocked to learn that Google search results were personalised, and most, once they found out, wanted a way out, a way of changing it or being in control of it.

There was a first generation of these technologies – the term at the time was intelligent agents – like the Microsoft paperclip.

They were horrible. They were computer programs that just didn’t understand who you were. So that flamed out.

Now we have the same technologies, but they’re hidden. – The Independent

The Filter Bubble: What the Internet is Hiding from You is published by Viking
