How social networks scan for sex pests

Published Jul 12, 2012

San Francisco - On March 9 of this year, a piece of Facebook software spotted something suspicious.

A man in his early thirties was chatting about sex with a 13-year-old South Florida girl and planned to meet her after middle-school classes the next day.

Facebook's extensive but little-discussed technology for scanning postings and chats for criminal activity automatically flagged the conversation for employees, who read it and quickly called police.

Officers took control of the teenager's computer and arrested the man the next day, said Special Agent Supervisor Jeffrey Duncan of the Florida Department of Law Enforcement. The alleged predator has pleaded not guilty to multiple charges of soliciting a minor.

“The manner and speed with which they contacted us gave us the ability to respond as soon as possible,” said Duncan, one of a half-dozen law enforcement officials interviewed who praised Facebook for triggering inquiries.

Facebook is among the many companies that are embracing a combination of new technologies and human monitoring to thwart sex predators. Such efforts generally start with automated screening for inappropriate language and exchanges of personal information, and extend to using the records of convicted pedophiles' online chats to teach the software what to seek out.

Yet even when defensive techniques are available and effective, they can be expensive. They can also alienate some of a site's target audience - especially teen users who expect more freedom of expression. While many top sites catering to young children are quite vigilant, the same can't be said for the burgeoning array of online options for the 13- to 18-year-old set.

“There are companies out there that are doing a very good job, working within the confines of what they have available,” said Brooke Donahue, a supervisory special agent with an FBI team devoted to Internet predators and child pornography. “There are companies out there that are more concerned about profitability.”

Two recent incidents are raising new questions about companies' willingness to invest in safety.

Last month the maker of a smartphone app called Skout, designed for flirtation with strangers in the same area, admitted its use had led to sexual assaults on three teenagers by adults. The venture-backed firm had not verified that users of its now-shuttered teen section were under 20, giving predators easy access.

Also in June, a teen-oriented virtual world called Habbo Hotel, which boasts hundreds of millions of registered users, temporarily blocked all chatting after UK television reported that two sex predators had found victims on the site and that a journalist posing as an 11-year-old girl was bombarded with explicit remarks and requests that she disrobe on webcam.

Former employees said site owner Sulake of Finland laid off many in-house workers earlier this year, leaving it unable to moderate 70 million lines of daily chat adequately. Sulake said it had kept 225 moderators and is still investigating what went wrong.

The failures at Skout and Habbo shocked child-safety experts and technology professionals, who fear they will lead to a renewed panic about online safety that is not justified by the data.

By some measures, Internet-related sex crimes against children have always been rare and are now falling (as are reports of assaults on minors that do not involve the Net). Most sex crimes against children are committed by people the children know, rather than strangers.

The National Center for Missing and Exploited Children processed 3,638 reports of online “enticement” of children by adults last year, down from 4,053 in 2010 and 5,759 in 2009.

Even those companies with state-of-the-art defenses spend far more time trying to stop online bullying and attempts to sneak profanity past automatic word filters than they do fending off sex predators.

Still, as the Skout case showed, there are several recent trends that have heightened the concerns of child-safety experts: the rise of smartphones, which are harder for parents to monitor; location-oriented services, which are the darling of Net companies seeking more ad revenue from local businesses; and the rapid proliferation in phone and tablet apps, which don't always make clear what data they are using and distributing.

A solid system for defending against online predators requires both oversight by trained employees and intelligent software that not only searches for improper communication but also analyses patterns of behaviour, experts said.

The better software typically starts as a filter, blocking the exchange of abusive language and personal contact information such as email addresses, phone numbers and Skype login names. But instead of looking at just one set of messages, it will examine whether a user has asked for contact information from dozens of people or tried to develop multiple deeper and potentially sexual relationships, a process known as grooming.
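The two layers described above - a pattern filter on individual messages, plus analysis of how many different people one user is targeting - can be sketched in a few lines of Python. Everything here is illustrative: the regexes, the threshold, and the class are assumptions for demonstration, not any company's actual rules.

```python
import re
from collections import defaultdict

# Illustrative patterns for personal contact details a filter might block.
CONTACT_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email address
    re.compile(r"\b(?:\d[\s.-]?){7,11}\d\b"),        # phone number
    re.compile(r"\bskype[:\s]+\S+", re.IGNORECASE),  # Skype handle
]

GROOMING_THRESHOLD = 3  # distinct recipients asked for contact info (assumed)

class ChatScreen:
    def __init__(self):
        # sender -> set of recipients the sender sought contact info from
        self.solicitations = defaultdict(set)

    def check(self, sender, recipient, message):
        """Return a list of alerts raised by this message."""
        alerts = []
        if any(p.search(message) for p in CONTACT_PATTERNS):
            alerts.append("contact-info")
            self.solicitations[sender].add(recipient)
            # Pattern analysis: a single exchange is merely blocked, but a
            # sender working many targets at once is escalated for review.
            if len(self.solicitations[sender]) >= GROOMING_THRESHOLD:
                alerts.append("possible-grooming")
        return alerts

screen = ChatScreen()
screen.check("u1", "kid_a", "hey, what's your number? 555 123 4567")
screen.check("u1", "kid_b", "add me on skype: someuser99")
alerts = screen.check("u1", "kid_c", "email me at x@example.com")
# alerts == ["contact-info", "possible-grooming"]
```

The key design point is that the second alert fires not on any single message but on the accumulated pattern across conversations, which is what distinguishes grooming detection from a simple word filter.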

Companies can set the software to take many defensive steps automatically, including temporarily silencing those who are breaking rules or banning them permanently. As a result, many threats are eliminated without human intervention, and moderators at the company are notified later.
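An escalation policy of that kind - warn, then mute, then ban, with moderators notified after the fact - might look like the following sketch. The thresholds and names are assumptions chosen for illustration.

```python
from collections import defaultdict

MUTE_AFTER = 2  # violations before a temporary mute (assumed value)
BAN_AFTER = 5   # violations before a permanent ban (assumed value)

class AutoModerator:
    """Applies escalating sanctions without human intervention and
    queues each action for moderators to review later."""

    def __init__(self):
        self.violations = defaultdict(int)
        self.banned = set()
        self.review_queue = []  # moderators read this after the fact

    def report_violation(self, user):
        if user in self.banned:
            return "banned"
        self.violations[user] += 1
        count = self.violations[user]
        if count >= BAN_AFTER:
            self.banned.add(user)
            action = "banned"
        elif count >= MUTE_AFTER:
            action = "muted"
        else:
            action = "warned"
        self.review_queue.append((user, action))
        return action

mod = AutoModerator()
actions = [mod.report_violation("spammer") for _ in range(5)]
# actions == ["warned", "muted", "muted", "muted", "banned"]
```

Because sanctions are applied immediately and review happens afterwards, the threat is contained in real time even when no moderator is watching.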

Sites that operate with such software should still have one professional on safety patrol for every 2,000 users online at the same time, said Sacramento-based Metaverse Mod Squad, a moderating service. At that level, the human side of the task entails “months and months of boredom followed by a few minutes of your hair on fire,” said Metaverse Vice President Rich Weil.

Metaverse uses hundreds of employees and contractors to monitor websites for clients including virtual world Second Life, Time Warner's Warner Brothers and the PBS public television service.

Metaverse Chief Executive Amy Pritchard said that in five years her staff only intercepted something terrifying once, about a month ago, when a man on a discussion board for a major media company was asking for the email address of a young site user.

Software recognised that the same person had been making similar requests of others and flagged the account for Metaverse moderators. They called the media company, which then alerted authorities. Other sites aimed at kids agree that such crises are rarities.

Sites aimed at those under 13 are very different from those with large teen audiences.

Under a 1998 law known as COPPA, for the Children's Online Privacy Protection Act, sites directed at those 12 and under must have verified parental consent before collecting data on children. Some sites go much further: Disney's Club Penguin offers a choice of viewing either filtered chat that avoids blacklisted words or chat that contains only words the company has pre-approved.
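The difference between those two chat modes - a blacklist that removes forbidden words versus a stricter whitelist that permits only pre-approved ones - can be shown in a short Python sketch. The word lists below are invented stand-ins, not Disney's actual lists.

```python
BLACKLIST = {"badword"}                   # illustrative forbidden words
WHITELIST = {"hi", "lets", "play", "cool"}  # illustrative pre-approved words

def filter_chat(message, mode):
    """'filtered' drops blacklisted words; 'whitelist' keeps only
    pre-approved words (the stricter, Club Penguin-style option)."""
    words = message.lower().split()
    if mode == "filtered":
        kept = [w for w in words if w not in BLACKLIST]
    elif mode == "whitelist":
        kept = [w for w in words if w in WHITELIST]
    else:
        raise ValueError(f"unknown mode: {mode}")
    return " ".join(kept)

filter_chat("hi lets play badword", "filtered")   # -> "hi lets play"
filter_chat("hi lets meet offline", "whitelist")  # -> "hi lets"
```

The whitelist mode is far more restrictive - novel or misspelled words simply vanish - which is why it suits young children's sites better than teen-oriented ones, where users expect freer expression.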

Filters and moderators are essential for a clean experience, said Claire Quinn, safety chief at a smaller site aimed at kids and young teens, WeeWorld. But the programs and people cost money and can depress ad rates.

“You might lose some of your naughty users, and if you lose traffic you might lose some of your revenue,” Quinn said. “You have to be prepared to take a hit.”

There is no legal or technical reason that companies with large teen audiences, like Facebook, or mainly teen users, such as Habbo, can't do the same thing as Disney and WeeWorld. - Reuters
