Google changes YouTube ad rules

A picture illustration shows a YouTube logo reflected in a person's eye

Published Apr 3, 2017

San Francisco - Two weeks into a YouTube advertising boycott over hateful videos, Google is taking more steps to curb a crisis that escalated further than the company anticipated.

Alphabet Inc.'s main division is introducing a new system that lets outside firms verify ad quality standards on its video service, while expanding its definitions of offensive content.

A slew of major marketers halted spending on YouTube and Google's digital ad network after their ads were found running alongside videos promoting hate, violence and racism. Google's initial response, a promise of new controls for marketers, failed to stem the boycott. The crisis ignited a simmering debate in digital advertising over quality assurance, or "brand safety," standards online.

Google has since improved its ability to flag offending videos and immediately disable ads, Chief Business Officer Philipp Schindler told Bloomberg News in a recent interview. Johnson & Johnson, one of the largest advertisers to pull spending, said it is reversing its position in most major markets.

Library

Since the boycott began, Google has allocated more of its artificial intelligence tools to deciphering YouTube's enormous video library. The company is a pioneer in the field and has used machine learning, a powerful type of AI, to improve many of its products and services, including video recommendation on YouTube and ad-serving.

Automatically classifying entire videos, then flagging and filtering content, is a more difficult, expensive research endeavour -- one that Google hasn't focused on much until now.

"We switched to a completely new generation of our

latest and greatest machine-learning models," said Schindler. "We had

not deployed it to this problem, because it was a tiny, tiny problem. We have

limited resources." 

In talks with big advertising clients, Google discovered the toxic YouTube videos flagged in recent media reports represented about one one-thousandth of a percent of total ads shown, Schindler said.

Still, with YouTube's size, that can add up quickly. And the attention on the issue coincided with mounting industry pressure on Google, the world's largest digital ad-seller, for more rigid measurement standards. A frequent demand has been for Google to let other companies verify standards on YouTube.

Google is allowing this now, creating a "brand safety" reporting channel that lets YouTube ads be monitored by external partners like comScore Inc. and Integral Ad Science Inc., according to a company spokeswoman.

Google has made quick progress on its own, Schindler said. Using the new machine-learning tools and "a lot more people," the company in the last two weeks flagged five times as many videos as "non-safe," or ineligible for ads, as it had previously.

"But it's five [times] on the smallest denominator

you can imagine," Schindler said. "Although it has historically

it has been a very small, small problem. We can make it an even smaller,

smaller, smaller problem."

Vocal critics suggest Google has ignored this problem. Some publishers and ad agencies have called on Google and rival Facebook Inc. to more actively police the content they host online. In a speech last week, Robert Thomson, Chief Executive Officer of News Corp., a frequent Google critic, said the two digital companies "have prospered mightily by peddling a flat earth philosophy that doesn't wish to distinguish between the fake and real because they make copious amounts of money from both."

The YouTube ad boycott has pushed Google to beef up its policing. In its initial response, Google expanded its definition of hate speech to cover attacks on marginalised groups. Now it's adding a new filter to disable ads on "dangerous and derogatory content," the company said. That includes language that promotes negative stereotypes about targeted groups or denies "sensitive historical events" such as the Holocaust.

Some researchers argue digital platforms should rely on humans to make these editorial decisions. Schindler said he has devoted more manpower to overseeing brand-safety issues, but stressed that only machine intelligence could contend with YouTube's size. "The problem cannot be solved by humans and it shouldn't be solved by humans," he said.

Nor is the company willing to alter YouTube's fundamental formula. Google lets any user upload videos and sets thresholds for which ones can run ads. Tight restrictions on ads could cut funding for independent video creators and step between advertisers and consumers, Schindler said. Google has long pitched YouTube as a digital alternative to television for marketers, making the video service one of its fastest-growing sources of ad revenue.

 "Cutting away the ability for brands to truly

interact with consumers by asking for one hundred percent safety is very, very,

very unrealistic," Schindler said.

The executive likened Google's ad business to an airline: each faces long-tail risk beyond its control. "Can I guarantee you if I sell an airline ticket that the plane won't come down in the first million miles?" he asked. "You can't guarantee it. You can just depress the error rate to the lowest level."

BLOOMBERG
