Frontline Marketing

YouTube’s Brand Safety Team Expansion Marks Shift From Automation

YouTube will hire thousands more content reviewers to protect brand safety.

December 5, 2017

As one half of the Alphabet-Facebook advertising duopoly, YouTube's heavy reliance on algorithmic recommendation and policing has drawn both high-spending advertisers and creators seeking to game the system without actually producing quality content. In response to "bad actors" jeopardizing brand safety on its platform, YouTube is making sweeping changes to its content-review team.

YouTube CEO Susan Wojcicki announced Monday that the company will enlist 10,000 employees to moderate and review policy-violating content on YouTube, and that the video platform will apply stricter criteria to the channels that can earn from ads. The move marks a shift from the mostly automated system previously in place.

“We are planning to apply stricter criteria and conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should,” Wojcicki said. “We want to give creators confidence that their revenue won’t be harmed by bad actors while giving advertisers assurances that their ads are running alongside content that reflects their brand’s values.”

Wojcicki also announced plans to expand the network of academics, experts and industry groups YouTube consults in making its policy decisions, but did not give further details.

YouTube has been on the hot seat over its monetization and advertising practices for months now, doing its best to balance corporate concerns about brand safety with creator concerns about revenue stability. Up until now, however, the video platform has heavily favored the former, leading to confusing and seemingly arbitrary demonetization of innocuous content by overzealous bots.

Despite expanding its human team to hunt bad actors, Wojcicki affirmed the company's commitment to relying on machine learning to handle the bulk of its content review. According to YouTube's internal metrics, its algorithms have flagged and removed 98 percent of violent extremist videos, 70 percent of which were flagged and removed within eight hours of being uploaded. Going forward, the company will expand its algorithmic flagging to other areas, including child safety and hate speech.

“As challenges to our platform evolve and change, our enforcement methods must and will evolve to respond to them,” Wojcicki said.

Since YouTube is the top-rated platform for video ad viewability and Alphabet is the world's largest seller of advertising, brands and marketers have little choice but to wait and see whether Wojcicki's words will begin to carry serious weight.