An edited transcript of Google Europe boss Matt Brittin’s comments to the Advertising Week Europe conference about extremist content on his company’s YouTube video platform. His words follow ongoing fallout over a Times investigation.
You have probably read stories recently about some brands appearing against content that they did not want to appear against, and in the spotlight in particular has been YouTube. So I want to start by saying sorry. We apologise. When anything like that happens, we don’t want it to happen, you don’t want it to happen, and we take responsibility for it.
I’ve spoken to, and our team have spoken to, many of you — some brands that are affected and some brands that are concerned. When I’ve had conversations with some of the advertisers, in general, what they’ve found is that it’s been a handful of impressions, and pennies not pounds. But however small or big the issue, we need to improve and we need to get better.
So I’ll just spend a couple of minutes telling you where we are and what we’re doing on that to give you a bit more insight.
I’d say that firstly, our policies and tools work really well in the vast majority of cases. So we have millions of dollars invested and thousands of people whose job it is to ensure that bad advertising doesn’t get through, and that things work well.
That works well in the vast majority of cases. But we have a review under way on how we can improve. It’s been underway for some time, and last week we went public to explain what we’re doing. And we’re accelerating that review.
There are three key areas that we’re focused on:
The first is our policies. Within YouTube, within the Google display network, what do we categorise as being safe for advertising? So we’re reviewing those policies and we’re raising the bar. And that involves looking at things like how we define hate speech or inflammatory content, and how we define those things more clearly, so that we can raise the bar and make it safe for everyone.
And that’s not straightforward. So for example, you might say, why don’t you just rule out commentary on politics or on war? Well actually, in many cases news organisations and documentary makers are reaching audiences and making money from that content quite legitimately. So we want to be thoughtful about how we do that. So the first thing: policy, raise the bar.
The second thing: what we’ve found in some cases is that advertisers have the controls, but weren’t fully using them. The controls are quite granular. So within what’s classified as being safe for advertising, you choose and control what you’re appearing against. And I think if the controls are there but they’re too complex, that’s our problem.
So we’re simplifying the controls, and making it easier for people to control what they’re doing. And we’re also looking at setting the defaults to a higher level of safety. So that’s the second thing we’re looking at.
The third thing is enforcement. As I’ve said, we’ve got a lot of money and people invested. Within 24 hours, 98 per cent of flagged content is reviewed. We can go further and faster, and expedite things more in that respect. So that’s all about how we increase the controls, improve the policies and increase the enforcement for what we do on both YouTube and on the Google display network.
And I think all of this is in the context of 400 hours of content uploaded onto YouTube every minute, thousands of sites added to our AdSense network every day, and, you know, anyone with a smartphone can be a content creator, can be an entrepreneur, can be an app developer, and that’s wonderful.
And that also shows you the scale of the issue, the opportunity for advertisers, for us and for all of you, and the challenge we have to manage.
So I want to say here and now that we’re sorry to anybody that’s been affected. We’re working hard to improve things in those three areas that I mentioned — policy, controls and enforcement — and we’ll have more specifics very soon.