The Shift: YouTube’s Product Chief on Online Radicalization and Algorithmic Rabbit Holes

It’s been called “one of the most powerful radicalizing instruments of the 21st century,” “a petri dish of divisive, conspiratorial and sometimes hateful content,” and a tool that “drives people to the internet’s darkest corners.”

I’m talking, of course, about YouTube — and, specifically, the recommendation algorithm that determines which videos the site plays after the one you’re watching. That algorithm is YouTube’s beating heart, keeping users hooked to the platform for hours on end. (The company has said recommendations are responsible for about 70 percent of the total time users spend on the site.)

The recommendation engine is a growing liability for YouTube, which has been accused of steering users toward increasingly extreme content. After the recent mass shooting in Christchurch, New Zealand — the work of a gunman who showed signs of having been radicalized online — critics asked whether YouTube and other platforms were not just allowing hateful and violent content to exist but actively promoting it to their users. YouTube’s biggest competitor, Facebook, said this week that it would ban white nationalism and white separatism on its platforms.

I recently spoke with Neal Mohan, YouTube’s chief product officer, about criticism of the company’s algorithms and what it is doing to address radicalization and violent extremism on the platform. We spoke about the things YouTube has already done to rein in extreme content — hiring additional reviewers, introducing a “breaking news shelf” that kicks in after major news events, altering the recommendation algorithm to reduce the distribution of conspiracy theories and other “borderline content” — and about the company’s plans for the future.

The conversation was edited for length and clarity.

I’m wondering what you think about the conversation happening around radicalization on YouTube.

I think what might be useful is for me to take a minute to step back, and I can give you my perspective on how we think about this.

I think some of it has to do with the fact that, as you know and as you’ve written about, YouTube was started as, and remains, an open platform for content and voices and opinions and thoughts. Many of them are, you know, really across the entire spectrum, and many of them you or I or others may or may not agree with. I wouldn’t be at YouTube, working on what I work on, if I didn’t believe in the power of diversity of voices and opinions.

Having said that, we do take this notion of dissemination of harmful misinformation, hate-filled content, content that in some cases is inciting violence, extremely seriously.

I hear a lot about the “rabbit hole” effect, where you start watching one video and you get nudged with recommendations toward a slightly more sort of extreme video, and so on, and all of a sudden you’re watching something really extreme. Is that a real phenomenon?

Yeah, so I’ve heard this before, and I think there are some myths that go into that description that it would be useful for me to debunk.

The first is this notion that it’s somehow in our interests for the recommendations to shift people in this direction because it boosts watch time or what have you. I can say categorically that’s not the way that our recommendation systems are designed. Watch time is one signal that they use, but they use a number of other engagement and satisfaction signals from the user. It is not the case that “extreme” content drives a higher level of engagement or watch time than content of other types.

I can also say that it’s not in our business interest to promote any of this sort of content. It’s not something that has a disproportionate effect in terms of watch time. Just as importantly, the watch time that it does generate doesn’t monetize, because advertisers many times don’t want to be associated with this sort of content.

And so the idea that it has anything to do with our business interests, I think it’s just purely a myth.

So, why do people talk about this rabbit hole effect — you know, I went to watch one video about President Trump and now I’m just getting a stream of recommendations of increasingly partisan content. Why do you think there’s this perception that this is what happens on YouTube?

This is one of the things that we looked at closely as we were developing the technology that went into the recommendation change from a few weeks back that I described to you.

We really looked at this to see what was happening on those “watch next” panels, in terms of the videos that were being recommended. And the first thing that I should say is that when we make recommendations after a video has been consumed, we don’t take into account any notion of whether that’s less or more extreme.

So when we looked at the data, we saw that a lot of the videos being recommended, as you would expect, had to do with the context of the video being consumed. That’s obviously no surprise, but among the videos that you saw on the panel, there were some that you might consider to be maybe a little bit more extreme than what you had just consumed.

But you’ll also see videos that were less extreme, or that you could call more toward the quote-unquote mainstream. Depending on a user’s behavior, it’s equally likely that you could have started on a more extreme video and actually moved in the other direction.

That’s what our research showed when we were looking at this more closely. Now, that doesn’t mean that we don’t want to address what we talked about, which is just —

Sorry, can I interrupt you there for a second? Just so I’m clear: You’re saying that there is no rabbit hole effect on YouTube?

I’m trying to describe to you the nature of the problem. So what I’m saying is that when a video is watched, you will see a number of videos that are then recommended. Some of those videos might be perceived as skewing in one direction or, you know, call it more extreme. There are other videos that skew in the opposite direction. And again, our systems are not doing this, because that’s not a signal that feeds into the recommendations. That’s just the observation that you see in the panel.

I’m not saying that a user couldn’t click on one of those videos that are quote-unquote more extreme, consume that and then get another set of recommendations and sort of keep moving in one path or the other. All I’m saying is that it’s not inevitable.

In the case of breaking news, you guys made a decision that showing authoritative information to people who were looking for it was important enough to radically shift the way recommendations and search results work, by moving to an approved or “authoritative sources” model rather than using the regular recommendation algorithm. Why not do that for everything?

Let me say a few things about that. The first is that using a combination of those tools, surfacing and promoting authoritative content, is something that can apply to other information verticals, not just breaking news.

Having said that, as you continue to broaden the application of something like that, it’s quite a blunt hammer. And so it does come with trade-offs. For example, how do you define something authoritative across the broad swath of YouTube when many of the use cases, as you know, are outside of the information-seeking realm? They’re entertainment, they’re oftentimes driven by people’s personal tastes, like music and comedy and the like.

Right, but you could do it just for politics, hypothetically, and say that for any political video, we’re going to move to this “authoritative sources” model.

I think that even when you go to something that broad, it comes with real trade-offs. And I’m just raising the fact that there are considerations there, which is that you are then limiting political discourse to a set of preordained voices and outlets and publications. And I think that especially when it comes to something as charged and societally impactful as politics, there needs to be room for new voices to be heard.

Since the New Zealand shooting, we’ve heard this question about “Well, the platforms worked together to take down ISIS content. Why haven’t they done the same for white supremacy or violent right-wing extremism?” What’s the answer there?

The first thing that I would say, just as a matter of fact, is that there were two sets of challenges when it came to the New Zealand shooting. One was everything that we just talked about in terms of surfacing authoritative, high-quality information — not showing, you know, conspiracies or harmful misinformation. That was one bucket.

The other bucket had to do with the velocity at which re-uploads were coming to these various platforms, and that is an area where we collaborated. We worked closely with other platforms in terms of making sure we had fingerprints of these videos, just like they did, and we shared those.

The other thing I would say, just more generally about violent extremism and limiting those videos on the platform, is that the reason it’s different from what we’re talking about here is that those [ISIS] videos took on a particular form. They were often designed for propaganda and recruitment purposes. So they had things like branding and logos, both visually and in terms of the music they might use. Those formed a finite set of clues we could use to bring that content down. And, of course, we collaborated with other platforms to do that.

In the case of something like this, the challenges are harder because the line, as you can imagine, is sometimes blurry between what clearly might be hate speech versus what might be political speech that we might find distasteful and disagree with, but nonetheless is coming from, you know, candidates that are in elections and the like.

So much of what YouTube has become over the years is this kind of alternative form of media. People don’t go to YouTube because they want the same stuff they would see on TV. They go because they’ve built relationships with creators that they trust, and when Logan Paul puts out a flat-earth documentary or Shane Dawson questions whether 9/11 happened, there’s a sense that YouTube is the place where these “real” explanations are being offered, and maybe that makes this all very hard to undo.

I do think a lot about this, but I think everything that we’ve talked about for the last half-hour fits into that bucket for me. And the way I would describe it is that there are nearly two billion people who come to our platform every month. Every one of them is coming for some unique reason, whether it’s the latest and greatest music video or a YouTube original, or their favorite creators.

Everybody has reasons of their own, and I think one reason that has been growing over the course of the last couple of years — partly as a reflection of what’s happening in the world at large — is that people are coming to YouTube for information.

And I think the fact that people come to YouTube looking for information has resulted in a shift in the way we think about the responsibility of our platform. As a result of that shift, our product teams here are thinking of all of these solutions, many of which we’ve talked about here, as a means of addressing that responsibility: making sure that when users are looking for information, YouTube is putting its best foot forward in serving that information to them, while still keeping users in control of their intent and the information that they’re looking for.

It’s an ongoing effort. I think we’ve made great strides here. But clearly there’s more work to be done.
