Who Gets the Banhammer Now?

In a typical year, a good way for a company to handle an announcement it hoped to bury is to schedule it before a nice long holiday weekend. This is one way to interpret the last week of June online, when several internet giants took decisive action against mostly far-right communities and users, many of whom had been sources of controversy for years.

Reddit banned The_Donald, the site’s main hub for Trump supporters, and a source of near-constant complaints from other users and communities about hate speech and harassment. The platform also banned more than 2,000 other groups, most much smaller, as part of a revision of the site’s rules that prohibit “communities and people that incite violence or that promote hate based on identity or vulnerability.”

YouTube, which hardened its rules around hate speech in 2019, banned a group of popular accounts for, among other things, promoting “supremacist content.”

Twitch, the livestreaming site, temporarily banned an account associated with President Trump’s campaign for “hateful conduct.”

And Facebook banned hundreds of accounts, groups and pages associated with the “boogaloo” movement for violating a prohibition against users and organizations “who proclaim a violent mission or are engaged in violence.”

Another way to understand these bans is as a calculated response to globe-spanning demonstrations against white supremacy, in which each of these platforms has been accused of being, at minimum, complicit. That the platforms acted in the same week suggests that some were waiting for others to act.

The move also suggests a desire to have it both ways: to get credit for the bans, most of which were late and represented small fractions of infringing users or groups, and to enact them when any backlash would have a hard time getting traction online.

The bans have been described as a “reckoning” for social media companies that have, as The Associated Press put it, “fueled political polarization and hosted an explosion of hate speech” and that are now, public relations calculations aside, “upping their game against bigotry and threats of violence.”

“Reckoning,” however, is an odd word to describe a situation in which social media companies were finally asserting control after years of pretending not to have any. It looked, instead, like buying time. But for what?

What might real change look like for the social media giants? The week of bans suggests one specific vision. “When platforms tout the banning of these big names, it’s an important step in the right direction,” said Becca Lewis, a researcher at Stanford University who studies online extremism. “But it doesn’t address underlying issues and incentives that have led to the flourishing of white supremacist content in general.”

In some cases, social platforms have taken action against figures and groups with roots and power elsewhere, who have found audiences on YouTube or Reddit, in effect banishing them to where they came from. Often, though, these bans are more like corrections, shutting down accounts and groups that were conspicuously successful on the service’s own terms.

The content and behavior of extremists may run afoul of particular YouTube rules, but those users are very much examples of success on the platform. They have cultivated large audiences, are easy to find in searches and seem to perform well in YouTube’s automated recommendation system.

They are practiced in the formats, styles and subjects that YouTube seems to reward not just as a marketplace full of autonomous viewers, but as a complex and assertive system with its own explicit and implicit priorities. (A representative from YouTube declined to comment on the record about the proliferation of extremist content on the platform, but pointed to a recent blog post about the company’s commitment to curbing it.)

YouTube, said Ms. Lewis, a Ph.D. candidate, made early commitments to a relatively hands-off style of governance, and has gradually adjusted the “shape of its marketplace” over the years, often in response to controversy. Like many platforms of its era, it characterized its commitment to openness and “free speech” as a democratizing force, giving cover to the realities of living and coexisting within strange and materially limited new spaces.

What’s popular on YouTube is a reflection of what its users want to see, but also what YouTube wants them to see, what YouTube wants them to want to see, and what advertisers want them to see.

“So much of the confusion about YouTube comes from the fact that we use these public-square metaphors for what is fundamentally a commercial space,” Ms. Lewis said. “Thinking about it through market frameworks is more accurate.”

YouTube isn’t so much the marketplace of ideas as a marketplace for some ideas, if those ideas work well in video format, in the context of a subscription-driven social environment consumed mostly on phones, in which compensation is determined by viewership, subject matter and potential for sponsorship.

The less abstract and idealized platforms are, the less complicated their decisions seem. (This is true for any industry. See: the media!) If we understand early commitments to openness and loose moderation as stances rooted in a desire for growth and minimal labor expenditure, then the recent wave of bans is quite easy to grasp.

These areas of previously unfettered growth — in far-right content, in groups with a tendency to harass and intimidate other users and in certain political circles — are, finally, more trouble than they’re worth. Maybe they alienate a platform’s own employees, making them uncomfortable or ashamed. Maybe they’ve attracted the wrong kind of attention from advertisers or even a critical mass of users.

[Image: President Donald Trump speaks at a campaign rally in June in Tulsa, Okla. Reddit and Twitch recently banned groups and individuals whose support of the president has manifested in hate speech and harassment. Credit: Sue Ogrocki/Associated Press]

Social platforms, in defense of moderation decisions, are afraid to state the most obvious truth, which is that they can host or not host whoever or whatever they want. Instead they hide behind rules and policies, or changes in enforcement.

This is understandable. To say, simply, “we banned a bunch of white supremacists because we decided it was expedient in 2020” is to say, more or less, “we hosted and supported a bunch of these same white supremacists because we decided it was expedient in every year before 2020.”

When the rules aren’t adequate to a platform’s needs, they are changed, or exceptions are created. Aggrieved parties — even neo-Nazis — are handed the customary parting gift of a clear, correct argument that a platform has been inconsistent or hypocritical, which they can make as they choose (free speech!).

They are, however, subsequently reminded that their argument is with a company that very much knows this already. For loss of faith in the consistent application of law to become a crisis of legitimacy, this sort of legitimacy needs to matter in the first place.

Understanding platforms as successful but unexceptional businesses and their users as varieties of freelance consumers and suppliers is clarifying, if only by making it easier to fuse criticism of the company’s governing principles with criticism of its actions: staffing, political engagement, leadership or environmental impact.

This, however, is a form of self-assessment major corporations are used to heading off or ignoring, and tech companies are no exception.

In a post this week, the Facebook C.O.O. Sheryl Sandberg said that the company would soon release the results of an “independent civil rights audit” described as “a two-year review of our policies and practices.”

Ms. Sandberg wanted to assure readers of two things. One: “We are making changes — not for financial reasons or advertiser pressure, but because it is the right thing to do,” she wrote. (In recent weeks, Facebook has been subject to the largest advertiser boycott in its history, as well as an employee walkout.) And two: “While we won’t be making every change they call for, we will put more of their proposals into practice soon.”

The next day, the contents of the audit were made public. The assessment was harsh, particularly regarding Facebook’s application of its own rules. If “powerful politicians do not have to abide by the same rules that everyone else does,” the report warns, “a hierarchy of speech is created that privileges certain voices over less powerful voices.”

It also addressed underlying incentives: “The Auditors do not believe that Facebook is sufficiently attuned to the depth of concern on the issue of polarization and the way that the algorithms used by Facebook inadvertently fuel extreme and polarizing content.” Facebook doesn’t enforce its rules; Facebook will continue to encourage people to break them anyway; Facebook has commissioned a report to tell it this, which it can ignore if it likes.

There is nowhere else for this conversation to go, really, and truthfully there hasn’t been for years. Facebook’s rules are for Facebook. This isn’t something the company has to “reckon” with, but rather a question about intention that it can answer however it needs to, forever.

The gap between how social media companies talk about themselves and how they actually operate is contained within a single word they’ve leaned on for years, and have been using a lot lately: community.

Executives use it in keynote speeches. Terms of service agreements refer to the “Facebook community” and the “YouTube community.” Reddit’s leadership tends to use the word to describe its various subreddits, but speaks broadly about “community governance.”

In banning the Trump campaign’s Twitch channel, the company said that “politicians on Twitch must adhere to our Terms of Service and Community Guidelines.” Invocations of democratic language, or of legalistic concepts like “policies” or “governance” or “appeals,” distract from an uncomfortable truth about social media companies: Their rules are arbitrary and can be changed at any time. Anything that may feel like a right or an entitlement — the ability to share certain content, or to gather and act in certain ways, or to sell certain products, or to log on without being terrorized — is provided by and subject to the whims of a private company with no hard obligation to provide it.

Governance-wise, social platforms are authoritarian spaces dressed up in borrowed democratic language. Their policies and rules are best understood as attempts at automation. Stylistically, they’re laws. Practically, and legally, they’re closer to software updates.

What polite fictions, then, do platforms that use the word “community” expect users to uphold? Writing in Cyborgology, in 2017, the researcher Britney Gil took issue with Mark Zuckerberg’s frequent use of the word:

Community is one of those words that gets applied to so many social units that it becomes practically meaningless. Facebook is a community. The city you live in is a community. The local university is a community. Your workplace is a community. Regardless of the actual characteristics of these social units, they get framed as communities.

“Facebook is not a community,” Ms. Gil said in an interview in July, “but people form communities on it.” Reddit is explicitly built around thousands of subreddit “communities,” which in many cases function as such.

During periods of intense activism and social change, social networks can provide space and amplification for pre-existing communities, as well as tools for the creation of new ones — in the case of Black Lives Matter in 2020, most visibly on Twitter and Instagram.

Hosting actual communities, and in particular providing spaces for activism, only sharpens the difference between how platforms use the word and what it actually means.

“The story of the last century has been dissolving public spaces and replacing them with private spaces,” Ms. Gil said. This is much larger than and predates social media and the internet, of course.

But it’s difficult to imagine a better symbol of the trend than commercial social networks, which function simultaneously as vital venues for activism and social change and, unabashedly, as services that extract value from the time and energy of their users, and the groups that congregate there. (Ms. Gil drew a parallel to the workplace. “It’s like when people say, ‘Beware the boss who says family,’” she said. “Beware the platform that says it’s a community.”)

The platforms’ circular diversions about rules and policies smooth over the harsh but obvious reality of how commercial spaces deal with the people, content and groups they say they don’t want around anymore, after years spent elevating and cultivating them. It’s a way to avoid responsibility for the worst of what happens on their platforms. “Community” is an attempt to take credit for the best.
