News Analysis: Countries Want to Ban ‘Weaponized’ Social Media. What Would That Look Like?

SYDNEY, Australia — What if live-streaming required a government permit, and videos could only be broadcast online after a seven-second delay?

What if Facebook, YouTube and Twitter were treated like traditional publishers, expected to vet every post, comment and image before they reached the public? Or like Boeing or Toyota, held responsible for the safety of their products and the harm they cause?

Imagine what the internet would look like if tech executives could be jailed for failing to censor hate and violence.

These are the kinds of proposals under discussion in Australia and New Zealand as politicians in both nations move to address popular outrage over the massacre this month of 50 people at two mosques in Christchurch, New Zealand. The gunman, believed to be an Australian white nationalist, distributed a manifesto online before streaming part of the mass shootings on Facebook.

If the two countries move ahead, it could be a watershed moment for the era of global social media. No established democracy has come this close to imposing such sweeping restrictions on online communication, and the demand for change has both harnessed and amplified rising global frustration with an industry still shaped almost entirely by American law and Silicon Valley’s libertarian norms.

“Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists,” Scott Morrison, Australia’s prime minister, said Saturday. “It should not just be a matter of just doing the right thing. It should be the law.”

The push for government intervention, with a bill to be introduced in Australia this week, reflects a surge of anger in countries more open than the United States to restrictions on speech, and growing impatience with distant companies seen as more worried about their business models than about local concerns.

There are precedents for the kinds of regulations under consideration. At one end of the spectrum is China, where the world’s most sophisticated system of internet censorship stifles almost all political debate along with hate speech and pornography — but without preventing the rise of homegrown tech companies making sizable profits.

No one in Australia or New Zealand is suggesting that should be the model. But the other end of the spectrum — the 24/7 bazaar of instant user-generated content — also looks increasingly unacceptable to people in this part of the world.

Prime Minister Jacinda Ardern of New Zealand argues that there must be a middle ground, and that some kind of international consensus is needed to keep the platforms from extending protections to some countries but not others.

“Ultimately, we can all promote good rules locally, but these platforms are global,” she said Thursday.

Prime Minister Jacinda Ardern of New Zealand argues that there must be a middle ground in regulating social media. Credit: Kai Schwoerer/Getty Images

Even in the United States, frustration has been building: studies show that social media’s algorithms and design push people further into extremism, while the platforms remain protected by Section 230 of the Communications Decency Act, which shields them from liability for the content they host.

Some social media companies are starting to say they are willing to accept more oversight and guidance.

In an op-ed in The Washington Post on Saturday, Mark Zuckerberg, Facebook’s chief executive, called for government help with setting ground rules for harmful online content, election integrity, privacy and data portability.

“It’s impossible to remove all harmful content from the internet, but when people use dozens of different sharing services — all with their own policies and processes — we need a more standardized approach,” he wrote.

At the same time, Facebook and the other major platforms insist they are doing everything they can on their own with a mix of artificial intelligence and moderators.

Google, the parent company of YouTube — which declined to comment on the proposals in Australia and New Zealand — has hired 10,000 reviewers to flag controversial content for removal. Facebook, too, has said it will hire tens of thousands more employees to deal with finding and removing content that violates its rules.
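In practice, that mix usually means triage: an automated classifier scores each upload, the clearest violations come down automatically, and borderline cases go to the human reviewers. The sketch below is a minimal, hypothetical illustration of that pattern; the thresholds and the keyword “model” are invented for the example and do not describe any platform’s actual system.

```python
# A minimal, hypothetical sketch of AI-plus-human content triage.
# The score function, thresholds, and labels are invented for illustration;
# no platform's real pipeline is this simple.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    text: str

def triage(post: Post, score: Callable[[str], float],
           remove_above: float = 0.95, review_above: float = 0.60) -> str:
    """Route a post by the model's confidence that it violates policy."""
    p = score(post.text)
    if p >= remove_above:
        return "auto_remove"   # high confidence: take it down immediately
    if p >= review_above:
        return "human_review"  # uncertain: queue for a human moderator
    return "allow"             # low risk: leave it up

def toy_score(text: str) -> float:
    # Stand-in for a trained classifier: counts flagged words.
    flagged = {"attack", "manifesto"}
    hits = sum(word in text.lower() for word in flagged)
    return min(1.0, 0.5 * hits)

if __name__ == "__main__":
    posts = [Post("1", "Photos from our trip to the coast"),
             Post("2", "Read the manifesto before the attack")]
    for post in posts:
        print(post.post_id, triage(post, toy_score))  # 1: allow, 2: auto_remove
```

The expensive part is the middle band, where automated confidence is low and every decision needs a person; that band is what the mass hiring described above is meant to cover.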

Those rules may be getting tougher. On Wednesday, Facebook announced that it would ban white nationalist content because “white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups.”

But critics say it’s too little, too late.

Facebook has “been on notice for some time that their policies and enforcement in this area were ineffective,” David Shanks, New Zealand’s chief censor, said in an email on Sunday. Since the mosque killings in Christchurch, Mr. Shanks has made it a crime to possess or distribute the video of the attack and the suspect’s manifesto.

Experts say social media companies still take as a given that users should be allowed to post material without advance vetting. Neither the communications laws that govern broadcast nor the ratings systems applied to movies and video games affect social media, leaving a frictionless, ad-driven business model built to encourage as much content creation (and consumption) as possible.

From a business perspective, the system works. On YouTube, 500 hours of video are uploaded every minute. In 2016, Facebook said viewers watched 100 million hours of video every day, while Twitter handles 500 million tweets a day, or nearly 6,000 every second.
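For readers who want to check the arithmetic, the per-second figure follows directly from the daily total (a trivial calculation, taking the 500 million figure as given):

```python
# Quick check of the per-second rate implied by Twitter's daily figure.
tweets_per_day = 500_000_000
seconds_per_day = 24 * 60 * 60                  # 86,400
print(round(tweets_per_day / seconds_per_day))  # 5787 -- "nearly 6,000"
```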

“The more speech there is on these platforms, the more money they can make,” said Rebecca Lewis, a doctoral student at Stanford and researcher at Data & Society who has studied radicalization patterns on YouTube. “More speech is more profit.”

Europe is already trying to rein in the free-for-all. On Tuesday, the European Parliament passed a law that will make companies liable for uploaded content that violates copyright. It follows a tough privacy law, the General Data Protection Regulation, and an online hate speech law in Germany, the Network Enforcement Act, both of which took effect last year.

A makeshift memorial for victims of the mass shooting in Christchurch, New Zealand. Credit: Adam Dean for The New York Times

The laws represent a significant setback for social media behemoths that have long argued that their platforms should be treated as neutral gathering places rather than arbiters of content.


The hate speech law in particular is being closely studied in New Zealand and Australia.

It seeks to hold platforms liable for failing to delete content that is “evidently illegal” in Germany, including child pornography as well as Nazi propaganda and memorabilia. Companies that systematically fail to remove illegal content within 24 hours face fines of up to 50 million euros, or around $56 million.

Australian officials said Saturday that they were also planning hefty fines.

And yet, it is far from clear that stiffer penalties alone are the solution.

One problem, according to experts, is that banned posts, photos and videos continue to linger online. The mix of human moderation and artificial intelligence that platforms have deployed thus far has not been enough to monitor and drain the swamp of toxic content.
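Part of the problem is mechanical. Platforms fingerprint known banned files and block matching re-uploads, but an exact cryptographic hash changes completely when a video is re-encoded or recompressed, while perceptual fingerprints tolerate only modest edits. The toy below contrasts the two approaches on a fake 8-by-8 “frame”; the average-hash shown is a textbook illustration, not the matching system any platform actually runs.

```python
# Toy contrast between exact and perceptual matching. The 8x8 "frame" and the
# average-hash are illustrative only; real video matching is far more robust.
import hashlib

def average_hash(pixels: list) -> int:
    """64-bit fingerprint: one bit per pixel, set if the pixel beats the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

original = [(i * 37) % 256 for i in range(64)]   # fake grayscale frame
reencoded = [p + 3 for p in original]            # mild re-encoding brightness shift

# Exact hash: a tiny brightness shift yields a completely different digest.
print(hashlib.sha256(bytes(original)).hexdigest()[:16])
print(hashlib.sha256(bytes(reencoded)).hexdigest()[:16])

# Perceptual hash: the shifted copy stays within a few bits, so it still matches.
print("bits changed:", hamming(average_hash(original), average_hash(reencoded)))
```

Aggressive edits, such as cropping, mirroring or re-filming a screen, can push even a perceptual fingerprint out of matching range, which is one reason the copies keep coming back.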

“The automation is just not as advanced as these governments hope they are,” said Robyn Caplan, a researcher at Data & Society and a doctoral candidate at Rutgers University.

“It’s a mistake,” she added, “to call these things ‘artificial intelligence,’ because it makes us think they are a lot smarter than they are.”

At the same time, legitimate expressions of opinion, including posts by a satirical magazine, have been deleted under the law.

“We have to be incredibly careful and nuanced when we draw these lines,” Ms. Caplan said.

Even the criticism of live-streaming — which Facebook has said it is taking seriously — needs to be carefully considered, she added, because “there’s a lot of good coming out of live-streaming,” including transparency and scrutiny of the police.

Officials in Australia and New Zealand are trying to work through these issues. After meeting last week with executives from Facebook, Google and Twitter, Australian lawmakers said Saturday that the new bill would make it a criminal offense, punishable by three years in prison, for social media platforms to fail to “remove abhorrent violent material expeditiously.”

They made it clear that the tech world’s self-image of exceptionalism needed to end.

“Mainstream media that broadcast such material would be putting their license at risk, and there is no reason why social media platforms should be treated any differently,” Attorney General Christian Porter said.

John Edwards, New Zealand’s privacy commissioner, agreed but pointed to a different example: the Boeing 737 Max plane that has been grounded worldwide after two crashes believed to be tied to a software problem.

“I would say Facebook’s ability to moderate harmful content on its live-streaming service represents a software problem that means the service should be suspended,” he said. “I think that’s just the right thing to do.”
