In New Zealand, Spreading the Mosque Shooting Video Is a Crime

CHRISTCHURCH, New Zealand — A lone white supremacist is the suspect in the Christchurch mosque killings. But under New Zealand law, many others could face charges for spreading or perhaps even possessing all or part of the 17-minute Facebook Live video streamed by the killer as he methodically shot the victims.

As of Thursday, at least two people had been charged with sharing that video via social media, under a law that forbids dissemination or possession of material depicting extreme violence and terrorism. Others could face related charges in connection with publicizing the terrorist attack, under a human-rights law that forbids incitement of racial disharmony.

While freedom of expression is a legal right in New Zealand, the parameters are more restrictive than the First Amendment guarantees in the United States. New Zealand’s Department of Internal Affairs includes a chief censor, an official who has the authority to determine what material is forbidden.

The restrictions mean New Zealanders could face legal consequences for intentionally looking at the Christchurch killer’s video, which may have been seen millions of times around the world.

Facebook and other social media platforms also could face new legal issues because of the video, and not only in New Zealand. Prime Minister Jacinda Ardern of New Zealand has vowed to investigate the role that social media played in the attack and to take action, possibly alongside other countries, against the sites that broadcast it.

“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” she told Parliament on Tuesday. “They are the publisher, not just the postman.”

Ms. Ardern has not specified what measures she would propose. But social media applications clearly empowered the online activity of others who spread the killer’s message.

Philip Neville Arps, right, appeared in court on Wednesday on charges related to reposting the gunman’s live video. Credit: Pool photo by Mark Mitchell

One man, Philip Neville Arps, appeared in court in Christchurch on Wednesday on two charges related to reposting the killer’s video. Mr. Arps was denied bail and is facing almost a month in custody until his next court appearance.

A Christchurch teenager, whose name has not been released, was denied bail on Monday over charges that he had posted a photograph of Al Noor Mosque, one of the two that were attacked, a week before the shootings, with the caption “target acquired.” He was also charged with reposting the video.

Each could spend as much as 14 years in jail if found guilty.

And a woman in Masterton, on the North Island of New Zealand, was arrested over comments she made on her Facebook page after the attacks. The police told The New Zealand Herald that they had yet to decide whether to charge her under the Human Rights Act, a rarely used provision that prohibits writings that incite racial disharmony. If charged and convicted, she would face a fine of 7,000 New Zealand dollars, or about $4,800.

Criminal charges were not the only possible consequence of having publicized the attack. An Auckland medical clinic said on Thursday that it had suspended a senior doctor, pending an investigation, after being alerted to anti-Islamic comments he made on a blog several years ago.

Andrew Scott-Howman, a New Zealand employment lawyer, said he had seen a growing number of cases in which workers were accused of “bringing their employers into disrepute,” which he said was a more subtle charge than outright criminal activity.

“It’s hard to know where the line is drawn,” he said, adding that employment law was still developing in the area.

The cases underscore the challenge that social media companies face in thwarting and deleting objectionable activity on their platforms. Analysts said there was often a mistaken assumption that white supremacist material is hidden away on parts of the internet that are difficult to reach.

“Much — I would say even most — extreme-right content is easily accessible in open online spaces so that it can be consumed by as many people as possible,” said Maura Conway, a senior lecturer in international security at Dublin City University in Ireland.

A community-led vigil for victims near Al Noor Mosque on Wednesday. Credit: Adam Dean for The New York Times

She added that such spaces included social media platforms and the comments sections of news websites, as well as dedicated white supremacist forums.

Facebook, the platform used by the Christchurch killer to broadcast the attack on one of its marquee products, Facebook Live, has been under pressure to explain its role in how the video proliferated.

On Wednesday evening, Facebook gave an explanation for some of the concerns about the spread of the video in a blog post. Fewer than 200 people watched the killer’s shooting spree live as it occurred, according to Guy Rosen, vice president of product management at Facebook. And no users reported the post to Facebook’s content moderators during the live stream, an important signal for the company to catch and take down harmful content before it spreads virally across the site.

Facebook said it had removed the attacker’s video minutes after the New Zealand police reached out to the company after the shootings. The original video was viewed about 4,000 times on Facebook before removal.

But at least one person was able to record the livestream of the video before the company could remove it. Someone posted the video to 8chan, a social message board website that hosts offensive content banned on many mainstream platforms like Facebook, YouTube and Instagram. From there, the video spread quickly, and millions of people began trying to re-upload the video to Facebook to further fan the viral flames.

Facebook said that during the 24 hours after the shooting, the company blocked more than 1.2 million attempts to upload the video. It took down more than 300,000 copies of the video that had been uploaded.

“People shared this video for a variety of reasons,’’ Mr. Rosen, the Facebook vice president, said. “Some intended to promote the killer’s actions, others were curious and others actually intended to highlight and denounce the violence. Distribution was further propelled by broad reporting of the existence of a video, which may have prompted people to seek it out and to then share it further with their friends.”

Ben Elley, a doctoral student at the University of Canterbury in Christchurch who studies online radicalism, said mainstream social media companies had generally succeeded in suppressing content from groups like the Islamic State on their platforms, but that far-right groups had not received the same treatment.

A prayer mat at a memorial for the attack victims. Credit: Adam Dean for The New York Times

The spread of online material propagating white supremacist views is often unintentional, Mr. Elley said: recommendation algorithms on websites like YouTube keep viewers watching by offering ever more “extreme and strange” videos, which can lead them from one conspiracy theory to another.

“Conspiracy theories seem exciting — they purport to give you a view of the world that you don’t get to see,” Mr. Elley said. “So a lot of this happens by accident, and once you’re a few conspiracy theories deep, it tends to end up supporting others.”

While Ms. Conway said social media platforms had cracked down on overt uses of white supremacist symbols and language, she said her research suggested that accounts displaying them appeared to be growing in popularity. Of such accounts that remained after a purge by Twitter between 2016 and 2018, she said, half had increased their followers by 50 percent or more.

There are other challenges, she said: It can be difficult to define what constitutes white supremacist activity, some policymakers are reluctant to change laws regulating such behavior and there are “very different attitudes, even in Western liberal democracies, as to legitimate and appropriate responses.”

The office of New Zealand’s chief censor, David Shanks, has acknowledged that many people may have viewed the Christchurch mosque video unintentionally — especially during and immediately after the attack. Mr. Shanks did not officially classify the video as objectionable until Monday, three days later.

“It is clear that this video was ‘pushed’ to many innocent New Zealanders by various apps,” he said. “We have had reports that it also ‘auto-played’ to some people who did not even know what it was.”

While he said that those who spread the video in New Zealand risked arrest and imprisonment, he warned all New Zealanders that even innocent possession of the video was a crime.

“If you have a record of it, you must delete it,” he said. “If you see it, you should report it. Possessing or distributing it is illegal and only supports a criminal agenda.”
