Supreme Court Takes Up Challenge to Social Media Platforms’ Shield

WASHINGTON — The Supreme Court agreed on Monday to decide whether social media platforms may be sued despite a law that shields the companies from legal responsibility for what users post on their sites. The case was brought by the family of a woman killed in a terrorist attack, who argue that YouTube's algorithms recommended videos inciting violence.

The case, Gonzalez v. Google, No. 21-1333, concerns Section 230 of the Communications Decency Act, a 1996 law intended to nurture what was then a strange and nascent thing called the internet. Written in the era of online message boards, the law said that online companies are not liable for transmitting materials supplied by others.

Section 230 also helped enable the rise of huge social networks like Facebook and Twitter by ensuring that the sites did not assume new legal liability with every new tweet, status update and comment.

Legal experts said that the court’s decision to explore whether the immunity conferred by the law has limits could have vast significance.

“This could be a very big deal for internet law, because it’s the first time that the Supreme Court has agreed to hear a case that would allow it to interpret Section 230,” said Jeff Kosseff, an associate professor at the United States Naval Academy who wrote a book about the protections. “I could envision any number of outcomes reached by any number of combinations of justices across the ideological spectrum.”

The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during the November 2015 terrorist attacks, which also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers, using the information that the company had collected about them.

“Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” lawyers for the family argued in their petition seeking Supreme Court review.

In a brief urging the justices to deny review, lawyers for Google said that the 1996 law gave the company complete protection.

“Section 230 bars claims that treat websites as publishers of third-party content,” they wrote. “Publishers’ central function is curating and displaying content of interest to users. Petitioners’ contrary reading contravenes Section 230’s text, lacks a limiting principle and risks gutting this important statute.”

A bipartisan group of lawmakers, academics and activists has grown skeptical of Section 230, saying it has shielded giant tech companies from the consequences of disinformation, discrimination and violent content that flows across their platforms.

In recent years, they have advanced a new argument: that the platforms forfeit their protections when their algorithms recommend content, target ads or introduce new connections to their users. These recommendation engines are pervasive, powering features like YouTube’s autoplay function and Instagram’s suggestions of accounts to follow. Judges have mostly rejected this reasoning.

In one case, the family of an American killed in a terrorist attack sued Facebook, claiming that its algorithm had bolstered the reach of content produced by Hamas, which said the attacker was a member of its group. A federal district judge rejected that lawsuit, citing Section 230.

The U.S. Court of Appeals for the Second Circuit ruled against the family, too. But a dissenting judge said Facebook’s algorithmic suggestions should not be protected by Section 230. Justice Clarence Thomas cited the opinion in a 2020 statement calling for the Supreme Court to reconsider the protections.

Members of Congress have also called for changes to the law. But political realities have largely stopped those proposals from gaining traction. Republicans, angered by tech companies that remove posts by conservative politicians and publishers, want the platforms to take down less content. Democrats want the platforms to remove more posts, like false information about Covid-19.

The court also agreed on Monday to hear a second case, Twitter v. Taamneh, No. 21-1496. The question in that case is whether Twitter, Facebook and Google may be sued on the theory that they abetted terrorism by allowing the Islamic State to use their platforms. That case was brought by the family of Nawras Alassaf, who was killed in a terrorist attack in Istanbul in 2017.
