In Israel-Hamas War, Truth and Fiction Are Hard to Discern on Social Media

The major social media platforms, once heralded for their ability to document global events in real time, face a crisis of authenticity — one of their own making, critics say.

The war between Israel and Hamas has spawned so much false or misleading information online — much of it intentional, though not all — that it has obscured what is actually happening on the ground.

People are turning instead to sources that mirror their feelings, deepening social and political divisions. There are so many untrue claims that some people have begun to question the true ones. And the problem is not confined to X, formerly known as Twitter, which has removed many of its guardrails in recent months. Recent advances in artificial intelligence — programs that can produce virtually unlimited amounts of content — are already compounding the digital cacophony.

The authenticity crisis, though, is broader than the social networks that have come to dominate public discourse.

Trust in mainstream news outlets has eroded, too, with news organizations regularly accused of reflecting state, corporate or political interests. That has helped propel a profusion of alternative sites online. Many hew to a particular point of view, shared by users online and boosted by algorithms that reward shocking or emotional content over nuance or balance.

“We have distorted the information ecosystem,” said Nora Benavidez, the senior counsel for Free Press, an advocacy organization.

A survey by the Pew Research Center last year showed that people under 30 trusted social media almost as much as traditional news outlets; roughly half of them said they had little trust in either. (Among all age groups, trust in traditional news organizations remains higher, though it has declined steadily since 2016.)

“The connection that I’m always trying to make is between major forces that want to confuse and distract us, and the end result always being that people will be less engaged,” Ms. Benavidez said. “People will be less sure of what issues they care about, less aware of why something might matter, less connected to themselves and to others.”

Not that long ago, social media was heralded as a powerful tool to democratize news and information.

In 2009, when mass demonstrations broke out in Iran over a rigged election, protesters used social media to break the information stranglehold of the country’s authoritarian rulers. They were able to post texts, photographs and videos that challenged government claims. Some called it a Twitter revolution.

Virtually every major event since then — from sporting events to natural disasters, terrorist attacks and wars — has unfolded online, documented viscerally, instantaneously, by the devices that billions of people carry in their hands.

Social media, now ubiquitous in most parts of the world, still serves that function in many cases, providing evidence, for example, of Russian war crimes in Ukraine.

As the conflict in Israel has shown, however, the same tools have increasingly done more to confound than to illuminate.

In any war, discerning fact from fiction (or propaganda) can be exceedingly difficult. The antagonists seek to control access to information from the front. No one person can have more than a soda-straw view at any one moment. Now, though, false or misleading videos have gone viral faster than fact checkers can debunk them or the platforms can remove them in keeping with company policies.

Often the problem lies in the details. Hamas killed dozens of Israelis, including children, in an attack on Kfar Aza, a kibbutz near Gaza. A French television correspondent’s unverified report that 40 babies had been beheaded in the attack went viral on social media as if it were fact, though it remains unconfirmed. It even seeped into remarks by President Biden, who said he had seen photographs of that particular horror, prompting the White House to walk back his comments and say the information had come from news accounts.

Hamas has adroitly exploited social media to promote its cause the way Al Qaeda and the Islamic State once did. It used the Telegram app, which is largely unfiltered, as a conduit to push celebratory and graphic images of its incursion from Gaza into broader circulation on social networks that have barred terrorist organizations.

Increasingly, our digitized lives have become an information battleground, with every side in any conflict vying to offer its version. Old images have been recycled to make a new point. At the same time, actual images have been disputed as fakes, including a bloody photograph that Donald J. Trump Jr., the former president’s son, shared on X.

Reliable news organizations have long functioned as curators, verifying information and putting it in context, and they still do. Nevertheless, some have sought to cast doubt on their reliability as gatekeepers, most prominently Elon Musk, the owner of X.

The day after the fighting in Israel erupted, Mr. Musk shared a post on X encouraging his followers to trust the platform more than mainstream media, recommending two accounts that have been notorious for spreading false claims. (Mr. Musk later deleted the post but not before it had been seen millions of times.)

X has faced particularly sharp criticism, but false or misleading content has infected virtually every platform online. Thierry Breton, an official with the European Commission overseeing a new law governing social media, sent letters this week warning X, TikTok and Meta, the owner of Facebook and Instagram, about the prevalence of false and violent content related to the conflict.

European regulators took the first step toward an inquiry of X on Thursday under the new law, citing the prevalence of content posted by extremists, including gory images. X’s chief executive, Linda Yaccarino, sought to head off the inquiry by claiming that the platform had in fact removed “tens of thousands” of posts.

Imran Ahmed, the head of the Center for Countering Digital Hate, which faces a lawsuit from Mr. Musk because of its criticisms of the platform, said the war had become an “inflection point” for social media. The flood of disinformation since the war began meant the platforms were “not as relevant a place to get information” during a major event.

“Social media should not be trusted for information — full stop,” he said. “You cannot trust what you see on social media.”

Mr. Ahmed, who was in London, said he had grown so frustrated in the early days of the war that he switched from the internet to the BBC for reliable information. “When was the last time I switched on a telly?” he said.

He noted that social media companies had cut back the resources they devote to policing what appears online.

Mr. Musk has instituted a number of changes since acquiring the company last year that researchers say have resulted in a surge of harmful content, including racist and antisemitic remarks. Among them is a subscription that allows anyone to pay for a blue check mark, which once signaled to users that an account was authentic.

“X, in particular, has gone from a year ago being the first platform that people switched on and then remained glued to in the midst of a crisis to a frankly unusable mess in which it is more effort than it’s worth just trying to discern what’s true,” Mr. Ahmed said.
