
From the outset of this weekend’s Israel-Hamas conflict, graphic footage of abductions and military operations has spread like wildfire on social media platforms, including X, formerly known as Twitter. But disinformation on the platform has made it harder for users to assess what’s going on in the region.

Over the weekend, X flagged several posts as misleading or false, including a video purportedly showing Israeli airstrikes against Hamas in Gaza. Thousands of users saw the posts before the most widely shared versions were flagged as misleading by the platform. Still, dozens of posts with the same video and caption were not flagged by X’s system, according to CNBC’s review.

The patchwork enforcement comes days after NBC News reported that X made cuts to its disinformation and election integrity team. Shortly before Hamas launched its surprise attack, X removed headlines from links on the platform, making external links difficult to tell apart from standard photos shared on X.

Before Elon Musk acquired Twitter, the company’s management had devoted significant resources to fighting manipulated or misleading information. After taking over and renaming the platform, Musk slashed headcount in teams dedicated to fighting misinformation and criticized the company’s past work with the U.S. government on Covid-19 disinformation.

Under Musk, X has prioritized user-driven content tagging with Community Notes, the pre-existing feature formerly known as Birdwatch, which adds crowd-sourced context to posts. But a September study from the EU found that despite the feature, disinformation was more discoverable on X than on any other social media platform and, on a relative basis, received more engagement there than elsewhere.

Alex Goldenberg, an analyst at the Network Contagion Research Institute, studies hate and right-wing extremism on social media and in the real world. Goldenberg told CNBC that even before Musk’s tenure, Twitter had a challenging time handling non-English disinformation.

“I’ve often found that disinformation and incitement to violence in the English language are prioritized, but those in Arabic are often overlooked,” Goldenberg said. He added that NCRI has noted an uptick in “recycled videos and photos from older conflicts being associated, sometimes intentionally, with this particular conflict.”

Users have noticed the impact of the changes to X’s content moderation, and some have fallen prey to sharing disinformation on the platform.

“It’s remarkable how Elon Musk has destroyed what was perhaps the best thing about Twitter: the ability to get relatively accurate and trustworthy data in real time when there’s a crisis,” Paul Bernal, an IT law professor at the University of East Anglia in England, wrote on X Monday.

On Sunday, a British politician shared a video purportedly from a BBC correspondent. “Following some pretty appalling equivocation and whataboutery from the BBC yesterday and this morning, now this from a BBC journalist,” Chris Clarkson, a member of Parliament for Heywood and Middleton, wrote.

The video was not from a BBC correspondent; Clarkson wrote on Monday that his “comments on the BBC stand” but conceded that the original post was not from a BBC journalist.

Although government verification now awards certain accounts a silver checkmark, verification for notable individuals and reporters was phased out in favor of paid Twitter Blue verification, making it “even more difficult to ascertain whether the messenger of a particular message or its content is authentic,” Goldenberg said.

Some Hamas-created propaganda videos have also been circulating on X. While the terrorist organization is banned from most social media platforms, including X, it continues to share videos on Telegram. Those videos — including ones from the most recent assault on Israel — are often reshared onto X, Goldenberg told CNBC. And that can have real-world effects.

“As we’ve seen in the past, especially in May of 2021, for example, when tensions rise in the region, there’s a high possibility of a rise in hate crimes targeting the Jewish community outside of the region,” Goldenberg said.

Paid verification purportedly boosts a user’s posts and comments on X, and some posts tagged as misleading have come from those verified users. Musk himself has amplified such posts on several occasions, both pertaining to the conflict in Ukraine and more recently in Israel. On Sunday, Musk encouraged his 160 million followers to follow two accounts that he said had “good” content about the conflict.

One of those users had made anti-Semitic posts in the past, including one where the person told a Twitter user to “mind your own business, jew.” Musk later deleted his post promoting the account.
