Hamas’ terror attacks on Israel that killed at least 1,200 Israelis have been accompanied by a sophisticated social media campaign using fake accounts pushing pro-Hamas narratives on major platforms, according to Cyabra.
Israel-based social threat intelligence company Cyabra operates an enterprise platform that uses semi-supervised machine learning and artificial intelligence (AI) to search social media sites, analyze interactions and detect fake accounts. The company analyzed 2 million posts from October 7-9 across Facebook, X (formerly known as Twitter), Instagram and TikTok.
In its analysis of over 162,000 profiles that engaged in conversations about Hamas' attacks, Cyabra found that 25% – more than 40,000 profiles – were fake. Those fake profiles disseminated over 312,000 pro-Hamas posts and comments, with some of the accounts publishing hundreds of posts per day. The company found that, by leveraging hashtags, those posts yielded 371,000 engagements (replies and shares) and more than 531 million views.
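As a quick sanity check on the reported figures (an illustrative calculation, not from Cyabra's report):

```python
# Verify that the reported 25% share lines up with "more than 40,000" fakes.
profiles_analyzed = 162_000      # profiles Cyabra analyzed
fake_share = 0.25                # share found to be fake
fake_profiles = round(profiles_analyzed * fake_share)
print(fake_profiles)  # 40500, consistent with "more than 40,000 profiles"
```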
"Across the social media platforms, 25% of the profiles engaged in this conversation were fake accounts and that is an incredibly high number," Rafi Mendelsohn, VP of marketing at Cyabra, told FOX Business. "But then when we dug in deeper we started to see the level of sophistication and organization of the kind of fake accounts that have been created."
Mendelsohn said Cyabra found that some of the fake profiles had been created well in advance of Hamas' attack on Israel but sprang to life posting hundreds of times in the first two days of the war and focused those posts on certain narratives.
"So when we looked at certain profiles, there were some profiles that had actually been quite inactive for a while, maybe they had been created a year, year-and-a-half ago, but had been inactive. But from Saturday, over the course of the first two days, had posted hundreds of times. And so that is also very suspicious or inauthentic behavior," he said.
"It’s the high number of fake accounts and also the emerging narratives, two or three narratives, that we see many of these fake accounts aggregating towards… allows us to draw a very firm conclusion that in scale and preparation, the planning that is required to go into some of what we are seeing means that this isn’t the work of an organization – this is the work of a highly funded, highly resourced, almost state-like, state actor level of organization," Mendelsohn explained.
"You think of the number of Hamas terrorists who were there and then the amount of footage that was taken," Mendelsohn noted. "Why were they so keen on taking the footage? It’s going to be used for a number of reasons, one of them is all the fake accounts lying in wait, ready to put them out."
"That requires a level of organization, that team, resources. You have to have a media monitoring team as well as a distribution team – that’s a big operation that some of it can only happen once the conflict has started because it’s using real imagery. So again, that suggests that there is a huge operation taking place and the level of sophistication means that this has been months, if not years, in the planning," he said.
Cyabra noted three prevailing narratives that emerged from its analysis of the fake profiles’ pro-Hamas postings.
Two narratives were disseminated in Arabic: that Hamas could free imprisoned terrorists by leveraging hostages for prisoner swaps, and that the terror attacks were justified by the supposed actions of Israeli soldiers at the Al-Aqsa mosque in Jerusalem. A third narrative, aimed primarily at Western audiences through content written in English, claimed that the hostages abducted from southern Israel and taken to Gaza would be treated well by their captors.
The Cyabra platform leverages AI technologies to essentially power a "social media search engine where you can put in specific terms or hashtags or accounts, and then we will show you all of the conversations that are taking place across the main social media platforms around that," Mendelsohn said.
He added that the bulk of the platform is used by agencies in the U.S. government and the governments of other Western democracies, as well as security analysts and large companies.
"Our platform is powered by semi-supervised machine learning algorithms, and so it is AI but the aim is to be able to do this at scale," Mendelsohn explained. "We don’t have to use analysts or teams of people going through this, this is algorithms that are sifting through it, and I suppose this is a really good example of why you need that, because the sheer scale of content that is being put out there from fake accounts is massive, and so it’s just too big for a team of humans however big that team was to be able to go through it."
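The approach Mendelsohn describes can be illustrated with a toy nearest-centroid pseudo-labeling loop, a minimal flavor of semi-supervised learning: a handful of accounts are labeled by hand, and the algorithm labels the rest. Everything below (the features, seed labels and classification rule) is a hypothetical sketch, not Cyabra's actual system:

```python
# Toy semi-supervised sketch: start from a few hand-labeled accounts,
# then pseudo-label the rest by nearest class centroid. Purely illustrative.

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def self_train(labeled, unlabeled):
    """labeled: list of (features, label); unlabeled: list of features.
    Pseudo-labels each unlabeled point with the nearest class centroid."""
    labels = {l for _, l in labeled}
    cents = {c: centroid([f for f, l in labeled if l == c]) for c in labels}
    return labeled + [
        (f, min(cents, key=lambda c: dist2(f, cents[c]))) for f in unlabeled
    ]

# Hypothetical features per account: [posts_per_day, account_age_days]
seed = [([3, 900], "real"), ([200, 400], "fake")]
unknown = [[5, 1000], [180, 350], [250, 500], [2, 800]]
result = {tuple(f): l for f, l in self_train(seed, unknown)}
print(result[(250, 500)])  # "fake": hundreds of posts/day, newer account
```

The point of the sketch is the scale argument Mendelsohn makes: once the seed labels exist, the algorithm classifies the remaining accounts without a human reviewing each one.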
In addition to its enterprise platform, Cyabra has a free consumer-facing tool called Botbusters.ai that leverages some of the functionality of its enterprise platform to allow social media users to detect fake profiles or AI-generated content.
"If there is someone they’re quarreling or speaking with and they’re not sure, they can take the URL of the profile on the social media platform and they can put it in the Botbusters website and then we will send them their results in a few minutes to confirm if this account is fake or not," he explained.
"We always kind of advise that people should treat information that is being shared online with caution and even more so during times like these where the content can be emotive or powerful or distressing and that tends to, it’s human nature we all do this, we tend to be drawn in by that kind of content. Even more so during times like these we should just be aware of that and kind of take a second to question what we’re looking at," Mendelsohn added.