A large network of fake social media accounts promoting Indian government and military propaganda targeting Indian readers has been uncovered after three years of operation.
Researchers from NewsGuard linked at least 500 Facebook accounts and 904 X accounts that have been posting, reposting, and commenting on content intended to curry favor for Prime Minister Narendra Modi’s administration in India. The content also routinely casts aspersions on China, the Maldives, Bangladesh (following the popular ousting of its former prime minister, Sheikh Hasina), and, of course, Pakistan.
Remarkably, the relatively amateurish influence operation has survived, unreported, since September 2021.
“It was certainly a surprising find,” says Dimitris Dimitriadis, director of research and development for NewsGuard. By contrast, he says, “We regularly track inauthentic networks, but then a week or two in, they get detected and taken down.”
Indian Propaganda on X & Facebook
Despite evading notice for so long, the campaign isn’t particularly subtle; it is, however, notable for its sprawling size.
The profiles all feature fake names and profile pictures, and typically promote propaganda rather than outright disinformation. Sometimes, they do so by reposting favorable news stories from pro-government news outlets, as well as more popular outlets like the Hindustan Times.
In July, as just one example, 20 fake X accounts tied to the propaganda network all commented on a post from the pro-government outlet ANI News, reporting on how “Army Chief General Upendra Dwivedi touches the feet of his brother and other relatives as he takes over as the new Chief of Army Staff.” The fake profiles all added cookie-cutter commentary: “The Indian Army — A symbol of national strength that deters aggression”; “Every soldier’s story, a legacy of bravery passed down through time”; and “General Dwivedi — A leader who values transparency and accountability. Indian Army, with public trust.”
In other instances, the fake profiles create their own content. The ironically named JK News Network, for instance, purports to provide 24/7 news updates but instead posts pro-army news and commentary, as well as adulatory content like flattering photos of military personnel.
Often, the posts from these profiles appear to be AI-generated. “It’s the type of text you expect to see — very bland, very dry, quite sloppy, some awkward English, some unfinished sentences, which suggested that it could be unsupervised,” says Dimitriadis.
Worse for the operational security of those running the campaign, the accounts are blatantly repetitive and overlapping. Individual accounts regularly post the same content up to 10 times per day, and hundreds of accounts make identical posts. In June, for example, when JK News Network posted, “Balochistan Under Strain: Persistent Harassment by Pakistani Security Forces Demands an End.#FascistPakArmy,” in reference to Pakistan’s suppression of religious minorities in the Balochistan region, the post was reposted verbatim by 429 other fake accounts.
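Repetition on this scale is easy to surface programmatically. As a minimal sketch (not NewsGuard’s actual methodology; the account names and post data below are hypothetical), grouping posts by exact text immediately flags texts amplified by many distinct accounts:

```python
from collections import defaultdict

# Hypothetical (account, post_text) pairs; a real pipeline would ingest
# platform data through official APIs or a research dataset.
posts = [
    ("acct_001", "Balochistan Under Strain: Persistent Harassment..."),
    ("acct_002", "Balochistan Under Strain: Persistent Harassment..."),
    ("acct_003", "An unrelated, organic post"),
]

def find_copypasta_clusters(posts, min_accounts=2):
    """Group posts by exact text; return texts shared by >= min_accounts accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        accounts_by_text[text].add(account)
    return {
        text: accounts
        for text, accounts in accounts_by_text.items()
        if len(accounts) >= min_accounts
    }

for text, accounts in find_copypasta_clusters(posts).items():
    print(f"{len(accounts)} accounts posted identical text: {text[:40]!r}")
```

Real coordinated-behavior detection also weighs near-duplicates, posting timing, and account-creation patterns, but even this exact-match grouping would have caught the 429-account repost wave described above.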
Online Influence Ops Prove Ineffectual
The relative lack of effort and creativity might explain why such a longstanding, widespread campaign seems to have had no measurable impact on its intended audience.
As Dimitriadis explains, “It’s no secret that these types of campaigns are very bad at generating traction. They’re normally quite awkward, and quite sloppy in terms of just reading the mood — being able to tap into real public conversations. We’ve seen some recent counterexamples [like] Spamouflage, but with this campaign, it was very much along those lines. We didn’t really see any engagement.”
Source: NewsGuard
As for why such underdeveloped, sometimes obviously AI-generated content managed to raise so few eyebrows, it might have more to do with the social media platforms themselves than what’s actually posted to them.
“Until a clear connection linking them to a campaign is established, many users dismiss these influence operations accounts as minor,” explains Abu Qureshi, threat intelligence lead at BforeAI. In reality, “Based on how general social media algorithms operate, just a few accounts per user are displayed initially, to see the engagement of the consumer. This makes them easy to overlook.”
He adds, “To stay hidden, these account users may change usernames, delete posts, or modify content. Additionally, the majority of the engagements these posts get are from like-minded supporters who may have no reason to report or flag such posts as threats.”