Researchers have uncovered a vast network of more than 1,400 AI-operated social media accounts that have been spreading pro-military and pro-government propaganda in India for the past three years. The operation consists of 904 X (formerly Twitter) accounts and 500 Facebook identities aimed exclusively at Indian audiences.
The operation, which began in September 2021, focuses on enhancing the public image of Prime Minister Narendra Modi’s government while promoting the Indian Army. It also targets neighbouring countries, including China, the Maldives, Bangladesh (following the political exit of former Prime Minister Sheikh Hasina), and Pakistan, frequently spreading negative content about these nations.
Unlike most inauthentic campaigns, this fraudulent social media operation evaded detection despite relying on generic photos and fake profiles.
Researchers revealed that the network has surprisingly evaded detection for years despite its simplicity. In contrast to many inauthentic campaigns that are swiftly identified and shut down, this propaganda operation relied on accounts with fake identities and generic profile pictures. Rather than disseminating outright false information, the accounts focused on sharing content supportive of the government and military, often reposting news from pro-government outlets like ANI News and popular sources such as Hindustan Times.
Many of the posts appear to be generated by artificial intelligence. The language is frequently repetitive, poorly worded, and of low quality, suggesting little human oversight. In July, for example, twenty fake X accounts replied to a post about Army Chief General Upendra Dwivedi at the same time, saying almost the same thing in praise of the Indian military and its leadership.
Operational security across the network appears weak, with many accounts recycling the same content several times a day. In one notable case from June, 429 fake social media profiles reposted the same message about alleged mistreatment of religious minorities in Pakistan’s Balochistan region, underscoring the network’s reliance on duplication rather than fresh content.
Despite its large scale, the campaign has struggled to gain any significant traction with its target audience. The overly scripted and robotic nature of the posts, combined with their lack of creativity, has rendered the operation largely ineffective in generating genuine public engagement.
The campaign’s ability to remain undetected may be linked to how social media platforms work. Recommendation algorithms surface only a small slice of accounts to any individual user, making coordinated networks easy to overlook. The network also likely benefited from engagement by genuine supporters who had little reason to flag the content as problematic.