Vladimir Putin’s shadowy cyberspace army is ‘weaponising’ artificial intelligence to spread disinformation online and confuse Britons into siding with the Kremlin, experts have warned.
The new technology is ‘already in use’ and ‘blurring the lines’ between fact and fiction, researchers at the Royal United Services Institute (RUSI) worryingly claimed.
Experts from the London-based think-tank say Russia-linked groups – including ‘hacktivist collectives’ and pro-Kremlin influencers – have already been mobilised.
Using so-called ‘generative AI’, the groups are working to seed disinformation about Russian activity on an industrial scale, using custom-built automated propaganda to ‘sow discord’ across the West.
The tech has already been ‘integrated’ into Russia’s cyber operations and is now ‘fuelling an information arms race’ while seeking to ‘overwhelm’ governments in the West, analysts fear.
‘Far from being a distant risk, the research shows how AI is already central in Russian disinformation operations for its ability to scale, and personalise disinformation, generate content automatically, and reduce attribution risks,’ RUSI said in a report.
Generative AI refers to artificial intelligence systems that can create new content, such as text, images, audio, or code, based on the data they have been trained on.
The technology has exploded in recent years, with some systems now able to produce near photo-realistic images that can pass as genuine.

Vladimir Putin’s shadowy cyberspace army is ‘weaponising’ artificial intelligence to spread disinformation online, experts have warned
Fake content created by AI has included everything from images of fictional attacks and atrocities to videos purportedly showing victims caught up in wartime assaults.
But AI can also be used to create fake news reports, and to stage rows between ‘bots’ – automated accounts on social media built to dupe people into believing a false story.
In a 22-page report dubbed ‘Russia, AI and the Future of Disinformation Warfare’, RUSI warned Kremlin-backed groups were already looking into ways to use AI to ‘amplify’ content.
‘Generative AI is already being integrated into Russian disinformation operations,’ RUSI’s report said. ‘Automated tools generate fake articles, social media posts, images and deepfakes.
‘Operations like the “DoppelGänger” campaign, in which AI-generated articles mimicked legitimate Western news outlets, illustrate how these tactics aim to erode trust and sow confusion at scale.
‘AI-powered bots and automated social media accounts help amplify disinformation, saturate public discourse and simulate grassroots sentiment – a tactic known as “astroturfing”.
‘In some cases, fake conversations between bots are staged to simulate debate and mislead third-party observers.’
The smart technology is reportedly being used strategically by mercenaries from the Wagner group – a team of guns for hire who have previously been ordered to fight in Ukraine by the Kremlin.

The smart technology is reportedly being used strategically by mercenaries from the Wagner group (pictured are members from the mercenary team in Ukraine)

Hackers from the group NoName057(16) have ‘openly’ discussed using AI to sharpen their malicious cyber attacks, misinformation campaigns, and reputational sabotage (file image)
The mercenary group is allegedly targeting the messaging app Telegram, using generative AI as a tool to ‘undermine trust in Western institutions, sow discord… and frame any Russian cyber activities as defensive responses to perceived Western aggression’.
Meanwhile, hackers from the group NoName057(16) have ‘openly’ discussed using AI to sharpen their malicious cyber attacks, misinformation campaigns, and reputational sabotage.
Since its launch in 2022, the group has used such attacks to disrupt a range of Ukrainian, European and American government agencies and media outlets.
Experts say Russian chiefs see AI as both an ‘opportunity and a threat’, and as a ‘powerful tool for information manipulation’ which the West may have a better grip on.
The Kremlin has long prioritised information warfare as a central element of statecraft, viewing it as a theatre of war ‘on par with conventional or nuclear warfare’, the authors of the RUSI report said.
Disinformation teams affiliated with Putin are thought to have invested heavily in AI technologies to influence European audiences in the run-up to the 2024 European Parliament elections.
In August, the Mail revealed how a Russian-linked fake news website had fuelled violent protests over the Southport stabbings last year.
The misinformation spread like wildfire and within 27 hours, cities across the UK were in flames as rioting erupted.

Channel3 Now, which claims to be based in the US but has paid for high-end privacy protections, started life 11 years ago as a Russian YouTube channel that posted videos of rally-driving in the snow in Izhevsk, a Russian city about 750 miles east of Moscow

A police van was set on fire near a mosque in Southport during the rioting that followed last year
And last year, a US intelligence official claimed Russia had generated more AI content to influence the US presidential election than any other foreign power, as part of its broader effort to get Donald Trump re-elected.
But as the tech improves and becomes cheaper, it is lowering the threshold for pro-Russian groups to take advantage, potentially opening the floodgates for disinformation to swamp social media.
Russian disinformation campaigns aim to undermine its adversaries by fanning the flames of internal division, eroding trust in democratic institutions, and weakening alliances such as Nato or the EU.
‘Although many of these campaigns are under-resourced and disorganised, social media platforms allow for low-cost, large-scale experimentation without significant consequence for failed attempts,’ RUSI said.
‘Trial and error approaches carry little risk, and the volume of content often matters more than precision.’
In a series of recommendations, defence experts at RUSI urged the UK to step up its monitoring of Kremlin-linked groups using AI.
Britain also needs to ‘support civil society resilience against AI threats’ by investing in ‘digital literacy’ campaigns to help Britons identify fake, AI-generated propaganda.
And researchers have called for the development of ‘AI governance frameworks to prevent abuse’.
‘The fusion of AI and influence operations reinforces the need for AI governance frameworks that explicitly address malign use cases … Coordination between governments, platforms, researchers, and journalists must also happen at a larger scale … sharing insights on observed tactics and uses of AI tools,’ RUSI said.
‘Generative AI is no longer merely a tool – but is an ideological and operational centrepiece reshaping the mechanics, narratives, and strategic cultures of Russian disinformation,’ the experts concluded.
‘While Russian influence actors prize AI for its ability to scale, anonymise, and personalise propaganda, they also voice deep concern over the Western monopoly on high-performance AI tools and the ideological unreliability of domestic alternatives.

Pictured are Russian troops from across the country taking part in a march
‘By highlighting actor-level conversations, recruitment efforts, and operational applications, the report offers a rare window into how Russia’s digital influence ecosystem is evolving in real time and how competing narratives – of empowerment and vulnerability – are fuelling an information arms race, where the ability to manipulate perception and shape narratives through technology is becoming just as critical as traditional warfare capabilities.
‘The findings underscore the necessity for renewed vigilance in AI governance and disinformation strategy, not only to understand how AI tools are used, but how they are discussed, imagined, and embedded in adversarial worldviews.’