‘They feel true’: political deepfakes are growing in influence – even if people know they aren’t real
The New Age of Digital Deception: How AI Avatars and Deepfakes Are Reshaping Politics and Propaganda
In an era where truth and fiction blur more than ever before, artificial intelligence has become the ultimate tool for crafting convincing illusions. From hyper-realistic avatars of fictional people to deepfakes of world leaders, AI is reshaping the landscape of online influence, politics, and propaganda. What once required elaborate studios and skilled actors can now be done in minutes by anyone with access to generative AI tools.
The Rise of AI-Generated Influencers
Meet Jessica Foster—or rather, the AI-generated persona of her. This fictional blonde woman, often depicted in a U.S. military uniform, exploded onto Instagram in late 2025. Her photos—sitting on a barracks bunk, posing in an office chair with her feet on the desk, or strolling a tarmac in high heels beside Donald Trump—drew over a million followers in a matter of months.
But Jessica Foster isn’t real. She’s a digital fabrication, created to attract attention and drive engagement. Her creators intentionally highlighted her feet in many images—a detail that may seem odd, but was designed to appeal to niche audiences and direct them toward her OnlyFans account, where foot photos were sold as if they were hers.
This isn’t just about clicks and cash. It’s a new form of digital propaganda. Even when people know the images are fake, they can still reinforce existing beliefs, making the unreal feel emotionally “true.” As Daniel Schiff, a technology policy professor at Purdue University, puts it: “We are blending the lines between political cartoons and reality.”
Deepfakes in Politics: From Taylor Swift to Trump
Political deepfakes have skyrocketed in recent years. Since the start of 2025, researchers have catalogued over 1,000 English-language social media posts featuring fake images or videos of political figures—more than in the previous eight years combined. This explosion is largely due to advances in generative AI, which make it “trivially easy” to create realistic scenes and insert real people into them, says Sam Gregory of Witness, a human rights organization.
During the 2024 U.S. election, Donald Trump shared AI-generated images of Taylor Swift fans supposedly supporting him. Since then, the Trump White House has posted at least 18 deepfakes on social media. But the issue isn’t partisan: California Governor Gavin Newsom has also used deepfakes, including one showing Trump smiling at a hologram of Jeffrey Epstein.
Even when viewers know the content is fake, it can still be persuasive. “People aren’t necessarily looking for things that are real; they are looking for things that represent their beliefs,” Gregory explains. These deepfakes add another layer of reinforcement, making it less likely that people will reconsider their views, says Valerie Wirtschafter of the Brookings Institution.
AI Avatars as Propaganda Tools
The use of fake avatars goes beyond individual influencers. During the 2025 conflict between Israel and Iran, videos circulated on social media featuring AI-generated female Iranian soldiers saying, “Habibi, come to Iran.” One giveaway: Iran prohibits women from serving in combat roles. Similarly, an AI-generated female police officer on TikTok has amassed over 26,000 followers, posting content that supports Trump’s immigration policies.
These avatars are part of what researchers call “AI swarms”—coordinated networks of bots that can infiltrate communities and fabricate consensus. “It’s sort of like a troll farm without actually having to have people any more,” Wirtschafter says.
The Fight for Digital Authenticity
As the threat grows, so do efforts to combat it. The Coalition for Content Provenance and Authenticity (C2PA) has developed a technical standard for embedding cryptographically signed metadata in digital content, allowing platforms to label AI-generated material. Companies like LinkedIn, Pinterest, TikTok, and YouTube have pledged to adopt these labels.
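To make the mechanism concrete: C2PA provenance manifests are embedded in JUMBF boxes, which JPEG files carry in APP11 marker segments. A minimal sketch of how a platform might check for the *presence* of such a segment is below (the function name is illustrative, not part of any real library). Note that this only detects the container; actual verification means parsing the manifest and validating its cryptographic signatures, which requires a full C2PA implementation.

```python
# Sketch: detect whether a JPEG byte stream contains an APP11 (0xFFEB)
# marker segment, the container JPEG uses for JUMBF boxes, where C2PA
# provenance manifests live. Presence of the segment is not proof of a
# valid, signed manifest -- it only shows the container exists.

def has_c2pa_segment(data: bytes) -> bool:
    """Return True if the JPEG byte stream contains an APP11 segment."""
    if data[:2] != b"\xff\xd8":          # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # every marker starts with 0xFF
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):       # EOI / SOS: metadata region ends
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB:               # APP11: JUMBF / C2PA container
            return True
        i += 2 + length                  # skip marker bytes + payload
    return False
```

In practice, platforms that adopt the standard would go much further, validating the signature chain before showing a “content credentials” label; this sketch only shows why labeling can be cheap to check but inconsistent to enforce.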
But implementation is inconsistent. An investigation by The Indicator found that even the most diligent platforms only labeled about two-thirds of AI-generated content, while Instagram labeled just 15 out of 105 fake images. Meta’s oversight board has expressed concern over the company’s “inconsistent” application of these standards.
“We don’t need to give up on the ability to discern what is real from synthetic,” Gregory says. “But we do need to act fast.”
The Bottom Line
AI-generated avatars and deepfakes are more than just digital curiosities—they’re powerful tools for influence, propaganda, and profit. As technology advances, the line between reality and fiction will only blur further. The challenge now is not just to detect fakes, but to build a digital ecosystem where authenticity can be verified and trust can be restored.
Viral Tags:
#AIinfluencer #Deepfake #DigitalDeception #Propaganda #FakeNews #TechEthics #AIethics #SocialMediaManipulation #PoliticalDeepfakes #JessicaFoster #TrumpAI #TaylorSwiftAI #GavinNewsom #IranPropaganda #AIbots #ContentAuthenticity #MetaAI #TikTokAI #InstagramAI #OnlineInfluence #DigitalTrust
Viral Phrases:
“AI-generated avatars are the new propaganda”
“Jessica Foster: The fake influencer who fooled a million”
“Deepfakes are making the unreal feel true”
“AI swarms: The future of digital manipulation”
“Can you trust what you see online?”
“The rise of the AI troll farm”
“Political deepfakes: When fiction shapes reality”
“Tech giants struggle to label AI content”
“The battle for digital authenticity”
“AI is rewriting the rules of influence”