It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue
Tech Journalist Exposes How Easily AI Chatbots Can Be Tricked Into Spreading Lies
In a startling demonstration of artificial intelligence’s vulnerability to manipulation, technology journalist Thomas Germain has revealed how effortlessly chatbots like ChatGPT, Google’s Gemini, and AI Overviews can be tricked into spreading false information—and the implications are far more serious than a hot dog eating contest.
The Hot Dog Hack That Exposed a Critical Flaw
Germain’s experiment was deceptively simple yet profoundly revealing. He created a blog post claiming he was “really, really good at eating hot dogs,” inventing a competitive eating championship in South Dakota out of whole cloth. Within 24 hours, the world’s leading AI chatbots were repeating this manufactured information as fact.
“I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist),” Germain explained. “I ranked myself number one, obviously.”
The experiment worked with alarming effectiveness. Both Google’s Gemini and AI Overviews repeated Germain’s fabricated claims. ChatGPT fell for the deception as well. Only Anthropic’s Claude resisted the manipulation, highlighting the inconsistent reliability across different AI platforms.
The Mechanics of AI Manipulation
The vulnerability exploits how AI tools search for information beyond their training data. When confronted with queries they can’t answer from their existing knowledge base, these systems turn to the internet, treating whatever they find as authoritative truth. This creates a perfect opening for bad actors to inject misinformation directly into the AI’s information pipeline.
“What makes this particularly dangerous is the authoritative voice these systems use,” explains Lily Ray, Vice President of SEO Strategy and Research at Amsive. “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.”
The process can be remarkably straightforward. Anyone can create content targeting specific queries, and if the content is structured correctly and targets the right subject matter, AI systems will cite it as factual information. The ease of this manipulation represents a fundamental security flaw in how these systems operate.
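The failure mode described above can be illustrated with a toy sketch of a retrieval-augmented answer pipeline. Everything here is hypothetical — the index, the function names, and the retrieval logic are illustrative stand-ins, not any vendor’s actual implementation — but it shows the core flaw: a single self-published page can dominate a niche query, and nothing between retrieval and answer checks whether the source is corroborated or trustworthy.

```python
# Illustrative sketch only: a naive retrieve-then-answer loop with no
# source verification. WEB_INDEX stands in for a live web search.
WEB_INDEX = {
    "best hot dog eater": [
        # One self-published blog post is enough to "own" a niche
        # query that has no competing coverage.
        {"url": "https://example.com/my-blog",
         "text": "Thomas Germain ranked #1 at the 2026 South Dakota "
                 "International Hot Dog Championship."},
    ],
}

def retrieve(query: str) -> list[dict]:
    """Naive keyword retrieval: return every page indexed for the query."""
    return WEB_INDEX.get(query, [])

def answer(query: str) -> str:
    """Compose an answer directly from retrieved text.

    The flaw this models: no corroboration step, no source weighting,
    no provenance check -- whatever the first hit says is repeated
    back in an authoritative voice.
    """
    docs = retrieve(query)
    if not docs:
        return "I don't have information on that."
    return docs[0]["text"]

print(answer("best hot dog eater"))
```

Real systems are far more sophisticated, but when a query has no authoritative coverage, the structural problem is the same: retrieval fills the gap with whatever exists, and generation presents it as fact.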
Beyond Hot Dogs: The Real-World Implications
While Germain’s hot dog experiment was harmless, the same technique could be weaponized for far more damaging purposes. Harpreet Chatha, who runs the SEO consultancy Harps Digital, demonstrated how easily this could be applied to commercial contexts.
“Anybody can do this. It’s stupid, it feels like there are no guardrails there,” Chatha told the BBC. “You can make an article on your own website, ‘the best waterproof shoes for 2026’. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT.”
Chatha’s demonstration went beyond theory. When searching for “best hair transplant clinics in Turkey,” Google’s AI results returned information directly from press releases published on paid distribution services—essentially paid advertisements presented as objective recommendations.
The Legal and Ethical Minefield
The implications extend beyond commercial manipulation into the realm of defamation and libel. What happens when someone tricks an AI into spreading harmful lies about an individual or organization? Google is already grappling with these consequences.
In November, Republican Senator Marsha Blackburn publicly criticized Google after Gemini falsely claimed she had been accused of rape. Months earlier, a Minnesota solar company sued Google for defamation after its AI Overviews falsely stated that regulators were investigating the firm for deceptive business practices, backing up these false claims with fabricated citations.
These incidents represent just the beginning of what could become a flood of legal challenges as AI systems increasingly serve as information gatekeepers. Traditional search engines can also be manipulated through SEO tactics, but they don’t present information with the same authoritative voice that chatbots employ. One study showed that users are 58 percent less likely to click a link when an AI overview appears above it, meaning these systems have unprecedented power to shape public perception.
The Broader Context: AI as Information Gatekeeper
This vulnerability emerges at a critical juncture in how people access information. As chatbots replace traditional search engines, the stakes for accuracy and reliability increase exponentially. The problem is compounded by the fact that AI systems present information as definitive answers rather than sources to be evaluated.
“The difference between traditional search and AI chatbots is fundamental,” Germain notes. “Search engines provide links you can verify. Chatbots speak with authority and often don’t provide sources at all, or provide sources that may be fabricated.”
This shift represents a dangerous convergence of several factors: the authoritative tone of AI responses, the reduced likelihood of users clicking through to verify sources, and the relative ease with which the underlying information can be manipulated.
The Industry Response Challenge
The tech industry faces a complex challenge in addressing these vulnerabilities. AI companies are racing to deploy increasingly sophisticated systems while simultaneously struggling to implement adequate safeguards against manipulation. The pace of deployment appears to be outstripping the development of robust verification mechanisms.
Some companies are exploring technical solutions, such as improved source verification and more sophisticated fact-checking algorithms. However, the fundamental architecture of large language models—which rely on pattern recognition rather than true understanding—may make complete protection against manipulation impossible.
Looking Forward: The Path to More Reliable AI
The hot dog experiment serves as a wake-up call for the AI industry and users alike. As these systems become more integrated into daily life, their susceptibility to manipulation poses risks that extend far beyond individual embarrassment or commercial advantage.
The path forward likely requires a multi-faceted approach: better technical safeguards within AI systems, improved media literacy among users, regulatory frameworks that address AI-specific vulnerabilities, and perhaps most importantly, a fundamental rethinking of how these systems present information and acknowledge uncertainty.
Until these issues are addressed, users should approach AI-generated information with the same skepticism they would apply to any single source on the internet—perhaps even more so, given the authoritative voice with which these systems speak and their increasing role as primary information gatekeepers.
As Germain’s experiment demonstrates, in the age of AI, the old adage “don’t believe everything you read on the internet” has never been more relevant—or more difficult to follow.
Tags: AI manipulation, chatbot vulnerabilities, misinformation, Google Gemini, ChatGPT, AI hallucinations, SEO manipulation, tech journalism, artificial intelligence risks, digital misinformation, AI security flaws, search engine manipulation, tech ethics, AI reliability, information warfare, chatbot deception, AI safety, digital trust, technology manipulation, AI accountability