Why using AI responsibly also means knowing when not to use it
In an era where artificial intelligence is rapidly reshaping how we work, learn, and interact, a growing chorus of experts is urging us to pause and ask not just how we use these powerful tools—but whether we should be using them at all. Professor Sam Illingworth of Edinburgh Napier University, a leading voice in science communication and interdisciplinary studies, argues that the real challenge isn’t mastering AI prompts or speeding up workflows. It’s about cultivating a deeper, more critical understanding of the technology’s limitations, biases, and ethical implications.
Most AI training today focuses on efficiency: write better prompts, refine your queries, generate content faster. But this approach treats AI as just another productivity tool, measuring success by speed and output. Illingworth contends that this misses the point entirely. Instead, we need to embrace what he calls “critical AI literacy”—a mindset that asks probing questions about the technology’s role in our lives and the values it reflects.
AI systems are not neutral. They carry biases that most users never see. For example, a 2025 study analyzing the British Newspaper Archive found that digitized Victorian newspapers represent less than 20% of what was actually printed. The sample skews toward overtly political publications, leaving out independent voices and alternative perspectives. Anyone drawing conclusions about Victorian society from this data risks perpetuating distortions baked into the archive. The same principle applies to the datasets that power today’s AI tools. If we can’t see the biases, we can’t interrogate them.
This is where the humanities come in. Literary scholars have long understood that texts help construct, rather than simply reflect, reality. A newspaper article from 1870 is not a window onto the past, but a curated representation shaped by editors, advertisers, and owners. AI outputs work the same way. They synthesize patterns from training data that reflects particular worldviews and commercial interests. The humanities teach us to ask whose voice is present and whose is absent.
Consider a 2023 study published in The Lancet Global Health. Researchers attempted to invert stereotypical global health imagery, prompting an AI image generator to create visuals of Black African doctors providing care to white children. Across more than 300 generated images, the system could not produce this inversion: the recipients of care were always rendered as Black. The model had absorbed existing imagery so thoroughly that it could not imagine alternatives. This is the real danger of “AI slop”: outputs that perpetuate biases without interrogation.
Even our most cherished human relationships are at risk. Philosophers Micah Lott and William Hasselberger argue that AI cannot be your friend because friendship requires caring about the good of another for their own sake. An AI tool lacks an internal good; it exists to serve the user. When companies market AI as a companion, they offer simulated empathy without the friction of human relationships. The AI cannot reject you or pursue its own interests. The relationship remains one-sided—a commercial transaction disguised as connection.
The stakes are not hypothetical. Decisions made with AI assistance are already shaping hiring, healthcare, education, and justice. If we lack frameworks to evaluate these systems critically, we outsource judgment to algorithms whose limitations remain invisible. Educators need to distinguish when AI supports learning and when it substitutes for the cognitive work that produces understanding. Journalists need criteria for evaluating AI-generated content. Healthcare professionals need protocols for integrating AI recommendations without abdicating clinical judgment.
This is the work Illingworth pursues through Slow AI, a community exploring how to engage with AI effectively and ethically. The current trajectory of AI development assumes we will all move faster, think less, and accept synthetic outputs as a default state. Critical AI literacy resists that momentum.
None of this requires rejecting technology. The Luddites, textile workers who organized against factory owners in the early 19th century, were not opposed to progress. They were skilled craftsmen defending their livelihoods against the social costs of automation. When Lord Byron rose in the House of Lords in 1812 to deliver his maiden speech against the frame-breaking bill (which made the destruction of frames punishable by death), he argued these were not ignorant wreckers but people driven by circumstances of unparalleled distress. The Luddites saw clearly what the machines meant: the erasure of craft and the reduction of human skill to mechanical repetition. They were not rejecting technology. They were rejecting its uncritical adoption.
Critical AI literacy asks us to recover that discernment: to move beyond “how to use” toward an understanding of “how to think.” Ultimately, it’s not about mastering prompts or optimizing workflows. It’s about knowing when to use AI and when to leave it the hell alone.
Tags:
AI literacy, critical thinking, AI ethics, digital literacy, AI bias, algorithmic bias, technology ethics, Slow AI, human judgment, AI limitations, AI companionship, synthetic empathy, humanities and AI, AI in education, AI in journalism, AI in healthcare, AI in justice, AI in hiring, Luddites, Lord Byron, Victorian newspapers, British Newspaper Archive, Lancet Global Health, Micah Lott, William Hasselberger, AI slop, AI-generated content, human relationships, automation, social costs, technological discernment, ethical AI usage, critical AI literacy, human skill, craft, technological progress.