There’s a lot to hate about AI. But what if there was a mindful way to use it?
AI for the People: A New Approach to Artificial Intelligence
I’ll admit it—I’ve become that guy at social gatherings who can’t stop talking about artificial intelligence. When I mention I’m working on a newsletter about AI, I’m often met with the same skeptical expressions and raised eyebrows. But before you dismiss this as another tech evangelist’s pitch, let me explain why this is different.
This isn’t about how AI can replace your friends or make your boss think you pulled an all-nighter on that presentation. Instead, I’m focused on using AI in ways that enhance our humanity rather than diminish it.
Like most people, I share concerns about mindless AI-generated content and the very real threats AI poses to our privacy, cognitive abilities, and employment. But I view AI similarly to how we view the internet today.
Yes, the internet has given us doomscrolling, data harvesting, clickbait, and your uncle’s vaccine conspiracy theories on Facebook. But it has also given us digital maps, podcasts, niche blogs, Wikipedia, video calls, and—who could forget—the Guardian app itself.
Any powerful tool will inevitably be exploited for nefarious purposes, but that doesn’t mean we must follow suit or simply accept the consequences. It means we need to demand proper regulation and accountability from the companies building these systems. Now is the crucial moment to advocate for guardrails around privacy, environmental impact, and the spread of misinformation.
And if we’re going to use AI, we must do so with our eyes wide open.
So where does this leave us? In “AI for the People,” our new free six-week newsletter course, we explore practical ways to work with AI while maintaining control and awareness—at work, in the kitchen, at the gym, and beyond. We’ll approach this with clear boundaries, which I’ll explain through our four cardinal rules.
But first, let me share why I believe AI can actually be useful.
I despise informational asymmetry—when corporations use complex legal language to confuse us into signing contracts we never actually read. Remember those arbitration clauses that Disney and Uber used to prevent people from suing them? I’ve started taking terms and conditions and legal contracts, feeding them to AI, and asking it to explain them in plain English while highlighting the most concerning clauses.
I’ve also used AI to help with my chronic time blindness, cram for my driving permit test, cook more adventurously, maintain a consistent workout routine, and even learn to play the Lord of the Rings theme on the tin whistle.
In most cases, I’ve found that AI is no substitute for human connection—no surprise there. But as an assistant that helps me understand new information, speed up tasks, or create tailored plans, my year has been full of small, practical revelations that I’m excited to share.
“AI for the People” isn’t about “10 prompts that will change your life” or letting a chatbot do your job for you. It’s about learning how AI can help you without surrendering your judgment.
As AI expert Ethan Mollick told me: “It’s just like any other tool: you dull your skills and critical thinking by giving all your skills and critical thinking to the AI.”
Many of these challenges aren’t new. Speaking to the New York Times in 2002, Italian author Umberto Eco was already grappling with misinformation in the early days of the web. “The problem with the internet is that it gives you everything, reliable material and crazy material,” he said. “So the problem becomes, how do you discriminate?”
That question—how we learn to discriminate, adapt, and stay in control—is the guiding philosophy behind “AI for the People.” We hope you’ll join us.
Our Four Cardinal Rules for This Series
AI can be powerful and genuinely useful, but only if we approach it with intention. Here are the principles we’re working from.
1. You’re the boss
You can hand AI your instructions, let it do everything for you, and uncritically accept its responses. But over time, that convenience costs you both skills and control.
As Mollick, bestselling author of “Co-Intelligence,” also told me: “If you’re trying to learn something, make sure the AI is asking you questions and not giving you answers.”
That’s why we’ll always look at AI as a smart collaborator or assistant—with you staying firmly in charge.
2. Be your own factchecker
AI tools can get things wrong, whether due to bad sourcing or hallucinations. One notorious example: in 2024, Google’s AI Overviews feature advised people to add glue to pizza after mistaking a joke on Reddit for a real recipe tip.
The key is to treat AI information like any other information. “If it’s something that really matters, you have to spend the time to verify it,” says Mollick.
You can ask your AI tool to provide links to sources, or you can upload the source itself (like a peer-reviewed study or official report) and ask the AI to only base its answers on what you’ve provided.
3. Be informed and intentional
The Guardian has covered some of the alarming environmental impacts of AI, which can leave individual users unsure whether they should use it at all. Reliable data is hard to pin down, but the bigger environmental concerns are the rapid growth of AI infrastructure, the passive integration of AI into everyday digital services, and how all of it is being powered.
Everything we do online consumes energy and water—whether it’s watching Netflix, sending emails, or hopping on a video call. Some data suggests that using AI for simple tasks isn’t orders of magnitude higher than ordinary web activity, though it can be more energy-intensive than a basic search.
For this series, we will only use text-based prompts, which are on the lower end of AI energy consumption. None of this is to say that we should all send a hundred prompts a day. Just like you wouldn’t run your dishwasher to clean one fork, or take a private jet to the supermarket, it’s all about responsible use.
4. Don’t share sensitive information
If you want to protect your privacy, or in some cases your job, be careful about what you share with an AI tool. Whatever you type is sent to servers owned by the company behind the tool, where it could be exposed through a data breach or a legal request. It can also be used to train the model unless you’re able to opt out, and many workplaces have strict policies about what employees may put into AI tools.