I’m not proud of it, but for the last year I’ve been that annoying guy at parties who talks about AI. When I tell people I’m working on a newsletter about it, I’m met with the usual frowns and suspicion.
But wait! Don’t walk away yet. This is not one of those ads about how an AI tool can replace your friends or trick your boss into thinking you stayed up all night working on a presentation. Instead, I’m focused on ways to use AI that don’t rob me – or anyone – of humanity.
Like most people, I hate mindless slop and the threat AI poses to our privacy, mental faculties and jobs. But I view AI in a similar way to the internet.
Yes, the internet unfortunately gave us doomscrolling, data harvesting, clickbait and your uncle’s Facebook posts on vaccines. But it also gave us digital maps, podcasts, niche blogs, Wikipedia, video calls and, who can forget, the Guardian app.
Like any powerful tool, AI will be exploited by some people for nefarious ends, but that doesn’t mean we have to follow suit or acquiesce. It means we have to demand that the companies building it are properly regulated and held accountable. Now is the time to call for guardrails on privacy, environmental impact and the spread of misinformation.
And if we’re going to use AI, we should do it with our eyes open.
So where does that leave us? In AI for the People, our new free six-week newsletter course, we look at useful ways to work with AI while staying alert and in control – at work, in the kitchen, at the gym, and beyond. And we’ll do it with guardrails in place – more on that below, in our four cardinal rules.
But back to me being annoying at parties.
Here’s what I tell my skeptical acquaintances about how AI can actually be useful: I hate informational asymmetry. Take corporations that bamboozle us with legalese until we end up signing contracts we never read. Remember those arbitration clauses used by Disney and Uber that stopped people from suing them?
So I’ve been taking terms and conditions and legal contracts and getting the AI to explain them in plain English while highlighting the clauses I should be most concerned about.
I’ve also used AI to help with my chronic time blindness, cram for my driving permit test, cook more adventurously, work out more consistently and, even better, learn to play the Lord of the Rings theme on the tin whistle.
In most cases, I’ve found that AI is no substitute for a real human being – no big surprise there. But as an assistant that helps me understand new information, speed up tasks, or come up with tailored plans, my year has been full of small, practical revelations that I’m looking forward to sharing with you.
AI for the People isn’t about “10 prompts that will change your life”, or letting a chatbot do your job for you. It’s about learning the ways in which AI can help you without surrendering your judgement.
As the AI expert Ethan Mollick told me: “It’s just like any other tool: you dull your skills and critical thinking by giving all your skills and critical thinking to the AI.”
Many of these problems aren’t new. Speaking to the New York Times in 2002, the Italian author Umberto Eco was already grappling with misinformation in the early days of the web. “The problem with the internet is that it gives you everything, reliable material and crazy material,” he said. “So the problem becomes, how do you discriminate?”
That question – how we learn to discriminate, adapt and stay in control – is the guiding philosophy behind AI for the People. We hope you’ll join us.
Our four cardinal rules for this series
AI can be powerful and genuinely useful, but only if we approach it with intention. Here are the principles we’re working from.
1. You’re the boss
You can give the AI instructions, let it do everything for you and repeat its responses uncritically. But over time, that trade-off costs you control.
As Mollick, the bestselling author of Co-Intelligence, told me: “It’s just like any other tool, right? You dull your skills and critical thinking by giving all your skills and critical thinking to the AI. If you’re trying to learn something, make sure the AI is asking you questions and not giving you answers.”
That’s why we’ll always look at AI as a smart collaborator or an assistant – with you staying in charge.
2. Be your own factchecker
AI tools can get things wrong, whether it’s because of bad sourcing or hallucinations. One example: in 2024 Google’s AI search overview advised people to add glue to pizza, after mistaking a joke on Reddit for a real recipe tip.
The key is to treat AI information like any other information. “If it’s something that really matters, you have to spend the time to verify it,” says Mollick.
You can ask your AI tool to provide links to sources, or you can upload the source itself (like a peer-reviewed study or an official report) and ask the AI to only base its answers on what you’ve provided.
3. Be informed and intentional
The Guardian has covered some of the alarming environmental impacts of AI, which can leave individual users unsure whether they should be using it at all. Reliable data is hard to pin down, but the bigger environmental issues are the rapid growth of AI infrastructure, the way AI is being passively integrated into digital services, and how it is all being powered.
Everything we do online consumes energy and water – whether it’s watching Netflix, sending emails or hopping on a video call. Some data suggests that the energy used by a simple AI query is not orders of magnitude higher than that of ordinary web activity, though it can be more energy-intensive than a basic search.
For this series, we will only use text-based prompts, which are on the lower end of AI energy consumption. None of this is to say that we should all send a hundred prompts a day. Just like you wouldn’t run your dishwasher to clean one fork, or take a private jet to the supermarket, it’s all about responsible use.
4. Don’t share sensitive information
If you want to maintain your privacy – or, in some cases, your job – you need to be careful about what you share with an AI tool. Whatever you type is sent to servers owned by the company behind the tool, where it could be exposed through data breaches or legal requests. Anything you share can also be used to train the model unless you are able to opt out, and many workplaces have strict policies about how AI can be used.







