I’ve taught thousands of people how to use AI – here’s what I’ve learned


Training teams to use AI at work has given me a front-row seat to a new kind of professional divide.

Some people hand everything over to the machine and stop thinking. Others won’t touch it at all.

But there’s a third group. They learn to work with AI critically, treating it like a bright, enthusiastic intern that needs to be managed and supported to do its best work.

The difference? It’s rarely technical ability. It’s curiosity. A willingness to experiment, get things wrong, and figure out what AI is actually good at.

Here’s what I’ve learned so far.

Most people fail with AI because they don’t understand what it actually is

The people I’ve worked with tend to swing between extremes: treating AI as an all-knowing oracle or dismissing it entirely after one mistake.

Current AI has as much in common with the human brain as a bird has with an A380. Both can fly, but that’s where the similarity ends. Large Language Models simply predict words based on patterns in their training data. It’s why they can produce fluent prose about well-covered topics, but will confidently make things up when they’re on unfamiliar ground.

Once users understand this, their approach changes: they start giving the AI clear goals and proper context. When someone tells me everything they get from AI is rubbish, it almost always turns out they’re getting generic answers to generic prompts.

The people who get the best results treat AI as a skill, not a shortcut

The biggest predictor of success isn’t technical ability. It’s whether someone treats AI as a skill to be learned rather than a magic box that either works or doesn’t. The people best at using it are the ones who experiment daily and reflect on how to get better results next time. The goal is to get the machines to work for us, not to think for us – and that means using them in a proactive, critical and engaged way.

AI needs direction, feedback and correction – just like people do

The skills needed to use AI are ones many people already have: communication and delegation. Just as with that intern, you wouldn’t hand over a project and disappear. You’d break it down, check in regularly, and course-correct as needed. The same applies to AI.

And just like with an intern, as their manager you’re ultimately responsible for what they produce. That’s what ‘human in the loop’ really means: it’s your job to keep the AI on track and make sure the output is up to scratch.

You shouldn’t outsource your judgment to AI – or give it sensitive data

A few months ago, a manager at a small retail chain was proudly showing me the HR dashboard he had coded using AI. Unfortunately, he had also imported sensitive information without thinking about what would happen if that data leaked, or about the policies he needed to follow. I sent him straight to IT.

But the risks go beyond security. AI systems are trained on data created by humans, and they reflect our collective biases. You should avoid asking AI to make high-level subjective judgement calls, such as “should we put this candidate through to interview?”, which are prone to bias. Focus instead on factual evaluations, for example “does this candidate have the right number of years of experience?”

Ignoring AI won’t stop its impact

The environmental, ethical and social impact of AI is significant and growing. In a recent session for an environmental charity, one director was torn between the ability to do more as an organisation and the moral costs of doing so, such as the carbon impact of running AI systems. But AI is not going away. It’s far better to have AI-literate citizens, able to demand that it’s built in a responsible and democratic way. AI is not a train waiting for us to board; it’s already mid-journey. The only question is who gets to steer.

The pace of AI’s evolution leaves no room for slow decisions

Today’s version of AI is the worst it will ever be, and it’s improving faster than most people realise. Tasks that were impossible a year ago are now routine. Where once I spent long nights hunched over a keyboard trying to figure out why my code wouldn’t run the way it was supposed to, now I create whole applications in a matter of hours with nothing more than a few prompts. Many developers laughed last year when Anthropic’s CEO said 90% of code would soon be written by AI. Today many admit he wasn’t far off.

Unlike the technological revolutions of the past, this one is moving faster than our ability to adapt. It took a century from the steam engine to the locomotive, and fifty years for Faraday’s induction to become Edison’s power plant. Today, the gap between breakthroughs and global adoption is a few months. We don’t have the luxury of a decade-long debate; we must build our social and democratic response as fast as technology, or risk being governed by tools we don’t yet understand.

The people who will shape how AI changes the world don’t have to be the technologists who build these systems. They can be the ones who are willing to experiment, to take both capabilities and risks seriously. We all have a responsibility not just to understand AI ourselves, but to push our employers, communities and governments to use it in ways that ensure no one gets left behind.

Tom Hewitson is the founder and chief AI officer of General Purpose, an AI training company based in London


