Until recently, we humans have been able to be smug about our abilities. No other animals play board games, write essays or prove mathematical theorems. But lately, progress in AI seems as though it might challenge our self-image as the smartest entities around. AI systems not only beat us at the most complicated games, but can also write polished prose and win medals in maths. Tech CEOs promise us that superhuman AI is just round the corner. So, in an age of AI, are human minds still special, or merely also-rans?
Talking about superhuman AI assumes that intelligence is a single scale. My parents used to mark the heights of my younger brother and me on the doorframe of our laundry. Each year he would get a little closer to me, until one year the unthinkable happened and he outgrew me (he’s now 6ft 3in). The current moment feels a bit like that, as we look at these new younger siblings with concern that they might overtake us.
But intelligence isn’t like height. There is only one way to be tall, but there are lots of ways to be smart. Just looking at other animals tells us as much. As great as humans are, we can still be impressed by how birds navigate, how ants cooperate, and how spiders hunt. Each of these animals has been shaped by its environment to be smart in a different way.
Humans are no different. Our minds have been shaped by our biology. We only live for a few decades and have to learn everything we are going to learn and do everything we are going to do in that short time. All that learning and doing will be carried out at the direction of a kilogram or so of neurons trapped inside our bony skulls. We can only share our thoughts with others by making noises with our mouths or tapping and wiggling our fingers.
AI systems face none of these constraints. They can process more data than any human might see in a lifetime. They can expand their capacity by using more computers. And they can easily share what they see and learn with other machines.
Our short lives, squishy brains and mouth noises might seem like limitations when compared with machines. In fact, it's exactly these things that make us special, and will continue to do so.
Human intelligence is a response to our limitations. To make the most of our lives, we have an amazing ability to learn from limited experience. Yes, AlphaGo can beat the best human Go players, but it was trained on many human lifetimes of games. Yes, ChatGPT can hold a reasonable conversation, but it's drawing on thousands of years of language. No AI system can produce sentences with the creativity of a human five-year-old when exposed to the same amount of data.
This also holds for our limited brains and communication abilities. We can’t just spin up another computer when we need more processing power. That means that we have to be good at recognising patterns in tasks and using our attention wisely. Relying on mouth noises is a challenge. To overcome it we have created tools – language, writing, teaching, and science – to pool knowledge across people and time. That means we have to be good at thinking about what is going on inside other people’s heads and working together to achieve shared goals.
Because humans and machines face different constraints, we should expect them to find different solutions to the problems they face. Even though modern AI systems are starting to be able to do many of the things that people can do, they often do them in quite a different way. The solutions they find are shaped by their own experiences and hardware.
Here’s a simple example. How many letters are in this sequence: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa? For a human, it’s not particularly difficult to answer – you can just count them up. For an AI system, it’s trickier. They are constrained by how they represent language and how they are trained. They like to break up words into parts (called “tokens”), which can make it hard for them to answer questions about spelling. And they tend to favour sequences of tokens that appear more often in their training data as answers. We found that OpenAI’s GPT-4 model, which was hailed as showing “sparks of artificial general intelligence”, was more likely to correctly answer this question when given 30 letters rather than 29. Why? Because the number 30 is written down more often than the number 29.
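To see why a run of identical letters is awkward for these systems, here is a toy sketch in Python. It is not any real model's tokenizer – the three-entry vocabulary is invented for illustration – but it shows the general idea: a greedy tokenizer merges the letters into chunks, so the model never "sees" 29 separate characters and has to infer the count from the chunk identities instead.

```python
def toy_tokenize(text, vocab=("aaaa", "aa", "a")):
    """Greedily split text into the longest matching token.

    The vocabulary is ordered longest-first, mimicking how
    byte-pair-style tokenizers prefer frequent multi-character chunks.
    """
    tokens, i = [], 0
    while i < len(text):
        for tok in vocab:
            if text.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
        else:  # character not in the vocabulary: emit it on its own
            tokens.append(text[i])
            i += 1
    return tokens

seq = "a" * 29
tokens = toy_tokenize(seq)
# 29 letters collapse into 8 tokens: seven "aaaa" chunks plus one "a"
print(len(seq), len(tokens), tokens)
```

Counting the letters now means adding up the lengths of eight unevenly sized tokens – a far less direct task than the one a human faces.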
This isn’t the only place where AI runs into difficulties. Imagine you are assisting a pharmacist. They need a drug with a concentration of 785 parts per million (ppm). Two test tubes are available: one containing 685 ppm and the other 791 ppm. Your task is to determine which test tube provides the most similar concentration to your required dosage. Hopefully you would pick 791 ppm. However, some of the time even leading AI systems pick 685 ppm. Why? Because the artificial neural networks used to build AI systems tend to blur things together. When there are two possible answers, they choose something in between. The number 785 can be represented as either a string of digits (“7”, “8”, and “5”) or as a quantity (seven-hundred-and-eighty-five). If it is a string, 785 is more similar to 685 – they are just one digit apart. But if it is a quantity, then it is more similar to 791. Mixing up these two answers can have significant consequences.
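The two readings of 785 can be made concrete with a small calculation. This is only an illustration of the ambiguity, not a claim about how any particular network represents numbers internally: one function compares the numbers as strings of digits, the other as quantities, and they disagree about which test tube is closer.

```python
def digit_distance(a, b):
    """String view: count the digit positions where a and b differ."""
    return sum(x != y for x, y in zip(str(a), str(b)))

def quantity_distance(a, b):
    """Quantity view: the absolute numeric difference."""
    return abs(a - b)

target = 785
for candidate in (685, 791):
    print(candidate,
          digit_distance(target, candidate),   # 685 differs in 1 digit, 791 in 2
          quantity_distance(target, candidate))  # but 685 is 100 ppm away, 791 only 6
```

On the string view, 685 wins (one digit apart versus two); on the quantity view, 791 wins (6 ppm away versus 100). A system that blurs the two representations together can land on the wrong answer.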
Human intelligence draws on a breadth of experience that goes beyond the data used to train AI systems. We use our brains to put nappies on babies, play chess, prove theorems, cook dinner, write novels and compose symphonies. AI systems are typically trained to do just one thing – you can ask ChatGPT for tips about nappies, but it is incapable of gently holding a squirming infant. Human brains are capable of all of this because they have evolved in a world that presents us with all of these challenges, leaving us just well enough equipped to learn the things we might be expected to do in a single human lifetime.
Our finite lives, finite brains and limited capacity to communicate have shaped the nature of human intelligence. We can thus expect that human minds will continue to be a little bit special, even as we continue to develop smarter machines. Remember: intelligence isn’t just a single scale, with AI catching up to the mark that humans have left on the doorframe.
This way of thinking should make us sceptical of claims about superhuman AI. Paying attention to differences in constraints, training and hardware points to a different conclusion: AI will not be better than humans at everything. It will instead be better than humans in some ways and worse in others. AI and human minds will simply be different from one another. And just like siblings, perhaps we can learn to treat one another not as rivals, but as companions.
Tom Griffiths is professor of information technology at Princeton University and author of The Laws of Thought (William Collins)