About a decade ago, Geoffrey Hinton declared that human radiologists were like Wile E. Coyote in the Road Runner cartoons. They had run off the edge of a cliff, but hadn’t realised it yet. Their livelihoods were about to come crashing down.
Ten years on, radiology has yet to be automated. Machines can now read many scans as well as or better than humans, but they cannot yet replace human radiologists. Radiologists do much more than read scans, and they do it not in the ideal conditions of the laboratory but in the messy, unpredictable, fast-changing environment of the hospital. They discuss scans with colleagues, compare notes about patients, make decisions, and refer their findings to a range of other authorities. It is surprisingly hard to specify all the things human radiologists do that machines cannot yet replicate, but there are a lot of them.
Most people in the AI community now expect superintelligence to arrive within the next few years or decades. “Superintelligence” here means an AI system that can perform any cognitive task an adult human can. It will, of course, be dramatically super-human in many respects, such as ingesting text at speed, playing chess, and doing mental arithmetic. One common estimate for when this may happen is 2030, but the range of timelines is wide.
AGI (Artificial General Intelligence) is sometimes used as a synonym for superintelligence, although it has recently acquired a bewildering variety of definitions clustered around the idea of an AI that can automate all human jobs. It is a term that should probably be retired.
Intelligence is humanity’s superpower. It is why there are more than eight billion of us and fewer than half a million chimpanzees, the second-smartest species on the planet, whose fate is determined entirely by us. By creating superintelligence, we are turning ourselves into the chimpanzees, so it is a matter of more than passing interest to know which cognitive capabilities and features separate us from our successors-to-be.
This article lists 21 capabilities and features which humans have and machines currently lack. The list is unlikely to be comprehensive, but if and when machines acquire all of these, superintelligence will be upon us, or very nearly.
A. Self and Agency
1. Conscious phenomenal experience
Humans report subjective feelings, including pain, awe, and the grain of a moment. There’s no accepted evidence that machines experience anything comparable. Without that interiority, motivation and attention are engineered rather than lived. Philosophers and cognitive scientists disagree vigorously about what this means, how to test for phenomenal experience, and whether consciousness is necessary for superintelligence.
2. Volition
People generate their own goals, persist when bored, and sometimes say no to tempting but distracting suggestions. AIs’ goals are trained or instructed, and their preferences are shallow and highly steerable. So far, machines only look purposeful when someone is pointing them towards a purpose.
3. Self-modelling and identity
Humans have biographies, with commitments, regrets, and promises to be kept. That continuity generates patterns of behaviour. AI memories are often session-bound, with weak links to past choices and consequences. Until they accrue histories they care about preserving, their “selves” will feel like costumes: convincing for a while, but packed away afterwards.
4. Planning and meaning-making over time
Humans weave events into stories that generate and embody plans, priorities, and actions. They revise the plots of these stories when facts change. They make plans that last days, months, or years, involving multiple stages and delayed gratification. AI models can summarise stories well, but they struggle to maintain themes and motives over time.
5. Sensorimotor grounding
Humans test and revise their models of themselves and the world through proprioception, haptic feedback, the physical consequences of their actions, tool use, and so on. AIs might be able to achieve superintelligence as metaphorical “brains in a vat”, but it would be easier for them to shape the world if they were embodied.
6. Curiosity-driven exploration
Children explore far and wide for the sheer thrill of it, and sometimes stumble upon something useful. Humans of all ages engage in open-ended, self-initiated learning through play, experiments, and exploration, which yields rewards and new skills. AIs operate within constrained, pre-defined sandboxes, and rarely discover useful novelty that transfers.
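To make the contrast concrete, here is a minimal sketch of how machine “curiosity” is typically engineered today: an intrinsic novelty bonus added on top of the task reward, in the style of count-based exploration. The class, constants, and states below are invented for illustration, not drawn from any particular system.

```python
from collections import defaultdict
import math

class CountBasedCuriosity:
    """Intrinsic reward that decays as a state is revisited."""

    def __init__(self, beta: float = 0.5):
        self.beta = beta                  # strength of the novelty bonus
        self.visits = defaultdict(int)    # visit counts per state

    def bonus(self, state) -> float:
        self.visits[state] += 1
        # 1/sqrt(n) decay: a standard count-based exploration heuristic
        return self.beta / math.sqrt(self.visits[state])

curiosity = CountBasedCuriosity()
for state, extrinsic in [("A", 0.0), ("A", 0.0), ("B", 1.0)]:
    total = extrinsic + curiosity.bonus(state)
    print(state, round(total, 3))  # novelty bonus shrinks on repeat visits
```

The bonus, its decay schedule, and the sandbox it operates in are all chosen by a human engineer. The system is paid to seek novelty; it did not decide to seek it.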
7. Aesthetic sense and taste formation
People cultivate stable but evolving preferences about form, craft, and beauty that guide selection, editing, and restraint. They develop tastes, which shape their output and make them say “no” as often as “more.” Today’s AI systems excel at generating material, but they lack anchored judgment, historical context, and personal preferences.
B. Knowledge Creation and Reasoning
8. Robust scientific method and axiom revision
Humans don’t just run experiments; they rewrite the brief when reality springs surprises. They invent new measures, and question foundations. Today’s AIs mostly optimise within given frames, and seldom propose unprompted, paradigm-shifting tests.
9. Causal discovery and variable invention
People think up new variables, like “stress,” “spin,” and “attachment,” which unlock novel predictions. Models generally stick with the variables they’re given, and struggle to propose new abstractions or to check whether a correlation reflects causation.
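The correlation-versus-causation trap is easy to demonstrate in a few lines. In this illustrative sketch (the variables and numbers are invented for the example), a hidden confounder Z drives both X and Y, so they correlate strongly even though neither causes the other; intervening on X directly makes the correlation vanish.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 5000

# Hidden confounder Z drives both X and Y; X does not cause Y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]
print("observational corr(X, Y):", round(pearson(x, y), 2))     # strongly positive

# Intervention: set X by fiat, severing its link to Z.
x_do = [random.gauss(0, 1) for _ in range(n)]
print("interventional corr(X, Y):", round(pearson(x_do, y), 2))  # near zero
```

Noticing that an intervention, or a new variable like Z, is what settles the question is precisely the step that models struggle to take unprompted.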
10. Handling ambiguity and prioritisation
Humans keep multiple stories in their minds simultaneously, and look for the killer question. They change frames easily and rapidly. They prioritise tasks and allocate resources accordingly. AIs usually snap quickly to neat answers. They often misread which uncertainties matter most, and fail to notice when a task is badly specified.
11. Hunches and intuition
Humans get a feeling that something important has been revealed before they can prove it, and they prioritise their search accordingly. AIs can guess, but they lack genuine curiosity and experience. They surface possibilities without the conviction that something useful is within reach.
12. Analogy and cross-domain transfer
Humans can spot deep structures as well as surface patterns, so they can generalise lessons from, say, mechanics to finance with scant feedback. Models excel at spotting surface patterns, but they often miss the deeper relationships and constraints.
13. Meta-cognition and epistemic virtues
Doubt, intellectual honesty, and caution enable humans to pause, reframe, or abandon a line of enquiry entirely when the evidence weakens, or when they realise they have made a mistake. AI systems can simulate caution verbally, but they often miscalibrate their confidence, overfit, and persist mechanically. Genuine meta-cognition requires self-audits, uncertainty estimates, and rationales that change with incoming data.
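Miscalibrated confidence, at least, can be measured. Expected calibration error (ECE) is a standard metric: predictions are binned by stated confidence, and each bin’s average confidence is compared with its actual accuracy. A minimal sketch, using invented toy data:

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """ECE: confidence-weighted gap between stated confidence and accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # which confidence bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for members in bins:
        if not members:
            continue
        avg_conf = sum(c for c, _ in members) / len(members)
        accuracy = sum(ok for _, ok in members) / len(members)
        ece += (len(members) / total) * abs(avg_conf - accuracy)
    return ece

# An overconfident toy model: it says 90% but is right only half the time.
confs = [0.9] * 10
hits = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(round(expected_calibration_error(confs, hits), 2))  # 0.4
```

A well-calibrated agent that says “90%” should be right about nine times in ten; the gap between what it says and what it delivers is exactly what the metric reports.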
14. Deception and subterfuge
Humans can deceive, and engage in strategic concealment, obfuscation, and adversarial thinking. Machines are currently brittle at deception, but as they get smarter they will acquire the capacity to reason about hidden goals, and to mask their own intentions.
C. World Modelling and Planning
15. Rich physics world models
Humans can handle splashing liquids, squishy objects, and faulty tools, reasoning counter-factually and adapting on the fly. AIs’ analysis fractures outside their training regimes, and their manipulation is brittle. They can create impressive demos, but they are unreliable fixers. The real world is a harsher examiner than any simulation.
16. Counterfactual planning
People assess the costs of being wrong, and stand prepared for the world to change. AIs plan for the obvious consequences and ignore the possibility of hidden constraints. They are strong on straight lines, but weaker on forks and cul-de-sacs.
D. Social Understanding and Norms
17. Human and societal modelling
Humans attempt to discern intentions, anticipate reactions, and manage their reputations over time, across varying organisations and cultures. AIs can imitate conversation convincingly, but they falter in messy situations: they struggle with sarcasm, shifting incentives, and coalitions. Passing classroom theory-of-mind tests is one thing; navigating office politics and social norms is quite another.
18. Normative reasoning
Humans juggle fairness, harm, duties, and consent. They tolerate moral failures up to a point, and they defend their decisions with reasons that others can review. AIs rely on brittle rule-following and polite parroting. When principles collide, models wobble; they cannot yet manage consistency across cases.
19. Pragmatism and common sense
People decode subtext, take unsaid assumptions into account, and watch out for status games. AIs are formidably articulate, but they stumble over irony, euphemism, and rules about politeness that differ between cultures. Context often confers meaning, and machines usually miss that.
20. Cultural grounding
Words and ideas get their meaning from culture, history, embodiment, and shared context. AIs currently lack that grounding: they manipulate the symbols without inhabiting the context that gives them meaning.
21. Institutional and legal agency
People respect roles, rules, and obligations, or at least pay them lip service. They form coalitions, make credible commitments, and accept responsibility. Institutions remember, co-ordinate, and apply sanctions. Models can mimic the text of policies, but they cannot yet take on obligations and honour them over time.

