Prediction Is Not Intelligence
Advances in AI are driving the cost of prediction to zero. But prediction is not intelligence. An intelligent decision requires a combination of prediction (what might happen) and judgment (how we use that information to make a decision). As Ajay Agrawal, professor of management at the University of Toronto, has observed, the value of judgment increases as the cost of prediction declines.
Agrawal notes, “despite all the advances we have made in AI, we’ve made zero advances in increasing the judgment of machines. Only people can exercise judgment.” The distinction matters because the problems we face in business aren’t neat problems with clear answers. They involve (often implicit) trade-offs among values that are important to us as individuals or as representatives of a company. With cheaper prediction, we have the luxury of iterating more and focusing on the judgment calls that really matter. But the judgment is critical.
Agrawal believes that “every part of our economy is now optimized for a pre-prediction world.” In this world, prediction and judgment are often conflated. And because current generative AI models are so articulate and offer such reasonable plans, we often fail to slow down and ask whether their results are consistent with our values and the specific trade-offs that make sense to us. This may cause us to overvalue crisp analysis and to undervalue our uniquely human contribution to good decisions.
Because they fail to make this distinction, companies are designing AI systems backwards. They automate everything possible, then ask people to monitor for failures or to pick up the pieces when failure happens. This is a design flaw, not an inevitability. Fortunately, some companies are thinking differently.
What Smart Companies Are Doing
Vibe coding – a popular approach that lets non-programmers write code in plain English – is a promising application of generative AI. But SolveIt founders Jeremy Howard and Eric Ries were disappointed after experimenting with it. They found that the code it produced was poorly understood by developers and not extensible. Its development also did nothing to increase the skills of the engineers doing the work. SolveIt was formed to slow down vibe coding and reinsert human judgment.
“We discovered a key principle,” Howard says. “The AI should be able to see exactly what the human sees, and the human should be able to see exactly what the AI sees at all times.” In SolveIt, the human does the coding and the AI supports the coder. In contrast to most AI models, SolveIt encourages users to slow down and do what they can themselves before asking the AI to do it for them.
There are several principles embedded in this approach. First, the human is in control. As the author of the code, the developer understands what she is producing. As a corollary, the product itself is more cleanly architected and therefore more extensible. Finally, the user learns while doing. The AI provides help, critiques code, recommends alternatives, and helps identify problems. But the human grows as a developer by actually doing the work.
SolveIt is also experimenting with using the same approach for writing. Eric Ries, author of The Lean Startup, is writing his next book with the assistance of AI – but he is writing every word. AI looks over his shoulder, available to ensure consistency and flow and to provide editorial support. A further extension is to a particular type of reasoning and writing – legal work. SolveIt has created a subsidiary to support such work.
ASAPP, which makes software for call centers, is another good example. The company has designed its system so that the call center agent is in control. AI is there to support the agent in completing necessary but routine work. As Joe Ciuffo, then Head of Marketing at ASAPP, told me in an interview in 2024, the agent is “the person we’re trying to win over.” Ciuffo continued, “if [the agents] have the autonomy to make more decisions, and they’re less encumbered by all the tools that [they] need to focus on, they can make those judgment decisions” more effectively. As with SolveIt, the agent is in control of the dialogue, and the AI supports the agent at the agent’s behest.
To support this philosophy, ASAPP tracks agent-centered metrics like employee NPS (Net Promoter Score) and agent turnover. It doesn’t focus exclusively on customer satisfaction or efficiency measures like average hold time. ASAPP believes that a satisfied employee leads to better customer support and, ultimately, to lower total costs.
Implications for Leaders
We are at an early stage in the evolution of AI systems that support people at work, but the outlines of what will constitute a good solution are becoming clearer.
- Put humans at the center. The person doing the work should control the AI – not simply check its work for obvious errors. Designing such systems requires careful thought about what to automate and what to augment. Too often, companies default to maximal automation.
- Design for learning, not just efficiency. AI systems can short-circuit learning in a rush to get to a fast answer. Work designers should build in time for learning and include metrics that support human development.
- Cultivate judgment as a source of competitive advantage. As prediction costs fall, judgment becomes more valuable. Companies that design systems to build human capital will outcompete those chasing pure automation.
As AI gets faster and better at prediction tasks, these principles will matter more, not less. But implementing them requires reconceptualizing work, not simply adding better prediction into existing workflows. The warning sign that things are off-track? When AI delivers fast, articulate answers but your team has stopped thinking critically about them. That’s not efficiency; it’s abdication.
