We’re entering a new era where artificial intelligence can generate content faster than we can apply critical thinking. In mere seconds, AI can summarize long reports, write emails in our tone and even generate strategic recommendations. But while these productivity gains are promising, there’s an urgent question lurking beneath the surface: Are we thinking less because AI is doing more?
The very cognitive skills we need most in an AI-powered world are the ones that the tools may be weakening. When critical thinking takes a back seat, the consequences are almost comical—unless it’s your company making headlines.
- McDonald’s shut down its AI drive-thru pilot after customers reported bizarre mistakes, like being charged for 20 McNuggets meals in a single order or getting ketchup with ice cream.
- Google scaled back its AI Overviews feature when it suggested using glue to keep cheese from sliding off pizza and eating one rock a day.
- In a lawsuit against Walmart, Morgan & Morgan lawyers were sanctioned by a federal judge after submitting multiple fake case citations generated by AI.
- Air Canada’s chatbot offered a bereavement discount that didn’t exist. When the airline refused to honor it, the passenger sued—and won.
These real-world breakdowns show what can happen when critical thinking is absent.
As AI models get more advanced and powerful, they exhibit even higher rates of hallucinations, making human supervision even more critical. And yet, a March 2025 McKinsey study found that only 27% of organizations reported reviewing 100% of generative AI outputs. With so much of the focus on the technology itself, many organizations clearly don’t yet understand the growing importance of human oversight.
Clarifying what critical thinking is
While most people agree that critical thinking is essential for evaluating AI, there’s less agreement on what it actually means. The term is often used as a catch-all for a wide range of analytical skills—from reasoning and logic to questioning and problem-solving—which can make it feel fuzzy or ambiguous.
At its core, critical thinking is both a mindset and a method. It’s about questioning what we believe, examining how we think and applying tools such as evidence and logic to reach better conclusions.
I define critical thinking as the ability to evaluate information in a thoughtful and disciplined manner to make sound judgments instead of accepting things at face value.
As part of researching this article, I spoke with Fahed Bizzari, Managing Partner at Bellamy Alden AI Consulting, who helps organizations implement AI responsibly. He described the ideal mindset as “a permanent state of cautiousness” where “you have to perpetually be on your guard to take responsibility for its intelligence as well as your own.” This mindset of constant vigilance is essential, but it needs practical tools to make it work in daily practice.
The GPS Effect: What happens when we stop thinking
This need for vigilance is more urgent than ever. A troubling pattern has emerged: researchers are finding that frequent AI use is linked to declining critical thinking skills. In a recent MIT study, 54 participants were assigned to write essays using one of three approaches: their own knowledge (“brain only”), Google Search, or ChatGPT. The group that used the AI tool showed the lowest brain engagement, weakest memory recall and least satisfaction with their writing. This cognitive offloading produced essays that were homogeneous and “soulless,” lacking originality, depth and critical engagement. Ironically, the very skills needed to assess AI output—reasoning, judgment and skepticism—are being eroded by overreliance on the technology.
It’s like your sense of direction slowly fading because you rely on GPS for every trip—even around your own neighborhood. When the GPS fails due to a system error or lost signal, you’re left disoriented. The skill you once had has atrophied because you outsourced it to the GPS.
Bizzari noted, “AI multiplies your applied intelligence exponentially, but in doing so, it chisels away at your foundational intelligence. Everyone is celebrating the productivity gains today, but it will eventually become a huge problem.” His point underscores a deeper risk of overdependence on AI. We don’t just make more mistakes—we lose our ability to catch them.
Why fast thinking isn’t always smart thinking
We like to think we evaluate information rationally, but our brains aren’t wired that way. As psychologist Daniel Kahneman explains, we tend to rely on System 1 thinking, which is fast, automatic and intuitive. It’s efficient, but it comes with tradeoffs. We jump to conclusions and trust whatever sounds credible. We don’t pause to dig deeper, which makes us especially susceptible to AI mistakes.
AI tools generate responses that are confident, polished and easy to accept. They give us what feels like a good answer—almost instantly and with minimal effort. Because it sounds authoritative, System 1 gives it a rubber stamp before we’ve even questioned it. That’s where the danger lies.
To catch AI’s blind spots, exaggerations or outright hallucinations, we must override that System 1 mental reflex. That means activating System 2 thinking, which is the slower, more deliberate mode of reasoning. It’s the part of us that checks sources, tests assumptions and evaluates logic. If System 1 is what trips us up with AI, System 2 is what safeguards us.
The Critical Five: A framework for turning passengers into pilots
You can’t safely scale AI without scaling critical thinking. Bizzari cautioned that if we drop our guard, AI will become the pilot—not the co-pilot—and we become unwitting passengers. As organizations become increasingly AI-driven, they can’t afford to have more passengers than pilots. Everyone tasked with using AI—from analysts to executives—needs to actively guide decisions in their domains.
Fortunately, critical thinking can be learned, practiced and strengthened over time. But because our brains are wired for efficiency and favor fast, intuitive System 1 thinking, it’s up to each of us to proactively engage System 2 to spot flawed logic, hidden biases and overconfident AI responses.
Here’s how to put this into practice. I’ve created The Critical Five framework, which breaks critical thinking into five key components, each with both a mindset and a method perspective:
- Self-regulation. While we may see AI output as neutral, we actually view it through our own personal filters. We must monitor and question our initial reactions, which are often influenced by personal biases, heuristics and assumptions. It’s essential to acknowledge our cognitive limitations and stay open-minded.
  Mindset: Reflective and open-minded.
  Method: Take an intentional pause before accepting AI output to activate your System 2 thinking, and ask yourself what is influencing your gut reaction to the information.
- Evaluation. No matter how refined or polished the AI output appears to be, it shouldn’t be blindly trusted as real, accurate or complete. We must verify the quality and reliability of its sources. AI tools are only as good as their inputs, and they tend to hallucinate, so we must be diligent in validating their outputs. However, it’s also important to calibrate your skepticism to the stakes and context at hand. For example, routine tasks with low consequences don’t require the same rigor as strategic decisions that impact customers, finances or brand reputation.
  Mindset: Skeptical but fair.
  Method: Double-check that key citations exist and are accurately represented, especially for claims that seem surprising or too convenient.
- Analysis. If we only focus on surface elements, we can miss crucial details and misunderstand the bigger picture. We must break down the information by dissecting core arguments, identifying key components, isolating underlying assumptions and spotting hidden gaps. This analytical process helps determine what’s signal and what’s noise.
  Mindset: Curious and systematic.
  Method: Ask follow-up questions about the main claim, and challenge the assumptions behind the numbers or narrative.
- Inference. Because most AI output sounds authoritative and confident, we can fail to evaluate its actual logic. We may not notice whether the arguments are well-structured or whether the conclusions truly follow from the evidence. Without closer examination, we may miss weak logic or fallacies that lead to faulty conclusions.
  Mindset: Logical and disciplined.
  Method: Trace the reasoning behind the AI output, and question whether the conclusions follow from the evidence or whether the evidence supports alternative conclusions.
- Interpretation. Without adequate context, AI tools may misread what’s appropriate, realistic or needed in a specific situation. We need to think more broadly about the information, considering real-world constraints, ethical implications and organizational nuances that AI might miss. This human perspective determines how we adapt the AI output or when we reject it entirely.
  Mindset: Thoughtful and sense-making.
  Method: Consider what the AI tool might be missing in terms of context, nuance, ethics or real-world constraints.
Just ASK: A quick AI check for busy minds
While these five skills provide a solid foundation for AI-related critical thinking, they don’t operate in a vacuum. Just as pilots must adapt their approach to weather conditions, aircraft type and destination, we must adapt our critical thinking to fit different circumstances. Your focus and level of effort will be shaped by the following key factors:
- Domain expertise. A seasoned pilot will be less dependent on their co-pilot than a rookie one. If you have deep domain knowledge on a topic, you’re more likely to detect issues in the AI output. If you lack adequate domain expertise, you’ll be tempted to trust the AI tool more. Instead, you must apply the framework with extra caution and rigor because you’re less able to spot potential problems.
- Organizational culture. Individual critical thinking skills will only thrive in environments where they are supported. Even the most skilled pilots need air traffic control and proper runway conditions to land safely. Organizations must actively encourage the questioning of AI output through training, time allocation and leadership example. If organizations expect this level of scrutiny from employees, they can’t complain when some efficiency is sacrificed to preserve overall effectiveness.
- Time constraints. Ideally, you want to be as thorough as possible when applying critical thinking to AI output. However, being pragmatic, you may apply a “triage” approach where high-stakes decisions get the full treatment while routine tasks get a streamlined evaluation.
Recognizing that many scenarios with AI output may not demand an in-depth review, I’ve developed a quick way of injecting critical thinking into daily AI usage. This is particularly important because, as Bizzari highlighted, “Current AI language models have been designed primarily with a focus on plausibility, not correctness. So, it can make the biggest lie on earth sound factual and convincing.” To counter this exact problem, I created a simple framework anyone can apply in seconds. Just ASK:
- Assumptions: “What does this assume?”
- Sources: “Can I trust this?”
- Keep it objective: “Am I being objective?”
To show this approach in action, I’ll use an example where I’ve prompted an AI tool to provide a marketing strategy for my small business.
- Assumptions: “What does it assume about my target audience, budget or market conditions?”
- Sources: “It mentions a conversion rate as an industry benchmark. Is there a reliable source for that estimate?”
- Keep it objective: “The strategy emphasizes LinkedIn campaigns, which aligns with my professional preference. Am I accepting this because it confirms my existing bias, or because it’s actually the best approach?”
This quick evaluation could reveal potential blind spots that might otherwise turn promising AI recommendations into costly business mistakes, like a misguided marketing campaign.
The future of AI depends on human thinking
If more employees simply remember to ‘always ASK before using AI output,’ your organization can begin building a culture that actively safeguards against AI overreliance. Whether they use the full Critical Five framework or the quick ASK method, people transform from passive passengers into engaged pilots who actively steer how AI is used and trusted.
AI can enhance our thinking, but it should never replace it. Left unchecked, AI encourages shortcuts that lead to the costly mistakes we saw earlier. Used wisely, it becomes a powerful, strategic partner. This isn’t about offloading cognition. It’s about upgrading it—by pairing powerful tools with thoughtful, engaged minds.
In the end, AI’s value won’t come from removing us from the process—it will come from how disciplined we are in applying critical thinking to what it helps us generate.