Walk into any boardroom right now and you’ll hear some version of the same update: “We’ve launched pilots. We’re training employees on GenAI. We’re exploring use cases.” All of that is fine. None of it tells you whether AI is actually changing the trajectory of the business.

Boards don’t need more AI demos. Boards need better questions. Here are ten questions boards should use to move AI from “interesting” to “material.”
1. Ambition: Are we a frontier leader or a fast follower – and where, exactly?
This sounds simple, but most boards haven’t answered it explicitly. In which businesses or functions do we intend to lead with AI? Where are we comfortable being a fast follower?
That choice drives everything: risk appetite, pace of experimentation, capital allocation, and how often policies and controls need to be revisited.
2. Strategy: How does AI change – or reinforce – our strategy?
Too many AI discussions live in the “IT section” of the agenda instead of the strategy section.
As a board, we focus on two things: Where does AI accelerate the existing strategy (faster, cheaper, better)? Where might AI force a rethink of the business model (new products, channels, competitors)? If your AI conversation never touches the strategy slide, you’re still treating it as tooling, not as a driver of advantage.
3. Outcomes: Are we managing AI activity, or AI outcomes?
You’ve probably seen dashboards that highlight: Number of pilots. LLM accuracy scores. Number of people trained. Those are activity metrics. They’re necessary, but not sufficient.
The outcome questions are sharper: Where has AI improved margins? Where has it reduced opex or headcount growth? Where has it shortened cycle times, improved quality, or reduced risk?
4. Freed Capacity: When AI saves time, where does the freed capacity go?
Every AI case study seems to quote hours saved: “This tool saves three hours a week per employee.” The question that matters is: What happens to those hours?
Are they being redeployed into: More sales conversations? Deeper audit coverage? Better service levels? Additional product experiments? If “freed capacity” isn’t explicitly redirected to higher-value work, it quietly disappears into the noise of the workday.
5. Board Fitness: Is the board itself fit for purpose on AI?
This is uncomfortable but essential. You don’t need a board full of data scientists. You do need enough:
- Strategic understanding of how AI changes value creation in your business.
- Technical literacy to interrogate management beyond buzzwords.
- Transformation and culture experience to recognize real change vs. theater.
A simple exercise: list the AI-relevant capabilities around the table and identify where the gaps are. Then decide: will you close those gaps with education, new directors, or external advisors? The reality is that the board’s own capability is the limiting factor on the speed and quality of its AI decisions.
6. Ownership: Can non-technical leaders clearly explain our AI strategy?
One of the fastest tests of maturity: Ask the CHRO, CFO, and a business unit leader (without the CIO/CTO in the room) to explain: What we’re doing with AI. Why we’re doing it. How we’ll know it’s working.
If only the technology leaders can describe the AI strategy, AI is still a tech project, not an enterprise transformation. Our goal as a board should be to see AI show up in the language of talent, productivity, customer value, and culture, not just in the language of models, tools, and platforms.
7. Workforce Readiness: How AI-ready is our workforce – psychologically and practically?
“AI readiness” is not just about tool training. Two very human blockers show up repeatedly. Fear: If employees think “this tool will make me redundant,” adoption stalls, no matter how compelling the business case. Blindness to waste: Teams often insist there is “no waste” in their workflows and underestimate human error, while treating every AI error as unacceptable.
Boards should ask: How are we creating psychological safety so people will use AI rather than quietly resist it? Are we using independent assessments (internal audit, external experts, process reviews) to identify where AI can actually help, instead of relying solely on self-diagnosis? Until we address fear and bias (human bias against the machine), “AI strategy” will remain a slide, not a behavior.
8. Skills, Not Just Tools: Are we investing in the right skills, not just the right tools?
The most AI-ready teams aren’t just good prompters. They’re good at describing their own work: What they do. The steps involved. Where value is created. Where judgment is needed. That ability to “think about their work” is exactly what’s needed to design AI-enabled workflows.
When you hear about “AI training,” probe further: Are we also building strategic thinking, communication, collaboration, and judgment? Are we incentivizing people to codify and share what they build so the whole organization gets smarter, not just one team?
9. Assurance: What is our assurance framework for AI?
From an audit and risk perspective, AI is just another major transformation – only faster and more pervasive. If AI risks and controls aren’t showing up in your risk register and audit agenda, they’re probably not being managed systematically.
The board should be asking: How do we manage data quality, lineage, and access control for AI use cases? How do we test and monitor for bias, model risk, and model drift in high-impact decisions? How do we protect IP and sensitive data when using external models and vendors? How are internal audit and external audit incorporating AI into their plans?
10. Accelerators & Blockers: Who is accelerating AI – and who is blocking it – and why?
Every company has both: executives who are championing AI and pushing hard, and executives who are slowing or blocking it. Our job is not simply to “remove blockers.”
It’s to understand why: Some may be the adults in the room, rightly worried about safety, bias, or regulatory exposure. Others may be resisting change, protecting empires, or uncomfortable with new ways of working. You cannot govern what you don’t see.
Bringing it back to the board agenda
In the end, we don’t need a separate “AI strategy” meeting every quarter. We need AI woven into:
- Strategy discussions (where it creates or defends advantage)
- Risk and audit (where it changes how decisions are made)
- Nom/Gov (board composition and governance structures)
- Comp/HR (incentives, workforce readiness, and talent strategy)
The technology is moving fast. Good governance is not about predicting the future of AI. It’s about asking better questions, more consistently, than the board across the street.
