The surge of artificial intelligence (AI) in the workplace is forcing leaders to confront a pressing question: How much should you rely on AI, and at what point do you risk outsourcing your humanity?
In an age where algorithms entwine with everyday decisions, leaders face a stark imperative.
Can you harness AI's potential without surrendering your judgment, creativity, or core values?
Will you remember humanity?
The choices made now will determine whether AI becomes a trusted partner that enhances your organization or a disruptive force that undermines it.
These are the questions author and entrepreneur Faisal Hoque asks in his book Transcend: Unlocking Humanity in the Age of AI.
The Human Driver in an AI World
Hoque believes that finding the right balance between human agency and AI assistance is one of the defining leadership challenges of our time.
“A large part of human life isn’t really about the destination. It’s about the journey involved in getting there,” Hoque writes, using the metaphor of a self-driving car to question how much control we hand over to machines.
“We must ask ourselves if we want to sit in an AI-powered self-driving car on life’s journey or if we prefer to drive the car ourselves. More precisely, we will need to ask ourselves what mix of the two will work best for us.”
To put it another way, leaders need to determine the boundary between automation and human autonomy.
Hoque’s point is that AI is unlike any technology we’ve seen before; it is becoming an active participant in our decision-making. As he describes it, a single person pitted against “the brains of thousands or millions” in simulated intelligence can feel daunting.
The risk is that we become passive passengers.
“There’s a gut element to making decisions,” Hoque told me. “Your gut tells you, ‘Nah, this doesn’t sound right.’ So if you outsource all that, who’s going to tell you it doesn’t sound right? The machine is not going to.”
In leadership terms, AI might speed up analysis or handle routine choices, but human leaders must remain in the driver’s seat on ethics and common sense.
AI’s Mirror and the Bias Dilemma
It’s comforting to think of AI as an objective, super-smart assistant, but that’s a dangerous oversimplification.
“AI is a mirror of our society, a mirror of whatever we’re feeding it,” Hoque cautions. “So obviously, there’s a huge element of bias.”
In our conversation, he unpacked how a seemingly efficient AI hiring tool could reflect and amplify existing prejudices.
For example, if a résumé-screening algorithm is trained on historically biased data, it might start favoring candidates from one city or background without anyone noticing.
Human bias, multiplied exponentially by an algorithm, is still bias; it’s just faster and more challenging to detect. Researchers at Brookings have warned that biased algorithms can produce systematically unfair outcomes at scale if left unchecked.
For leaders, the lesson is clear. We can’t assume AI will magically rid our organizations of bias. Hoque urges leaders to proactively question and test the outputs of their AI systems.
Are the recommendations fair? Is the data diverse? Is it up-to-date and bias-free to begin with?
This leadership vigilance is part of protecting human agency, ensuring that important decisions (hiring, promotions, customer offerings, and beyond) are not ceded entirely to a black-box model with blind spots.
The Human Cost of Over-Reliance
Beyond the ethical dilemma, there is another human pitfall to avoid: Are you allowing AI to erode the human connections and creativity in your workplace?
“Convenience is a drug,” Hoque quips, warning against the allure of delegating every possible task to automation. We’re creatures of comfort, and it’s easy to let an AI write all your emails, generate all your ideas, and even handle team communications.
But as Hoque points out, if you do that too much, “you’re outsourcing your faculties, and you no longer want to think.” That is where the danger comes in. When people lean on AI for everything, they may gradually lose the very skills and intuition that made them valuable in the first place.
There’s mounting evidence that an overreliance on AI can hurt your team’s well-being and performance.
Recent research highlighted a sobering trend: employees who use AI extensively feel “isolated and socially adrift,” even as they become more productive.
The more work team members handled with AI’s help, the lonelier they grew. The deep irony, as the researchers note, is that in chasing efficiency through AI, companies risk creating disengaged employees who ultimately perform worse. “Lonely, disengaged employees aren’t likely to bring their best selves to work. They’re less likely to collaborate, innovate, or go the extra mile for their organizations,” the study concludes.
Hoque advises leaders to be mindful and set boundaries: just because an AI tool can do something doesn’t mean you should use it for that.
For example, if a manager auto-generates all their team performance review feedback through ChatGPT, they might save time, but they will lose trust the moment employees catch on.
Employees can distinguish between a perfunctory robo-email and genuine, empathetic communication. The goal is to let AI handle the grunt work while leaders double down on the uniquely human aspects of leadership: coaching, relationship-building, and vision, which no machine can replicate.
Purpose-Driven, OPEN, and CARE Leadership
So, how should leaders proceed with AI?
In Transcend, Hoque outlines an “OPEN” framework (Outline the situation, Partner with both technology and people, Experiment to learn, and Navigate with oversight) paired with a “CARE” framework (Catastrophize the worst case, Assess the uncertainties, Regulate with guardrails, and Exit to potentially shut it down).
The philosophy is to embrace the innovation AI makes possible while guarding fundamental human values at the same time.
Furthermore, are you leading with purpose? It’s a question leaders should have been asking before the rise of AI.
“Just because you can doesn’t mean you have to,” Hoque says. It’s a reminder that restraint is a leadership virtue.
Leaders should establish ethical guidelines and even kill-switches for AI initiatives. As Hoque points out, being OPEN, operating with CARE, and leading with purpose will be necessary for a leader to pull the plug if something isn’t right.
Partner, Not Replacement
Hoque wants leaders to rethink what AI means in the workplace. “Look at AI as a partner, not an outsourcer,” he urges.
When leaders position AI as a collaborative partner—a tool that complements rather than replaces human capabilities—they send a clear message. Team members want leaders who advocate for them, not ones who quietly use technology as a pretext to cut costs or jobs.
As Hoque reminds us, transcending the AI temptation means intentionally guiding innovation with purpose, compassion, and an unwavering commitment to human dignity.
When leaders do that, they ensure our smartest machines amplify our best human instincts—instead of undermining them.
Watch the full interview with Faisal Hoque and Dan Pontefract on the Leadership NOW program below, or listen to it on your favorite podcast platform.