Amazon cut 14,000 jobs this week, citing, in part, efficiency gains from AI. But blaming AI for layoffs is like blaming the thermometer for the fever.
AI didn’t eliminate these jobs. Amazon did. Its leaders did. And they’re not alone. Across the tech giants, AI isn’t making these decisions—it’s masking them. Efficiency has become the new moral alibi—Microsoft, Meta, Google, and many others have all cited AI as justification for thousands of cuts. Every time leaders make a human decision and hide behind a technical one, they erode something far more valuable than headcount. They erode trust.
The New Alibi
The language of “AI gains” has become a convenient disguise for an old reflex: cut now, justify later. Leaders once blamed “the market.” They blamed Wall Street. Then automation. Today they blame AI. We’ve found the perfect scapegoat—one that never talks back. And efficiency becomes the virtue that absolves the decision.
The further leaders move from the people affected by their decisions, the easier it becomes to treat humans as inputs rather than individuals. Data becomes a shield against empathy. AI doesn’t make these choices, but it makes it easier to hide from them. That’s the new face of moral outsourcing—not machines taking over judgment, but humans abdicating it.
The Efficiency Myth
Here’s the irony: the productivity miracle AI promised hasn’t yet arrived. A recent MIT study found that despite $30–40 billion in enterprise investment in generative AI, 95 percent of companies have seen no measurable return. Only five percent of pilots are delivering meaningful financial impact.
If the payoff isn’t here, what exactly are we optimizing? The truth is, companies aren’t chasing performance, they’re shifting their bets. As GlobalData’s Neil Saunders observed, “In some ways, this is a tipping point away from human capital to technological infrastructure.” The story isn’t about productivity. It’s about where investment is flowing and what’s being left behind.
Amazon’s own numbers tell the story clearly. The company isn’t shrinking because AI made it leaner. It’s shrinking to pay for AI. Net sales are up double digits year over year, and operating income topped $18 billion last quarter. Yet free cash flow has fallen from $53 billion to $18 billion in a year, driven largely by record spending on data centers, custom chips, and cloud infrastructure.
Amazon is cutting costs today to fund the AI infrastructure that might, someday, deliver results tomorrow. Efficiency, pursued for its own sake, stops serving people and starts consuming them. From Microsoft to Meta, Google to Dell, the same logic echoes: cut people now to finance the systems that will eventually replace them. The question isn’t whether AI will make us more productive. It’s what we’ll have sacrificed by the time it does.
The Human Cost of Layoffs
Each algorithmic layoff widens the distance between leaders and the people they lead. It doesn’t just trim headcount. It thins the thread of trust that holds organizations together. The stock market is booming. Yet, in the U.S. alone, nearly 700,000 job cuts were announced in the first half of 2025, up 80% over the same period last year. Every time an organization puts profits and machines ahead of people, the social contract that binds workplaces together becomes collateral damage. It used to be that if you were smart, worked hard, and stayed loyal to your company, it was loyal to you. Those days are disappearing. Now we ask laid-off workers to pay the price for oversized payrolls, while leaders move forward unaffected.
It’s unrealistic to suggest that a company can never downsize, that it can’t restructure itself to meet the needs of a disruptive technology. But when people are involved, the “how” matters. When an automotive supplier found itself in a similar bind, needing to reduce its workforce by 30% to stay afloat, its leaders charted a different path. Instead of mass layoffs, they offered voluntary retirement plans, each with generous severance, and met their reduction needs organically. By preserving everyone’s dignity, they ensured that those who left and those who stayed remained deeply committed to the company and its success.
You can’t innovate in a climate of fear. And more than half of workers now fear losing their jobs in the next year. People stop taking risks. They comply. They play it safe. Ironically, at the dawn of the AI era, we’re engineering the conditions least conducive to innovation. At its highest and best use, AI could help us cure cancer, rebuild infrastructure, even reverse climate damage. But we may throw that future away. Not because machines failed us, but because we did.
From Moral Outsourcing To Moral Opportunity
It doesn’t have to be this way. AI isn’t inherently dehumanizing. It’s a mirror, reflecting the values of those who build and wield it. LinkedIn’s Aneesh Raman envisions a new “relationship economy,” one where social intelligence sits at the center of work. “Human ideas are the new code,” he says. “Human energy is the new data center. And a new world of work anchored on the human brain is about to emerge.” It’s an inspiring vision—one we could realize, or ruin—depending on how we use the power in our hands.
Octavia Butler captured this paradox decades ago in her evocative short story “The Book of Martha,” in which God asks a woman to redesign humanity to save the world. Every solution she imagines creates a new harm. The story ends not with certainty, but humility. That’s the leadership lesson AI demands: every “efficiency” we celebrate carries consequences we can’t yet see. Every act of progress leaves a shadow, and real wisdom lies in seeing both.
Leadership In The Age Of AI
As AI moves closer to the center of decision-making, leadership’s job is to keep accountability in human hands. The future of work won’t be written by bots. It will be written in the daily decisions leaders make about people.
Taking responsibility begins with owning those choices. Leaders should state plainly what was decided, why it was decided, and who is accountable. Transparency is the first antidote to moral outsourcing.
It also requires re-humanizing the data. Every data point represents a person, a livelihood, a story. Before optimizing, leaders should ask who is affected, what relationships might be broken, and what trust might be lost. Data without dialogue degrades judgment.
And finally, responsibility means measuring what truly matters. Not just productivity or profit, but belonging, learning, and creativity—the conditions that make innovation possible. In an age obsessed with speed, these are the slow variables that determine whether progress endures.
If we want AI to serve humanity, we must first take responsibility for the humans we lead. Committed employees need a committed employer. It’s a social contract grounded in mutual trust. Amazon’s layoffs aren’t just another round of cost-cutting—they’re a warning to every company chasing the same playbook. When the world’s most powerful firms reduce people simply to expand their AI budgets, they send a clear signal about the future of work: speed over stewardship, profits over people, efficiency over empathy. The short-term gain is easy to count; the long-term cost isn’t. It won’t appear on a balance sheet. It will surface later, in the trust we squander, the talent we silence, and the future we forfeit.
The danger isn’t artificial intelligence. It’s artificial leadership.
