In a recent internal review, PwC noted that over 95% of its U.S. workforce had used generative AI tools as part of its large-scale AI rollout. Yet according to Microsoft’s 2024 Work Trend Index, a different story is playing out across industries: 78% of employees who use AI at work bring their own tools, and more than half are reluctant to admit it to their managers.
If you’re a business leader reading this, ask yourself: would you know if your team used ChatGPT to write that client memo? Would it matter to you? And what would you do if they told you?
Our research suggests many leaders are getting this equation wrong. In a study of 130 mid-level managers at a global consulting firm, we found a striking pattern. Managers did rate AI-assisted work as higher in quality. But when they were told AI had been used, they downgraded the employee’s effort. Meanwhile, employees who hid their use of ChatGPT were often rated more favorably overall.
This creates a perverse incentive: employees benefit more from hiding AI use than from disclosing it. The result is what we call “shadow adoption”: a pattern of concealed AI use that undermines not just transparency, but the promise of AI-driven productivity itself.
Why Shadow AI Use Breeds Mistrust And Strategic Risk
The typical narrative around AI adoption focuses on speed, innovation, and scale. But beneath those headlines lies a quieter story: mistrust. In our study, 44% of managers suspected AI had been used even when it hadn’t. That mistrust isn’t just an HR concern; it’s a strategic risk.
When people conceal their tools, organizations lose the ability to standardize, train, or audit for quality. When evaluations are skewed by disclosure bias, performance management becomes unreliable. And when employees fear that using new tools will make them look lazy (or worse, replaceable), they will resist innovation, even as they quietly rely on it.
So, leaders should ask: are our current systems incentivizing the behavior we want, or just the appearance of it?
What Forward-Thinking Companies Are Doing Differently
Firms like IBM, Salesforce, and Morgan Stanley are experimenting with policies that go beyond compliance and focus on behavior. Our research aligns with what these companies are discovering. To enable trustworthy, scalable AI use, organizations need three pillars: risk-sharing, disclosure frameworks, and better incentives.
- Share the Risk
When AI-generated work goes wrong (hallucinated facts, missed citations, or biased analysis), the blame often falls solely on the employee. But that’s shortsighted. Managers must co-own the output. If a junior consultant uses ChatGPT to draft a slide deck, the manager should validate both the content and the process. This repositions AI oversight as a leadership duty, not a liability transfer.
- Enable Disclosure, Not Surveillance
AI tools are often browser-based and difficult to track. Heavy surveillance backfires, leading to evasive behavior or more shadow use. A better path is structured self-disclosure. For example, add a checkbox to internal submission forms: “Did you use any AI tools?” This fosters honesty without punishment. Some U.S. law firms have already integrated similar flags into their e-discovery workflows and client memos.
- Reward Responsible AI Adoption
Most performance systems still reward those who “do it all themselves.” But that model is outdated. Using AI is a skill, and employees who use it well should be rewarded. Do not discount the effort of the AI pioneers in your organization. Reward employees who leverage tools thoughtfully, document their process, and spot errors. It’s not about who writes the most lines of code; it’s about who uses the best available tools responsibly to do their job.
This Isn’t Just A Tech Issue, It’s A Cultural Reckoning
Executives often think AI transformation is about buying the right tools. It’s not. It’s about building a culture where transparency is rewarded, not punished.
Shadow adoption is not a temporary phase. It’s a systemic signal that current policies are misaligned with emerging practices. Every knowledge-based industry, from consulting and finance to education and design, will face this challenge. Wherever outputs are hard to measure and effort is invisible, concealed AI use will spread.
According to Microsoft and LinkedIn’s 2024 global survey, while 75% of knowledge workers are using AI at work, a majority say they are doing so unofficially, often without training or policy guidance. That gap isn’t just operational; it’s cultural. And the longer it remains, the harder it will be to close.
The Leadership Imperative: Clarity Over Control
What the best companies are realizing is that compliance isn’t enough. You can’t monitor your way to a high-trust AI culture. You have to design for it.
The next wave of transformation will be led by firms that don’t just adopt AI, but govern it wisely. That means incentives that reward transparency, norms that reduce fear, and policies that treat responsible AI use as a form of leadership.
This is no longer just an IT issue. It’s a CEO issue.
David Restrepo Amariles is HEC Associate Professor of Artificial Intelligence and Law, Hi! PARIS Fellow, and Worldline Chair Professor.
Cathy L. Yang is Associate Professor in HEC’s Department of Information Systems and Operations Management.
Daniel Brown is Head of HEC Research Communication.