John Ohlwiler is CEO of Inc. 5000-ranked Sentry Technology Solutions, specializing in strategic IT and AI.
In the race to harness AI’s transformative potential, companies face an unexpected threat from within: employees’ unauthorized use of AI tools. While organizations methodically develop AI governance frameworks, their workforces are quietly creating significant vulnerabilities through unsanctioned AI applications—a phenomenon security experts now call “shadow AI.”
The Hidden Threat Within
A 2024 report from Microsoft and LinkedIn found that 75% of knowledge workers were using AI tools at work—and 78% of those users were doing so without clearance from their employers. This parallel tech infrastructure opens multiple avenues for data breaches, intellectual property theft and compliance violations.
The consequences are real. In early 2023, it was reported that Samsung employees inadvertently exposed sensitive proprietary information through the free version of ChatGPT. In three separate incidents within just 20 days, engineers entered confidential source code, equipment testing sequences and an internal meeting recording into the chatbot.
Because the free version can use conversations as training data, Samsung’s confidential information may have become part of the AI’s knowledge base—potentially surfacing in responses to other users’ prompts. The employees weren’t trying to cause harm; they were simply trying to work more efficiently.
Understanding The Risk Landscape
Employees aren’t using unauthorized AI tools because they’re reckless. They’re using them because they’re under pressure to deliver results, and the tools actually help them work faster. Tight deadlines, mounting workloads and a genuine desire to be more productive are driving people to ChatGPT, Gemini, Claude and dozens of other AI platforms.
The problem is that most organizations still haven’t given their teams a way to leverage AI safely. When there’s no official AI tool for a specific task and employees aren’t sure what’s allowed, people make their own decisions. Those decisions can expose sensitive data, violate compliance requirements and create vulnerabilities that take months to discover.
The solution isn’t to ban AI. That ship has sailed. Instead, every organization needs to create a framework for safe AI use—clear guidelines that give employees the power to work efficiently while protecting the business from unnecessary risk.
Building A Secure AI Framework
Over the past two years at Sentry, we’ve worked with businesses to implement AI strategies that actually work. Forward-thinking organizations are taking a three-part approach: clear policies, ongoing education and the right technical safeguards.
First, you need comprehensive AI policies that go beyond “don’t use ChatGPT.” Your team needs to understand which tools are approved, how to handle different types of data and what security requirements must be met. These policies should be practical guidelines that help people make good decisions in the moment.
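To make that concrete, here is a minimal sketch, in Python, of one way to express such a policy as data rather than as a PDF: a simple mapping from data classifications to approved tools that both people and internal systems can check. The tool names and data tiers are hypothetical examples, not recommendations.

```python
# A minimal sketch of "policy as data": map each data classification to the
# AI tools approved for it. Tool names and tiers are hypothetical examples.

APPROVED_TOOLS = {
    "public": {"copilot-enterprise", "claude-enterprise", "internal-bot"},
    "internal": {"copilot-enterprise", "claude-enterprise"},
    "confidential": {"internal-bot"},  # e.g., only a self-hosted assistant
    "restricted": set(),               # no AI tools approved at this tier
}

def is_approved(tool: str, data_class: str) -> bool:
    """Return True if `tool` is approved for data of this classification."""
    return tool in APPROVED_TOOLS.get(data_class, set())

print(is_approved("claude-enterprise", "internal"))   # True
print(is_approved("free-chatbot", "confidential"))    # False
```

Expressing the rules this way has a side benefit: the same mapping can drive onboarding checklists, intranet guidance and the technical controls described below.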
Second, education can’t be a one-time training session. Share real-world examples of what goes wrong when data isn’t handled properly. Create a culture where people feel comfortable asking questions without fear of punishment. In three to five years, knowing how to work effectively with AI will be as fundamental as knowing how to use email today.
Third, you need the right technical solutions to support your policies. This means monitoring for unauthorized AI usage, deploying data loss prevention tools and providing secure alternatives to consumer AI products.
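As one illustration of the monitoring piece, the sketch below flags traffic to consumer AI services by scanning a web proxy log export. Everything here is an assumption for the example: the CSV file name, its timestamp/user/domain columns and the domain lists. A real deployment would rely on your secure web gateway, CASB or DLP platform, which handle this far more robustly.

```python
# A minimal sketch of shadow-AI monitoring: scan exported proxy logs for
# traffic to consumer AI domains that are not on the sanctioned list.
# The log path, column names and domain sets are hypothetical examples.

import csv

CONSUMER_AI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai"}
SANCTIONED_DOMAINS = {"copilot.microsoft.com"}  # approved enterprise tools

def flag_shadow_ai(log_path: str) -> list[dict]:
    """Return log rows whose destination is an unsanctioned consumer AI domain."""
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes timestamp, user, domain columns
            domain = row["domain"].strip().lower()
            if domain in CONSUMER_AI_DOMAINS and domain not in SANCTIONED_DOMAINS:
                flagged.append(row)
    return flagged

for hit in flag_shadow_ai("proxy_log.csv"):
    print(f"{hit['timestamp']}  {hit['user']} -> {hit['domain']}")
```

The point isn’t this particular script; it’s that monitoring gives you the data to start a conversation, not a pretext to punish. Pair it with the education step so flagged users get a secure alternative, not a reprimand.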
How We Built Our AI Framework
IT companies don’t really like change. We like things that work, are secure and are repeatable. When AI exploded onto the scene, many managed service providers (MSPs) weren’t jumping in early. They weren’t moving quickly internally or having conversations with their clients.
We pivoted from being an MSP to operating more like a managed information provider (MIP). We developed a Technology Maturity Model (TMM) that allowed us to step strategically into AI. We now leverage this same TMM with clients to help them launch AI securely.
Our first major move was building the Sentry AI Bot using Microsoft’s Copilot Studio. We launched this bot to give our technicians access to all client information through a simple chat interface. Before the bot, complex trouble tickets required escalation. Now, our techs have an AI sidekick that knows everything about every client—built with security at the forefront.
We rolled out Microsoft Copilot access to all employees—the paid enterprise version with full data protection. When our marketing team asked about using Anthropic’s Claude, we evaluated it, paid for enterprise licenses and required everyone to complete Anthropic’s free AI certification course.
AI has become a normal part of our staff rhythm. At every all-hands meeting, we discuss AI and innovation. It’s not a special initiative anymore—it’s just how we work.
For your business, the right tool depends on where your data lives, what work ecosystem you’re using and what compliance requirements you need to meet. However, there is a right tool for your situation, and you can build a plan to leverage it securely.
Looking Ahead
AI isn’t slowing down. The businesses that move forward thoughtfully—with clear frameworks, secure tools and trained teams—won’t just avoid the risks. They’ll capture the competitive advantage.
You can wait until shadow AI becomes a crisis, or you can build a framework now that turns AI from a security threat into a strategic asset. The companies winning are the ones that started taking action while others stood still.
The question isn’t whether AI will reshape how your business operates. It’s whether you’ll be leading that transformation or scrambling to catch up.
