Arun Kumar Elengovan is director, engineering security at Okta.
Everywhere you look, organizations are rushing to adopt AI, dazzled by its promise to boost productivity and efficiency. But as someone who lives in the world of security, I see both edges of the sword: The same technology that can help us defend our systems can also be misused by bad actors to infiltrate them.
AI has lowered the barrier to entry for cybercrime. Attackers no longer need years of study to write sophisticated scripts. They can now automate the creation of malware or reverse-shell scripts that open a backdoor into a company’s infrastructure, exfiltrate sensitive data or leak confidential information. Techniques like the crescendo attack let bad actors bypass AI safeguards by asking clever hypothetical or “research” questions, such as how malware was written decades ago, and then extracting the steps to recreate those attacks today. In my work leading enterprise security teams, I’ve already seen early attempts where attackers test these methods against corporate defenses, underscoring how quickly theory becomes practice.
At the same time, AI can also be our most powerful defense. It’s a classic cat-and-mouse game, super-powered by technology. Here are the main ways we can best use AI on the defensive side.
Embrace AI As A Partner
The first principle every leader should internalize is coexistence. AI shouldn’t replace human expertise, but rather should serve as a partner. Many developers now use Copilot, CodeWhisperer, Cursor or Codeium to help write code. But the key word is “help”; the human is still the captain of the plane, and AI is simply the co-pilot. That partnership, human judgment supported by AI’s speed and breadth, is critical.
For example, using AI for threat detection lets defenders scan massive amounts of ingress and egress data and spot malicious patterns far faster than the lengthy manual reviews once required. AI can also assist with threat modeling, the proactive design-time work of identifying and neutralizing weaknesses before they’re exploited. There’s still a human in the loop, however, confirming what is working and what isn’t. The organizations that succeed are the ones treating AI as an extension of skilled analysts, not a replacement.
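To make that concrete, here is a minimal sketch of what AI-assisted traffic triage can look like, using scikit-learn’s IsolationForest to score network flows against a learned baseline. The flow features, numbers and threshold below are illustrative assumptions, not a production detector:

```python
# Minimal sketch: anomaly scoring over network flow records.
# Assumes scikit-learn; the feature set and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [bytes_out, bytes_in, duration_s, dst_port]
baseline_flows = np.array([
    [1_200, 8_500, 0.4, 443],
    [900,   7_200, 0.3, 443],
    [1_500, 9_100, 0.5, 443],
    [1_100, 6_800, 0.2,  80],
])

# Fit on known-good traffic, then score new flows as they arrive.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_flows)

new_flows = np.array([
    [1_300, 8_000, 0.4, 443],             # looks like baseline traffic
    [450_000_000, 2_000, 3600.0, 4444],   # large egress to an odd port
])

for flow, score in zip(new_flows, model.decision_function(new_flows)):
    flag = "REVIEW" if score < 0 else "ok"
    print(f"{flag}: flow={flow.tolist()} score={score:.3f}")
```

The value here comes from the baselines and features your team curates; the model just compresses hours of manual log review into a ranked list that an analyst still confirms.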
Context Is King
The old programming adage “garbage in, garbage out” holds for AI as well. Many people assume AI will think for them, but that’s a dangerous misconception. These models are probabilistic engines, not independent thinkers. To get meaningful results, they need context.
Retrieval-augmented generation (RAG) is a good example. Large language models like ChatGPT can sound convincing, but their responses are often generic at best. By adding context, such as loading a vector database with 40 past security incidents, their root causes and their mitigation steps, you turn generic output into specific, actionable recommendations. When the next incident arises, the AI can draw on that organizational memory to offer precise guidance.
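As a rough illustration, the retrieval step can be this simple. The sketch below assumes the sentence-transformers library for embeddings; the incident records and model name are placeholders, not a recommendation of any particular stack:

```python
# Minimal RAG retrieval sketch: embed past incidents, pull the closest
# matches into the prompt for a new alert. Incident text is invented
# for illustration; any embedding model and vector store would do.
import numpy as np
from sentence_transformers import SentenceTransformer

incidents = [
    "Phishing email led to OAuth token theft; revoked tokens, enforced MFA.",
    "Misconfigured S3 bucket exposed logs; tightened policy, rotated keys.",
    "Reverse shell from build agent; isolated host, rotated CI credentials.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
index = model.encode(incidents, normalize_embeddings=True)

query = "Suspicious outbound connection from a CI runner"
q_vec = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity (vectors are normalized, so a dot product suffices).
scores = index @ q_vec
top = np.argsort(scores)[::-1][:2]

context = "\n".join(incidents[i] for i in top)
prompt = (
    "Using these past incidents as context:\n"
    f"{context}\n\n"
    f"New alert: {query}\nSuggest likely root causes and first response steps."
)
print(prompt)  # This prompt would then go to the LLM of your choice.
```

The retrieved incidents ride along with the question, so the model answers from your organization’s history rather than from generic internet patterns.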
I’ve seen organizations skip this step, and the result is often confusion or wasted effort. Contextual awareness and prompt engineering are essential skills for every team that uses AI. The difference between a vague prompt and a well-structured one can be the difference between a useful answer and a dangerous mistake. In fact, I’ve advocated within industry groups that context engineering should become a baseline discipline, just as secure coding once did.
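To illustrate that gap, compare a vague prompt with a structured one. Every detail below (the account name, times and events) is invented for the example:

```python
# Vague vs. structured: the same question asked two ways.
vague = "Is this log bad?"

structured = """Role: You are assisting a SOC analyst.
Context: Okta admin console, 14:02 UTC, user 'svc-backup' (a service account).
Event: 5 failed logins followed by a success from a new ASN.
Task: List the three most likely explanations, ordered by severity,
and the single log query you would run next to confirm each one.
Constraints: Do not recommend actions that lock out the account."""
```

The structured version pins down role, context, task and constraints, which is most of what context engineering means in day-to-day practice.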
Shift Left On Security
Another principle is adopting a “shift-left” approach to AI security. Models can leak sensitive data if not properly secured. That’s why companies need to build in safeguards during the design phase instead of scrambling to do so after deployment.
Treat your AI pipelines like any other piece of critical infrastructure. Encrypt sensitive components, and audit and monitor every step. Take Anthropic’s Model Context Protocol as a cautionary tale: Multiple security issues surfaced within months of release, and one critical flaw in its Inspector tool was rated a CVSS 9.4, illustrating how quickly design oversights can escalate into industry-wide concern. Companies that had baked security into their designs were able to make quick changes without panicking and damaging morale. It’s much easier to stay calm and respond effectively to a threat if you’ve already laid the groundwork.
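As one example of building the safeguard in rather than bolting it on, here is a sketch of a pre-flight check that redacts likely secrets and writes an audit record before any prompt leaves your boundary. The regex patterns and logger names are illustrative, not an exhaustive secret-detection suite:

```python
# Sketch of a design-time guardrail: scrub likely secrets and write an
# audit record before a prompt is sent to any model.
import hashlib
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.pipeline.audit")

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key IDs
    re.compile(r"(?i)bearer\s+[a-z0-9._-]+"),  # bearer tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def guarded_prompt(prompt: str, user: str) -> str:
    """Redact likely secrets, then record who sent what (by hash only)."""
    redacted = prompt
    for pattern in SECRET_PATTERNS:
        redacted = pattern.sub("[REDACTED]", redacted)
    audit_log.info(
        "user=%s prompt_sha256=%s redactions=%s",
        user,
        hashlib.sha256(redacted.encode()).hexdigest()[:12],
        redacted != prompt,
    )
    return redacted

safe = guarded_prompt("Summarize this config: AKIAABCDEFGHIJKLMNOP", "analyst-7")
print(safe)  # -> "Summarize this config: [REDACTED]"
```

Because the check lives in the pipeline itself, every model call inherits it by design rather than by developer discipline.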
The Bigger Picture
AI isn’t magic. It’s nothing more than data and context, and the higher the quality of both, the better the results. Leaders need to understand this probabilistic nature to keep their expectations realistic. As they say, “done is better than perfect, because perfect never gets done.” In other words, don’t wait for someone to hand you a perfect roadmap.
Start small, iterate and embed safeguards from the start. Those are the first steps toward making AI as natural a part of your organization as the internet has become over the past 30 years. Use AI smartly, stay aware and never forget that you’re the one flying the plane. AI is simply the co-pilot helping you navigate the skies. Looking ahead, the organizations that thrive will be those that make AI auditable and accountable, not just usable, thereby setting the tone for trust in the decade to come.