
Navigating the AI Security Paradox

In 2026, nobody questions whether AI can deliver meaningful value. It is embedded in development workflows, security operations, and enterprise decision-making across industries.

 

The question is no longer whether to adopt it, but how to do so without exposing the organization to entirely new categories of risk. AI is a double-edged sword: it accelerates innovation while compressing the window defenders have to respond to threats.

 

This tension between opportunity and risk defines what many now refer to as the AI security paradox. The same capabilities that enable innovation also expand the attack surface, compress response timelines, and challenge long-standing assumptions about governance, control, and resilience.

 

Insights from a recent Tech in Motion leadership discussion reveal how experienced security leaders are navigating this reality as AI becomes foundational to how modern enterprises operate.

 


 

Watch the full panel, Leveraging AI for Advanced Cyber Defense, hosted by Chris Maenner (Palmtree Ventures), featuring Silas Adams (Pep Boys), Jenny Menna (Sallie Mae), Matthew Franz (Bespin Global), and Robert Pimentel (Humana).

 

AI Tied to Business Outcomes

 

Organizations are moving beyond experimentation and focusing on measurable outcomes. The most successful implementations are those that align directly with business priorities such as efficiency, speed, and revenue protection.

 

In practice, that means using AI to improve developer productivity, accelerate time to market, and reduce operational friction. It also means applying AI to security in a way that protects revenue rather than simply checking compliance boxes. When AI is positioned as a business enabler rather than a technical novelty, it becomes far easier to justify investment and scale adoption.

 

That shift in mindset is critical. AI initiatives that are disconnected from business outcomes tend to stall, while those tied to operational and financial impact gain traction quickly and create lasting value.

 

 


“We are getting away from doing AI for the sake of AI and actually delivering value to our bottom line, our investors, and our shareholders.”

Silas Adams, CISO @ Pep Boys

 

 

AI Risk Is Contextual

 

AI risk cannot be evaluated as a standalone technical issue. It must be understood in the context of the business itself. Different organizations face very different risk profiles depending on their data, regulatory environment, and operational priorities, and they must apply controls accordingly rather than relying on blanket strategies.

 

A use case that is relatively harmless in one department could be catastrophic in another. For example, uploading sensitive customer data into a public AI tool could create severe legal and financial exposure. In contrast, a marketing team using AI to test simple user experience changes, such as color preferences on a homepage, represents minimal risk.

 

This is also where “shadow AI” becomes a growing concern. Employees often adopt external AI tools outside approved environments to do their jobs more efficiently, and without approved alternatives or clear guardrails, that behavior is inevitable. As a result, risk is not just about technology itself, but about how people use it across different parts of the business.

 

The goal is not to eliminate risk entirely, but to balance it against the risk of inaction.

 

 


“You cannot just understand the technology. You have to understand your business and what is really important to you.”

Jenny Menna, Chief Security Officer @ Sallie Mae

 

 

 

Attack Speed Has Outpaced Security Models

 

AI has dramatically changed the speed of cyberattacks. Tasks that once required significant expertise and time can now be executed rapidly with the help of AI tools. This has lowered the barrier to entry for attackers and increased the volume and sophistication of threats.

 

Recent ITPro data shows that average attacker breakout times have dropped to just 29 minutes, with some attacks occurring in seconds, while nearly 29% of vulnerabilities are exploited on or shortly after they are publicly disclosed.

 

The traditional security model, which relies heavily on patch cycles and threat intelligence sharing, is struggling to keep up. By the time vulnerabilities are disclosed and communicated, they may already be under active exploitation. This creates a fundamental mismatch between how quickly threats emerge and how quickly organizations can respond.

 

That compression of time changes everything. It forces organizations to rethink detection, prioritization, and response strategies, while increasing the importance of continuous monitoring and rapid remediation.

 

 


“What used to take days or weeks can now happen in hours, sometimes minutes.”

Robert Pimentel, Director, Offensive Security @ Humana

 

 

Data Governance Is the Foundation of AI Security

 

AI systems depend entirely on the data they consume, and the quality and security of that data determine the effectiveness of the system.

 

Organizations that lack strong data governance are at a significant disadvantage. Without clear visibility into where data resides, how it is classified, and how it flows through the organization, it becomes nearly impossible to control how AI systems use that data.

 

The challenge is already significant even before AI is layered in. According to ZeroThreat, more than half of vulnerabilities remain unremediated after 90 days, and 41% of organizations still have internet-facing critical vulnerabilities.

 

The path forward requires investment in data classification, access control, and monitoring. It also requires a cultural shift that treats data as a strategic asset rather than a byproduct of operations.

 

 


“Organizations that struggle with basic governance and hygiene are the ones that are going to struggle the most with AI.”

Matthew Franz, AI Security Lead @ Bespin Global

 

 


Live from Mindspace Wanamaker — March 26, 2026

 

Agentic AI Will Transform Security Operations

 

The emergence of agentic AI represents the next major evolution in how organizations use artificial intelligence. Instead of relying on AI as a tool to assist human decision-making, organizations are beginning to deploy autonomous agents that can act, communicate, and coordinate with one another.

 

This creates powerful new possibilities. Security operations can become more automated, more responsive, and more scalable. In some cases, organizations are already moving toward environments where systems can identify vulnerabilities, generate fixes, and initiate remediation workflows with minimal human involvement.

 

At the same time, this shift introduces new forms of risk. Autonomous systems must be carefully governed to ensure they behave as intended. Interactions between agents can create complex and unpredictable outcomes, particularly if visibility and control are limited.

 

This highlights the need for robust oversight and monitoring of non-human identities. Organizations must be able to track what agents are doing, understand how they are interacting, and intervene when necessary.

 

 


“If one agent drifts, that drift can scale across the entire ecosystem very quickly.”

Silas Adams, CISO @ Pep Boys

 

 

 

 

What This Means for Tech Talent

 

All of this reinforces a broader truth. AI is not slowing down, and neither are the risks associated with it. Organizations need individuals who can think in systems, understand business context, and make informed decisions in complex environments. Professionals must be comfortable working with AI tools while also questioning their outputs and understanding their limitations.

 

 


 

 
