

The Agentic AI Security Paradox: AWS AgentCore Gets It Right—But It's Not Enough

July 21, 2025 · 7-Minute Read


Amazon Web Services recently announced Amazon Bedrock AgentCore, a comprehensive platform for deploying and operating AI agents at enterprise scale. Reading through AWS's detailed blog post, I'm struck by something remarkable: they've built what might be the most security-conscious agentic AI platform we've seen yet.

Session isolation. Identity and access controls. Secure token vaults. VPC-only networking options. Zero Trust verification. The list goes on.

AWS clearly understands that agentic AI isn't just another cloud service—it's a fundamental shift in how non-human identities operate, and it requires a security-first approach from day one.

This represents a pivotal moment for the industry—a signal that the enterprise adoption of agentic AI is moving from experimental to production-ready. But it also highlights a critical question: what happens next?

The Gap Between Platform Security and Implementation Reality

AWS has done something impressive with AgentCore. They've recognized that AI agents represent a new class of autonomous actors that don't fit traditional security models. They've built a platform with enterprise-grade controls for identity management, session isolation, and Zero Trust verification.

But platform security is only half the equation.

The other half is what happens when developers start building on that platform. And that's where things get complicated fast.

Why Agentic AI Changes Everything About Non-Human Identity

At Clutch Security, we've been tracking the explosion of non-human identities (NHIs) in enterprise environments. Pre-agentic AI, organizations typically managed about 45 NHIs per human identity. That was already a lot.

But agentic AI changes the math entirely—and here's why: agents have no "agency" without machine identities. Every autonomous action an AI agent takes requires credentials, API keys, tokens, or service accounts. They are, by definition, prolific creators and consumers of non-human identities.

Early adopters are seeing 300-500% annual growth in non-human identities. We're moving toward a reality where enterprises manage 82 NHIs for every human user—and with mainstream adoption of agentic AI, this could explode by 100x or even 200x.

What makes this dangerous isn't just the volume. It's that AI agents don't behave like traditional automation:

  • Traditional automation follows predictable paths. You can map the credential flows.
  • Agentic AI reasons dynamically, combines tools unpredictably, and uses credentials in ways no human programmed.

This fundamentally breaks our existing identity governance models.
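
To make that concrete, here's a minimal sketch (all names hypothetical) of why agentic credential flows resist mapping: each tool carries its own secret, and the call sequence comes from model reasoning rather than a human-authored workflow.

```python
# Minimal sketch, all names hypothetical: each tool an agent can invoke
# carries its own credential, and the agent decides at runtime which
# tools to chain together.
import os

TOOLS = {
    # tool name -> environment variable holding that tool's credential
    "query_crm": "CRM_API_TOKEN",
    "read_s3_bucket": "S3_ACCESS_KEY",
    "post_to_slack": "SLACK_BOT_TOKEN",
}

def plan_from_llm() -> list[str]:
    # Stand-in for model reasoning; in a real agent this sequence is not
    # predictable at design time.
    return ["query_crm", "post_to_slack"]

def run_agent_step(tool_name: str) -> None:
    """Execute whichever tool the model chose for this step."""
    env_var = TOOLS[tool_name]
    credential = os.environ.get(env_var, "")  # every autonomous action needs a secret
    print(f"calling {tool_name} (credential {'present' if credential else 'missing'})")

# A fixed pipeline would hardcode this loop's contents, making the credential
# flow auditable up front. With an agent, the sequence comes from the model:
for choice in plan_from_llm():
    run_agent_step(choice)
```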

The Developer Reality Check

Here's what AWS gets right with AgentCore: they've made enterprise security features accessible to developers. Identity providers, token management, session isolation—it's all there, ready to use.

But here's where the cloud's shared responsibility model becomes critical. AWS provides the secure infrastructure, but implementation security remains the customer's responsibility. And this is where human nature meets complex systems.

Consider this snippet from AWS's own tutorial code:

[Image: hardcoded credentials in AWS tutorial code, illustrating the gap between secure platform capabilities and real-world implementation practices]
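
The pattern looks roughly like this (a representative sketch, not AWS's verbatim tutorial code; the endpoint and key are invented):

```python
# Representative sketch of the anti-pattern (not AWS's verbatim tutorial
# code; the endpoint and key are invented): a placeholder secret sits
# inline, one copy-paste away from production.
import requests

API_KEY = "sk-REPLACE_WITH_YOUR_KEY"  # the "placeholder" that ships as-is

def call_internal_api():
    return requests.get(
        "https://api.example.com/v1/data",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
```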

Even though AWS's security documentation explicitly states "NEVER hardcode credentials" and provides clear examples of what not to do, the tutorial still includes placeholder secrets. Sure, it's just example code, but it demonstrates how pervasive the pattern is: even engineers at security-conscious organizations default to showing secrets inline when illustrating concepts. And developers will be developers—they copy, paste, and adapt example code, and those placeholders become hardcoded secrets in production systems.

This isn't a failure of AWS's platform—it's an illustration of why secret sprawl has persisted for decades despite security best practices being well-established. Now, as we enter the agentic AI era where every autonomous action requires credentials, this pattern threatens to proliferate at an unprecedented scale.

When every AI agent needs multiple credentials to function, and those credentials multiply across hundreds or thousands of autonomous actors, the same development patterns that created today's secret sprawl problems become systemic risks.

The Three-Layer Challenge

The agentic AI security challenge operates on three interconnected layers within the cloud's shared responsibility model:

Layer 1: Platform Security

AWS has delivered here with AgentCore. Session isolation, network controls, identity management infrastructure—the foundational security controls are robust and enterprise-grade.

Layer 2: Implementation Security

This falls squarely on the customer side of the shared responsibility model. Secure coding practices, proper secret management, input validation, and secure configuration of agents. Organizations must navigate challenges like credential lifecycle management across dynamic agent deployments, ensuring least-privilege access as agents scale, and maintaining security consistency across diverse agent frameworks.
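
On AWS, the customer-side alternative to the hardcoding anti-pattern above is straightforward in principle: resolve credentials at runtime from a secret store. A minimal sketch using boto3 and AWS Secrets Manager (the secret name is a hypothetical example):

```python
# Minimal sketch of the customer-side alternative: resolve the credential
# at runtime from AWS Secrets Manager instead of embedding it in agent
# code. The secret name is a hypothetical example.
import boto3

def get_agent_credential(secret_id: str = "agent/crm-api-token") -> str:
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]
```

Pair this with IAM policies that scope each agent to only its own secrets, and rotation so that anything that does leak ages out quickly.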

Layer 3: Runtime Governance

This is where the biggest gap emerges, and it's not something any cloud provider can solve alone. Who's continuously monitoring what these agents actually do with their credentials once deployed? How do you detect when an agent's behavior deviates from expected patterns? What happens when an agent with legitimate credentials starts exhibiting malicious activity?

AWS AgentCore addresses Layer 1 brilliantly and provides tools that help with Layer 3 visibility. But the runtime governance of non-human identities—understanding the full lifecycle and behavior of every credential your agents create, access, and use—remains an organizational responsibility that extends far beyond any single platform's capabilities.
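
What might that runtime governance look like in practice? A minimal sketch, assuming you already collect credential-usage events from audit logs (the event schema here is invented): baseline what each credential normally does, then flag actions outside that baseline.

```python
# Minimal sketch of Layer 3 thinking (hypothetical event schema): baseline
# which API actions each agent credential normally performs, then flag
# anything outside that baseline for review.
from collections import defaultdict

def build_baseline(events: list[dict]) -> dict[str, set[str]]:
    """events: [{'credential': ..., 'action': ...}, ...] from audit logs."""
    baseline = defaultdict(set)
    for e in events:
        baseline[e["credential"]].add(e["action"])
    return baseline

def flag_deviations(baseline, new_events):
    for e in new_events:
        if e["action"] not in baseline.get(e["credential"], set()):
            yield e  # a credential doing something it has never done before

history = [{"credential": "agent-7-token", "action": "s3:GetObject"}]
incoming = [{"credential": "agent-7-token", "action": "iam:CreateUser"}]
for alert in flag_deviations(build_baseline(history), incoming):
    print("deviation:", alert)
```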

Why This Matters Right Now

Organizations aren't waiting for perfect security solutions. They're deploying AI agents today. We're seeing this across industries—customer support bots, automated workflows, data analysis agents—all requiring credentials to access internal systems.

The risk compounds when you consider that modern AI agents:

  • Store and propagate credentials across systems
  • Make autonomous decisions about tool usage
  • Operate with write permissions to critical infrastructure
  • Scale faster than traditional governance can track

A misconfigured agent doesn't just expose data—it can trigger cascading actions across interconnected systems faster than humans can intervene.

The Security-First Approach to Agentic AI

The AWS AgentCore announcement signals something important: the era of "move fast and break things" is ending for AI systems. Security can't be an afterthought when you're deploying autonomous actors with system-level access.

But platform security alone isn't enough. Organizations need:

  • Continuous NHI Discovery: Know every credential your agents create, use, or access (a minimal discovery sketch follows this list)
  • Real-time Behavioral Analysis: Detect when agents deviate from expected patterns
  • Dynamic Risk Assessment: Understand which credentials pose the greatest risk if compromised
  • Automated Governance: Apply least-privilege principles without breaking agent functionality
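
As a starting point for the discovery item above, here's a minimal sketch using boto3's IAM APIs to inventory access keys and their last-used dates. Real NHI discovery would also need to cover tokens, service accounts, and secrets outside AWS.

```python
# Minimal sketch of the discovery step on AWS: inventory every IAM access
# key and when it was last used. Real NHI discovery spans far more than IAM.
import boto3

iam = boto3.client("iam")

def discover_access_keys():
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                last = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"].get("LastUsedDate")
                yield user["UserName"], key["AccessKeyId"], last

for user, key_id, last_used in discover_access_keys():
    print(user, key_id, last_used or "never used")
```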

Building Security That Scales with Innovation

AWS deserves significant credit for recognizing that agentic AI requires new security paradigms. AgentCore represents a major step forward in making enterprise-grade security accessible for AI agents and signals where the industry is heading.

But as we've seen, the shared responsibility model means that platform capabilities are only part of the equation. The more complex challenge lies in what's missing from the broader ecosystem: comprehensive governance for the explosion of non-human identities that agentic AI creates.

This isn't AWS's responsibility to solve—it's an industry-wide challenge that requires purpose-built solutions. Organizations deploying agentic AI need to think beyond platform capabilities to comprehensive non-human identity governance that spans their entire technology stack. The future belongs to organizations that can move fast with autonomous agents while maintaining comprehensive visibility and control over their expanding non-human workforce. That requires both secure platforms like AgentCore and specialized governance solutions for the explosion of identities those platforms enable.

The security challenge of agentic AI isn't that it's impossible to secure—it's that doing it right requires solving problems that no single vendor can address alone. It requires a new category of solutions focused specifically on non-human identity governance.

AWS has shown us where the industry is heading with their platform-first approach. Now the question becomes: who's going to fill the governance gap that sits at the heart of the agentic AI revolution?


About the author

Ofir is the Co-Founder and CEO of Clutch Security. With over 15 years of experience in cybersecurity, including leadership roles at Sygnia and Hunters, he’s helped global enterprises respond to the most advanced cyber threats. At Clutch, Ofir is focused on tackling one of the industry’s most overlooked risks: securing the explosion of Non-Human Identities across modern infrastructure.