NIST’s New Cyber AI Profile Signals a Shift: AI Security Starts With Identity

Claire McKenna, Director of Corporate Marketing

As we head into 2026, AI systems are becoming embedded in core business operations, security tooling, and decision-making workflows across the enterprise.

Recognizing this shift, the National Institute of Standards and Technology (NIST) recently released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596), also known as the Cyber AI Profile.

This new guidance builds on NIST Cybersecurity Framework (CSF) 2.0 and the AI Risk Management Framework (AI RMF) to help organizations adopt AI securely, responsibly, and at scale. More importantly, it reflects a growing consensus: AI fundamentally changes the threat model, and traditional security approaches are no longer sufficient.

Identity is at the center of this shift.

Why NIST released the Cyber AI Profile

NIST’s Cyber AI Profile is the result of a yearlong effort involving thousands of contributors across government, industry, and academia. It arrives at a moment when organizations are grappling with a new reality:

  • AI is embedded in products, internal tools, and third-party services
  • AI systems increasingly act autonomously, not just as chatbots or assistants
  • Attackers are using AI to scale phishing, deepfakes, malware, and reconnaissance
  • Security teams are expected to defend faster with fewer people and more automation

The Cyber AI Profile is NIST’s attempt to create a shared, practical foundation for a cybersecurity strategy that addresses these realities.

The three AI security challenges every organization must address

Rather than treating AI as a single problem, the Cyber AI Profile organizes guidance around three interconnected focus areas:

1. Securing AI systems

AI systems introduce new and unfamiliar attack surfaces. Models, agents, APIs, datasets, and training pipelines all become security-relevant assets. Unlike traditional software, AI behavior can be opaque, dynamic, and difficult to predict.

This means organizations need:

  • Inventories of AI models, agents, and integrations
  • Clear ownership and accountability for AI actions
  • Strong controls around data provenance, integrity, and access

In practice, this is an identity problem as much as a data or infrastructure problem.
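To make the inventory point concrete, here is a minimal sketch of what tying AI assets to accountable owners could look like. The record fields and names (`AIAsset`, `unowned`, the example assets) are illustrative assumptions, not terminology from NIST IR 8596.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset inventory."""
    name: str
    kind: str                # "model", "agent", or "integration"
    owner: str               # accountable human identity, e.g. an email
    data_sources: list[str] = field(default_factory=list)

def unowned(assets: list[AIAsset]) -> list[str]:
    """Flag assets with no accountable owner -- a governance gap."""
    return [a.name for a in assets if not a.owner]

inventory = [
    AIAsset("support-agent", "agent", "alice@example.com", ["tickets-db"]),
    AIAsset("summarizer-v2", "model", ""),   # nobody owns this one
]
print(unowned(inventory))  # -> ['summarizer-v2']
```

Even a simple check like this surfaces the core identity question: every model, agent, and integration should map to a human who answers for its actions.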

2. Using AI to defend the enterprise

AI can dramatically improve security operations by accelerating detection, prioritization, and response. But AI-driven defense introduces its own risks, especially when systems act without human oversight.

NIST emphasizes the need for:

  • Human-in-the-loop approvals for sensitive actions
  • Clear boundaries on what AI systems are allowed to do
  • Visibility into AI-generated decisions before they affect production systems

As AI begins to take action, not just make recommendations, authorization and access controls become mission-critical.
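One way to picture a human-in-the-loop control is a dispatch gate: AI-proposed actions that cross a sensitivity boundary are queued for human approval instead of executing directly. This is an illustrative sketch; the action names and the `SENSITIVE` set are assumptions, not prescribed by the Cyber AI Profile.

```python
# Actions an AI system may not take without explicit human sign-off.
SENSITIVE = {"delete_user", "rotate_credentials", "modify_policy"}

def dispatch(action: str, execute, queue_for_review):
    """Route an AI-proposed action: sensitive ones wait for a human."""
    if action in SENSITIVE:
        return queue_for_review(action)   # human must approve first
    return execute(action)                # low-risk: proceed automatically

pending, done = [], []
dispatch("delete_user", done.append, pending.append)
dispatch("fetch_logs", done.append, pending.append)
print(pending, done)  # -> ['delete_user'] ['fetch_logs']
```

The design choice here is that the boundary is declared in one place, so expanding or tightening what counts as "sensitive" is a policy edit, not a code rewrite.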

3. Thwarting AI-enabled attacks

With AI, attacks can now happen faster and at greater scale. To keep pace, organizations must:

  • Detect abuse of credentials and permissions earlier
  • Respond faster with automated but governed controls
  • Build resilience assuming AI-driven attacks are the norm, not the exception

Again, identity sits at the center. Most attacks still rely on compromised access.

What the Cyber AI Profile makes clear about identity

While the Cyber AI Profile does not prescribe specific tools, its guidance repeatedly points to the same conclusion: AI security depends on knowing who or what has access, what they are allowed to do, and when that access should be revoked.

Across the document, NIST highlights the need for:

  • Clear human ownership of AI system actions
  • Strong governance over permissions, APIs, and service accounts
  • Continuous monitoring and re-evaluation of risk as AI systems evolve
  • Frequent updates to risk tolerance and security policies

These are continuous identity governance challenges, not just annual audit exercises.

As AI systems become more agentic, identity needs to shift from static access reviews to a real-time control plane for security.

Turning NIST guidance into action

The Cyber AI Profile gives organizations a framework. The next step is operationalizing it.

That starts with:

  • Mapping AI systems to identities, permissions, and actions
  • Applying least privilege dynamically, not periodically
  • Ensuring AI-assisted actions are governed, approved, and auditable
  • Treating identity as the foundation for AI security, not an afterthought
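Applying least privilege dynamically rather than periodically can be sketched as time-bound grants that expire on their own and are checked at use time, so entitlements never accumulate between reviews. The identifiers and helper names below are illustrative assumptions, not a specific product's API.

```python
from datetime import datetime, timedelta, timezone

# (identity, permission) -> expiry time; missing key means no access.
grants: dict[tuple[str, str], datetime] = {}

def grant(identity: str, permission: str, minutes: int) -> None:
    """Issue a short-lived entitlement instead of a standing one."""
    grants[(identity, permission)] = (
        datetime.now(timezone.utc) + timedelta(minutes=minutes)
    )

def allowed(identity: str, permission: str) -> bool:
    """Check access at use time; expired or absent grants fail closed."""
    expiry = grants.get((identity, permission))
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant("deploy-agent", "prod:write", minutes=30)
print(allowed("deploy-agent", "prod:write"))   # -> True
print(allowed("deploy-agent", "prod:delete"))  # -> False
```

Because every grant carries its own expiry, revocation is the default state and continued access is the exception that must be re-justified.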

AI changes how systems behave, and identity determines whether those behaviors are safe. If you need additional guidance on how to operationalize your identity program, be sure to check out the Path to Identity Maturity.

The release of NIST’s Cyber AI Profile marks an inflection point. AI security is no longer theoretical, and identity is no longer just an IAM concern.

As organizations head into 2026, the question is not whether to adopt AI, but whether security and identity programs are ready for it.

The future of AI security runs through identity.
