NIST Proposes OAuth 2.0 for AI Agent Identity and Authorization
NIST's NCCoE concept paper proposes extending OAuth 2.0 to give AI agents verifiable identity and scoped authorization, sketching a technical framework for enterprise agent security.
Why AI Agent Identity Is the Next Big Security Challenge
The proliferation of autonomous AI agents across enterprise environments has surfaced a critical security gap: there is no established standard for how AI agents identify themselves, prove their authority to act, or have their access scoped and governed. The National Institute of Standards and Technology (NIST), through its National Cybersecurity Center of Excellence (NCCoE), has released a concept paper proposing OAuth 2.0 as the foundational protocol for AI agent identity and authorization.
This matters because AI agents are not users, and they are not traditional applications. They operate with varying degrees of autonomy, act on behalf of human principals, interact with APIs and services across organizational boundaries, and may delegate tasks to other agents. Existing identity and access management systems were designed for human users logging into applications or for service-to-service authentication within a single trust domain. Neither model adequately addresses the reality of autonomous agents that traverse multiple systems, organizations, and authorization contexts.
According to the NCCoE paper, more than 60 percent of enterprise AI agent deployments in 2025 relied on static API keys or shared credentials, approaches that provide no granularity, no auditability, and no mechanism for dynamic scope adjustment. The result is a growing attack surface where compromised agent credentials grant broad, unmonitored access across enterprise systems.
The NCCoE Concept Paper: Core Proposals
NIST's concept paper does not introduce a new protocol from scratch. Instead, it proposes extending the OAuth 2.0 authorization framework, already widely adopted for human-facing and service-to-service authentication, to accommodate the unique requirements of AI agents. The key proposals include:
- Agent identity tokens: AI agents receive verifiable identity tokens that encode the agent's identity, its deploying organization, its authorized scopes, and its delegation chain. These tokens are cryptographically signed and time-limited, replacing static credentials
- Delegated authorization model: When a human user instructs an AI agent to perform a task, the agent receives a delegated authorization token that derives from the user's permissions but can be further constrained. The agent cannot exceed the delegating user's authority
- Scope narrowing for autonomous actions: As agents operate with increasing autonomy, their authorization scopes should narrow rather than expand. An agent performing routine data entry might hold broad scopes, but an agent making financial commitments should hold tightly constrained, transaction-specific scopes
- Cross-organizational agent authentication: When agents from different organizations need to interact, the paper proposes a federated identity model where each organization's identity provider vouches for its agents, similar to how SAML federation works for human users
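The proposed agent identity token can be pictured as a signed, time-limited claims set. The sketch below uses an HMAC-signed JSON payload for illustration; the claim names (`sub`, `org`, `scope`, `delegation_chain`) and the symmetric key are assumptions for this example, not the paper's specification, and a real deployment would use standard JWTs with asymmetric keys.

```python
import hashlib, hmac, json, time

SIGNING_KEY = b"demo-signing-key"  # illustrative; production would use asymmetric keys

def mint_agent_token(agent_id, org, scopes, delegation_chain, ttl_seconds=300):
    """Build a signed, short-lived claims set for an agent (claim names illustrative)."""
    now = int(time.time())
    claims = {
        "sub": agent_id,                        # the agent's identity
        "org": org,                             # deploying organization
        "scope": " ".join(scopes),              # authorized scopes
        "delegation_chain": delegation_chain,   # principals the authority derives from
        "iat": now,
        "exp": now + ttl_seconds,               # time-limited, replacing static credentials
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_agent_token(token):
    """Reject tokens with a bad signature or an expired lifetime."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return token["claims"]["exp"] > time.time()

token = mint_agent_token("agent-42", "example-corp",
                         ["read:customer_record:12345"], ["alice@example-corp"])
print(verify_agent_token(token))  # True for a freshly minted token
```

Any tampering with the claims invalidates the signature, which is what lets a verifier trust the scopes and delegation chain without consulting the issuer on every call.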
Agent Identity Verification in Detail
The concept paper proposes a multi-layered approach to establishing and verifying AI agent identity:
Registration and Provisioning
Before an AI agent can operate within an enterprise environment, it must be registered with the organization's identity provider. Registration captures the agent's purpose, deploying team, authorized systems, maximum autonomy level, and the human principals responsible for its behavior. This registration creates a verifiable identity record that persists throughout the agent's lifecycle.
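The registration record described above can be modeled as a simple structured type. The fields mirror the attributes the paper lists; the field names and the in-memory registry are assumptions for illustration.

```python
from dataclasses import dataclass, asdict

@dataclass
class AgentRegistration:
    """Identity record captured at provisioning time (field names illustrative)."""
    agent_id: str
    purpose: str
    deploying_team: str
    authorized_systems: list
    max_autonomy_level: str        # e.g. "supervised", "semi-autonomous"
    responsible_principals: list   # humans accountable for the agent's behavior

REGISTRY = {}  # in practice, the organization's identity provider

def register_agent(reg: AgentRegistration):
    """Persist the verifiable identity record for the agent's lifecycle."""
    REGISTRY[reg.agent_id] = asdict(reg)
    return REGISTRY[reg.agent_id]

register_agent(AgentRegistration(
    agent_id="agent-42",
    purpose="invoice triage",
    deploying_team="finance-ops",
    authorized_systems=["billing-api"],
    max_autonomy_level="supervised",
    responsible_principals=["alice@example-corp"],
))
```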
Runtime Authentication
At runtime, agents authenticate using short-lived tokens obtained through the OAuth 2.0 client credentials flow or a proposed new agent credentials flow. Each token includes claims that identify not just the agent but its current operational context: what task it is performing, on whose behalf, and under what constraints. Token lifetimes are measured in minutes rather than hours or days, reducing the window of exposure if a token is compromised.
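A runtime token request under this model might look like a standard client credentials request carrying extra contextual claims. The sketch below only builds the form-encoded request body; the `task` and `on_behalf_of` parameters are hypothetical extensions for illustration, not standard OAuth 2.0 parameters.

```python
from urllib.parse import urlencode

def build_agent_token_request(client_id, client_secret, task, on_behalf_of, scopes):
    """Form-encode a client-credentials token request with contextual claims.

    grant_type, client_id, client_secret, and scope are standard OAuth 2.0
    parameters; task and on_behalf_of are hypothetical agent-context extensions.
    """
    params = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),
        "task": task,                 # what the agent is currently doing
        "on_behalf_of": on_behalf_of, # the delegating human principal
    }
    return urlencode(params)

body = build_agent_token_request(
    "agent-42", "s3cret", "invoice-triage", "alice@example-corp",
    ["read:invoice", "write:invoice:draft_only"])
```

The resulting body would be POSTed to the authorization server's token endpoint; because lifetimes are minutes, agents repeat this request frequently rather than caching long-lived credentials.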
Continuous Authorization Evaluation
Unlike traditional authentication where access is granted at login and persists until session expiration, NIST proposes continuous authorization evaluation for AI agents. Authorization decisions are re-evaluated at each significant action, allowing the system to revoke or adjust permissions based on the agent's behavior pattern, the sensitivity of the requested action, or changes in the security posture of the environment.
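Continuous evaluation means the authorization decision becomes a function called per action rather than once at login. A minimal sketch, assuming a simple policy (scope match, anomaly flag, environment posture) that stands in for whatever policy engine an implementation would actually use:

```python
def authorize_action(claims, action, resource, env):
    """Re-evaluate authorization at each significant action.

    The policy checks here are illustrative: a real engine would consult
    scope claims, behavioral signals, and environment security posture.
    """
    required = f"{action}:{resource}"
    if required not in claims["scope"].split():
        return False, "scope"                       # outside granted scopes
    if env.get("agent_anomaly_flag"):
        return False, "behavior"                    # suspend on anomalous pattern
    if env.get("security_posture") == "elevated" and action != "read":
        return False, "posture"                     # tighten during incidents
    return True, "ok"
```

Because the check runs per action, a scope revocation or an anomaly flag takes effect at the agent's very next request instead of waiting for a session to expire.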
Authorization Scope Management
One of the most technically detailed sections of the concept paper addresses how OAuth scopes should be defined and managed for AI agents:
- Fine-grained resource scopes: Rather than broad scopes like "read:documents" or "write:database," agent scopes should be defined at the resource level, such as "read:customer_record:12345" or "write:invoice:draft_only." This limits the blast radius of a compromised agent
- Temporal scopes: Scopes can include time-based constraints, allowing an agent to access a system only during business hours or only for the duration of a specific workflow
- Action-based scopes: Scopes define not just what resources an agent can access but what actions it can perform on those resources. An agent might have permission to read and summarize a document but not to share it externally or delete it
- Escalation protocols: When an agent needs to perform an action outside its current scope, the framework defines a protocol for requesting scope elevation from a human approver or a higher-authority system, with full audit logging of the request and decision
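The fine-grained scopes above can be treated as colon-delimited segments, where a shorter scope covers all of its more specific descendants. A minimal sketch of matching and narrowing under that assumption (the segment convention is inferred from the examples in the paper's scope strings, not specified by it):

```python
def scope_allows(held, requested):
    """True if a held scope covers the requested one.

    A held scope with fewer segments covers more specific requests, so
    'read:customer_record' covers 'read:customer_record:12345' but a
    record-specific scope does not cover the broader one.
    """
    h, r = held.split(":"), requested.split(":")
    return r[:len(h)] == h

def narrow(scopes, allowed_prefixes):
    """Scope narrowing: keep only scopes covered by some allowed prefix."""
    return [s for s in scopes if any(scope_allows(p, s) for p in allowed_prefixes)]

held = ["read:customer_record:12345", "write:invoice:draft_only", "delete:invoice"]
print(narrow(held, ["read:customer_record", "write:invoice"]))
```

Narrowing in this direction enforces the paper's principle that more autonomous agents should end up with tighter, not broader, grants.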
Cross-Enterprise Agent Authentication
The most forward-looking aspect of NIST's proposal addresses how AI agents authenticate when crossing organizational boundaries. As agents increasingly interact with external APIs, partner systems, and other organizations' agents, a standardized trust framework is essential:
- Agent identity federation: Organizations publish agent identity metadata through well-known endpoints, similar to OpenID Connect discovery. Partner organizations can verify an incoming agent's identity and authority by checking its token against the issuing organization's metadata
- Mutual agent authentication: When two agents from different organizations interact, both must authenticate to each other. The paper proposes mutual TLS combined with OAuth token exchange to establish bidirectional trust
- Trust level negotiation: Not all agent interactions require the same level of trust. The framework defines trust levels ranging from anonymous information queries through authenticated data exchange to authorized transactional operations, allowing organizations to gate agent access based on the sensitivity of the interaction
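The federation checks above can be sketched as verifying an incoming token's issuer against published partner metadata. The metadata here is a hard-coded stand-in for what would be fetched from a well-known discovery endpoint; the field names (`issuer`, `agent_id_prefix`, `trust_levels`) are assumptions for illustration.

```python
# Stand-in for metadata fetched from a partner's well-known discovery
# endpoint (field names illustrative, not from the NIST paper).
PARTNER_METADATA = {
    "https://agents.partner.example": {
        "issuer": "https://agents.partner.example",
        "agent_id_prefix": "partner-",
        "trust_levels": ["query", "data_exchange"],  # no transactional access granted
    }
}

def verify_incoming_agent(token_claims, requested_trust_level):
    """Check an external agent against its issuer's published metadata."""
    meta = PARTNER_METADATA.get(token_claims.get("iss"))
    if meta is None:
        return False  # unknown issuer: no federation agreement exists
    if not token_claims.get("sub", "").startswith(meta["agent_id_prefix"]):
        return False  # agent identity inconsistent with issuer's namespace
    # Trust level negotiation: gate access by interaction sensitivity.
    return requested_trust_level in meta["trust_levels"]
```

A signature check against the issuer's published keys would precede these claims checks in practice; it is omitted here to keep the federation logic visible.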
Technical Implementation Considerations
The concept paper acknowledges several implementation challenges that must be addressed as the framework matures:
- Token management at scale: Enterprises deploying thousands of agents, each performing hundreds of actions per hour, will generate enormous token volumes. Authorization servers must handle this load without becoming bottlenecks. The paper suggests token caching strategies and batch authorization approaches for repetitive, low-risk actions
- Backward compatibility: Many existing systems authenticate agents using API keys or basic credentials. A migration path from legacy authentication to OAuth-based agent identity is needed, potentially involving gateway-level token translation
- Multi-model agent architectures: Modern AI agents often compose multiple language models, tools, and sub-agents. The identity framework must account for these internal delegation chains, ensuring that each component in a multi-model pipeline inherits appropriate authorization constraints
- Revocation speed: When an agent is compromised or behaves anomalously, revocation must take effect within seconds across all systems the agent can access. Short-lived tokens help, but real-time revocation lists or push-based revocation mechanisms may also be necessary
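The revocation point combines two mechanisms: short token lifetimes bound the exposure window, and a push-updated revocation set closes it immediately. A minimal sketch, with an in-process set standing in for the distributed, push-replicated cache a real deployment would need:

```python
import time

REVOKED = set()  # stand-in for a push-replicated revocation cache

def revoke(agent_id):
    """Push a revocation; in production this fans out to every resource server."""
    REVOKED.add(agent_id)

def token_valid(claims, now=None):
    """Honor a token only if it is unexpired AND its agent is not revoked."""
    now = now if now is not None else time.time()
    return claims["exp"] > now and claims["sub"] not in REVOKED
```

With five-minute tokens, the revocation set only needs entries for tokens minted in the last five minutes, which keeps the push-replicated state small even at enterprise scale.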
Industry Response and Adoption Outlook
Major technology companies have responded positively to NIST's concept paper. Microsoft has announced plans to integrate agent identity capabilities into Entra ID. Google Cloud is developing agent-specific IAM roles and OAuth flows for Vertex AI agents. Okta and Auth0 are prototyping agent identity management features. The OpenID Foundation has formed a working group to develop an Agent Identity specification building on NIST's proposals.
Enterprise adoption will likely follow a phased approach. Organizations with mature identity infrastructure will implement agent identity within existing OAuth deployments. Organizations still relying on API keys will need to modernize their identity architecture, a process that typically takes 12 to 18 months.
Frequently Asked Questions
Why does NIST propose OAuth 2.0 rather than a new protocol for AI agent identity?
OAuth 2.0 is already the dominant authorization framework across enterprise and cloud environments, with mature tooling, broad library support, and well-understood security properties. Building on OAuth reduces adoption friction and leverages existing infrastructure investments. NIST's extensions add agent-specific capabilities such as delegation chains, continuous authorization, and cross-organizational federation without requiring organizations to deploy an entirely new identity stack.
How does the proposed framework handle agent-to-agent interactions?
When one AI agent delegates a task to another agent, the framework uses OAuth token exchange to create a derived token that carries the original delegation chain. The receiving agent's token includes claims identifying the originating human principal, the delegating agent, and the specific task scope. This maintains full traceability and ensures that no agent in a delegation chain can exceed the authority of the original principal.
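The derived-token step can be sketched as an intersection of scopes plus an appended delegation chain, which is what guarantees no link exceeds the original principal's authority. The claim names and the exchange function are illustrative, modeled loosely on OAuth token exchange rather than quoted from the paper.

```python
def exchange_for_delegate(parent_claims, delegate_agent_id, task_scopes):
    """Derive a token for a sub-agent via token exchange (sketch).

    Granted scopes are the intersection of what the parent holds and what
    the task needs, so authority can only narrow down the chain, and the
    derived token never outlives its parent.
    """
    parent_scopes = set(parent_claims["scope"].split())
    granted = parent_scopes & set(task_scopes)
    return {
        "sub": delegate_agent_id,
        "scope": " ".join(sorted(granted)),
        "delegation_chain": parent_claims.get("delegation_chain", [])
                            + [parent_claims["sub"]],   # full traceability
        "exp": parent_claims["exp"],                    # bounded by the parent token
    }

parent = {"sub": "agent-parent", "scope": "read:doc write:doc",
          "delegation_chain": ["alice@example-corp"], "exp": 1900000000}
child = exchange_for_delegate(parent, "agent-child", ["read:doc", "delete:doc"])
```

Here the child requests `delete:doc` but receives only `read:doc`, because the parent never held delete authority to pass on.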
What happens when an AI agent needs to perform actions across multiple organizations?
The framework proposes federated agent identity, where each organization's identity provider issues tokens for its agents that can be verified by partner organizations. Cross-organizational interactions use mutual authentication and trust level negotiation to establish appropriate access. This is conceptually similar to how SAML and OpenID Connect federation work for human users but adapted for agent-specific authorization patterns.
How quickly can compromised agent credentials be revoked?
The framework relies primarily on short-lived tokens with lifetimes measured in minutes, which limits the exposure window. For immediate revocation, NIST proposes real-time revocation mechanisms including push-based notification to all systems an agent can access. Organizations should also implement behavioral anomaly detection that automatically suspends agent access when unusual patterns are detected, even before a formal revocation decision is made.