Microsoft Agent 365: Governance for AI Agents
Part of: AI Learning Series
Quick Links: Resources for Learning AI | Keep up with AI | List of AI Tools | Local AI
Subscribe to JorgeTechBits newsletter
Disclaimer: I create this content entirely on my own time, and the views expressed here are mine alone (not my employer’s). Because I love leveraging new tech, I use AI tools like Gemini, NotebookLM, Claude, Perplexity and others as a “digital team” to help research and polish these articles so I can share the best possible insights with you!
As organizations move from experimenting with AI assistants to deploying autonomous and semi-autonomous agents, a new challenge emerges: governing what those agents actually do at runtime. This isn't about who built an agent; it's about what it can access, what actions it can take, and whether it's operating safely inside the enterprise. That's the problem Microsoft Agent 365 is designed to solve.
Before we begin, one question comes up repeatedly: Does Agent 365 track token usage, quotas, or model consumption? The short answer is no, and that's by design.
What Agent 365 Actually Is
Microsoft Agent 365, generally available as of May 1, 2026, is an enterprise control plane for AI agents.
- It does not build agents.
- It does not host models.
Instead, it sits above agent frameworks and model platforms, extending Microsoft 365 security, identity, and compliance controls to AI agents. In practical terms, Agent 365 treats agents like first-class enterprise identities—similar to users or applications.
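To make "agents as first-class enterprise identities" concrete, here is a minimal, hypothetical sketch. None of these class or field names come from the actual Entra or Microsoft Graph APIs; they are invented for illustration. The point is what the identity model implies: each agent carries an ID, an accountable owner, a lifecycle state, and an explicit scope list that a least-privilege check consults before any action.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical model of a governed agent identity (illustrative only)."""
    agent_id: str
    owner: str                                 # accountable human or team
    scopes: set = field(default_factory=set)   # least-privilege grants
    enabled: bool = True                       # lifecycle state (onboarded vs. retired)

def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    """Allow an action only if the agent is active and explicitly granted the scope."""
    return agent.enabled and requested_scope in agent.scopes

# Example: an invoicing agent granted read-only mailbox access
invoicer = AgentIdentity("agent-001", "finance-team", {"Mail.Read"})
assert authorize(invoicer, "Mail.Read")       # granted scope -> allowed
assert not authorize(invoicer, "Mail.Send")   # not granted -> denied (least privilege)
```

Retiring the agent (setting `enabled = False`) denies every scope at once, which is the lifecycle-management half of the same model.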
Please see companion blog posts on AI Governance & Orchestration Vendors (May 2026) and ServiceNow AI Control Tower
The Three Pillars: Observe, Govern, Secure
Agent 365 is built around three core capabilities:
| Pillar | What It Does | Key Capabilities | Outcome |
|---|---|---|---|
| Observe | Provides visibility into all AI agents across the environment | Centralized Agent Registry in Microsoft 365 admin center; inventory of known and registered agents; visibility into agent activity and health; discovery of unsanctioned or "shadow AI" agents via Defender and Intune | Ensures agents cannot operate outside IT awareness |
| Govern | Controls how agents operate using enterprise identity and policy frameworks | Entra Agent ID for each agent; least-privilege access enforcement; alignment with Zero Trust principles; full lifecycle management (onboarding to retirement); integration with Entra and Purview for access and compliance | Ensures agents operate within defined permissions and policies |
| Secure | Enforces protection and policy at runtime | Continuous monitoring via Microsoft Defender; detection of anomalous or risky behavior; ability to isolate or sandbox agents (including Windows 365 for Agents); enforcement of data protection and DLP policies via Purview | Moves governance from static policy to real-time enforcement |
Governance is enforced at runtime, not just at approval.
This brings AI governance out of policy documents and into real-time control.
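The shift from approval-time policy to runtime enforcement can be sketched in a few lines. This is purely illustrative: the event fields, thresholds, and decision strings below are invented, not Defender or Purview APIs. It shows the shape of the idea, that each agent action is evaluated against data-protection and anomaly rules at the moment it happens.

```python
# Illustrative sketch only: field names and rules are invented, not product APIs.
BLOCKED_LABELS = {"Highly Confidential"}   # e.g., a DLP-style sensitivity label
MAX_ACTIONS_PER_MINUTE = 60                # e.g., an anomaly threshold

def evaluate_runtime_event(event: dict) -> str:
    """Decide, per action, whether to allow, block, or quarantine an agent."""
    if event["data_label"] in BLOCKED_LABELS:
        return "block"            # data-protection policy enforced at access time
    if event["actions_last_minute"] > MAX_ACTIONS_PER_MINUTE:
        return "quarantine"       # anomalous behavior -> isolate/sandbox the agent
    return "allow"

print(evaluate_runtime_event({"data_label": "General", "actions_last_minute": 5}))  # allow
```

The key design point is that the decision happens per event, continuously, rather than once when the agent was approved.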
What Agent 365 Does Not Do
Agent 365 does not manage LLM token usage.
It does not:
- Count prompt or completion tokens
- Enforce token budgets or quotas
- Provide token-level cost reporting
- Rate-limit model usage based on tokens
This is not a missing feature—it’s a deliberate architectural boundary.
Token Tracking and Governance
Token usage is handled by the model platform, not the agent governance layer.
For example:
- Azure OpenAI / Azure AI Foundry: tokens, quotas, TPM/RPM limits, cost tracking
- Copilot Studio: capacity-based consumption models
- Third-party providers: token accounting and billing
Agent 365 can observe that an agent invoked a model—but it does not meter that usage.
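To show where token accounting actually lives, here is a small sketch of platform-side metering. OpenAI-compatible chat-completion responses (including Azure OpenAI) return a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens`; the sample payloads below are fabricated for illustration, but the shape matches that standard response. Aggregating these per deployment or per caller is the model platform's job, not Agent 365's.

```python
# Fabricated sample payloads; the "usage" shape matches OpenAI-compatible
# chat-completion responses, which is where token counts are reported.
sample_responses = [
    {"usage": {"prompt_tokens": 120, "completion_tokens": 80, "total_tokens": 200}},
    {"usage": {"prompt_tokens": 300, "completion_tokens": 150, "total_tokens": 450}},
]

def total_tokens(responses) -> int:
    """Sum total token consumption across model calls (platform-side metering)."""
    return sum(r["usage"]["total_tokens"] for r in responses)

print(total_tokens(sample_responses))  # 650
```

A quota or budget check built on this sum belongs to the same layer, which is exactly the boundary the article describes.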
Why This Separation Matters
This design follows familiar enterprise patterns:
- Entra ID governs identity—not compute usage
- Azure Resource Manager governs infrastructure—not user behavior
- Agent 365 governs agent behavior—not model economics
Separating these concerns delivers three advantages:
- Consistent governance across models and clouds
- Clear ownership between security teams and platform teams
- Independent evolution of model platforms without breaking governance
Put simply:
Agent 365 answers "who" and "what." Model platforms answer "how much."
How to Explain It Simply to Customers
Microsoft Agent 365 governs who an AI agent is, what it can do, what data it can access, and whether it is operating safely. Token usage, quotas, and costs are managed by the model platforms the agent uses.
- Agent 365 delivers enforceable governance for AI agents
- It manages identity, access, lifecycle, and runtime security
- It intentionally does not track tokens or quotas
- Token management belongs to Azure OpenAI, AI Foundry, or other providers
- This separation is what makes enterprise-scale AI governance workable
Resources:
- Please see companion blog posts on AI Governance & Orchestration Vendors (May 2026) and ServiceNow AI Control Tower
- Microsoft Mechanics YouTube Video: How Microsoft Agent 365 works
- Discover, manage, and secure AI agents with Microsoft Agent 365 by Vasu Jakkal at RSAC 2026
- Agent 365 | Your Security & Compliance Controls

