Managing IT Services in the AI Era
Part of: AI Learning Series
Disclaimer: I create this content entirely on my own time, and the views expressed here are mine alone (not my employer’s). Because I love leveraging new tech, I use AI tools like Gemini, NotebookLM, Claude, Perplexity and others as a “digital team” to help research and polish these articles so I can share the best possible insights with you!
Have questions, ideas to share, or just want to connect? I’d love to hear from you! Check out my About Page to learn more about me or connect with me.
You’ve probably sat through the pitch. Every IT services vendor you work with — or are evaluating — now leads with AI. Faster delivery. Smarter automation. Outcome-based everything.
Some of it is real. A lot of it is repackaging. And as the person responsible for making these relationships actually work, the pressure is on you to tell the difference.
Here’s a practical framework for evaluating, managing, and renegotiating your IT services relationships as AI fundamentally changes what good looks like.
The Baseline Has Shifted — Has Your Contract?
For years, IT services contracts were built around inputs: headcount, hours, ticket volume, response times. Those metrics made sense when human labor was the primary delivery engine. They make much less sense when your vendor is using AI to do in minutes what used to take days.
The uncomfortable question: if your vendor’s delivery costs have dropped significantly because of AI-driven automation, is that efficiency showing up in your pricing — or their margin?
This isn’t cynical. It’s the right question to ask. The shift toward outcome-based pricing is real, but it’s uneven. Many vendors are still billing on legacy models while quietly absorbing the productivity gains AI provides. A frank conversation about how their cost structure has changed, and how that should be reflected in your agreement, is not just fair — it’s overdue.
What to ask your vendor:
- What percentage of delivery is now automated vs. human-handled?
- How has your cost-to-serve changed in the last 18 months?
- Can we move toward pricing tied to outcomes rather than effort?
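To make that conversation concrete, it can help to model the gap between what you pay under an hours-based contract and what the vendor's cost-to-serve looks like after automation. The figures below are purely illustrative assumptions, not drawn from any real contract:

```python
# Hypothetical comparison: effort-based vs. outcome-based billing.
# All rates and hours are illustrative assumptions, not real contract data.

def cost_per_incident_effort(hourly_rate: float, hours_per_incident: float) -> float:
    """Effort-based: you pay for hours, however they were spent."""
    return hourly_rate * hours_per_incident

def cost_per_incident_outcome(flat_fee_per_incident: float) -> float:
    """Outcome-based: you pay per resolved incident, however it was resolved."""
    return flat_fee_per_incident

# Before automation: 2 hours per incident at a $150/hr blended rate.
before = cost_per_incident_effort(150.0, 2.0)

# After AI-driven automation the vendor resolves the same incident in
# 15 minutes -- but under a legacy hours-based model, your price may not move.
after_vendor_cost = cost_per_incident_effort(150.0, 0.25)

print(f"Your price per incident (legacy model): ${before:.2f}")
print(f"Vendor's cost-to-serve after automation: ${after_vendor_cost:.2f}")
print(f"Efficiency captured as vendor margin:    ${before - after_vendor_cost:.2f}")
```

Even as a back-of-the-envelope model, this frames the renegotiation: the difference between those last two numbers is what an outcome-based price should be splitting between you and the vendor.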
AIOps Is a Capability, Not a Feature — Evaluate It Accordingly
Most vendors will tell you they have AIOps capabilities. Few can articulate what that actually means for your environment specifically.
AIOps done well means your vendor’s systems are continuously ingesting data across your infrastructure, identifying anomalies before they become incidents, and either resolving them automatically or escalating with context — not just an alert. The result is fewer surprises, less unplanned downtime, and IT teams that spend more time on architecture than triage.
AIOps done poorly — or worse, marketed but not delivered — means you’re still in the alert-response cycle, just with a more expensive contract.
What good looks like:
- Predictive incident resolution with documented track record, not just capability claims
- Clear data governance: who owns the data your vendor ingests, how is it stored, who can access it?
- Transparency into what the system is doing and why — black box automation is a governance risk, not a feature
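The difference between "an alert" and "an escalation with context" is worth pinning down. A minimal sketch of the idea, using a simple rolling z-score for anomaly detection (the threshold, window size, and escalation fields here are assumptions for illustration, not any specific AIOps product's behavior):

```python
from statistics import mean, stdev

# Illustrative sketch: flag anomalies against a rolling baseline and
# escalate with context, not just a threshold breach.

def detect_anomaly(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag `latest` if it deviates more than z_threshold sigmas from history."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

def escalate_with_context(metric: str, latest: float, history: list[float]) -> dict:
    """A useful escalation carries baseline, trend, and a starting hypothesis."""
    return {
        "metric": metric,
        "observed": latest,
        "baseline_mean": round(mean(history), 2),
        "recent_trend": history[-5:],  # what led up to the spike
        "suggested_action": "check recent deploys and upstream dependencies",
    }

cpu_history = [41.0, 43.0, 40.0, 42.0, 44.0, 41.5, 42.5, 43.5]
latest = 97.0

if detect_anomaly(cpu_history, latest):
    print(escalate_with_context("cpu_percent", latest, cpu_history))
```

When you ask a vendor to demonstrate AIOps, this is the shape of output to ask for: not "CPU high," but "CPU at 97% against a 42% baseline, here's the trend, here's where to look first."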
Your Vendors Deploy AI Agents — Govern Them Like It
This is the area where most IT leaders are furthest behind their vendors, and it’s the one that carries the most risk.
AI agents — systems that act autonomously on behalf of users or processes — are increasingly part of what your vendors deploy in your environment. They handle service desk requests, trigger workflows, access systems, and make decisions at machine speed. They are also a new and largely ungoverned attack surface.
Prompt injection, identity misuse, and autonomous actions with unintended consequences are not theoretical. They are happening in production environments right now. If your vendor agreement doesn’t address how their AI agents are governed, verified, and monitored within your environment, that gap needs to close.
What to build into vendor agreements:
- Clear definitions of what AI agents can and cannot do within your systems
- Logging and auditability requirements for all autonomous actions
- Incident response protocols specifically covering AI agent failures or misuse
- Regular security reviews that explicitly cover agentic systems
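The first two items on that list can be sketched in a few lines: an explicit allowlist of agent actions, and an audit entry for every attempt, allowed or not. The action names and policy shape below are assumptions for the example, not a real vendor's API:

```python
import json
import time

# Illustrative sketch: gate autonomous agent actions behind an allowlist
# and log every attempt. Action names are hypothetical.

ALLOWED_ACTIONS = {"reset_password", "restart_service", "close_ticket"}

audit_log: list[dict] = []

def execute_agent_action(agent_id: str, action: str, target: str) -> bool:
    """Refuse anything outside the allowlist; log every attempt either way."""
    allowed = action in ALLOWED_ACTIONS
    audit_log.append({
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "allowed": allowed,
    })
    if not allowed:
        # Unknown or denied actions escalate to a human -- never silently run.
        return False
    # ... perform the permitted action here ...
    return True

execute_agent_action("svc-desk-agent-01", "reset_password", "user:jdoe")
execute_agent_action("svc-desk-agent-01", "delete_user", "user:jdoe")

print(json.dumps(audit_log, indent=2))
```

The point of the sketch is the contract language it implies: a default-deny action policy and a complete, tamper-evident log of attempts — including the denied ones, which are often the most interesting entries in a security review.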
The Integration Promise: What’s Real, What to Watch
One of the most compelling vendor claims right now is dramatically faster integration timelines — months compressed to days through AI-driven automation. In many cases this is genuine. AI can map data schemas, configure APIs, and stand up environments faster than manual processes ever could.
But the caveat matters: legacy systems, compliance requirements, and complex edge cases still require experienced human judgment. Vendors who promise fully automated integration without acknowledging this are either oversimplifying or underselling the work ahead.
The smarter evaluation lens:
- Ask for specific examples of integrations in environments similar to yours — not a general capability claim
- Understand where human oversight is still required in their process
- Make sure SLAs account for the complexity of your specific environment, not an idealized one
Your Team Is Part of the Equation — Don’t Let Vendors Ignore That
The most overlooked dimension of AI-era vendor management is the human impact inside your own organization. When a vendor deploys new automation, changes a workflow, or introduces an AI assistant your team interacts with daily, adoption and experience directly affect whether you see the promised ROI.
Poor AI experiences — confusing interfaces, opaque decisions, workflows that don’t match how your team actually works — create friction, mistrust, and workarounds that quietly erode productivity. This is measurable, even if most vendor agreements don’t measure it.
Build experience accountability into vendor relationships:
- Define what successful adoption looks like, not just successful deployment
- Include user experience quality as part of your vendor review criteria
- Require vendors to have a change management component, not just a technical delivery plan
The New Scorecard: What to Measure in 2026
If you’re still evaluating your IT services vendors primarily on ticket resolution time and uptime SLAs, you’re measuring the last era. Those metrics still matter, but they’re table stakes.
The vendors worth investing in should be able to speak to:
- Automation coverage — what percentage of routine work is handled without human intervention, and is that number growing?
- Model accuracy and drift — for AI-driven systems, how is performance monitored over time and how are degradations caught?
- Cost per outcome — not just total contract value, but what you’re paying per resolved incident, per deployment, per productivity gain
- Security posture for AI systems — specific, not general
- Workforce impact — are your people more capable and less frustrated than they were 12 months ago?
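Model accuracy and drift, in particular, is a metric you can ask vendors to report in a form you can verify. A minimal sketch of the kind of check that should exist somewhere in their pipeline — the tolerance and the accuracy figures are illustrative assumptions:

```python
# Illustrative drift check: compare recent automation accuracy against a
# baseline window and flag degradation beyond a tolerance. All numbers
# are hypothetical.

def drift_detected(baseline_acc: float, recent_acc: float, tolerance: float = 0.05) -> bool:
    """Flag when recent accuracy drops more than `tolerance` below baseline."""
    return (baseline_acc - recent_acc) > tolerance

# Monthly accuracy of a vendor's auto-resolution model (hypothetical).
monthly_accuracy = [0.94, 0.93, 0.94, 0.92, 0.88, 0.85]

baseline = sum(monthly_accuracy[:3]) / 3   # first three months as baseline
recent = sum(monthly_accuracy[-3:]) / 3    # most recent three months

if drift_detected(baseline, recent):
    print(f"Drift: baseline {baseline:.1%} -> recent {recent:.1%}")
```

A vendor who monitors drift seriously will have exactly this kind of trend data on hand. If the answer to "how do you catch degradation?" is a shrug or a dashboard screenshot with no history, that tells you where they are on the maturity curve.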
The Bottom Line for IT Leaders
The AI era doesn’t make vendor management simpler — it makes it more consequential. The gap between a vendor who is genuinely AI-mature and one who is marketing AI maturity is significant, and it’s your job to close that information gap before it shows up as a budget problem or a security incident.
The good news: the right questions are knowable, the right metrics are definable, and the vendors who are doing this well will welcome the scrutiny. The ones who don’t are telling you something important.

