Pillar 01 · ITSM AI Readiness
Strategy & value alignment before the licence renewal arrives.
The five questions every IT leader should answer to make sure AI investment is solving real problems, not creating dashboard activity that nobody can connect to outcomes.
Last updated May 2026 · Audience: CIOs, IT directors, service desk leads · Reading time: ~10 min
The hardest questions about AI in ITSM aren't technical. They're strategic. Most organisations buy AI features as part of a broader platform commitment, then struggle to articulate what success looks like once the features are switched on. By the time the licence renewal arrives, the conversation is about whether to keep paying for something nobody quite measured.
This page covers the five questions that determine whether AI investment in ITSM produces value or simply produces output. Each has an honest answer that vendors won't volunteer and consultancies usually obscure. Read them before the next planning cycle, not after.
1 Business Problems
What specific business problems should AI in ITSM actually solve?
Short answer: AI is well-matched to four specific problems: agent time on repetitive low-value tasks, knowledge that fails to scale across the team, slow first-contact resolution, and inconsistent handling between agents. It is poorly matched to unclear processes, organisational silos, or strategic uncertainty. Those need leadership decisions, not AI.
The framing trap is treating AI as a general productivity tool. It isn't. Modern ITSM AI is good at specific repeatable patterns in well-bounded data. It is bad at ambiguity, judgement calls, and anything that requires understanding context the data doesn't contain.
The four problems where AI delivers genuine value:
Repetitive task volume
Password resets, access requests, status updates, ticket triage. AI can summarise, suggest, and pre-fill, reducing the cognitive cost of high-volume low-value work.
Knowledge that doesn't scale
Senior agents solve the same problems weekly. Their knowledge stays in their heads. AI surfaces past resolutions to other agents at the moment they're needed.
Slow first-contact resolution
Tickets that bounce between teams because the right answer wasn't surfaced first. AI suggestions, when accurate, reduce that bouncing, but only when the underlying data supports them.
The fourth problem, inconsistent handling between agents, is where AI delivers its most underrated value. New starters benefit from suggestion-class AI more than experienced agents do, because suggestions encode patterns that experienced agents already know. Onboarding speed and quality consistency are realistic, measurable wins.
The problems AI cannot solve, despite vendor claims:
- Unclear processes. If your incident process is undefined, AI will accelerate undefined work. The output is faster confusion.
- Organisational silos. If teams don't share information today, AI in one team's tool won't bridge to another's. The data still doesn't flow.
- Strategic ambiguity. "Should we adopt AI" isn't a question AI can answer. It's a leadership decision about what kind of operation you want to run.
- Cultural problems. If agents don't write good resolution notes today, they won't write better ones because AI is reading them. The behaviour change has to come first.
Decision frame
Before procuring AI, write down the three specific business problems you want it to solve. Be concrete: "reduce average handling time on password resets by 30%" not "improve service desk efficiency." If you can't write the three sentences, you're not ready to buy.
2 ROI
How do we calculate ROI for AI in ITSM?
Short answer: By comparing licence cost plus implementation effort against measurable outcomes: agent hours saved (with realistic acceptance rates of 20-40%, not 100%), deflection volume measured against new KB content, FCR improvement on AI-active tickets, and reduction in escalations. Vendor ROI calculators consistently overstate by 3-5x because they assume universal acceptance.
The ROI calculations vendors provide in sales conversations almost universally assume that every AI suggestion is acted on, every deflection avoids a full ticket, and every summary saves the full read time. None of those is true in practice.
A realistic ITSM AI ROI calculation looks like this:
| Vendor assumption | Realistic baseline |
| --- | --- |
| Suggestion acceptance: 100% | 20-40% in mature deployments. Below 15% signals a data quality problem. |
| Deflection equals full ticket avoided | 30-50% of "deflected" interactions return as tickets within a week. |
| Summarisation saves 5+ minutes per ticket | 1-2 minutes for routine tickets. Up to 8 for complex histories. |
| Implementation effort: minimal | 40-120 hours of data prep, change management, training, and measurement setup. |
Apply realistic baselines and the ROI maths still works for most teams, but the payback period typically runs 9-18 months rather than the 3-6 months vendors quote. That difference matters at budget sign-off, especially when finance teams check the numbers against actual outcomes 12 months later.
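The arithmetic behind a realistic payback estimate is simple enough to sketch. Every figure below is an illustrative placeholder, not a benchmark; substitute your own volumes, rates, and costs before drawing conclusions.

```python
# Illustrative payback sketch using realistic baselines rather than
# vendor assumptions. All input figures are placeholders -- replace
# them with your own ticket volumes, rates, and costs.

monthly_tickets = 6000
acceptance_rate = 0.30           # realistic: 20-40%, not the vendor's 100%
minutes_saved_per_accept = 3.0   # blended: 1-2 min routine, more for complex histories
agent_cost_per_hour = 35.0       # fully loaded agent cost

annual_licence = 24_000.0
implementation_hours = 80        # within the realistic 40-120 hour range
senior_cost_per_hour = 60.0

# Annual gross saving counts only accepted suggestions
hours_saved_per_year = monthly_tickets * 12 * acceptance_rate * minutes_saved_per_accept / 60
annual_saving = hours_saved_per_year * agent_cost_per_hour

one_off_cost = implementation_hours * senior_cost_per_hour
payback_months = 12 * (one_off_cost + annual_licence) / annual_saving

print(f"hours saved per year: {hours_saved_per_year:.0f}")
print(f"annual gross saving:  {annual_saving:,.0f}")
print(f"payback period:       {payback_months:.1f} months")
```

With these placeholder inputs the payback lands just over nine months; halve the acceptance rate and it roughly doubles, which is why acceptance is the first number to verify.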
Three additional cost categories that vendor ROI calculators reliably ignore:
- Data preparation. The 20-60 hours of triage, categorisation, and resolution-note enforcement that determines whether AI works at all. Hidden in vendor proposals as "your existing data is fine."
- Change management. Agents need training. Managers need new dashboards. Quality assurance processes need updating. Conservative estimate: 40 hours of senior time per 25 agents.
- Ongoing measurement. ROI cannot be claimed without measurement infrastructure. Building that infrastructure (baseline data, control groups, attribution logic) is itself a project. Most teams skip it and rely on vendor dashboards, which means they never know the actual return.
The single most useful ROI signal
Suggestion acceptance rate, measured over 60 days post-launch. Above 30% means the data is supporting the AI and value will follow. Below 15% means the AI is generating output that agents are filtering out, which costs attention without producing value. Measure this before extrapolating any other ROI claims.
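Measured over a fixed window, the signal reduces to a single ratio. A minimal sketch of the thresholds above (the function name and input counts are illustrative assumptions):

```python
def acceptance_signal(suggestions_shown: int, suggestions_accepted: int) -> str:
    """Classify a 60-day suggestion acceptance rate against the
    thresholds described above (>30% healthy, <15% failing)."""
    if suggestions_shown == 0:
        return "no data"
    rate = suggestions_accepted / suggestions_shown
    if rate > 0.30:
        return "healthy: data is supporting the AI"
    if rate < 0.15:
        return "failing: agents are filtering out the output"
    return "marginal: investigate data quality before expanding"

# Example: 420 of 1200 suggestions accepted over the window (35%)
print(acceptance_signal(1200, 420))
```

The point of encoding the thresholds is to make the Day 60 conversation a reading, not a debate.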
3 Quick Wins
What are the right "quick win" AI use cases to pilot first?
Short answer: Ticket summarisation, suggested replies, and knowledge surfacing. All three are agent-experience features that work within existing workflows, produce measurable outcomes within 30 days, and don't require governance changes before launch. Avoid leading with autonomous agents, AI-driven routing, or auto-resolution.
Quick wins matter because they create the political capital needed for the harder, higher-value AI work later. A failed first pilot kills the programme. A successful first pilot funds the second.
The three pilots that consistently work as first deployments:
Ticket summarisation
Generates a 3-line summary of long ticket histories. Saves real time on escalations. Failure mode is benign: agents read the original if the summary is wrong. Easiest to instrument and measure.
Suggested replies
Pre-fills draft responses based on similar past tickets. Acceptance rate is the cleanest signal of data quality. Agents stay in control of what gets sent.
Knowledge surfacing
Surfaces relevant KB articles based on ticket content. Drives KB use up; lets you measure which articles work and which don't. Compounding value.
What to avoid in the first pilot, regardless of vendor pitch:
- Autonomous agents that take actions without human review. The governance overhead exceeds the operational benefit until trust is established.
- AI-driven routing at the start. Routing failures produce visible, auditable consequences (SLA breaches, customer complaints). Wait until acceptance is measured on lower-risk features.
- Auto-resolution. The combination of AI confidence and irreversible action is the single highest-risk pattern in ITSM AI. Save it for after maturity.
- Multi-team rollout. Pilot with one team, one shift, one ticket type. Expand on evidence, not enthusiasm.
The 30-60-90 framework
Day 30: baseline measurements complete, pilot enabled in one team. Day 60: acceptance rates measured, governance gaps identified, decision to expand or pause. Day 90: scale to a second team if Day 60 signals were positive, or revisit data quality if they weren't. Teams that skip the Day 60 review almost always over-extend.
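The Day 60 gate can be written down explicitly so nobody argues it after the fact. A sketch, assuming the acceptance thresholds from the ROI section; the inputs and wording are illustrative, not a prescribed rubric:

```python
def day60_decision(acceptance_rate: float,
                   governance_gaps: int,
                   baseline_complete: bool) -> str:
    """Sketch of the Day 60 gate: expand only on positive evidence."""
    if not baseline_complete:
        return "pause: finish baseline measurement first"
    if acceptance_rate < 0.15:
        return "pause: revisit data quality"
    if governance_gaps > 0:
        return "hold: close governance gaps before scaling"
    if acceptance_rate >= 0.30:
        return "expand: scale to a second team"
    return "continue: run another 30 days in the pilot team"

print(day60_decision(acceptance_rate=0.33, governance_gaps=0, baseline_complete=True))
```

Agreeing on the rule at Day 0 is what makes "expand on evidence, not enthusiasm" enforceable.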
4 Maturity Alignment
How do we align our AI strategy with our ITSM maturity?
Short answer: Match AI ambition to operational maturity. Teams without consistent categorisation, knowledge management discipline, or stable processes will struggle to extract value from AI features regardless of vendor. The honest sequence is: stabilise core practices first, enable AI on top, expand scope as the team matures. Skipping ahead produces output, not outcomes.
ITSM maturity isn't an abstract concept here; it's the specific set of operational disciplines that determine whether AI features have anything useful to work with. The three that matter most for AI readiness:
- Categorisation discipline. Tickets land in consistent categories. Generic catch-all categories ("Other," "General") are below 15% of volume. Without this, AI clustering produces incoherent groups.
- Knowledge management as a habit, not a project. KB articles are written when issues are resolved, not in a quarterly cleanup. Without this, AI knowledge surfacing has nothing to surface.
- Resolution practice. Tickets close with substantive notes describing what was actually done. Without this, suggested-reply features have no past resolutions to learn from.
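The categorisation check in the first discipline above can be run directly against a ticket export. A sketch; the column name and catch-all labels are assumptions to adjust for your tool's schema:

```python
import csv
from collections import Counter

# Assumed catch-all labels -- adjust to match your categorisation scheme
CATCH_ALL = {"other", "general", "misc", "uncategorised"}

def catch_all_share(rows, category_field="category"):
    """Fraction of tickets landing in generic catch-all categories.
    The text above suggests keeping this below 15% of volume."""
    counts = Counter((r.get(category_field) or "").strip().lower() for r in rows)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(n for cat, n in counts.items() if cat in CATCH_ALL) / total

# Usage against an exported CSV (filename is illustrative):
# with open("tickets.csv", newline="") as f:
#     share = catch_all_share(csv.DictReader(f))
#     print(f"catch-all share: {share:.0%} (target: below 15%)")
```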
An honest maturity assessment maps to AI readiness as follows:
| Maturity signal | AI strategy implication |
| --- | --- |
| Inconsistent categorisation, sparse KB, generic resolution notes | Stabilise basics first. AI features will underperform regardless of vendor. |
| Reasonable categorisation, basic KB, mixed resolution notes | Pilot summarisation and knowledge surfacing. Expect modest acceptance rates initially. |
| Strong categorisation, maintained KB, consistent resolution notes | Full agent-assist deployment is appropriate. Begin measuring uplift on FCR. |
| All of the above plus mature change and problem management | Predictive features (impact analysis, change risk) become viable. Most teams aren't here. |
Most organisations sit in the middle two rows. The rare ones in the top row should be honest about it: buying AI before fixing the basics produces 18 months of vendor calls explaining why the features aren't delivering. The fix is upstream, not in a different vendor.
A useful test
Ask three agents to resolve the same hypothetical ticket. If their resolution notes look meaningfully different in structure, length, and detail, your AI features will struggle. Standardisation of practice precedes standardisation of AI output.
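One crude way to quantify that test: compare the spread of word counts across the three agents' notes. This proxy (coefficient of variation on note length) is an assumption for illustration, not a standard metric, and the sample notes below are invented; it only flags gross divergence, not semantic differences.

```python
from statistics import mean, stdev

def note_consistency(notes: list[str]) -> float:
    """Coefficient of variation of resolution-note word counts.
    High values (roughly > 0.5) suggest agents document the same
    work very differently -- a warning sign for AI features."""
    lengths = [len(n.split()) for n in notes]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return stdev(lengths) / mean(lengths)

# Invented sample: three agents resolving the same hypothetical ticket
notes = [
    "Reset AD password, verified login, closed with user confirmation.",
    "done",
    "Followed KB article: cleared cached credentials, reset password, confirmed access restored.",
]
print(f"note-length CV: {note_consistency(notes):.2f}")
```

A high score here is a prompt for a notes template, not a reason to buy a different AI feature.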
5 Agent Wellbeing
Can AI in ITSM actually reduce agent burnout, or does it create new problems?
Short answer: Both, depending on framing. AI reduces burnout when it removes repetitive tasks agents already disliked. AI creates new burnout when it surfaces low-quality suggestions agents must constantly ignore, when it adds review burden, or when leadership treats it as a headcount-reduction tool rather than a capacity multiplier. The framing matters as much as the technology.
The vendor narrative, "AI removes drudgery so your agents can focus on meaningful work", is half true. The full picture includes failure modes that don't appear in the sales deck.
AI reduces burnout when:
- It removes work agents already wanted off their plate. Repeated password resets, status update emails, copy-paste responses to common queries. Agents welcome the relief.
- Suggestion quality is high enough to trust. Above 30% acceptance rate means agents stop second-guessing every suggestion. Below that, they're triple-checking, which adds work.
- New starters get earlier autonomy. AI suggestions encode tribal knowledge. Junior agents reach competent independent handling weeks earlier.
- Leadership frames AI as capacity expansion. "We can take on the customer satisfaction project we never had time for" produces engagement. "We can reduce headcount" produces resistance and quiet sabotage.
AI creates new burnout when:
- Suggestions are wrong often enough to require constant filtering. Below 15% acceptance, the cognitive cost of evaluating every suggestion exceeds the benefit. Agents experience this as additional work, not less.
- Review responsibility increases without recognition. "You also need to review the AI's outputs" lands as additional unpaid work if framed as a duty rather than a productivity feature.
- Metrics shift to AI-friendly ones without context. "Tickets per agent" goes up because AI is doing the easy ones; the human-handled tickets are now disproportionately complex, but compensation and recognition don't reflect this.
- Headcount reduction is the explicit goal. Agents will undermine the AI to protect colleagues, often without realising they're doing it. Adoption collapses.
The most damaging pattern: quietly rewriting the team's job description without saying so. "AI handles the easy tickets, humans handle the complex ones" sounds reasonable until you realise the easy tickets were the breaks in the day. Agents handling complex tickets back-to-back-to-back burn out faster than agents handling a mix.
A useful question for leadership
Does your AI investment plan describe success in terms of agent capacity, or agent reduction? The first produces sustainable value. The second produces a 12-month adoption cycle ending in rollback.
Ground your AI strategy in actual data.
Run our free assessment. Upload a CSV export from your ITSM tool (Halo, ServiceNow, Freshservice, Zendesk, TOPdesk, or any other tool that exports CSV). We score it across categorisation, resolution quality, completeness, and noise. The whole thing runs in your browser. Nothing leaves your machine.
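For a rough feel before uploading anything, the same four dimensions can be approximated locally. A sketch only: the column names, thresholds, and duplicate-subject proxy here are assumptions, not the assessment's actual rubric.

```python
import csv  # pair with csv.DictReader as shown in the usage comment below

def quick_scores(rows: list[dict]) -> dict:
    """Rough local approximation of four assessment dimensions.
    Assumed columns: category, resolution_notes, priority, subject."""
    total = len(rows) or 1
    catch_all = sum(1 for r in rows
                    if (r.get("category") or "").strip().lower() in {"other", "general", ""})
    thin_notes = sum(1 for r in rows
                     if len((r.get("resolution_notes") or "").split()) < 5)
    missing = sum(1 for r in rows
                  if not (r.get("priority") or "").strip())
    dupes = total - len({(r.get("subject") or "").strip().lower() for r in rows})
    return {
        "categorisation": 1 - catch_all / total,       # share with a real category
        "resolution_quality": 1 - thin_notes / total,  # share with substantive notes
        "completeness": 1 - missing / total,           # share with priority set
        "noise": dupes / total,                        # crude duplicate-subject proxy
    }

# Usage against an exported CSV (filename is illustrative):
# with open("tickets.csv", newline="") as f:
#     print(quick_scores(list(csv.DictReader(f))))
```

Scores near 1.0 on the first three dimensions (and near 0 on noise) suggest the data can support AI features; low scores point at the upstream fixes discussed in the maturity section.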