Pillar 01 · ITSM AI Readiness

Strategy & value alignment before the licence renewal arrives.

The five questions every IT leader should answer to make sure AI investment is solving real problems, not creating dashboard activity that nobody can connect to outcomes.

Last updated May 2026 · Audience CIOs, IT directors, service desk leads · Reading time ~10 min

The hardest questions about AI in ITSM aren't technical. They're strategic. Most organisations buy AI features as part of a broader platform commitment, then struggle to articulate what success looks like once the features are switched on. By the time the licence renewal arrives, the conversation is about whether to keep paying for something nobody quite measured.

This page covers the five questions that determine whether AI investment in ITSM produces value or simply produces output. Each has an honest answer that vendors won't volunteer and consultancies usually obscure. Read them before the next planning cycle, not after.

1 Business Problems

What specific business problems should AI in ITSM actually solve?

Short answer: AI is well-matched to four specific problems: agent time on repetitive low-value tasks, knowledge that fails to scale across the team, slow first-contact resolution, and inconsistent handling between agents. It is poorly matched to unclear processes, organisational silos, or strategic uncertainty. Those need leadership decisions, not AI.

The framing trap is treating AI as a general productivity tool. It isn't. Modern ITSM AI is good at specific repeatable patterns in well-bounded data. It is bad at ambiguity, judgement calls, and anything that requires understanding context the data doesn't contain.

The four problems where AI delivers genuine value:

Repetitive task volume

Password resets, access requests, status updates, ticket triage. AI can summarise, suggest, and pre-fill, reducing the cognitive cost of high-volume low-value work.

Knowledge that doesn't scale

Senior agents solve the same problems weekly. Their knowledge stays in their heads. AI surfaces past resolutions to other agents at the moment they're needed.

Slow first-contact resolution

Tickets that bounce between teams because the right answer wasn't surfaced first. AI suggestions, when accurate, reduce that bouncing, but only when the underlying data supports them.

The fourth problem, inconsistent handling between agents, is where AI delivers its most underrated value. New starters benefit from suggestion-class AI more than experienced agents do, because suggestions encode patterns that experienced agents already know. Onboarding speed and quality consistency are realistic, measurable wins.

The problems AI cannot solve, despite vendor claims: unclear or undocumented processes, organisational silos, and strategic uncertainty. These need leadership decisions and process work before any AI feature can help.

Decision frame: Before procuring AI, write down the three specific business problems you want it to solve. Be concrete: "reduce average handling time on password resets by 30%", not "improve service desk efficiency". If you can't write those three sentences, you're not ready to buy.

2 ROI

How do we calculate ROI for AI in ITSM?

Short answer: By comparing licence cost plus implementation effort against measurable outcomes: agent hours saved (with realistic acceptance rates of 20-40%, not 100%), deflection volume net of new KB content, FCR improvement on AI-active tickets, and reduction in escalations. Vendor ROI calculators consistently overstate by 3-5x because they assume universal acceptance.

The ROI calculations vendors provide in sales conversations almost universally assume that every AI suggestion is acted on, every deflection avoids a full ticket, and every summary saves the full read time. None of those is true in practice.

A realistic ITSM AI ROI calculation looks like this:

| Vendor assumption | Realistic baseline |
| --- | --- |
| Suggestion acceptance: 100% | 20-40% in mature deployments. Below 15% means the data quality is wrong. |
| Deflection equals a full ticket avoided | 30-50% of "deflected" interactions return as tickets within a week. |
| Summarisation saves 5+ minutes per ticket | 1-2 minutes for routine tickets. Up to 8 for complex history. |
| Implementation effort: minimal | 40-120 hours of data prep, change management, training, and measurement setup. |

Apply realistic baselines and the ROI math still works for most teams, but the payback period typically runs 9-18 months rather than the 3-6 months vendors quote. That difference matters at budget sign-off, especially when finance teams check the numbers against actual outcomes 12 months later.
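The gap between the two sets of assumptions is easy to sketch. A minimal model of the calculation above, where every input figure (licence cost, ticket volume, hourly rate, implementation hours) is illustrative, not a benchmark:

```python
# Hypothetical ROI sketch contrasting vendor-style assumptions with the
# realistic baselines from the table above. All numbers are illustrative.

def annual_roi(licence_cost, implementation_hours, hourly_rate,
               tickets_per_year, minutes_saved_per_ticket, acceptance_rate):
    """Return (net_annual_benefit, payback_months) under the given assumptions."""
    total_cost = licence_cost + implementation_hours * hourly_rate
    # Only accepted suggestions save time; vendor calculators set this to 1.0.
    hours_saved = tickets_per_year * acceptance_rate * minutes_saved_per_ticket / 60
    annual_benefit = hours_saved * hourly_rate
    payback_months = (12 * total_cost / annual_benefit
                      if annual_benefit > 0 else float("inf"))
    return annual_benefit - total_cost, payback_months

# Same deployment, two sets of assumptions (all figures hypothetical).
vendor = annual_roi(24000, 10, 45, 60000, 5, 1.0)     # minimal effort, 100% acceptance
realistic = annual_roi(24000, 80, 45, 60000, 2, 0.35)  # 80h prep, 35% acceptance
```

Under the vendor's inputs the payback lands comfortably inside a quarter; under the realistic inputs the same deployment still clears positive ROI, but the payback stretches toward a year, which is exactly the 3-6 month versus 9-18 month gap described above.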

Three additional cost categories that vendor ROI calculators reliably ignore:

The single most useful ROI signal: Suggestion acceptance rate, measured over 60 days post-launch. Above 30% means the data is supporting the AI and value will follow. Below 15% means the AI is generating output that agents are filtering out, which costs attention without producing value. Measure this before extrapolating any other ROI claims.
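The 60-day measurement is mechanical once suggestion events are logged. A minimal sketch, assuming a hypothetical event export with `shown_on` and `accepted` fields (field names and thresholds mirror the bands above, not any particular tool's schema):

```python
# Sketch: suggestion acceptance rate over the first 60 days post-launch.
# The event records and field names are hypothetical; adapt to your export.
from datetime import date, timedelta

def acceptance_rate(events, launch, window_days=60):
    """Share of AI suggestions accepted within window_days of launch."""
    cutoff = launch + timedelta(days=window_days)
    in_window = [e for e in events if launch <= e["shown_on"] < cutoff]
    if not in_window:
        return None  # no signal yet: don't extrapolate ROI from nothing
    return sum(1 for e in in_window if e["accepted"]) / len(in_window)

events = [
    {"shown_on": date(2026, 3, 3), "accepted": True},
    {"shown_on": date(2026, 3, 9), "accepted": False},
    {"shown_on": date(2026, 4, 20), "accepted": True},
    {"shown_on": date(2026, 6, 1), "accepted": True},  # falls outside the window
]
rate = acceptance_rate(events, launch=date(2026, 3, 1))
verdict = ("healthy" if rate >= 0.30
           else "investigate data quality" if rate < 0.15
           else "watch")
```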

3 Quick Wins

What are the right "quick win" AI use cases to pilot first?

Short answer: Ticket summarisation, suggested replies, and knowledge surfacing. All three are agent-experience features that work within existing workflows, produce measurable outcomes within 30 days, and don't require governance changes before launch. Avoid leading with autonomous agents, AI-driven routing, or auto-resolution.

Quick wins matter because they create the political capital needed for the harder, higher-value AI work later. A failed first pilot kills the programme. A successful first pilot funds the second.

The three pilots that consistently work as first deployments:

Ticket summarisation

Generates a 3-line summary of long ticket histories. Saves real time on escalations. The failure mode is benign: agents read the original if the summary is wrong. Easiest to instrument and measure.

Suggested replies

Pre-fills draft responses based on similar past tickets. Acceptance rate is the cleanest signal of data quality. Agents stay in control of what gets sent.

Knowledge surfacing

Surfaces relevant KB articles based on ticket content. Drives KB use up; lets you measure which articles work and which don't. Compounding value.

What to avoid in the first pilot, regardless of vendor pitch: autonomous agents, AI-driven routing, and auto-resolution. All three act on users or tickets without an agent in the loop, so their failure modes are visible and expensive rather than benign.

The 30-60-90 framework. Day 30: baseline measurements complete, pilot enabled in one team. Day 60: acceptance rates measured, governance gaps identified, decision to expand or pause. Day 90: scale to a second team if the Day 60 signals were positive, or revisit data quality if they weren't. Teams that skip the Day 60 review almost always over-extend.
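The Day 60 gate is worth writing down as an explicit rule before launch, so the expand-or-pause decision isn't re-litigated in the room. A sketch, with cutoffs borrowed from the acceptance-rate bands discussed under ROI (the function and exact thresholds are illustrative):

```python
# Illustrative Day 60 decision gate for the 30-60-90 framework.
# Thresholds mirror the 15%/30% acceptance bands; tune to your context.
def day_60_decision(acceptance_rate, governance_gaps_open):
    if acceptance_rate < 0.15:
        return "pause: revisit data quality"
    if governance_gaps_open:
        return "pause: close governance gaps before expanding"
    if acceptance_rate >= 0.30:
        return "expand to second team"
    return "hold: extend pilot and keep measuring"
```

Agreeing on the rule in advance is the point; the code merely makes the agreement unambiguous.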

4 Maturity Alignment

How do we align our AI strategy with our ITSM maturity?

Short answer: Match AI ambition to operational maturity. Teams without consistent categorisation, knowledge management discipline, or stable processes will struggle to extract value from AI features regardless of vendor. The honest sequence is: stabilise core practices first, enable AI on top, expand scope as the team matures. Skipping ahead produces output, not outcomes.

ITSM maturity isn't an abstract concept here; it's the specific set of operational disciplines that determine whether AI features have anything useful to work with. The three that matter most for AI readiness: consistent ticket categorisation, a maintained knowledge base, and disciplined resolution notes.

An honest maturity assessment maps to AI readiness as follows:

| Maturity signal | AI strategy implication |
| --- | --- |
| Inconsistent categorisation, sparse KB, generic resolution notes | Stabilise the basics first. AI features will underperform regardless of vendor. |
| Reasonable categorisation, basic KB, mixed resolution notes | Pilot summarisation and knowledge surfacing. Expect modest acceptance rates initially. |
| Strong categorisation, maintained KB, consistent resolution notes | Full agent-assist deployment is appropriate. Begin measuring uplift on FCR. |
| All of the above plus mature change and problem management | Predictive features (impact analysis, change risk) become viable. Most teams aren't here. |

Most organisations sit in the middle two rows. Those in the top row should be honest about it: buying AI before fixing the basics produces 18 months of vendor calls explaining why the features aren't delivering. The fix is upstream, not in a different vendor.

A useful test: Ask three agents to resolve the same hypothetical ticket. If their resolution notes look meaningfully different in structure, length, and detail, your AI features will struggle. Standardisation of practice precedes standardisation of AI output.
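The three-agent test can even be quantified. A rough sketch that flags inconsistency via spread in note length and word overlap; the thresholds and the Jaccard-overlap proxy are illustrative choices, not a standard:

```python
# Sketch: quantify how differently agents write resolution notes for the
# same ticket. Thresholds are illustrative, not calibrated.
from itertools import combinations
from statistics import mean, pstdev

def note_consistency(notes):
    """Return (length_cv, mean_token_overlap) across agents' notes."""
    lengths = [len(n.split()) for n in notes]
    length_cv = pstdev(lengths) / mean(lengths)       # spread of note length
    overlaps = []
    for a, b in combinations(notes, 2):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        overlaps.append(len(ta & tb) / len(ta | tb))  # Jaccard similarity
    return length_cv, mean(overlaps)

# Three hypothetical answers to the same password-lockout ticket.
notes = [
    "Reset password via self-service portal, verified login, closed ticket",
    "User locked out. Reset password, confirmed access restored.",
    "pwd reset",
]
cv, overlap = note_consistency(notes)
inconsistent = cv > 0.5 or overlap < 0.2  # AI features will struggle here
```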

5 Agent Wellbeing

Can AI in ITSM actually reduce agent burnout, or does it create new problems?

Short answer: Both, depending on framing. AI reduces burnout when it removes repetitive tasks agents already disliked. AI creates new burnout when it surfaces low-quality suggestions agents must constantly ignore, when it adds review burden, or when leadership treats it as a headcount-reduction tool rather than a capacity multiplier. The framing matters as much as the technology.

The vendor narrative, "AI removes drudgery so your agents can focus on meaningful work", is half true. The full picture includes failure modes that don't appear in the sales deck.

AI reduces burnout when it removes the repetitive tasks agents already disliked and the freed time is returned to the team as capacity.

AI creates new burnout when agents must constantly filter out low-quality suggestions, when reviewing AI output becomes an added burden, and when leadership treats the rollout as a headcount-reduction exercise.

The most damaging pattern: quietly retraining the team's job description without saying so. "AI handles the easy tickets, humans handle the complex ones" sounds reasonable until you realise the easy tickets were the breaks in the day. Agents handling complex tickets back-to-back-to-back burn out faster than agents handling a mix.

A useful question for leadership: Does your AI investment plan describe success in terms of agent capacity, or agent reduction? The first produces sustainable value. The second produces a 12-month adoption cycle ending in rollback.

Ground your AI strategy in actual data.

Run our free assessment. Upload a CSV export from your ITSM tool (Halo, ServiceNow, Freshservice, Zendesk, TOPdesk, or any other tool that exports CSV) and we score it across categorisation, resolution quality, completeness, and noise. The whole thing runs in your browser; nothing leaves your machine.
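For a sense of what that kind of scoring involves, here is a minimal sketch of checks an assessment like this might run over a ticket export. The column names, word-count threshold, and scoring logic are assumptions for illustration; real exports and real scorers vary.

```python
# Sketch: rough data-quality scores over a ticket CSV export.
# Column names ("category", "resolution_notes") are hypothetical.
import csv
import io

def score_export(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    n = len(rows)
    # Categorisation: share of tickets with a non-empty category.
    categorised = sum(1 for r in rows if r.get("category", "").strip()) / n
    # Resolution quality proxy: notes with at least five words.
    substantive = sum(1 for r in rows
                      if len(r.get("resolution_notes", "").split()) >= 5) / n
    # Completeness: share of cells filled across all columns.
    filled = sum(1 for r in rows for v in r.values() if v and v.strip())
    return {"categorisation": categorised,
            "resolution_quality": substantive,
            "completeness": filled / (n * len(rows[0]))}

sample = """category,resolution_notes
Access,Reset password and verified user could log in
,done
Hardware,Replaced faulty dock and retested display output
"""
scores = score_export(sample)
```

The point of scores like these is the maturity table above: they tell you which row you are in before you switch any AI feature on.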