Pillar 02 · ITSM AI Readiness
The five questions every IT leader should answer before turning on AI in their service desk, and what the answers look like when you ask them honestly.
Most ITSM AI projects don't fail because the AI is bad. They fail because the data underneath it can't support what the AI is being asked to do. The vendor demo works; the production rollout doesn't. Six months later, the dashboards are quiet and nobody can quite explain why.
This page answers the five questions that separate AI projects that deliver from those that quietly stall. The answers draw on a recent diagnostic we ran on 1,000 closed tickets and 232 open tickets from a single UK-based MSP, a working ServiceNow implementation partner serving 22 client organisations. We've anonymised the source. The patterns are real.
1 Data Quality
Most AI features shipping in ServiceNow Now Assist, Halo's AI suite, Freshservice Freddy, and Zendesk's AI tools don't fine-tune on customer data at scale. They use your data as a retrieval source. That distinction matters because the failure modes are different. A retrieval-based AI doesn't need millions of examples; it needs a few hundred coherent examples per pattern. The bottleneck isn't volume. It's consistency.
Three measurable conditions determine whether ITSM data is usable in this way:
Categorisation consistency. When tickets describing the same issue land in different categories, or default to a generic catch-all, AI clustering can't form coherent groups. In one MSP's open-ticket data, 70% of categorised tickets resolved to a single leaf category called "Configuration Change". That's effectively no taxonomy.
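That kind of fragmentation is easy to quantify before any AI work starts. A minimal sketch in Python, assuming a flat CSV export with a `category` column (the column name and export shape are assumptions; adjust them to your tool's schema):

```python
import csv
from collections import Counter

def category_concentration(path, field="category"):
    """Return (top_category, share): the share of categorised tickets that
    land in the single most common leaf category. A share near 1.0 means
    the taxonomy is effectively flat, like the 70% example above."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            value = (row.get(field) or "").strip()
            if value:                      # skip uncategorised tickets
                counts[value] += 1
    if not counts:
        return None, 0.0
    top, top_count = counts.most_common(1)[0]
    return top, top_count / sum(counts.values())
```

A check like this against the dataset above would have surfaced the "Configuration Change" problem in seconds, well before any vendor feature was enabled.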
Resolution readability. Suggested replies, knowledge suggestions, and FCR uplift all depend on past resolutions being readable. In the same dataset, 26% of ticket summaries were under 30 characters, and resolution notes were absent entirely from the standard export. AI cannot suggest a fix from a ticket closed with no record of how it was solved.
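The same export can be scored for resolution readability. A hedged sketch, assuming `summary` and `resolution_notes` columns and the 30-character cutoff used above (all three are assumptions to adapt to your export):

```python
import csv

def resolution_readability(path, summary_field="summary",
                           resolution_field="resolution_notes",
                           min_chars=30):
    """Return (share_short_summaries, share_missing_resolutions).
    Column names and the 30-character cutoff are assumptions."""
    total = short = missing = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if len((row.get(summary_field) or "").strip()) < min_chars:
                short += 1
            if not (row.get(resolution_field) or "").strip():
                missing += 1
    if total == 0:
        return 0.0, 0.0
    return short / total, missing / total
```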
Noise separation. ITSM systems accumulate non-customer noise: scheduled checks, automation alerts, time-tracking entries, dev tickets cross-posted from project tools. Roughly 13% of one MSP's closed tickets weren't customer issues at all; the largest discoverable cluster was time-tracking entries from a single agent.
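Separating that noise is usually a handful of patterns, not a model. An illustrative sketch; the patterns below are hypothetical examples, not a vetted list, and should be rebuilt from a review of your own clusters:

```python
import re

# Hypothetical noise signatures; derive yours from your own cluster review.
NOISE_PATTERNS = [
    re.compile(r"\bscheduled (check|task|maintenance)\b", re.I),
    re.compile(r"\b(heartbeat|monitoring|automation) alert\b", re.I),
    re.compile(r"\btime[- ]tracking\b", re.I),
]

def is_noise(summary: str) -> bool:
    """True if a ticket summary matches a known non-customer pattern."""
    return any(p.search(summary or "") for p in NOISE_PATTERNS)
```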
The fix isn't AI training. It's data hygiene, and it produces value with or without AI.
2 Measuring Value
Most ITSM platforms ship AI features with built-in dashboards showing how often the features are invoked: how many summaries generated, how many suggestions surfaced, how many sessions used the virtual agent. Those numbers are easy to grow and tell you almost nothing. A suggestion that's surfaced but ignored is worse than no suggestion at all: it costs the agent attention without delivering value.
The four signals that actually measure whether AI is paying off (cluster coherence, suggestion formation, reply usefulness, and FCR movement) all sit downstream of the data quality work described above. Each maps to a specific failure mode that buyers should expect to see if their data isn't ready.
The chain matters. If clustering is incoherent because categorisation is fragmented, suggestions can't form. If suggestions can't form, replies aren't useful. If replies aren't useful, FCR doesn't move. One broken link breaks every metric downstream, which is why measuring usage in isolation produces false comfort, and why measuring all four together reveals the actual health of the AI deployment.
The teams that get this right run a baseline measurement before enabling AI, then measure the same signals 90 days post-enablement. Without the baseline, vendor dashboards become the only available frame, and they always look positive.
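One way to make the baseline concrete is to compute the same FCR definition over a pre-enablement window and a 90-day post-enablement window. A simplified sketch, assuming FCR is approximated as "closed in one touch and never reopened" (your organisation's definition may differ):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Ticket:
    closed_on: date
    reopened: bool   # a reopened ticket was not truly first-contact resolved
    touches: int     # agent interactions before closure

def fcr(tickets):
    """First-contact resolution rate: closed in one touch, never reopened."""
    closed = [t for t in tickets if t.closed_on]
    if not closed:
        return 0.0
    return sum(1 for t in closed
               if t.touches == 1 and not t.reopened) / len(closed)

def fcr_uplift(baseline, post):
    """Percentage-point change between the two measurement windows."""
    return (fcr(post) - fcr(baseline)) * 100
```

The point is not the arithmetic; it is that the same definition is applied to both windows, so the comparison can't be reframed by a vendor dashboard after the fact.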
3 Data Preparation
Vendor demos run on cleaned, curated, representative data. Vendor sales materials describe AI as "ready to use against your existing data." Consultancy proposals describe AI as requiring a 6-12 month "data foundation programme" before any value can be realised. Both are misleading. The realistic position sits between them.
For most mainstream ITSM AI features (agent assist, knowledge suggestions, ticket clustering, basic predictive routing), the preparation work breaks down into the three categories covered under Data Quality above: separating out noise, consolidating the category taxonomy, and capturing readable resolutions.
What consultancies sell as "AI data preparation" often includes none of these and instead focuses on data warehousing, lake architectures, and governance frameworks. Those are valuable for analytics; they have minimal impact on whether ITSM AI features deliver. The teams getting fast value from AI are the ones doing the unglamorous triage and categorisation work first.
4 Integration
Vendor AI features generally produce three types of output: suggestions (agent-facing), automations (system-facing), and insights (manager-facing). Each requires different integration work, and most teams under-invest in the integration most likely to determine adoption.
Suggestions require the most workflow design and the least technical integration. The AI is already inside the agent's primary tool. The question is whether the suggestion appears at the moment the agent needs it, in a form they can act on without context-switching. If suggested-replies appear after the agent has already started typing, they get ignored. If knowledge articles surface in a sidebar the agent has minimised, they get ignored. The integration work is interface design, not API work.
Automations require the most technical integration and the most governance. AI-driven actions (auto-categorising, auto-routing, auto-resolving) must connect to legacy systems where the consequences land. A misrouted ticket in a workflow that sends to an email distribution list nobody monitors creates a longer outage than no routing at all. The integration risk is not the API call; it's the human or system at the receiving end of an action they didn't expect.
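The guardrail that keeps a misrouted ticket out of an unmonitored inbox can be very small. A hypothetical sketch: the queue inventory, confidence threshold, and fallback queue are all assumptions, not any vendor's API:

```python
# Assumed inventory of queues with a confirmed, active human owner.
MONITORED_QUEUES = {"network-team", "desktop-support"}
CONFIDENCE_FLOOR = 0.85  # assumed threshold; tune against misroute cost

def route(predicted_queue: str, confidence: float):
    """Auto-route only when the model is confident AND the target queue
    is known to be monitored; everything else falls back to triage."""
    if confidence < CONFIDENCE_FLOOR or predicted_queue not in MONITORED_QUEUES:
        return ("triage", "human review")  # safe fallback, never a dead letter
    return (predicted_queue, "auto-routed")
```

The design choice worth copying is the allowlist: the automation can only send work to destinations someone has explicitly confirmed are watched.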
Insights require the least integration but the most cultural alignment. AI-generated trend reports, anomaly alerts, and capacity warnings only produce value if the recipient has authority and bandwidth to act on them. Most insight features get switched on, generate alerts that nobody actions, and quietly stop being read.
For legacy system integration specifically (older ITSM platforms, custom-built tools, on-premise systems), the practical question is whether the AI feature can be granted appropriate read/write access without exposing sensitive data. SaaS AI features often require data egress that legal or security teams will (correctly) block. The mitigation is usually one of: regional data residency commitments from the vendor, on-premise AI inference (rare but emerging), or scoping the AI feature to non-sensitive data only.
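Scoping to non-sensitive data often starts with redaction at the boundary. An illustrative sketch with two hypothetical patterns; real PII scoping needs a proper DLP and legal review, not two regexes:

```python
import re

# Illustrative patterns only; real scoping needs a DLP and legal review.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
UK_NI = re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b")  # UK National Insurance shape

def redact(text: str) -> str:
    """Mask obvious identifiers before ticket text leaves the boundary."""
    return UK_NI.sub("[ni-number]", EMAIL.sub("[email]", text))
```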
5 Vendor vs. Custom
The "AI-native platform" claim from ITSM vendors is overstated but increasingly meaningful. ServiceNow, Halo, Freshservice, Zendesk, and TOPdesk have shipped real AI features in 2025-2026 that meaningfully change the agent experience when the data underneath supports them. The features aren't differentiated enough to drive platform selection on their own, but they're capable enough that switching platforms purely to access AI rarely justifies the migration cost.
Custom AI development genuinely makes sense only in a narrow set of scenarios.
For everyone else, which is most readers of this page, the productive question is not vendor-vs-custom but how to extract maximum value from the vendor AI you've already paid for. The data preparation described in the previous questions is the highest-leverage activity available. Custom AI development is high-leverage in narrow circumstances and a costly distraction in most.
Run our free assessment. Upload a CSV export from your ITSM tool (Halo, ServiceNow, Freshservice, Zendesk, TOPdesk, or any other source that produces a CSV). We score it across categorisation, resolution quality, completeness, and noise. The whole thing runs in your browser. Nothing leaves your machine.