Pillar 03 · ITSM AI Readiness

Governance & security before AI touches production.

The five questions every IT leader must answer before AI features go anywhere near production data, covering leaks, hallucinations, ownership, oversight, and the EU AI Act.

Last updated May 2026 · Audience CIOs, CISOs, IT directors, compliance leads · Reading time ~11 min

Governance is the question that arrives late. AI features get evaluated, piloted, even deployed before anyone asks the questions that should have come first. By the time the security review happens, the political momentum is against pulling back. Most ITSM AI governance failures have this same shape: not malice, just sequence.

This page covers the five governance and security questions that should be answered before AI features touch production data. Each has a defensible answer that doesn't require a PhD in AI ethics or a six-month consulting engagement. Read them at the start of procurement, not after the contract is signed.

1 Data Protection

How do we prevent data leaks when using AI features in ITSM?

Short answer: Through three layered controls: contractual commitments from the vendor, technical scoping of what data the AI can access, and operational logging of every AI invocation. Public LLM use without enterprise agreements is the largest single source of ITSM AI leaks; most leaks are agent-pasted, not vendor-extracted.

The leak most leadership worries about, a vendor exfiltrating customer data, is real but well mitigated by contracts. The leak most organisations actually experience is more mundane and harder to prevent: agents pasting ticket content into public LLM tools to get help with a problem, copying confidential customer information into systems with no enterprise protections.

Three layered controls produce defensible leak prevention:

Contractual layer

Data residency commitments, non-training clauses, deletion timelines, breach notification SLAs. Standard in enterprise vendor agreements; absent in consumer LLM terms of service. Read the actual contract, not the marketing.

Technical layer

Scope what data the AI can access. Mask PII before AI processing where possible. Avoid sending credentials, tokens, or financial data in ticket bodies that AI features will summarise. Some vendors offer regional inference.
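As a sketch of the masking step, a minimal redaction pass could look like the following. The patterns and placeholder labels are illustrative assumptions, not a complete solution; a production deployment would use a dedicated PII-detection service.

```python
import re

# Illustrative patterns only -- regexes miss names, addresses, and
# free-text PII; a real deployment needs a dedicated detection service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),   # 13-16 digit runs
    "TOKEN": re.compile(r"\b(?:sk|pk|ghp)_\w{10,}\b"),  # common API-key prefixes
}

def mask_pii(text: str) -> str:
    """Replace sensitive matches with typed placeholders before AI processing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii("Customer jane.doe@example.com sent key sk_live_abcdef12345")
print(masked)  # Customer [EMAIL] sent key [TOKEN]
```

Typed placeholders (rather than a uniform `[REDACTED]`) preserve enough context for the AI to summarise the ticket sensibly while keeping the values out of the prompt.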

Operational layer

Log every AI invocation. Audit periodically. Block public LLM domains at the network level if your governance policy forbids them. Train agents on the difference between sanctioned AI tools and shadow AI use.
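A minimal shape for that invocation log, as a sketch. Field names are illustrative, and the in-memory list stands in for what would be an append-only audit store in production.

```python
import uuid
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only audit store

def log_ai_invocation(feature: str, agent_id: str, ticket_id: str,
                      model: str, prompt_chars: int) -> dict:
    """Record one AI call: who invoked what, on which ticket, with which model.
    Logs the prompt's size rather than its content, to avoid creating
    another copy of potentially sensitive ticket data."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": datetime.now(timezone.utc).isoformat(),
        "feature": feature,          # e.g. "summarise", "suggest_reply"
        "agent_id": agent_id,
        "ticket_id": ticket_id,
        "model": model,
        "prompt_chars": prompt_chars,
    }
    AUDIT_LOG.append(entry)
    return entry

log_ai_invocation("summarise", "agent-042", "TCK-1187", "model-x", 2140)
```

Logging size instead of content is a deliberate trade-off: the audit trail answers "who used AI on what, when" without itself becoming a second repository of confidential ticket text.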

The questions to ask vendors before signing:

The unfashionable truth: The vendor agreement is usually the easy part. The hard part is governing what your own agents do with public AI tools when nobody is watching. That's a culture and policy problem, not a technology one.

2 Hallucinations & Transparency

How do we manage AI hallucinations and ensure transparency in ticket handling?

Short answer: Hallucination is reduced, not eliminated, by grounding AI responses in your specific data rather than generic model knowledge. Transparency comes from clearly labelling AI-generated content, showing what sources informed each suggestion, and giving agents one-click visibility into the AI's reasoning chain. Vendors that won't show their grounding sources are selling you marketing, not engineering.

Hallucination, the AI confidently producing plausible-sounding but incorrect content, is an inherent property of language models, not a bug that gets fixed. The question for ITSM is not how to eliminate hallucination but how to contain it so that hallucinated content cannot cause harm.

Three categories of hallucination that matter in ITSM, ranked by severity:

Fabricated technical instructions (suggesting steps that would damage systems)

Hard requirement: AI suggestions for technical actions must cite specific KB articles or past tickets. No source = no suggestion shown.

Invented customer history (referencing tickets, agreements, or events that didn't happen)

AI summarisation must use only the visible ticket history. Disable any "infer from similar customers" features without explicit grounding.

Confidently wrong tone or framing (suggesting empathy where the customer is angry, or formality where casual was appropriate)

Lower-stakes; agent edit before send is sufficient. But measure edit distance: consistently high edits mean the AI is misreading context.
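The first two containment rules above can be enforced mechanically, and the third can be measured. A minimal sketch, using hypothetical types and a word-level diff as a crude stand-in for edit distance:

```python
import difflib
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    text: str
    sources: list = field(default_factory=list)  # KB article or ticket IDs

def gate(suggestion: Suggestion):
    """No source = no suggestion shown."""
    return suggestion if suggestion.sources else None

def edit_ratio(suggested: str, sent: str) -> float:
    """Rough share of words the agent changed before sending.
    Consistently high values suggest the AI is misreading context."""
    matcher = difflib.SequenceMatcher(a=suggested.split(), b=sent.split())
    return 1.0 - matcher.ratio()
```

The point of `edit_ratio` is not precision but trend: tracked per feature over weeks, a rising value is an early signal that grounding has degraded, long before a customer-visible incident.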

Transparency mechanisms that materially help:

A useful hiring question for AI vendors: "Show me, on screen, what the AI was looking at when it produced this suggestion." If they can't, the AI is probably ungrounded, meaning hallucination risk is structural, not occasional. Grounding visibility is a real technical capability, not just a UI choice.

3 Ownership

Who owns the AI model, the inputs, and the outputs?

Short answer: The model is owned by the vendor. Your inputs (ticket data, KB content) remain yours but are typically licensed to the vendor for inference. Outputs are usually yours but require contractual confirmation. The critical clauses to negotiate are non-training, deletion timelines, and audit rights; vendors default to terms favourable to themselves.

Ownership in AI contracts is more complex than ownership in conventional SaaS contracts, and the differences matter. Three layers need explicit clarity:

The model

Owned by the vendor or their underlying provider (OpenAI, Anthropic, Google). You license access. Concerns: model deprecation, version changes affecting output quality, vendor switching to providers you didn't approve.

Your inputs

Yours, but licensed to the vendor for inference. Concerns: whether they're used to train the next model, whether they're cached, how long they're retained, who at the vendor can see them.

The outputs

Usually yours, but check the contract. Some vendors claim non-exclusive rights to AI-generated content. KB articles generated from your tickets should be unambiguously yours.

Five contract clauses worth negotiating before signing any AI-feature ITSM agreement:

A negotiation principle: Vendor default contracts are written for the vendor. The terms you accept become precedent for every renewal. Negotiating once, properly, at initial signing is much easier than re-opening terms three years in when the relationship is operationally embedded.

4 Human Oversight

What does "human-in-the-loop" actually mean for ITSM AI in practice?

Short answer: It means a named human reviews and approves every AI action that has external consequence: sending a customer message, modifying routing, closing a ticket, or making a change to production. Suggestion-class AI is HITL by default. Automation requires explicit approval gates per action class. "HITL on a dashboard somewhere" is theatre, not control.

"Human in the loop" has become a marketing phrase that vendors use to imply oversight without committing to specifics. The practical version requires defining which actions require what kind of human approval, by whom, in what time frame.

A useful taxonomy of AI actions in ITSM, with the appropriate level of human oversight for each:

Internal suggestions to agents (replies, KB articles, summaries)

Implicit HITL: the agent reviews before acting. Logging only.

Automated categorisation or tagging (no external effect)

Sample-based review: 5-10% of AI categorisations checked weekly, pattern review monthly.

Automated routing (affects internal workflows)

Per-class approval gate at deployment. Continuous monitoring of misroute rates. Roll-back authority defined.

Customer-facing automated responses

Explicit approval per response template. Spot-check 10-20% of live responses. Disable on quality-event escalation.

Automated change actions or ticket closure

Named human approval per action. Time-limited approval windows. Full audit trail with reasoning.

The mistake to avoid: treating HITL as a slider that gets dialled to "more" or "less." It's not a single setting. It's a policy that varies by action class, with explicit approvals, named owners, and consequence-proportional rigour. A single HITL configuration applied uniformly is either too restrictive (where automation could safely run) or too permissive (where it shouldn't).
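One way to make that per-class policy executable rather than aspirational is a literal policy table in the automation layer. A sketch, with class names and sample rates that are illustrative assumptions loosely mirroring the taxonomy above:

```python
from enum import Enum

class ActionClass(Enum):
    INTERNAL_SUGGESTION = "internal_suggestion"
    AUTO_TAGGING = "auto_tagging"
    AUTO_ROUTING = "auto_routing"
    CUSTOMER_RESPONSE = "customer_response"
    CHANGE_OR_CLOSURE = "change_or_closure"

# One oversight policy per action class, not a single global HITL setting.
POLICY = {
    ActionClass.INTERNAL_SUGGESTION: {"needs_approval": False, "sample_rate": 0.00},
    ActionClass.AUTO_TAGGING:        {"needs_approval": False, "sample_rate": 0.05},
    ActionClass.AUTO_ROUTING:        {"needs_approval": True,  "sample_rate": 0.00},
    ActionClass.CUSTOMER_RESPONSE:   {"needs_approval": True,  "sample_rate": 0.10},
    ActionClass.CHANGE_OR_CLOSURE:   {"needs_approval": True,  "sample_rate": 1.00},
}

def may_execute(action: ActionClass, approved_by) -> bool:
    """An action requiring approval runs only with a named approver attached."""
    return not POLICY[action]["needs_approval"] or approved_by is not None
```

Encoding the policy as data rather than scattered if-statements also makes the audit question trivial: the table itself is the documented oversight regime.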

Three common HITL failures to watch for:

The single test: Pick a recent week of AI activity. Ask: who reviewed which actions, when, and what did they decide? If the answer is "we have logs we could review," HITL exists in theory only. Real oversight produces a record of decisions made, not just events recorded.
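That test can be run as a query, assuming invocation logs carry reviewer and decision fields (the field names here are illustrative):

```python
def oversight_report(log, week_start: str, week_end: str) -> dict:
    """Answer 'who reviewed which actions, and what did they decide?'
    Entries are dicts with an ISO-8601 'ts' plus optional
    'reviewer' and 'decision' fields."""
    in_week = [e for e in log if week_start <= e["ts"] < week_end]
    decided = [e for e in in_week if e.get("decision")]
    return {
        "total_actions": len(in_week),
        "decisions": [(e["reviewer"], e["decision"]) for e in decided],
        "unreviewed": len(in_week) - len(decided),
    }

log = [
    {"ts": "2026-05-04T09:00:00", "action": "auto_route",
     "reviewer": "j.smith", "decision": "approved"},
    {"ts": "2026-05-05T14:30:00", "action": "ticket_close"},  # never reviewed
]
report = oversight_report(log, "2026-05-04", "2026-05-11")
```

If `unreviewed` is consistently non-zero for action classes that require approval, the oversight regime exists on paper only.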

5 Regulation

How do we comply with the EU AI Act and similar regulations for ITSM AI?

Short answer: Most ITSM AI is classified as limited-risk under the EU AI Act, requiring transparency obligations (disclosing AI use to users) and basic documentation. AI that affects employment decisions or critical infrastructure may rise to high-risk, requiring conformity assessment. Vendor compliance does not transfer, your organisation remains responsible for deployed use.

The EU AI Act, in force from 2024 with phased application running through 2026 and beyond, classifies AI systems by risk. Most ITSM AI features fall into the limited-risk category, which is significantly less onerous than the high-risk category that has dominated public discussion. But "limited-risk" still carries real obligations, and the boundary between limited and high-risk in ITSM is closer than most teams realise.

How ITSM AI typically maps to EU AI Act risk classes:

Ticket summarisation, suggested replies, KB surfacing

Limited risk. Transparency obligations apply.

Sentiment analysis on customer interactions

Limited risk if used internally. Higher scrutiny if it affects customer treatment.

Automated routing or categorisation

Limited risk in most cases. Documentation requirements apply.

AI scoring of agent performance for HR or compensation decisions

Likely high risk. Triggers employment-related provisions. Conformity assessment required.

AI affecting access to essential services (utilities, healthcare, finance)

High risk. Full Annex III treatment required. Substantial obligations.

Three principles to apply regardless of regulatory regime:

The vendor-compliance trap: "Our vendor is EU AI Act compliant" is not the same as "our use of their tool is compliant." Compliance attaches to the deployer, not just the provider. A compliant tool used in a non-compliant way produces a non-compliant deployment. The vendor's certification protects the vendor; your governance protects you.

Get governance fundamentals right from day one.

Run our free assessment to understand the data and process baseline you're governing. We score it across categorisation, resolution quality, completeness, and noise: the foundation that any governance framework has to sit on top of. Browser-based; nothing leaves your machine.