AI Readiness for Service Management: From Scorecards to Agents
AI Readiness for Service Management is the discipline of getting AI into day‑to‑day service work safely and measurably. The practice uses an outcome‑first approach that baselines current usage, aligns ownership and controls, and moves from scorecards to the first tuned agents, showing value on a steady cadence.
How AI programmes start
You were asked, “What’s our AI plan?”
You put together a plan and the budget was approved. You then bought AI-enabled products and services, but turning that into clear wins has been harder than expected.
You’re not alone.
Others are feeling it too, which is why the biggest players (OpenAI, Anthropic, Google, and Palantir) have shifted to a toolbox play. Their services teams bring powerful models and Forward Deployed Engineering (FDE) to co-design with your people and then productise the patterns. They co-create IP with very large enterprises, fold it back into the toolbox, climb the stack, and price higher, often $10m+ a year in AI services. That play sets the tone on spend and speed.
There’s evidence to back this up, too. In 2025, adoption is early and results vary. MIT’s 2025 study found only about 5% of enterprise generative‑AI pilots delivered rapid revenue impact. Gartner expects around 30% of generative‑AI projects to be abandoned after proof of concept by the end of 2025. S&P Global reports 42% of firms scrapped most AI initiatives this year, up from 17% last year.
As a leading service management consultancy, we see the same signals across 100+ organisations we’ve interviewed and benchmarked, spanning automotive, telecoms, retail, manufacturing, insurance, banking, defence, media and healthcare; the patterns are consistent.
Here’s what matters: bubble or no bubble, AI is now part of how services run. It speeds resolution, lifts self-service, cuts cost per request, and creates room for new services. Service management is central.
The task for leaders is to start early, build capability across teams, and make progress visible with a few simple measures each quarter. Before we get into the fixes, here are the seven challenges we see most often:
- Fragmented and low-quality data
- Siloed systems with weak integration
- Inconsistent processes
- Low knowledge and automation maturity
- Legacy platforms and technical debt
- Weak governance and guardrails
- Limited sponsorship and ROI focus
Challenges in service management AI adoption and how to address them
AI adoption stalls for practical reasons. Data is messy, systems don’t talk cleanly, workflows vary, and content goes stale. Risk controls are unclear, platforms carry debt, and sponsorship wobbles when value isn’t visible. This section explains how these barriers show up across IT, HR, Finance and Facilities, why they block progress, and what leaders should look for before moving to the fixes.

1. Fragmented and low-quality service data
Years of system sprawl have left service data scattered across functions. HR cases, finance requests, supplier queries, IT incidents and facilities tickets all live in different tools, with inconsistent structures and limited ownership. Labels drift, templates vary, and information sits in email chains that no one can trace. Knowledge articles become outdated, and metadata is often missing or misleading.
Fixing this takes coordination across teams: agree a shared data model, apply standard templates, and assign clear owners for each major dataset and knowledge area.
Set measurable objectives for accuracy and freshness, move email-based work into a shared case model, and reuse the same integration patterns across HR, Finance, IT and Facilities so requests flow cleanly from intake to fulfilment. Track a small set of common measures, such as resolution time, self-service use and automation coverage, on one scorecard so progress is visible across the enterprise.
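As a rough illustration of what that shared scorecard could look like in practice, the short sketch below computes three of those measures from a unified case export. The field names, channel values and sample data are assumptions made for the example, not a standard schema, and the real calculation would sit in your reporting tooling.

```python
# Illustrative only: a minimal scorecard computation over a unified case export.
# Field names (opened_at, resolved_at, channel, automated) are assumptions,
# not a standard schema.
from datetime import datetime
from statistics import mean

def scorecard(cases: list[dict]) -> dict:
    """Summarise a few shared measures across HR, Finance, IT and Facilities cases."""
    resolved = [c for c in cases if c.get("resolved_at")]
    hours = [
        (datetime.fromisoformat(c["resolved_at"]) - datetime.fromisoformat(c["opened_at"])).total_seconds() / 3600
        for c in resolved
    ]
    return {
        "avg_resolution_hours": round(mean(hours), 1) if hours else None,
        "self_service_rate": round(sum(c["channel"] == "portal" for c in cases) / len(cases), 2),
        "automation_coverage": round(sum(bool(c.get("automated")) for c in cases) / len(cases), 2),
    }

print(scorecard([
    {"opened_at": "2025-01-06T09:00", "resolved_at": "2025-01-06T15:30", "channel": "portal", "automated": True},
    {"opened_at": "2025-01-06T10:00", "resolved_at": "2025-01-07T10:00", "channel": "email", "automated": False},
]))
```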
2. Siloed systems with weak integration
End‑to‑end work crosses IT service management (ITSM), enterprise resource planning (ERP), human resources information systems (HRIS), identity, finance, monitoring and collaboration. Weak integration means latency, mismatches and manual hops. AI simply exposes the seams.
Modern patterns (event streams, webhooks, integration platforms, documented contracts) let requests flow cleanly from demand to fulfilment to change to knowledge. Without this backbone, pilots stall and don’t scale.
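To make the "documented contracts" point concrete, here is a minimal sketch of normalising a tool-specific webhook payload into one shared event shape before it is routed onwards to fulfilment, change or knowledge flows. The payload fields, the target contract and the example values are assumptions for illustration, not a specific vendor's API.

```python
# Illustrative only: normalising a webhook payload into one shared event contract.
# The source payload shape and target fields are assumptions for the sketch.
from datetime import datetime, timezone

REQUIRED = {"id", "source", "type", "subject", "priority", "occurred_at"}

def normalise(source: str, payload: dict) -> dict:
    """Map a tool-specific payload onto the shared event contract."""
    event = {
        "id": payload.get("sys_id") or payload.get("ticket_id"),
        "source": source,                               # e.g. "itsm", "hris"
        "type": payload.get("category", "request"),
        "subject": payload.get("short_description", "").strip(),
        "priority": int(payload.get("priority", 3)),
        "occurred_at": payload.get("opened_at") or datetime.now(timezone.utc).isoformat(),
    }
    missing = REQUIRED - {k for k, v in event.items() if v not in (None, "")}
    if missing:
        raise ValueError(f"Payload from {source} missing fields: {sorted(missing)}")
    return event

print(normalise("itsm", {"sys_id": "INC0012345", "short_description": "VPN down", "priority": "1", "opened_at": "2025-03-01T08:00:00Z"}))
```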
3. Inconsistent processes
Where incident, request and change variants differ by team or region, automation has no stable path to follow. Categorisation and routing bounce, assignments hop, and SLAs are breached.
The guidance from multiple industry bodies this year is consistent: simplify and standardise core workflows, and redesign them with automation built in rather than bolted on. That creates predictable paths an assistant can recommend or execute safely.
4. Low knowledge and automation maturity
Self‑service only works when the catalog reflects actual demand, articles are current and searchable, and there are runbooks behind the highest‑volume contacts. Many service desks still carry low automation and a bloated, low-maturity knowledge base.
Data-led readiness models explicitly score these gaps because they cap deflection, first-contact resolution (FCR) and time‑to‑resolve until they’re fixed.
5. Legacy platforms and technical debt
Legacy, heavily customised, or on‑premise platforms lack the APIs, performance and upgrade cadence that modern AI features and integrations assume.
Many organisations find they must modernise versions, harden APIs, or migrate to a cloud model to get the basics (data access, security, scale) in place before any meaningful AI rollout.
6. Weak governance and guardrails
Boards ask the same questions: where can assistants act, where do they need approval, and how do we roll back? Without clear decision lanes, audit trails, retention and privacy controls, pilots stall at the proof-of-concept wall. Governance needs to set boundaries and enable progress at the same time.
That means clear accountability for every action an assistant takes, defined responsibility for review and escalation, and traceability through audit, monitoring and policy enforcement. It also means embedding the right guardrails, oversight, ethical risk management, and transparent data use, alongside enablers such as skills, infrastructure and investment. When these work together, AI can move faster, stay compliant and maintain trust across IT, HR, Finance and other service areas.
For a wider view, see OECD, Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions.
7. Limited sponsorship and ROI focus
Change stalls when leaders don’t sponsor it, funding is piecemeal, and benefits aren’t evidenced. Reports this year show mixed outcomes: some programmes stall; others create measurable gains. The difference is usually a clear value narrative, a cadence of small wins, and open communication that demystifies the change for teams.
These are large problems, so move in small steps. Secure visible leadership sponsorship and a starter budget. Use it to deliver a tight set of improvements and two well‑chosen use cases. Measure the impact, communicate the results, and loop. That iterative cycle unlocks more funding and support and sets up the phased programme that follows.
AI adoption plan: from readiness scorecards to live chat and agents at scale
These challenges are large in scale. We address them through four steps: establish a data-led baseline, prepare data and services in short iterations, enable live chat, and deploy AI agents with clear guardrails and cost controls.
Measure AI readiness with scorecards
How do you know where you stand across data, process, technology and culture? Start by measuring it. Use Fusion GBS’ AI Talos and your service management scorecards to analyse all relevant data from your current estate (structured and unstructured).
AI Talos in brief
Fusion’s AI Talos is a vendor‑agnostic benchmarking and insights platform. It ingests service data from ITSM, HRIS, ERP, monitoring and collaboration tools, normalises it, and applies a best‑practice library to score readiness across your service landscape. It can also run “what‑if” simulations to show the likely impact of fixes before you invest. Analysts recognise readiness benchmarking and value‑realisation tooling as important enablers for scaling AI in service management.
AI Talos tracks 400+ service metrics, highlights coverage and quality, and surfaces bottlenecks. An executive snapshot brings this together as adoption, outcomes and ROI.
You get three things: a clear picture of current performance, a way to test improvements before investing, and an easy way to measure progress over time.
Four phases to move from readiness to live AI operations
These four phases form a simple roadmap that connects early assessment to scaled deployment:
Phase 1 — Readiness: Establish a baseline with AI Talos and identify the key metrics and KPIs.
Phase 2 — Data and Service Preparation: Build stable foundations and raise maturity through short iterations.
Phase 3 — Live Chat Enablement: Expose improved catalog, knowledge, and automation to users.
Phase 4 — AI Agents Deployment: Introduce controlled autonomy with clear guardrails and governance.
Together, they form a repeatable cycle that builds capability, improves data, and steadily expands the role of AI across service management.

Executive action plan: implement and sustain the phases
Phase 1 — Readiness baseline
Use AI Talos to produce readiness scorecards and an operating model snapshot. Agree target KPIs (first‑contact resolution, self‑service adoption, deflection, time‑to‑resolve, automation coverage, change success rate, % auto‑routed tickets). From the baseline, select the first 5–10 fixes, name owners, and set a two‑iteration plan with clear measures, as sketched below.
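A hedged sketch of what that baselining step might look like in code: actual KPI values are computed from a ticket extract and compared with the agreed targets. The field names, target figures and sample data are assumptions for the example, not outputs of AI Talos.

```python
# Illustrative only: turning a ticket extract into a readiness baseline against
# agreed KPI targets. The targets and field names are assumptions for the sketch.
def baseline(tickets: list[dict], targets: dict) -> list[str]:
    n = len(tickets)
    actuals = {
        "first_contact_resolution": sum(t["touches"] == 1 for t in tickets) / n,
        "deflection": sum(t["resolved_by"] == "self_service" for t in tickets) / n,
        "auto_routed": sum(t["routed_by"] == "rule" for t in tickets) / n,
    }
    return [
        f"{kpi}: {actuals[kpi]:.0%} (target {target:.0%}, gap {target - actuals[kpi]:+.0%})"
        for kpi, target in targets.items()
    ]

tickets = [
    {"touches": 1, "resolved_by": "self_service", "routed_by": "rule"},
    {"touches": 3, "resolved_by": "agent", "routed_by": "manual"},
    {"touches": 1, "resolved_by": "agent", "routed_by": "rule"},
]
for line in baseline(tickets, {"first_contact_resolution": 0.70, "deflection": 0.30, "auto_routed": 0.60}):
    print(line)
```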
From Phase 2 onwards, work in small, high‑impact iterations guided by that baseline. Each cycle should focus on five to ten items, deliver in weeks, and show the result.
Phase 2 — Data and Service Preparation
Focus each cycle on:
- Data foundations: standard categories and templates, named owners, cleaner Configuration Items (CI) relationships, simple data quality checks and shared dashboards (one such check is sketched after this list).
- Process standards: one way to handle incident, request, knowledge and emails; clear “golden paths” that automation can follow.
- Knowledge and automation: fix the top services and articles, retire stale content, add automations for the highest‑volume requests.
- Platform and integration: upgrade what blocks progress, establish API patterns, set privacy controls and follow secure-by-design principles.
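As flagged in the first bullet, here is a minimal sketch of the kind of simple data quality check a Phase 2 cycle could run over cases and knowledge records. The category list, required fields and staleness window are assumptions chosen for the example, not a prescribed standard.

```python
# Illustrative only: a simple data quality check of the kind described above.
# Category list, required fields and the staleness window are assumptions.
from datetime import datetime, timedelta, timezone

STANDARD_CATEGORIES = {"access", "hardware", "software", "hr_query", "finance_request"}
REQUIRED_FIELDS = ("category", "owner", "ci_id", "updated_at")
STALE_AFTER = timedelta(days=180)

def quality_issues(record: dict) -> list[str]:
    """Return the data quality issues found on a single case or knowledge record."""
    issues = [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    if record.get("category") and record["category"] not in STANDARD_CATEGORIES:
        issues.append(f"non-standard category '{record['category']}'")
    if record.get("updated_at"):
        age = datetime.now(timezone.utc) - datetime.fromisoformat(record["updated_at"])
        if age > STALE_AFTER:
            issues.append(f"stale content ({age.days} days since last update)")
    return issues

print(quality_issues({"category": "laptops", "owner": "", "ci_id": "CI-042", "updated_at": "2024-01-15T00:00:00+00:00"}))
```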
Evidence each iteration with a short value note: template use, routing accuracy, CI link health, self‑service resolution, deflection and automation coverage.
Phase 3 — Enable live chat in portal and Teams
Expose the improved catalog, knowledge and automations through the portal and Teams. Run a pilot with champions, train the team, then release when quality holds. Measure the basics every week: first‑contact resolution, time to answer, hand‑offs, and user feedback. Use what you learn to pick the next five to ten fixes (content gaps, new automations, routing tweaks). Keep ops and finance on the same page with cost & operational dashboards.
Phase 4 — AI Agent Deployment
Add controlled autonomy once the foundations are in place. Pick models with clear controls and define how assistants behave: where they can act, when they need approval, and how actions are audited and rolled back. Start with two assistants—Employee Navigator and Service Collaborator—in a time-boxed pilot. Track cost and quality, adjust prompts and policies, and iterate until the results are steady.
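One hedged way to express those decision lanes is a small policy table plus an audit trail, sketched below. The action names, risk thresholds and policy structure are assumptions for illustration; they are not a reference design for any particular agent platform.

```python
# Illustrative only: one way to express decision lanes for an assistant.
# The action names, thresholds and policy table are assumptions, not a
# reference design for any particular agent platform.
from dataclasses import dataclass, field
from datetime import datetime, timezone

POLICY = {
    "reset_password": {"lane": "auto",     "max_risk": 1},
    "grant_access":   {"lane": "approval", "max_risk": 2},
    "change_payroll": {"lane": "blocked",  "max_risk": 0},
}

@dataclass
class AuditTrail:
    entries: list[dict] = field(default_factory=list)

    def record(self, action: str, decision: str, reason: str) -> None:
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action, "decision": decision, "reason": reason,
        })

def decide(action: str, risk: int, audit: AuditTrail) -> str:
    """Return 'execute', 'escalate' or 'refuse', logging the decision for review and rollback."""
    rule = POLICY.get(action, {"lane": "approval", "max_risk": 0})
    if rule["lane"] == "blocked" or risk > rule["max_risk"]:
        decision = "refuse" if rule["lane"] == "blocked" else "escalate"
    else:
        decision = "execute" if rule["lane"] == "auto" else "escalate"
    audit.record(action, decision, f"lane={rule['lane']}, risk={risk}")
    return decision

audit = AuditTrail()
print(decide("reset_password", risk=1, audit=audit))   # execute
print(decide("grant_access", risk=1, audit=audit))     # escalate for approval
print(decide("change_payroll", risk=0, audit=audit))   # refuse
```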
Then expand in small steps so learning carries forward. Keep standards tight as you scale: shared categories, usable templates, healthy CI links and disciplined knowledge. Re-run AI Talos each quarter to show movement against KPIs, refresh the baseline and set the next backlog. This keeps progress visible and funding on track.
Iterate across all phases: short cycles, visible progress, repeatable results
Work in short cycles. Publish a simple plan for the next few items, deliver them, and share the before‑and‑after. Each cycle cleans more data, stabilises workflows and widens automation. Re‑run the AI Talos scorecards quarterly to benchmark progress, highlight measurable outcomes, and showcase value to sponsors. This visible rhythm builds confidence and unlocks fresh support and funding for the next phase.
Conclusion
AI readiness is an ongoing journey that evolves with every phase of progress. Here’s a clear route: diagnose and benchmark your starting point, fix the blockers, and run a phased plan that brings AI into live service operations with discipline and confidence.
Quarterly benchmarking through AI Talos keeps the story visible. It shows measurable gains, gives leaders evidence to sustain investment, and helps teams see how their work connects to real business outcomes. Over time, this rhythm embeds AI into the operating model and makes service management more efficient, consistent and relevant to the enterprise.
About the authors
Muther Pola — Head of AI at Fusion GBS and a leader in the company’s AI programme for over eight years. He drives innovation, governance, and applied AI adoption across service management environments.
Keyvan Shirnia — Chief Strategy Officer at Fusion GBS, guiding strategy, growth, and customer value realisation with a focus on service management and digital transformation.