Black Book CORE KPIs 2026
This guide blends concise narratives with role-based application guidance, evidence checklists, and satisfaction scoring tiers (1–10). Use it alongside your scoring tables to record actual results for each vendor across all 18 cross-industry qualitative KPIs.
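For teams that roll the 18 per-KPI scores into a single vendor figure, a minimal sketch follows. The weighting scheme, KPI labels, and example scores are hypothetical placeholders; whether and how to weight KPIs is a local decision.

```python
# Illustrative sketch: rolling per-KPI scores (1-10) into one weighted
# composite per vendor. KPI labels, weights, and scores are hypothetical.

def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of KPI scores, normalized by the total weight used."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Equal weights, except this (hypothetical) buyer double-weights KPI 2.
weights = {f"KPI {i}": 1.0 for i in range(1, 19)}
weights["KPI 2"] = 2.0

vendor = {f"KPI {i}": 7.0 for i in range(1, 19)}  # placeholder ratings
vendor["KPI 2"] = 4.0

print(f"Composite score: {composite_score(vendor, weights):.2f}")  # ~6.68
```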
KPI 1: Strategic Fit & Use Case Alignment
Overview
This measures how well the vendor’s solution advances the client’s mission, strategy, and priority use cases. Apply by comparing capabilities and roadmap to documented clinical, operational, and financial goals and verifying that funded work maps to those goals. Score 1–10 based on clarity of use‑case alignment, evidence that governance steers the roadmap, and the degree to which delivered features demonstrably advance stated objectives.
How to Apply (Role Perspectives)
Users: Does the product clearly help you hit your unit’s goals (quality, throughput, revenue integrity, patient experience), or do you work around it?
Buyers: Did the vendor map capabilities to the organization’s strategic priorities and keep that mapping alive in quarterly business reviews (QBRs)?
What to Review (Evidence)
- exec sponsor/QBR decks
- use‑case maps
- win/loss notes
- approved success criteria
Satisfaction Scoring (1–10)
- 1–2: Capabilities misaligned with stated goals; features distract; vendor pushes generic roadmap over your needs.
- 3–4: Some alignment on paper, little day‑to‑day relevance; priorities drift without correction.
- 5–6: Mapping to 2–3 priority use cases; periodic checkpoints; moderate realized relevance.
- 7–8: Clear line of sight from features to goals; trade‑offs managed in governance; roadmap reflects your priorities.
- 9–10: Co‑created portfolio with measurable impacts; vendor anticipates needs and brings options early.
KPI 2: Outcome Realization & Value Proof
Overview
This measures whether promised outcomes are achieved and sustained across quality, access, cost, revenue, and experience. Apply by validating baselines, measuring pre/post deltas, confirming time‑to‑value, and checking durability of results over 6–12 months. Score 1–10 on realized ROI versus commitments, breadth of outcomes across multiple domains, and independent or auditable proof of performance.
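To make the pre/post comparison and ROI check concrete, here is a minimal sketch; the metric names, percentages, and dollar figures are hypothetical.

```python
# Illustrative sketch: pre/post delta against a validated baseline, plus
# realized-vs-promised ROI. All metric names and figures are hypothetical.

def pct_delta(baseline: float, post: float) -> float:
    """Relative change from baseline, in percent (negative = reduction)."""
    return (post - baseline) / baseline * 100

# e.g., claim-denial rate fell from 9.2% to 6.9% after go-live
print(f"Denial-rate delta: {pct_delta(9.2, 6.9):+.1f}%")  # -25.0%

def simple_roi(annual_benefit: float, annual_cost: float) -> float:
    return (annual_benefit - annual_cost) / annual_cost

promised = 2.0  # ROI multiple committed in the contract (hypothetical)
realized = simple_roi(annual_benefit=1_800_000, annual_cost=750_000)
print(f"ROI: realized {realized:.2f}x vs promised {promised:.2f}x "
      f"-> {'met' if realized >= promised else 'missed'}")
```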
How to Apply (Role Perspectives)
Users: Did your metrics actually move (time saved, errors avoided, denials prevented)? How soon?
Buyers: Did the vendor meet promised outcomes/ROI and share auditable evidence?
What to Review (Evidence)
- before/after metrics
- value‑realization dashboards
- independent audits
- outcomes‑based payments
Satisfaction Scoring (1–10)
- 1–2: Outcomes not realized; negative value or hidden costs.
- 3–4: Small, isolated wins; no sustained improvement; evidence anecdotal.
- 5–6: Several measurable improvements sustained ≥6–12 months; credible value story.
- 7–8: Broad improvements in ≥2 domains; reasonable time‑to‑value; vendor participates in risk/reward.
- 9–10: Durable, audited ROI with repeatability across sites; proactive expansion of the value envelope.
KPI 3: Workflow & Human Factors Fit
Overview
This measures how the product eases daily work, reduces cognitive load, and supports safe, efficient workflows. Apply by reviewing time‑motion data, error/override trends, usability findings, and accessibility conformance in real settings. Score 1–10 on measurable task‑time reduction, lower error/override rates and alert fatigue, and positive user feedback sustained after go‑live.
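Where time-motion or click-count data exists, a short sketch like the one below (with hypothetical sample times) turns it into the task-time-reduction evidence this KPI looks for.

```python
# Illustrative sketch: task-time reduction from pre/post time-motion samples.
# The sample values (seconds per task) are hypothetical.
from statistics import median

before = [142, 155, 138, 160, 149, 151]  # seconds per task, pre go-live
after = [118, 104, 111, 122, 109, 115]   # same task, post go-live

reduction = (median(before) - median(after)) / median(before) * 100
print(f"Median task time: {median(before):.0f}s -> {median(after):.0f}s "
      f"({reduction:.0f}% reduction)")
```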
How to Apply (Role Perspectives)
Users: Do tasks take fewer steps/clicks? Is cognitive load lower? Can new staff learn quickly?
Buyers: Were real workflows studied and reflected in the design (personas, accessibility, safety)?
What to Review (Evidence)
- time‑motion or click counts
- error/override logs
- accessibility checks
- usability findings
Satisfaction Scoring (1–10)
- 1–2: Increases workload; frequent workarounds; safety/usability concerns.
- 3–4: Net‑neutral or inconsistent; training compensates for design gaps.
- 5–6: Noticeable friction reduction in core tasks; acceptable learning curve.
- 7–8: Intuitive, role‑optimized workflows; measurable task‑time/error reductions; strong accessibility.
- 9–10: Best‑in‑class usability; users advocate for it; alert fatigue reduced.
KPI 4: Adoption & Change Enablement Maturity
Overview
This measures the vendor’s ability to drive broad, durable adoption without heroics. Apply by assessing role‑based training, super‑user networks, enablement playbooks, and adoption curves by persona and site. Score 1–10 on hitting adoption milestones on schedule, quality of enablement materials and coaching, and minimal post‑go‑live drop‑off.
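One way to surface post-go-live drop-off from adoption data is sketched below; the personas and weekly active-user counts are hypothetical.

```python
# Illustrative sketch: post-go-live drop-off by persona from weekly
# active-user counts. Personas and counts are hypothetical.

weekly_active = {
    "nurses":     [210, 260, 300, 295, 290],  # weeks 1-5 after go-live
    "physicians": [120, 150, 155, 110, 95],
}

for persona, counts in weekly_active.items():
    peak, latest = max(counts), counts[-1]
    drop_off = (peak - latest) / peak * 100
    print(f"{persona}: peak {peak}, latest {latest}, drop-off {drop_off:.0f}%")
```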
How to Apply (Role Perspectives)
Users: Were you supported by clear role training, super‑users, and job aids?
Buyers: Was adoption planned and measured beyond go‑live?
What to Review (Evidence)
- enablement plan
- certification/completion data
- adoption curves by persona/site
Satisfaction Scoring (1–10)
- 1–2: “Train and pray”; poor materials; adoption stalls.
- 3–4: Basic training delivered; limited follow‑through.
- 5–6: Structured change kit; super‑users; clear adoption metrics.
- 7–8: Rapid adoption across roles; targeted refreshers; data‑driven interventions.
- 9–10: Consistent S‑curve adoption across multiple sites; adoption excellence is repeatable.
KPI 5: Interoperability Maturity & Network Participation
Overview
This measures scalable, standards‑based data exchange and practical participation in relevant networks. Apply by examining API coverage, interface catalogs, onboarding lead times, error rates, and data completeness at scale. Score 1–10 on speed to onboard partners, percentage of flows using standards, reliability and volume of exchange, and evidence of active network participation.
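Two of the numbers this KPI asks for, the share of standards-based flows and onboarding lead time, can be pulled straight from an interface catalog; a minimal sketch with hypothetical entries:

```python
# Illustrative sketch: share of flows using standards and median onboarding
# lead time, from a hypothetical interface catalog.
from statistics import median

catalog = [
    {"flow": "ADT feed",       "standard": True,  "onboard_days": 21},
    {"flow": "lab results",    "standard": True,  "onboard_days": 14},
    {"flow": "claims extract", "standard": False, "onboard_days": 60},
    {"flow": "referrals",      "standard": True,  "onboard_days": 30},
]

standards_pct = sum(f["standard"] for f in catalog) / len(catalog) * 100
lead_time = median(f["onboard_days"] for f in catalog)
print(f"Standards-based flows: {standards_pct:.0f}%; "
      f"median onboarding: {lead_time} days")
```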
How to Apply (Role Perspectives)
Users: Do data exchanges work reliably without manual rework?
Buyers: What is the time/effort to connect new partners and participate in relevant networks?
What to Review (Evidence)
- API/interface catalogs
- connection lead times
- error rates/volume
- network participation proofs
Satisfaction Scoring (1–10)
- 1–2: Proprietary/fragile interfaces; long delays; frequent data defects.
- 3–4: Standards used inconsistently; heavy custom work; slow onboarding.
- 5–6: Repeatable standards‑based flows; reasonable onboarding times; clear support model.
- 7–8: Connect‑once patterns; public APIs; partner marketplace; reliable bidirectional flows.
- 9–10: Scalable, low‑touch interoperability; strong network participation; measurable value from data liquidity.
KPI 6: Data Quality, Provenance & Observability
Overview
This measures the trustworthiness, traceability, and monitorability of data powering workflows, analytics, and AI. Apply by reviewing lineage maps, data contracts, quality SLOs, drift/quality dashboards, and incident remediation MTTR. Score 1–10 on low defect rates, current and complete provenance, fast detection and correction of issues, and consistent data‑quality SLA adherence.
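A minimal sketch of a data-quality SLO check and incident MTTR follows, assuming a hypothetical completeness SLO, feed readings, and incident durations.

```python
# Illustrative sketch: completeness-SLO check per feed plus data-incident
# MTTR. The SLO threshold, feeds, and durations are hypothetical.
from statistics import mean

slo = 0.98  # assumed SLO: >=98% of records complete

feeds = {"lab_results": 0.994, "med_orders": 0.971, "adt": 0.989}
for feed, completeness in feeds.items():
    status = "OK" if completeness >= slo else "SLO BREACH"
    print(f"{feed}: {completeness:.1%} complete -> {status}")

incident_hours = [3.5, 1.0, 8.0, 2.5]  # detection-to-correction, per incident
print(f"Data-incident MTTR: {mean(incident_hours):.1f}h")
```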
How to Apply (Role Perspectives)
Users: Can you trust the data you’re using? Can issues be traced and fixed quickly?
Buyers: Are lineage, data contracts, and quality SLAs visible and enforced?
What to Review (Evidence)
- lineage maps
- data quality dashboards
- incident RCAs
- data SLAs/SLOs
Satisfaction Scoring (1–10)
- 1–2: Frequent data defects; unclear root cause; long MTTR.
- 3–4: Basic rules exist; quality varies by feed/source.
- 5–6: Lineage known; alerting works; defects trending down.
- 7–8: Automated monitoring and remediation; data SLAs met; quality is a standing agenda item.
- 9–10: High reliability and traceability at scale; downstream analytics/AI consistently trustworthy.
KPI 7: Privacy, Consent & Data Rights Posture
Overview
This measures how explicitly and respectfully the vendor handles consent, access, IP, and portability. Apply by reviewing DPAs, consent models and audits, real egress/portability requests, and least‑privilege access patterns. Score 1–10 on clarity and fairness of terms, verifiable audit trails, fast and low‑friction egress/portability, and an incident‑free privacy record.
How to Apply (Role Perspectives)
Users: Are consent and access handled respectfully and transparently?
Buyers: Are data/IP rights, portability, and egress practical—not just contractual?
What to Review (Evidence)
- consent/audit extracts
- DPA clauses
- egress requests
- de‑identification process documents
Satisfaction Scoring (1–10)
- 1–2: Opaque data use; consent violations or slow/blocked egress.
- 3–4: Policy exists but execution inconsistent; disputes over rights.
- 5–6: Clear consent models; timely audits/egress; reasonable terms.
- 7–8: Configurable consent; smooth portability; proactive privacy guidance.
- 9–10: Exemplary posture; egress/consent changes are routine and fast; zero adverse findings.
KPI 8: Cybersecurity & Software Supply Chain Resilience
Overview
This measures the strength of preventative controls, supplier risk management, incident response, and recovery. Apply by examining security program alignment, SBOMs/secure‑SDLC evidence, patch SLAs, tabletop/DR test results, and post‑incident RCAs. Score 1–10 on timely patching, recovery that meets stated RTO/RPO, transparent communications, and demonstrated control over third‑party dependencies.
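Patch timeliness is one of the few directly countable signals here; the sketch below checks a hypothetical patch log against assumed SLA windows by severity.

```python
# Illustrative sketch: patch-SLA adherence by severity from a patch log.
# The SLA windows (days) and log entries are hypothetical.

sla_days = {"critical": 7, "high": 30, "medium": 90}
patch_log = [  # (severity, days from disclosure to patch)
    ("critical", 5), ("critical", 12),
    ("high", 18), ("high", 25),
    ("medium", 40),
]

within = sum(days <= sla_days[sev] for sev, days in patch_log)
print(f"Patch SLA adherence: {within}/{len(patch_log)} "
      f"({within / len(patch_log):.0%})")
```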
How to Apply (Role Perspectives)
Users: How did they perform in real incidents? Were you informed and protected?
Buyers: Do they show tested controls, SBOMs, and recovery that meets RTO/RPO?
What to Review (Evidence)
- pen‑test/red‑team summaries
- incident notifications/RCAs
- recovery tests
- patch SLAs
Satisfaction Scoring (1–10)
- 1–2: Material incidents with poor response; prolonged exposure/impact.
- 3–4: Reactive posture; slow patching; limited transparency.
- 5–6: Documented controls; timely patches; credible tabletop tests.
- 7–8: Threat‑informed program including suppliers; proven recovery; strong communications.
- 9–10: Exemplary resilience; rapid containment/recovery; independent attestations; sets the bar for others.
KPI 9: Reliability & Business Continuity
Overview
This measures real‑world uptime, graceful degradation, and continuity during failures. Apply by reviewing SLO/uptime history, outage communications, validated downtime modes, and disaster‑recovery tests. Score 1–10 on actual SLO attainment, low MTTR and variance, safe and usable downtime modes, and successful continuity exercises.
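SLO attainment can be verified from outage minutes alone; a minimal sketch, assuming a 99.9% availability SLO and a hypothetical outage log:

```python
# Illustrative sketch: monthly uptime vs. a stated availability SLO.
# The SLO target and outage minutes are hypothetical.

slo_target = 99.9  # percent; allows ~43 minutes of downtime in a 30-day month
minutes_in_month = 30 * 24 * 60

outage_minutes = [12, 35]  # unplanned downtime per incident this month
uptime = (minutes_in_month - sum(outage_minutes)) / minutes_in_month * 100

print(f"Uptime: {uptime:.3f}% vs SLO {slo_target}% -> "
      f"{'met' if uptime >= slo_target else 'missed'}")
```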
How to Apply (Role Perspectives)
Users: Did you experience stable uptime and clear communications during issues?
Buyers: Are SLOs met and downtime modes safe/useful?
What to Review (Evidence)
- status/SLO history
- outage communications
- DR test results
- harm reviews
Satisfaction Scoring (1–10)
- 1–2: Chronic downtime; unsafe or unusable during outages.
- 3–4: Meets some SLOs inconsistently; communications lacking.
- 5–6: SLOs generally met; planned maintenance well managed.
- 7–8: Robust architecture with graceful degradation; fast recovery and transparent comms.
- 9–10: Rare, short outages; validated continuity; near‑zero user impact.
KPI 10: Scalability, Performance & Cost Efficiency
Overview
This measures predictable performance at volume and transparency of unit economics as scale grows. Apply by assessing load/performance results, capacity models, peak‑period behavior, and cost dashboards tied to usage. Score 1–10 on throughput and latency compliance at enterprise scale, absence of degradation under spikes, and clear, predictable cost drivers.
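Latency compliance and unit economics both reduce to simple arithmetic once the raw data is in hand; a sketch with hypothetical latency samples, target, and cost figures:

```python
# Illustrative sketch: nearest-rank p95 latency against a target, plus cost
# per transaction. Samples, target, and cost figures are hypothetical.
import math

latencies_ms = sorted([120, 135, 180, 210, 95, 160, 450, 140, 155, 170])
p95 = latencies_ms[math.ceil(len(latencies_ms) * 0.95) - 1]  # nearest-rank
print(f"p95 latency: {p95}ms vs 300ms target -> "
      f"{'OK' if p95 <= 300 else 'breach'}")

monthly_cost, monthly_txns = 48_000, 3_200_000
print(f"Unit cost: ${monthly_cost / monthly_txns * 100:.2f} per 100 transactions")
```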
How to Apply (Role Perspectives)
Users: Does performance hold at peak loads?
Buyers: Do costs scale predictably with usage/footprint?
What to Review (Evidence)
- load/performance results
- concurrency behavior
- cost dashboards
- capacity forecasts
Satisfaction Scoring (1–10)
- 1–2: Performance collapses at scale; unpredictable costs.
- 3–4: Acceptable most days; peaks painful; cost surprises.
- 5–6: Predictable performance within stated limits; cost drivers clear.
- 7–8: Elastic scaling; proactive capacity planning; FinOps guidance.
- 9–10: Smooth at enterprise scale; improving unit economics as you grow.
KPI 11: Platform Extensibility & Openness
Overview
This measures the ability to extend safely using APIs/SDKs/events and to upgrade without breakage. Apply by reviewing developer portal quality, versioned API coverage, event catalogs, partner ecosystem activity, and upgrade history. Score 1–10 on breadth and stability of APIs, time‑to‑extend for typical use cases, upgrade resilience of extensions, and evidence of an active ecosystem.
How to Apply (Role Perspectives)
Users: Can you safely extend/automate without breaking upgrades?
Buyers: Are APIs/SDKs/docs solid and versioned? Is there an ecosystem?
What to Review (Evidence)
- developer portal
- API coverage
- event catalogs
- partner solutions
- upgrade history
Satisfaction Scoring (1–10)
- 1–2: Closed or brittle; extensions break frequently.
- 3–4: Some APIs; gaps force heavy customization.
- 5–6: Documented, stable APIs; safe configuration patterns.
- 7–8: Events/SDKs; marketplace; short extension time with guardrails.
- 9–10: Rich ecosystem; low‑code/no‑code options; upgrades rarely require rework.
KPI 12: Implementation Velocity & Time to First Value
Overview
This measures the speed and predictability of delivery from kickoff to first measurable win. Apply by comparing plan vs actual timelines, tracking critical‑path blockers, and verifying accelerators and prebuilt integrations were used. Score 1–10 on on‑time go‑lives, short time‑to‑first‑value with objective evidence, and minimal change orders or rework.
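Plan-versus-actual slippage and time-to-first-value fall out of a handful of milestone dates; a minimal sketch with hypothetical dates:

```python
# Illustrative sketch: go-live slippage and time-to-first-value from
# milestone dates. All dates are hypothetical.
from datetime import date

kickoff = date(2026, 1, 12)
planned_go_live = date(2026, 5, 1)
actual_go_live = date(2026, 5, 15)
first_value = date(2026, 6, 2)  # first objectively evidenced win

slip = (actual_go_live - planned_go_live).days
ttfv = (first_value - kickoff).days
print(f"Go-live slip: {slip} days; time to first value: {ttfv} days")
```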
How to Apply (Role Perspectives)
Users: How long from kickoff to your first real win?
Buyers: Were timelines realistic and hit without excessive heroics?
What to Review (Evidence)
- project plan vs actuals
- critical‑path blockers
- go‑live retros
- first‑value metrics
Satisfaction Scoring (1–10)
- 1–2: Major slippage; value unrealized; scope chaos.
- 3–4: Late and effort‑heavy; value realized only after remediation.
- 5–6: On time with manageable variances; first measurable win within expected window.
- 7–8: Predictable delivery; accelerators and prebuilt integrations shorten time‑to‑value.
- 9–10: Consistently fast across sites; early, repeatable first value.
KPI 13: Support Experience & Service Recovery
Overview
This measures responsiveness, resolution quality, and learning from incidents. Apply by analyzing SLA adherence, MTTR, first‑contact resolution, recurrence rates, RCAs, and proactive health checks. Score 1–10 on fast and final fixes, transparent communications and RCAs, declining incident trends, and prevention of repeat issues.
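MTTR and recurrence can be computed straight from a ticket extract; a sketch assuming hypothetical ticket fields and values:

```python
# Illustrative sketch: support MTTR and reopen rate from a ticket extract.
# Ticket fields and values are hypothetical.
from statistics import mean

tickets = [
    {"hours_to_resolve": 4.0,  "reopened": False},
    {"hours_to_resolve": 26.5, "reopened": True},
    {"hours_to_resolve": 2.0,  "reopened": False},
    {"hours_to_resolve": 9.5,  "reopened": False},
]

mttr = mean(t["hours_to_resolve"] for t in tickets)
reopen_rate = sum(t["reopened"] for t in tickets) / len(tickets)
print(f"MTTR: {mttr:.1f}h; reopen rate: {reopen_rate:.0%}")
```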
How to Apply (Role Perspectives)
Users: Are issues resolved quickly and finally, and do you feel looked after?
Buyers: Are SLAs met, RCAs shared, and systemic fixes implemented?
What to Review (Evidence)
- MTTR/reopen rates
- backlog transparency
- health checks
- RCA library
Satisfaction Scoring (1–10)
- 1–2: Slow, dismissive; issues recur; SLA breaches common.
- 3–4: Variable responsiveness; shallow fixes.
- 5–6: Timely resolutions; decent communications; improving trend.
- 7–8: Proactive support; RCAs drive product changes; fewer/higher‑quality tickets.
- 9–10: White‑glove experience; issues prevented before they are noticed; recovery exceeds promises.
KPI 14: Trust, Ethics & Cultural Fit
Overview
This measures transparency, ethical conduct, and partnership behaviors under pressure. Apply by reviewing negotiation and escalation conduct, roadmap candor, acknowledgement of limitations, and governance of AI or decision support. Score 1–10 on honesty and follow‑through, fairness in commercial/operational dealings, and credible leadership engagement in critical moments.
How to Apply (Role Perspectives)
Users: Are interactions respectful, transparent, and ethical—especially on limitations and data use?
Buyers: Do leaders show up in escalations and honor commitments?
What to Review (Evidence)
- escalation history
- contract/negotiation conduct
- transparency on limits
- ethics/AI governance evidence
Satisfaction Scoring (1–10)
- 1–2: Misrepresentation, evasiveness, or unethical behavior.
- 3–4: Over‑promising; hardball tactics; slow transparency.
- 5–6: Generally fair and honest; responsive leadership in issues.
- 7–8: Proactive transparency; clear trade‑offs; strong partnership behaviors.
- 9–10: Exemplary integrity under pressure; vendor advocates for your interests.
KPI 15: Talent, Delivery Capacity & Domain Expertise
Overview
This measures the capability, stability, and healthcare fluency of the vendor team. Apply by checking certifications, résumés and clinical SMEs, turnover and continuity, productivity/quality metrics, and subcontractor governance. Score 1–10 on low team churn, proven domain expertise, ability to surge without quality loss, and effective knowledge transfer.
How to Apply (Role Perspectives)
Users: Are vendor staff capable and stable, and do they understand your clinical/ops context?
Buyers: Is there depth on the bench and continuity across phases?
What to Review (Evidence)
- team résumés/certs
- turnover
- role coverage
- subcontractor oversight
- knowledge transfer
Satisfaction Scoring (1–10)
- 1–2: Frequent team churn; weak expertise; dropped handoffs.
- 3–4: Adequate delivery but dependent on a few individuals.
- 5–6: Competent, stable team; domain specialists available.
- 7–8: Strong cross‑functional bench; smooth transitions; measurable productivity/quality.
- 9–10: Elite, low‑churn teams; thought leadership; they uplevel your staff.
KPI 16: Roadmap Credibility & Innovation Cadence
Overview
This measures how reliably the vendor ships what it promises and whether innovations matter in practice. Apply by tracking shipped‑versus‑committed hit rate, release cadence/quality, co‑design with customers, and deprecation discipline. Score 1–10 on a high commitment hit rate, rapid adoption of new capabilities without regressions, and visible customer‑driven innovation.
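The shipped-versus-committed hit rate is a straight ratio over recent roadmap cycles; a sketch with hypothetical commitments:

```python
# Illustrative sketch: shipped-vs-committed hit rate over recent cycles.
# Feature names and outcomes are hypothetical.

commitments = [
    ("FHIR bulk export", True),
    ("role-based dashboards", True),
    ("offline mode", False),  # slipped to the next cycle
    ("e-signature workflow", True),
]

hit_rate = sum(shipped for _, shipped in commitments) / len(commitments)
print(f"Shipped-vs-committed hit rate: {hit_rate:.0%}")  # 75%
```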
How to Apply (Role Perspectives)
Users: Do releases solve real problems without breaking things?
Buyers: Do they ship what they promise, when they promise?
What to Review (Evidence)
- shipped‑vs‑committed hit rate
- release notes cadence
- beta programs
- deprecation policy execution
Satisfaction Scoring (1–10)
- 1–2: Slip after slip; flashy promises, little delivery.
- 3–4: Some delivery but misses on timing/quality; frequent regressions.
- 5–6: Reliable cadence; majority of commitments hit.
- 7–8: High hit rate (~80%+); customer‑coauthored features; safe rollouts.
- 9–10: Predictable, high‑impact innovation; operations can plan around cadence.
KPI 17: Regulatory & Compliance Agility
Overview
This measures timeliness and thoroughness in meeting evolving regulatory requirements. Apply by reviewing certification status, regulatory release notes, testing evidence, and clarity of customer guidance. Score 1–10 on on‑time compliance with minimal disruption, clean audit outcomes, and proactive education that prepares users ahead of deadlines.
How to Apply (Role Perspectives)
Users: Did regulatory changes disrupt you?
Buyers: Did the vendor anticipate and ship compliant updates on time with clear guidance?
What to Review (Evidence)
- certification roster
- regulatory release notes
- policy trackers
- customer education
Satisfaction Scoring (1–10)
- 1–2: Missed/late compliance; operational risk created.
- 3–4: Bare‑minimum compliance; late guidance.
- 5–6: Timely updates; workable guidance; auditors satisfied.
- 7–8: Proactive road‑mapping for upcoming regs; smooth deployments.
- 9–10: Flawless track record; trusted advisor on regulatory change.
KPI 18: Commercial Integrity & Sustainability
Overview
This measures fairness and predictability in pricing, renewals, and exit/egress, backed by financial health. Apply by analyzing price‑increase history, invoice accuracy, contract transparency on egress, and examples of executed exits or make‑goods. Score 1–10 on transparent terms with no surprises, predictable total cost of ownership, and clean, timely egress without lock‑in harm.
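Price-increase history compresses into an annualized uplift figure for the predictability check; a minimal sketch with a hypothetical renewal history:

```python
# Illustrative sketch: annualized uplift (CAGR) from a renewal price history.
# The price history is hypothetical.

prices = [100_000, 104_000, 112_000, 123_000]  # annual fee at each renewal
years = len(prices) - 1
cagr = (prices[-1] / prices[0]) ** (1 / years) - 1
print(f"Average annual uplift: {cagr:.1%} over {years} renewals")  # ~7.1%
```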
How to Apply (Role Perspectives)
Users: Do commercial policies feel fair in practice (uplifts, true‑ups, overages)?
Buyers: Are pricing, renewals, and exit/egress transparent and predictable?
What to Review (Evidence)
- price‑increase history
- unit‑economics/usage reports
- renewal terms
- egress SLAs and real egress examples
Satisfaction Scoring (1–10)
- 1–2: Opaque pricing; lock‑in tactics; punitive terms; contested invoices.
- 3–4: Renewal surprises; heavy uplifts; egress unclear or slow.
- 5–6: Transparent price books; reasonable uplifts; codified egress that works.
- 7–8: Outcome‑linked options; collaborative renewals; clean exits elsewhere.
- 9–10: Exemplary fairness; renewals earned on value; painless exits when needed.
