The Tools Nobody Approved
Across the Medicare Advantage industry, coding teams are using AI tools that their CIOs never vetted, their compliance teams never reviewed, and their security officers never approved. Browser-based AI assistants. Free NLP tools that summarize clinical notes. ChatGPT prompts that generate coding suggestions from pasted chart text. These shadow AI tools proliferate because coding teams are under pressure, the approved systems don’t solve every problem, and the unapproved ones are a browser tab away.
The risk is substantial. When a coder pastes protected health information into an external AI tool, the plan loses control of that data. When a coding decision is influenced by an ungoverned AI recommendation, the plan has no audit trail, no explainability documentation, and no ability to verify the reasoning behind the suggestion. If CMS audits the resulting code, the plan can’t produce evidence of how the decision was made because the decision was made in a tool nobody tracks.
CMS addressed this directly in its January 2026 HPMS memo, specifying that AI should serve as a “medical coder support tool” with human final determinations. Shadow AI tools don’t meet this standard because they operate outside the plan’s governance framework. The AI isn’t governed. The human isn’t making a documented determination. The process isn’t auditable.
Why Shadow AI Spreads
Shadow AI isn’t a discipline problem. It’s a gap problem. Approved coding systems have limitations that coders work around. Some systems are slow. Some don’t handle complex multi-condition charts well. Some produce recommendations without adequate context. Coders, who are evaluated on throughput and accuracy, find tools that fill the gaps their approved systems leave open.
The solution isn’t banning external tools through IT policy. It’s building approved systems that eliminate the reasons coders reach for unauthorized alternatives. If the approved HCC coding system provides evidence-mapped MEAT validation, pre-submission defensibility scoring, and two-way coding capability in a fast, intuitive interface, the incentive to use shadow tools disappears. Coders use unauthorized AI because the authorized AI doesn’t do enough.
CIOs should audit their coding teams’ actual tool usage, not just their approved tool stack. The gap between what’s sanctioned and what’s used reveals exactly where the approved technology falls short and where ungoverned AI is making coding decisions without oversight.
Governance That Works With Coders, Not Against Them
Effective AI governance in risk adjustment requires three elements. First, the approved system must cover the full coding workflow with enough capability that coders don’t need to look elsewhere. Evidence extraction, MEAT mapping, defensibility scoring, and two-way review should all live in the governed environment. Second, the governance framework must include audit trails for every AI-assisted decision, not just the final code submission. Third, the system must be fast enough that governance doesn’t become a productivity tax. Tools that add compliance value but slow coders down will get bypassed.
The goal is an environment where the governed tool is also the best tool. When explainability, speed, and accuracy converge in the approved system, shadow AI becomes unnecessary rather than prohibited.
The CIO’s Action Item
Plans that haven’t audited their coding teams for shadow AI usage are carrying unquantified risk in data security, coding defensibility, and regulatory compliance. Any HCC coding software strategy in 2026 must account for the ungoverned AI problem, either by building systems capable enough to eliminate the need for unauthorized tools or by accepting that coding decisions are being influenced by AI that nobody controls, nobody audits, and nobody can explain to a regulator.
