The Safest Code Is the One That Starts With a Patient Visit
There’s a defensibility hierarchy in Medicare Advantage risk adjustment, and it runs in one direction. A diagnosis documented during a clinical encounter, by a provider who examined the patient and made treatment decisions, carries the highest level of audit protection. A diagnosis identified during a retrospective chart review, months after the encounter, by a coder interpreting historical documentation, carries less. A diagnosis from a health risk assessment disconnected from ongoing clinical care carries the least.
Regulators have made this hierarchy explicit. The OIG’s February 2026 Industry-wide Compliance Program Guidance flagged health risk assessments generating diagnoses never considered in patient care as a high-risk practice. The DOJ’s enforcement actions against Kaiser ($556M) and Aetna ($117.7M, March 2026) both centered on coding practices that prioritized diagnosis capture over clinical grounding. The direction is clear: the closer a code is to actual patient care, the safer it is.
That makes prospective risk adjustment, where coding originates from real clinical encounters, the most defensible path forward for any plan building a long-term strategy.
Why Retrospective Alone Isn’t Enough
Retrospective chart review will always have a role. Plans need to reconcile what was documented with what was submitted. They need to find diagnoses that providers captured in notes but that never made it to claims. They need to identify and remove codes that lack adequate documentation support. Two-way retrospective review is a necessary compliance function.
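The two-way review described above reduces to a comparison in both directions: codes supported by the chart but never submitted, and codes submitted without documentation support. A minimal sketch, using hypothetical simplified inputs (sets of diagnosis codes rather than real chart and claims data):

```python
# Minimal sketch of a two-way retrospective review. Inputs are deliberately
# simplified: sets of ICD-10 codes standing in for chart documentation and
# submitted claims. A real pipeline would operate on structured encounter data.

def two_way_review(documented: set[str], submitted: set[str]) -> dict[str, set[str]]:
    """Compare chart-supported codes against submitted codes in both directions."""
    return {
        # Captured in the note but never submitted on a claim: candidate adds.
        "unsubmitted": documented - submitted,
        # Submitted without adequate documentation support: candidate deletes.
        "unsupported": submitted - documented,
    }

result = two_way_review(
    documented={"E11.9", "I10", "N18.3"},   # supported by the chart
    submitted={"E11.9", "I10", "F32.9"},    # sent on claims
)
print(result["unsubmitted"])  # {'N18.3'}
print(result["unsupported"])  # {'F32.9'}
```

The point of the sketch is that the review must run in both directions; a program that only looks for missed codes, and never for unsupported ones, is capture, not compliance.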
But retrospective review can only protect and clean what already exists. It can’t generate new, encounter-linked documentation. It can’t improve the quality of clinical notes after the visit is over. It can’t create the contemporaneous evidence trail that auditors value most. Retrospective coding is a safety net. Prospective coding is the foundation.
The distinction matters because CMS increasingly evaluates not just whether a diagnosis is accurate, but how it was generated. A code from a retrospective chart review six months after the encounter raises different questions than a code from a face-to-face visit where the provider documented the condition, ordered labs, and prescribed treatment, all on the same day.
Making Prospective Work Without Burning Out Providers
The history of prospective risk adjustment programs is littered with failures, and almost all of them trace back to the same mistake: treating providers as coding resources. Pop-up alerts during visits. HCC checklists unrelated to the reason for the encounter. Pressure to “close gaps” on conditions the provider wasn’t evaluating. These approaches generate provider pushback, documentation shortcuts, and codes that look suspicious to auditors.
Effective prospective programs use a three-phase approach. Before the visit, AI reviews the patient’s history and surfaces conditions needing clinical attention based on prior documentation, lab results, and medications. The provider gets clinical context, not a coding assignment. During the visit, decision support highlights documentation gaps rather than prescribing codes. After the visit, post-encounter review catches mismatches between what was documented and what was coded, flagging issues before submission.
The principle is decision support, not automation. The provider retains full clinical authority. The AI provides information. The coder validates the output. Nobody in the chain is pressured to code something the clinical evidence doesn’t support.
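The pre-visit phase above can be sketched in code. Everything here is illustrative: the rules are toy examples, not a clinical knowledge base, and the names (`PatientHistory`, `surface_conditions`) are hypothetical. The design point it demonstrates is that the output is plain-language clinical context for the provider, not diagnosis codes.

```python
# Illustrative sketch of the pre-visit phase: surface conditions that may
# need clinical attention based on prior medications and lab results.
# The two rules below are hypothetical toy examples; a real system would
# draw on validated clinical logic.

from dataclasses import dataclass

@dataclass
class PatientHistory:
    medications: set[str]
    labs: dict[str, float]  # most recent values by test name

def surface_conditions(history: PatientHistory) -> list[str]:
    """Return plain-language prompts for the provider, never code assignments."""
    prompts = []
    if "metformin" in history.medications:
        prompts.append("On metformin; diabetes status not re-evaluated this year?")
    egfr = history.labs.get("eGFR")
    if egfr is not None and egfr < 60:
        prompts.append(f"eGFR {egfr:g} suggests possible CKD; assess and document stage.")
    return prompts

prompts = surface_conditions(PatientHistory(
    medications={"metformin", "lisinopril"},
    labs={"eGFR": 48.0},
))
```

Note what the function does not return: an HCC, a code, or a gap to "close." The provider decides whether the prompt merits evaluation, which keeps clinical authority where the auditors expect to find it.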
Building the Foundation
Risk adjustment is shifting from a revenue function to a clinical discipline. The enforcement actions, model changes (V28 at 100% as of January 2026), and regulatory guidance of the past two years all confirm that direction. Plans that align their programs to this reality early gain a structural advantage that compounds over time.
The plans investing in prospective risk adjustment as their primary growth strategy are placing the right bet. They’re generating codes from clinical encounters, supported by contemporaneous documentation, validated by explainable AI, and connected to active care plans. That’s the standard CMS is enforcing. That’s the standard auditors apply. And it’s the standard that separates plans building defensible programs from plans still running programs built for a regulatory environment that no longer exists.