Autonomous AI coding is transforming the revenue cycle, particularly in radiology, where high volumes, complex documentation, and payer scrutiny collide. For many organizations, traditional computer-assisted coding (CAC) has reached its limits: it improves productivity but rarely delivers the promised direct-to-bill rates or the level of compliance that leadership expects from AI coding. Today, solutions built around autonomous medical coding offer a more advanced path forward, especially as radiology coding automation becomes essential for sustainable revenue cycle performance.

In a recent discussion, Stacie Buck, VP of Coding Compliance at Maverick Medical AI, and Michael Brozino, Co-Founder and CCO, explained how specialty-trained autonomous coding models can deliver both efficiency and robust compliance, without asking leaders to trade one for the other.

Why CAC Plateaued And What Comes Next

CAC has been in the market for more than 20 years. It was built on natural language processing and rules, designed to assist coders by highlighting keywords and suggesting codes. In theory, that should have driven major gains in throughput.

In practice, many organizations experienced something very different. Stacie shared that her prior company was an early CAC adopter in radiology, with a vendor promise that 70 percent of radiology encounters would be coded direct to bill. Three years later, they were still reviewing 100 percent of cases. Across the industry, direct-to-bill rates commonly hover around 30 percent, with some long-term users nudging up to 50 percent only after building hundreds or thousands of custom rules.

Instead of reducing dependence on human coders, CAC often added complexity and oversight burden. The technology could not reliably interpret full clinical context, and compliance teams were understandably hesitant to let it run unattended.

Autonomous coding changes that model. Rather than relying on rules and keywords, it uses specialty-trained AI to learn from historical coding patterns and full report context, then independently assigns complete, compliant codes. This shift is foundational to revenue cycle optimization in radiology, especially for groups aiming to achieve sustainable improvements.

How Autonomous Coding Learns From Your Reality

True autonomous coding is not rules-based. At Maverick, the model is trained on two years of a client’s historical coded data. The AI learns from the actual patterns of documentation and codes assigned by human coders, including all the nuances of that organization’s internal guidelines, payer-specific expectations, and regional practices.
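
As a rough mental model, the training setup looks something like the Python sketch below; the field names are illustrative assumptions for this article, not Maverick’s actual data model.

```python
# A minimal sketch of assembling supervised training pairs from historical
# coded encounters; field names are illustrative assumptions, not
# Maverick's actual schema.

def build_training_set(historical_encounters):
    """Pair each report's full text (input) with the codes human coders
    actually assigned (labels), so the model absorbs the organization's
    internal guidelines and payer-specific patterns."""
    return [
        {
            "input": enc["report_text"],
            "labels": {"cpt": enc["cpt_codes"], "icd10": enc["icd10_codes"]},
        }
        for enc in historical_encounters
    ]
```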

Because the model reads the entire report in context, it is not simply looking for trigger words. It is recognizing clinical meaning and mapping that to appropriate CPT and ICD-10 codes based on how the organization has historically coded similar cases. That is why Stacie, initially skeptical of claims like 85 percent direct to bill, ultimately helped design and validate Maverick’s engine and now routinely sees clients in the 85 to 92 percent direct-to-bill range at approximately 95 percent accuracy.

For sites where historical data quality is too poor to use for training, the out-of-the-box model can still achieve around 70 percent direct to bill, thanks to tens of millions of radiology reports already used to train Maverick’s specialty-specific language models.

Building Safeguards, Audits, And Trust

Compliance leaders rightly ask what safeguards exist from day one. Stacie is clear that AI coders must be audited just like human coders, on a monthly or quarterly cadence at minimum. The advantage is consistency: once the model learns to code a scenario correctly, it repeats that logic every time, rather than varying coder by coder.

Maverick bakes quality into the implementation process. Historical data is run through the model, then every variance between the AI output and human coding is reviewed. Stacie determines whether the AI or the human was correct, identifies documentation gaps, and collaborates with clients on policy-driven customization. Often, this process uncovers coding errors that prompt education and internal guideline updates.
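
To make that variance review concrete, here is a minimal Python sketch of the comparison step; the encounter fields and the code_report() callable are hypothetical stand-ins, not Maverick’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical variance review: run historical encounters back through the
# model and surface every disagreement with the human coding for adjudication
# (AI correct, human correct, or documentation gap).

@dataclass
class HistoricalEncounter:
    encounter_id: str
    report_text: str
    human_cpt: set        # CPT codes the human coder assigned
    human_icd10: set      # ICD-10 codes the human coder assigned

def review_variances(encounters, code_report):
    """code_report(text) is an assumed callable returning the AI's
    (cpt_set, icd10_set) for a report."""
    variances = []
    for enc in encounters:
        ai_cpt, ai_icd10 = code_report(enc.report_text)
        if ai_cpt != enc.human_cpt or ai_icd10 != enc.human_icd10:
            variances.append({
                "encounter_id": enc.encounter_id,
                "cpt_extra": ai_cpt - enc.human_cpt,      # AI added, human did not
                "cpt_missing": enc.human_cpt - ai_cpt,    # human added, AI did not
                "icd10_extra": ai_icd10 - enc.human_icd10,
                "icd10_missing": enc.human_icd10 - ai_icd10,
            })
    return variances
```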

When a site goes live, clients typically audit 100 percent of AI-coded encounters in week one. Stacie reviews every coder change at high frequency, feeds the findings back to R&D, and the team tunes the model accordingly. On an ongoing basis, Maverick performs quarterly audits of direct-to-bill encounters, while clients can set their own QA sampling rates in the platform.
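
As a rough illustration of what a client-configurable sampling rate could look like mechanically, consider this Python sketch; the deterministic hashing approach and the 10 percent default are assumptions, not the platform’s actual mechanism.

```python
import hashlib

# Sketch of configurable QA sampling for direct-to-bill encounters.

def select_for_qa(encounter_id: str, sampling_rate: float = 0.10) -> bool:
    """Deterministically flag roughly `sampling_rate` of encounters for
    quality review. The same encounter always gets the same decision,
    so samples are reproducible across audit runs."""
    digest = hashlib.sha256(encounter_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return bucket < sampling_rate
```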

Encounters are automatically routed to manual review when certain risk flags appear, such as mismatches with incoming RIS (radiology information system) codes, National Correct Coding Initiative (NCCI) or local coverage determination (LCD) edits, complex interventional radiology, or cases where the model lacks sufficient confidence. This safeguard strengthens direct-to-bill accuracy without compromising compliance.
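
A minimal Python sketch of that kind of routing logic might look like the following; the flag names, record shape, and confidence threshold are assumptions for illustration only, not Maverick’s implementation.

```python
# Illustrative risk-flag routing for a coded encounter.

CONFIDENCE_THRESHOLD = 0.95   # assumed minimum model confidence for direct to bill

def route_encounter(enc: dict) -> str:
    """Return 'direct_to_bill' or 'manual_review' based on risk flags."""
    if enc["model_confidence"] < CONFIDENCE_THRESHOLD:
        return "manual_review"        # model lacks sufficient confidence
    if enc["ai_cpt"] != enc["ris_cpt"]:
        return "manual_review"        # mismatch with incoming RIS codes
    if enc["ncci_edit_hit"] or enc["lcd_edit_hit"]:
        return "manual_review"        # NCCI or LCD edit triggered
    if enc["is_interventional"]:
        return "manual_review"        # complex interventional radiology
    return "direct_to_bill"
```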

Transparency Instead Of A Black Box

One of the strongest compliance concerns around AI is the black box effect, where it is unclear why a particular code was chosen. Maverick addressed this by designing a glass-box user interface.

Within the platform, coders and auditors can see:

  • Color-coded highlights on the report text that correspond to each CPT, ICD-10, and supply code, showing exactly where the AI derived each element.
  • Multiple coding layers: the input layer from the client system, the encoder layer from the AI engine, the post-processor layer for payer and client-specific rules, and the coder layer where human overrides are captured.

This layered view provides a complete audit trail and makes it easy to determine whether adjustments are needed at the model level, in post-processing logic, or in coder education. Auditing is performed directly in the Maverick platform, with no need for external spreadsheets, and reporting is robust enough to support both compliance monitoring and operational ROI tracking.
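
As a mental model for that layered audit trail, here is a hypothetical Python sketch; the class and field names are illustrative, not Maverick’s schema.

```python
from dataclasses import dataclass, field

# Sketch of the four coding layers described above, ordered from the
# client-system input through to the final human-reviewed result.

@dataclass
class CodingLayer:
    source: str                 # "input", "encoder", "post_processor", or "coder"
    cpt: list[str]
    icd10: list[str]
    note: str = ""              # e.g. which payer rule fired, or a coder's rationale

@dataclass
class EncounterAuditTrail:
    encounter_id: str
    layers: list[CodingLayer] = field(default_factory=list)

    def final_codes(self):
        """Later layers supersede earlier ones: a coder override trumps
        post-processing, which trumps the raw encoder output."""
        last = self.layers[-1]
        return last.cpt, last.icd10

    def origin_of_code(self, cpt_code: str) -> str:
        """Walk the layers to find where a CPT code first appeared, which
        tells an auditor whether a fix belongs at the model level, in
        post-processing logic, or in coder education."""
        for layer in self.layers:
            if cpt_code in layer.cpt:
                return layer.source
        return "not_found"
```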

Documentation Improvement And Continuous Learning

Autonomous coding is only as strong as the documentation it reads. During implementation, Stacie frequently uncovers documentation deficiencies that limit coding specificity or force coders to look outside the radiology report for diagnoses. Addressing those gaps early improves immediate coding quality and creates higher-value training data for the model.

The AI continuously improves through coder feedback. Every time a coder corrects an encounter, that change can be used to refine future behavior. For rare or entirely new services, Maverick can temporarily use targeted rules or synthetic data to accelerate learning until sufficient real-world examples exist, then revert to fully data-driven behavior.
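
Conceptually, that feedback loop can be pictured with a sketch like this; the function and field names are hypothetical, and the real retraining pipeline is certainly more involved.

```python
# Illustrative feedback capture: every coder override becomes a new labeled
# example for a future training cycle.

def capture_coder_feedback(report_text, ai_codes, coder_codes, training_queue):
    """If the coder changed anything, store the corrected coding as the
    new ground truth so the model can learn from it."""
    if coder_codes != ai_codes:
        training_queue.append({"input": report_text, "labels": coder_codes})
```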

For radiology groups and revenue cycle leaders, the result is a coding operation that is faster, more consistent, and more auditable, without sacrificing compliance rigor.

If you are looking to increase direct-to-bill coding, reduce denials, and strengthen compliance with explainable AI, explore how autonomous coding can fit into your radiology revenue cycle.

Request A Demo