The AI Denial Machine: How Insurers Weaponized Automation Against Patients

Bobby Kuzma
April 8, 2026

Last week I posted about UnitedHealthcare’s Cardiology and Radiology Imaging Guidelines. 3,472 pages. Too large for a human to reasonably navigate.

Turns out, that might be the point.

84% of health insurers now use AI in their prior authorization processes, according to the AMA. And a Stanford report on AI in healthcare administration found that denial rates are 16x higher when AI is involved in the decision-making process.

Let me say that differently. Where AI touches the decision, denials run sixteen times higher. Not 16% higher. Sixteen times.

The technology isn’t the problem. The incentive structure is the problem. AI just made the existing dysfunction faster.

How the machine works

It’s straightforward and ugly. Insurers train AI models on historical claims data — data that already reflects decades of denial-heavy decision-making. The model learns that denying claims is the default. And it gets very efficient at finding reasons to do so.

A physician submits a prior authorization. The AI scans it against criteria that are opaque and proprietary. It flags anything that doesn’t match — a missing field, an unusual code, a treatment it hasn’t seen frequently enough. Denied. Or sent back for “additional information.” The physician’s office spends hours gathering docs and resubmitting. Repeat until the physician gives up or the patient’s condition worsens.

The AI doesn’t need to be right. It just needs to create enough friction that a percentage of valid claims never get resubmitted. At scale, that percentage is billions in retained premiums.
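To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption chosen purely for illustration (denial volume, wrongful-denial share, abandonment rate, average claim value), not a figure from any insurer, study, or lawsuit. The only point is that a modest abandonment rate on valid claims compounds into enormous retained dollars at scale.

```python
# Back-of-the-envelope model of denial-friction economics.
# Every figure below is a hypothetical assumption for illustration only,
# not data from any insurer, study, or lawsuit.

denials_per_year = 2_000_000   # assumed AI-flagged denials issued per year
wrongful_share = 0.50          # assumed fraction that would be paid if pursued
abandonment_rate = 0.40        # assumed share of those never resubmitted or appealed
avg_claim_value = 1_500        # assumed average value of a denied claim (USD)

retained = denials_per_year * wrongful_share * abandonment_rate * avg_claim_value
print(f"Retained from abandoned valid claims: ${retained:,.0f}")  # $600,000,000
```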

The receipts

61% of physicians say unregulated AI in insurance has increased denials. Not “might increase.” Has increased. Not a prediction; an observation.

UnitedHealth Group, the largest health insurer in the country, is facing legal action over its use of AI in claims processing. The allegations: its AI system denied claims with an error rate above 90%, meaning that when denials were appealed, the overwhelming majority were reversed. Internal communications allegedly showed the company knew and kept going.

Think about what that means. A system that is wrong 90% of the time, deployed at scale, on purpose. The denials that get appealed mostly get overturned, but most denials are never appealed at all. And the ones that go unchallenged are pure profit.
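Here's a rough sketch of that appeal math, again with illustrative numbers: the 90% overturn rate echoes the allegation above, while the 1% appeal rate is purely an assumption for the example (published estimates of how often denied claims are appealed vary, but they are consistently low).

```python
# Rough sketch of the appeal math. The overturn rate echoes the allegation
# described above; the appeal rate is a hypothetical assumption, not a
# figure from the case.

wrongful_denials = 100_000   # assumed wrongful denials issued
appeal_rate = 0.01           # assumed: only 1% of denials are ever appealed
overturn_rate = 0.90         # alleged: ~90% of appealed denials are reversed

paid_eventually = wrongful_denials * appeal_rate * overturn_rate  # 900
never_paid = wrongful_denials - paid_eventually                   # 99,100

print(f"Wrongful denials eventually paid:    {paid_eventually:,.0f}")
print(f"Wrongful denials that quietly stand: {never_paid:,.0f}")
```

Even at a 90% reversal rate, the insurer keeps nearly everything it wrongfully denied, simply because most denials are never challenged.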

States are fighting back

Minnesota is considering legislation to ban AI-driven claim denials entirely. The reasoning is simple: if AI is being used primarily to increase denial rates rather than improve accuracy, it’s a tool of harm, not efficiency.

Other states are watching. The federal conversation is heating up. But regulation moves slowly, and the denial machine runs 24/7.

The asymmetry

Here’s what makes this particularly infuriating. One side has enterprise-grade AI running around the clock, processing denials at a scale no human workforce could match. The other side has a medical assistant on hold for 45 minutes with Aetna.

The asymmetry is intentional. Every dollar an insurer spends on denial automation generates returns through avoided payouts. Every dollar a physician spends fighting those denials is pure overhead.

At Artificer Health, we’re building AI that works for providers, not against them. Same technology. Different side of the table. If you’re tired of fighting a machine with a fax line, let’s talk.

