Our new risk adjustment software cost $2 million. It had AI, machine learning, predictive analytics, and more dashboards than a Tesla. It also couldn’t calculate HCC scores correctly when members had more than five conditions.
We discovered this during a routine audit. The manual calculation showed $847,000 in missed revenue. The software showed everything was fine. Turns out, its “sophisticated algorithm” had a basic arithmetic error that nobody caught for eight months.
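To make the failure mode concrete, here is a hypothetical sketch of the kind of arithmetic bug described above. This is not the vendor's actual code, and the coefficients are made-up illustrations, not real CMS-HCC model values; the point is only how a silent truncation undercounts members with many conditions.

```python
# Hypothetical sketch -- NOT the vendor's code. Coefficients are invented.
def broken_risk_score(demographic_factor, hcc_coefficients):
    # Bug: only the first five HCCs ever reach the sum.
    return demographic_factor + sum(hcc_coefficients[:5])

def correct_risk_score(demographic_factor, hcc_coefficients):
    # Every HCC that survives hierarchy logic should count.
    return demographic_factor + sum(hcc_coefficients)

member_hccs = [0.318, 0.302, 0.288, 0.535, 0.421, 0.368, 0.276]  # 7 conditions

print(round(broken_risk_score(0.395, member_hccs), 3))   # 2.259 -- silently low
print(round(correct_risk_score(0.395, member_hccs), 3))  # 2.903
```

Multiply a per-member gap like that across a few thousand complex members and eight months, and $847,000 stops being surprising.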
The Demo Deception
Software demos are perfectly choreographed lies. Clean data flows smoothly. Every click works instantly. Results are always accurate. Then you install it in the real world and everything breaks.
Our vendor demoed with 50 perfect patient records. Our actual database had 247,000 records where half the fields were wrong, missing, or contained mysterious notes like “see previous chart” with no link to any previous chart.
The AI that accurately identified conditions in demo data? It flagged pregnancy in 73-year-old men because someone had entered gestational diabetes codes incorrectly fifteen years ago and the system never questioned it.
That beautiful dashboard showing real-time risk scores? It crashed whenever we filtered by more than three parameters. The vendor’s solution? “Don’t filter by more than three parameters.”
The Integration Nightmare
“Seamless integration with your existing systems,” they promised. What they meant: our team wouldn’t sleep for four months.
The software needed data from eight different systems. Each spoke a different language. The patient ID in system one was the MRN. System two used an Enterprise ID. System three had both, but neither matched systems one or two.
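The fix for that mess is usually an identity crosswalk: a table mapping every system-specific ID to one canonical member. A minimal sketch, with hypothetical system names and IDs rather than our actual schemas:

```python
# Illustrative ID crosswalk; system names and IDs are hypothetical.
crosswalk = {
    # (source_system, local_id) -> canonical master member ID
    ("system_one", "MRN-00481"): "MASTER-9001",
    ("system_two", "ENT-77302"): "MASTER-9001",
}

def resolve_member(system, local_id):
    """Map a system-specific patient ID to one canonical member, or None."""
    # In practice the None branch fires constantly: unmapped IDs must be
    # queued for manual matching, never silently dropped from revenue.
    return crosswalk.get((system, local_id))

# Two different local IDs, one actual human being:
assert resolve_member("system_one", "MRN-00481") == resolve_member("system_two", "ENT-77302")
```

Building and maintaining that table for 247,000 records is most of what "seamless integration" actually costs.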
We spent $400,000 on integration alone. That wasn’t in the budget. The vendor acted surprised: “Most clients already have standardized data architecture.” No they don’t. Nobody does. That’s why we need software to help.
Six months in, the integrations were “complete.” Except data updated overnight, so morning reports showed yesterday’s information. Real-time analytics weren’t real-time. They were yesterday-time with a fresh timestamp.
The Feature Cemetery
The software had 347 features. We use twelve. The rest are expensive decorations that complicate training and slow performance.
There’s a “predictive modeling suite” that predicts which members might develop conditions. Sounds useful until you realize it’s wrong 68% of the time. We’d have better accuracy flipping coins.
The “automated workflow optimization engine” routes work based on sixteen factors including moon phase, apparently, because nobody can explain its decisions. Sarah gets 100 charts while Kevin gets twelve. The system insists this is optimal. For whom?
My favorite useless feature: “collaborative annotation workspace.” It lets multiple coders annotate the same chart simultaneously. Why would anyone want that? We asked the vendor. They didn’t know either but assured us it was “enterprise-grade functionality.”
The Simple Alternative
After the $847,000 calculation error, we built our own replacement in six weeks using mostly Excel and some basic scripting. Cost: $30,000 plus pizza for the developers.
It does five things: ingests charts, identifies HCCs, calculates scores correctly, tracks submissions, and generates one useful report. That’s it. No AI. No predictive analytics. No collaborative annotation workspace.
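The core of a system like that fits in a page. Here is a toy end-to-end version under stated assumptions: coefficients are invented stand-ins for the real model values, and the input is a two-column chart extract; the real tool obviously handles more columns and edge cases.

```python
import csv
import io

# Toy pipeline: chart rows in, per-member risk scores out.
# Coefficients are made-up illustrations, not actual CMS-HCC values.
COEFF = {"HCC18": 0.302, "HCC85": 0.331, "HCC111": 0.335}

chart_rows = io.StringIO(
    "member_id,hcc\n"
    "M001,HCC18\n"
    "M001,HCC85\n"
    "M001,HCC18\n"   # duplicate capture -- must count once
    "M002,HCC111\n"
)

# Dedupe HCCs per member with a set, then sum coefficients.
member_hccs = {}
for row in csv.DictReader(chart_rows):
    member_hccs.setdefault(row["member_id"], set()).add(row["hcc"])

report = {m: round(sum(COEFF[h] for h in hccs), 3)
          for m, hccs in member_hccs.items()}
print(report)  # {'M001': 0.633, 'M002': 0.335}
```

Betty can read every line of that. Nobody on our team could read a line of the $2 million version.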
Betty, our lead coder, can actually explain how it works. When it breaks, our internal IT fixes it in hours, not weeks. When we need a new feature, we add it ourselves instead of paying $50,000 for a “customization package.”
The homegrown system is ugly. It’s simple. It’s also 100% accurate and 300% faster than the $2 million solution. Turns out, most risk adjustment needs basic functionality executed perfectly, not advanced features that work sometimes.
Your Software Reality Check
Open your risk adjustment software right now. Click through every menu. Count features you’ve never used. If it’s more than 50%, you’re paying for shelfware.
Ask your team to explain how the software calculates risk scores. Not vaguely, exactly. If nobody knows, you’re trusting black box math that could be wrong. Remember our $847,000 lesson.
Calculate your true cost per chart: license fees, integration costs, maintenance, training, and lost productivity during crashes. Include everything. If it’s more than $15 per chart, Excel might legitimately be better.
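The math is simple enough to do on the back of an invoice. A worked example with illustrative figures only; substitute your own annual numbers:

```python
# Worked cost-per-chart example. Every figure is illustrative.
annual_costs = {
    "license": 250_000,
    "integration_amortized": 80_000,
    "maintenance": 60_000,
    "training": 25_000,
    "downtime_productivity": 45_000,
}
charts_per_year = 24_000

cost_per_chart = sum(annual_costs.values()) / charts_per_year
print(f"${cost_per_chart:.2f} per chart")  # $19.17 -- above the $15 line
```

Note that the "hidden" lines (integration, downtime) are 27% of the total here. Vendors quote you the license line only.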
Check how many workarounds your team has created. Separate spreadsheets for tracking? Manual processes because the automated ones fail? Post-it notes with the “real” workflow? Each workaround is evidence the software doesn’t actually work.
The perfect risk adjustment software doesn’t exist. But the worst risk adjustment software is the expensive one that promises everything and delivers complications. Sometimes the best solution isn’t the most sophisticated. It’s the one that actually works, even if it’s held together with Excel formulas and the prayers of your IT team.


