Simple claims are ideal candidates for improved processes that enable efficient settlement. As they’re usually the most common claims too, it’s no surprise that insurers have been exploring digital options for automating settlements.
What we’re also seeing at the moment is a PR game to become the fastest to settlement, billed as great customer service. That logic is odd. Does speed always equate to great customer service? Yes, of course, if it removes unnecessary hours and days from the claims process, but not if it moves settlement times from 4 minutes to 2 minutes, let alone from 4 seconds to 2 seconds.
In fact, customers are likely to become suspicious of very fast settlements. Questions like ‘have they given proper consideration to my claim’ come to mind. After all, if such digital decision systems can settle a claim in seconds, doesn’t that also mean that they can turn down a claim in a similar time? And in that situation, words like fair and objective are not the first to spring to mind.
An investigation by journalists at ProPublica explored this question. They report that a leading US insurer, Cigna, has been using claims automation to turn down simple health claims at a rate of one every 1.2 seconds. So what does 1.2 seconds mean in practice?
Researchers have found that the average silent reading speed for non-fiction in English is 238 words per minute. That means that in the roughly 75 seconds it has taken you to read the 299 words of this article so far, Cigna’s automated claims system could have turned down the claims of 62 people.
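For anyone who wants to check the sums, the figures above are simple arithmetic, using only the numbers reported in this article:

```python
# Sanity-check the arithmetic above, using the figures quoted:
# 238 words per minute reading speed, 299 words read so far,
# and ProPublica's reported 1.2 seconds per automated denial.
words_read = 299
reading_speed_wpm = 238          # average silent reading speed, non-fiction English
seconds_per_denial = 1.2         # ProPublica's reported figure

reading_time_s = words_read / reading_speed_wpm * 60
denials_in_that_time = int(reading_time_s / seconds_per_denial)

print(f"{reading_time_s:.0f} seconds of reading")   # -> 75 seconds of reading
print(f"{denials_in_that_time} claims denied")      # -> 62 claims denied
```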
The insurance regulations of most US states expect the medical directors responsible for reviewing health claims decisions to examine patient records, review policy wordings and use their expertise to either approve or deny claims. The idea behind all this is to avoid unfair denials.
In the past, insurers have often simply paid simple claims, because the cost of reviewing them exceeded the cost of the claim itself. ProPublica reports that Cigna coded a rule set into its claims system that flipped this around: the algorithm looked for mismatches between diagnoses and what the company considers acceptable tests and procedures for those ailments. The resulting denials should then have been reviewed by a company doctor, but the system had been configured to allow those reviews to be signed off as denials on a batch basis. According to corporate documents and interviews with former Cigna officials...
“Over a period of two months last year, Cigna doctors denied over 300,000 requests for payments using this method, spending an average of 1.2 seconds on each case...”
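As a rough illustration only, the mismatch rule and batch sign-off described above might look something like the sketch below. The diagnosis and procedure codes, the lookup table and the function names are all invented for this article; ProPublica has not published Cigna’s actual logic.

```python
# Hypothetical sketch of a diagnosis/procedure mismatch rule set.
# The codes and the lookup table are invented for illustration;
# the real system's logic has not been published.
ACCEPTABLE_PROCEDURES = {
    "J30.9": {"80305", "95004"},   # allergic rhinitis -> allergy tests (hypothetical)
    "E11.9": {"83036", "82947"},   # type 2 diabetes -> glucose tests (hypothetical)
}

def flag_claim(diagnosis_code: str, procedure_code: str) -> str:
    """Flag a claim for denial if the billed procedure is not on the
    acceptable list for the diagnosis."""
    allowed = ACCEPTABLE_PROCEDURES.get(diagnosis_code, set())
    return "pay" if procedure_code in allowed else "flag_for_denial"

def batch_sign_off(flagged_claims: list[dict]) -> list[dict]:
    """The step ProPublica criticised: a reviewer confirms every
    flagged denial in one action, with no per-claim examination."""
    return [{**claim, "status": "denied"} for claim in flagged_claims]
```

Note where the problem sits: a mismatch rule like `flag_claim` could be defensible triage; it is `batch_sign_off` that removes the individual medical review regulators expect.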
Some of you may think that if the insurer’s rule set said the claim should be denied, then that should be enough. Yet the role of company doctors is to review such denials, and it is that review process that was at the heart of ProPublica’s investigation.
One question raised by their investigation is how such rule sets are conceived in the first place. In other words, what combinations of factors lead the algorithm to output a denial? The ProPublica article doesn’t drill down into this, but it is actually just as important as its questions about the review process.
What the insurer should have prepared, if only for internal consumption, was detailed research showing that its rule set’s treatment of claims for test X or Y was reasonable. There’s no mention of such research by either ProPublica or Cigna. And if the review process is as flawed as ProPublica reports, that points to the rule set being either near perfect or a mess masked by batch-approved denials.
Claims automation is here to stay. To ensure that it isn’t then undermined by campaigners’ research, the rule set around which the automation is designed, and the meaningful involvement of human review, need to be looked at very carefully. And by carefully, I mean not by the supplier of the automation software, nor by the insurer’s claims or legal people, but by someone capable of bringing an independent mind to the situation.
What Claimants Want
Take this example. When I was head of insurance for Europe’s biggest motor fleet (circa 600,000 vehicles), the insurer and I commissioned a substantive and independent review of what customers wanted from the claims service. At that point, a good ‘claims service’ was being measured solely in terms of telephone pickup times (not my doing!). The review found that these customers’ first priority was having their car assessed, and their second was having it repaired. Speed to answer the phone came in at priority 22. What this told us was that claimants wanted both speed and fairness, and that is what the insurer based the resulting changes around.
Speed and volume need to be balanced with fairness and accountability. ProPublica’s investigation found evidence pointing to that balance being missing in Cigna’s health claim review system, and the journalists felt this was not just a Cigna thing. So if you’re asked to look at an automated claims decision-making system, what should you be looking for? Here are three things for starters...
Firstly, examine the role of humans at various points in the decision making. How substantive is that role? How is it monitored, and who gets to see the results? What interventions are available, how often are they used, and with what outcomes?
Secondly, what evidence shows how the rule set driving automated decision making has been tested? Is that testing ongoing, and who reviews the outcomes? Has the rule set been subject to robust scrutiny?
Thirdly, how has the rule set been evaluated to ensure that the outcomes it produces are fair? And how has your firm determined what represents fairness in the circumstances?
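On that third point, one simple starting check (necessary but by no means sufficient) is whether denial rates diverge across claim types or claimant groups. A minimal sketch, assuming you hold decision logs with a group label and a denied flag; the field names and the sample data are hypothetical:

```python
from collections import Counter

def denial_rates(decisions: list[dict]) -> dict[str, float]:
    """Denial rate per group, from a log of {'group': ..., 'denied': bool}.
    A large gap between groups is a prompt for investigation, not proof
    of unfairness in itself."""
    totals, denials = Counter(), Counter()
    for decision in decisions:
        totals[decision["group"]] += 1
        denials[decision["group"]] += decision["denied"]  # bool counts as 0/1
    return {group: denials[group] / totals[group] for group in totals}

# Hypothetical decision log for illustration
log = [
    {"group": "A", "denied": True},  {"group": "A", "denied": False},
    {"group": "B", "denied": True},  {"group": "B", "denied": True},
]
print(denial_rates(log))   # -> {'A': 0.5, 'B': 1.0}
```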
None of this is rocket science. Insurers just need to make sure they’re doing it, and can evidence it to internal, external and regulatory audiences.
Why bother, some of you may ask? After all, it’s not much different to what happened in pre-digital days. The way to think about this is that digital has changed more than just how insurers go about their work. Digital gives external audiences a window into the hard-wired systemisation of claims practices, from which evidence in support (or not) of campaigners’ concerns can more easily be drawn. It’s emerging now as a two-way street.