For the last 5 years, I’ve viewed claims as the function that is generating the most ethical risk for a typical UK insurer. It has not been in the limelight, but it has been using decision systems in ways that raise significant questions. Here are nine practices that are red flags.
Before looking at each of the nine ethical red flags, let’s address why I think claims is the function generating the most ethical risk for a typical UK insurer.
Firstly, claims has been doing pretty similar things to underwriting, in terms of how decision systems have been designed and put to use. It’s underwriting, though, that has been receiving the media attention and regulatory bans. That’s mainly because consumer groups have targeted pricing, not settlements: they can access pricing outcomes far more easily than settlement outcomes.
Secondly, claims differs from other functions in that it is centred around a one-to-one engagement with the customer. What underwriting does is more of a one-to-many engagement. This difference means that when an unethical outcome arises through a systemic failure (such as in how a decision system has been designed), the detriment is greater in the one-to-one setting.
To expand on this a little, in what I call a one-to-many engagement, the other party still has options. In a one-to-one engagement, the other party has few if any options. In underwriting, the phrase would be ‘they could always have gone elsewhere’, while in claims, there is no elsewhere. This changes the relationship and hence the nature of the impact.
So, let’s look at each of the red flag issues, in no particular order.
Claims Walking
Claims walking involves the use of data and analytics to ‘walk’ settlements lower in common claim types in relation to a certain metric (or metrics), such as complaints. I use ‘walking’ very deliberately, as it bears a remarkable resemblance to price walking, which has now been banned in the UK.
As price walking has been banned, what do you think are the prospects, over the next two to three years, of claims walking being left to carry on as part of ‘business as usual’? Not a lot, at best. And think of it from a consumer perspective. It involves deliberately under-settling claims on a systemic basis. It presents a very easy story that would resonate with the public at large. A media feast day.
Is this already on the regulator’s radar? The FCA has found it in analogue procedures in the past and it has noted more recently that insurers have built the capacity to walk claims into their digital decision systems (more here). So the issue is primed. All it takes is the trigger. That doesn’t shout ‘risk under control’ to me.
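To make the mechanic concrete, here is a minimal sketch of how a decision system could ‘walk’ settlements against a complaints metric. Everything in it – the function name, the thresholds, the step size – is my own invented illustration, not a description of any insurer’s actual system.

```python
# Hypothetical sketch of the "claims walking" mechanic described above.
# All names, thresholds and step sizes are illustrative assumptions.

def walk_settlement_factor(factor: float,
                           complaint_rate: float,
                           complaint_ceiling: float = 0.02,
                           step: float = 0.01,
                           floor: float = 0.85) -> float:
    """Nudge the settlement factor down while complaints stay tolerable.

    factor: multiplier applied to the assessed claim value (1.0 = full value).
    complaint_rate: observed complaints per settled claim for this cohort.
    """
    if complaint_rate < complaint_ceiling and factor - step >= floor:
        return factor - step            # complaints tolerable: walk the offer lower
    if complaint_rate >= complaint_ceiling:
        return min(1.0, factor + step)  # backlash detected: ease back up
    return factor

# Simulate a few review cycles for one cohort of similar claims.
factor = 1.0
for rate in [0.005, 0.006, 0.008, 0.011, 0.030]:
    factor = walk_settlement_factor(factor, rate)
print(round(factor, 2))
```

The point of the sketch is how ordinary it looks: a feedback loop on a cost metric, with the detriment to claimants sitting silently inside the shrinking factor.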
Discriminatory Settlement Practices
We know that in the UK market, discriminatory pricing practices are in the regulatory spotlight. What is less well known is that in the US market, discriminatory claims settlement practices and claims service practices are now under a legal spotlight (more here). The largest US home insurer is facing just such a legal challenge. If the case acquires class action status, then that spotlight will become much brighter, with a whole range of practices brought to the public’s attention.
Is the risk the same across claims and underwriting? I think it is, for both feed upon similar data about the customer, and both will use decision systems designed around similar principles. The reason for discriminatory claims practices not coming into the spotlight is largely because consumer groups find it easier to focus on pricing outcomes. Until the US case that is, and as some of the analytics firms caught up in it are also big players in the UK market, I think it is only a matter of time before it blows up here. So again, is this really a ‘risk under control’?
Claims Optimisation
While claims walking looks at bundles of similar volume claims, claims optimisation is much more of a personal practice. This is about using data and analytics to identify those claimants who might prefer a quicker but lower settlement, and offering it to them (more here). The logic is something along the lines of ‘if they are prepared to accept a quicker but lower settlement, then why not offer it?’
What happens, though, is that claims decision systems correlate ‘quick but lower settlements’ with low income families or those with credit problems. And the result is that the insurer, to be blunt, profits from those circumstances by skimming some percentage points off their settlement offer. I doubt it’s a practice talked about much with family and friends.
So while in relation to underwriting, people talk of the poverty premium, in claims you have the poverty settlement. Again, it is less in the spotlight because consumer groups are focussing on price. But again, is this really a ‘risk under control’?
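A minimal sketch may help show where the ethical problem sits. The features, weights and skim percentage below are entirely invented for illustration; the point is that the proxies doing the work are exactly the circumstances the article describes.

```python
# Hypothetical sketch of the "claims optimisation" logic described above.
# Feature names, weights and the skim percentage are invented assumptions.

def quick_settle_propensity(features: dict) -> float:
    """Score (0..1) that a claimant will accept a faster, lower offer."""
    score = 0.0
    if features.get("prior_credit_problems"):
        score += 0.4                    # proxies like these are the ethical problem
    if features.get("income_band") == "low":
        score += 0.3
    if features.get("urgent_contact_tone"):
        score += 0.2
    return min(score, 1.0)

def offer(assessed_value: float, features: dict,
          skim: float = 0.07, threshold: float = 0.5) -> float:
    """Skim a percentage off the assessed value for high-propensity claimants."""
    if quick_settle_propensity(features) >= threshold:
        return assessed_value * (1 - skim)
    return assessed_value

print(round(offer(10_000, {"prior_credit_problems": True, "income_band": "low"}), 2))
```

Note that nothing here is technically sophisticated: the ‘poverty settlement’ emerges from a handful of correlated features and one multiplier.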
Human State Sensing
Human state sensing is a term that captures the many ways in which digital systems attempt to capture something about the character of a person, using facial, voice, emotional and biometric data. The rationale is that knowing more about the character of a claimant allows the insurer to then tailor their service and settlement more to that person.
This of course has an upside, in terms of better service to, for example, vulnerable people. But there is a downside too, for the technologies used in human state sensing have been judged by the UK’s data protection regulator to be weak at best. None of those examined by the ICO were found to comply with data protection regulations (more here). And their Deputy Commissioner said that the only use in a business setting he could think of for such technologies was as a game at the office party. Ouch.
Any use of human state sensing technologies in claims decisions needs therefore to be very tightly controlled. Some of these technologies may originate from counter fraud measures, but their influence on claims outcomes is what matters here. Some such as voice analytics may be long established (I’ve experienced their use myself), but ‘we have always done it this way’ has no guaranteed shelf life.
Underwriting at Claims
As insurers seek to smooth the acquisition of new business by asking fewer and fewer questions of the customer, so more reliance is then put on external data sources. This is secondary data – it may be data about the customer, but it was not disclosed by them in a primary, insurance setting. It can be relied upon far less than the primary data given directly by the customer to you, their insurer.
What then happens is ‘underwriting at claims’ (more here). This secondary (perhaps even tertiary) data is compared with data coming in for the claim that’s just been submitted. Discrepancies turn into declined claims. Yet just think about it: that decline is based upon second or third hand data from one, two, three or even more years ago.
To put it plainly, conducting detailed underwriting on only those customers who submit a claim may be less expensive, but it is also less accurate (despite the detail) and less fair. It amounts to an ‘after the event’ practice, like waving at the horse from the stable door to try and make it come back.
The reality is that if the insurer chooses not to ask material questions when the policy is incepted or renewed, then it exposes itself to challenge by trying to rely on asking them when the policy (a contract remember) is used for a claim.
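Sketched in code, the ‘underwriting at claims’ pattern looks something like the following. The field names, the data and the staleness calculation are all my own illustrative assumptions.

```python
# Hypothetical sketch of the "underwriting at claims" pattern described above.
# Field names, the sample data and the staleness rule are illustrative assumptions.

from datetime import date

def discrepancy_flags(secondary: dict, claim: dict, today: date) -> list[str]:
    """Compare externally sourced (secondary) data against claim-time data.

    Each mismatch is flagged, annotated with how stale the secondary data is -
    which is the article's point: a decline can rest on data that is years old
    and was never disclosed by the customer in an insurance setting.
    """
    flags = []
    age_years = (today - secondary["as_of"]).days / 365.25
    for field in ("occupation", "property_type", "claims_history"):
        if secondary.get(field) != claim.get(field):
            flags.append(f"{field}: secondary data {age_years:.1f} years old")
    return flags

secondary = {"as_of": date(2021, 3, 1), "occupation": "teacher",
             "property_type": "terraced", "claims_history": "none"}
claim = {"occupation": "teacher", "property_type": "semi-detached",
         "claims_history": "none"}
print(discrepancy_flags(secondary, claim, today=date(2024, 9, 1)))
```

The mismatch that triggers a decline may simply be the secondary source being out of date – the customer moved house, changed job, or was never accurately recorded in the first place.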
Genetics and Large Loss Settlements
You may wonder why genetic data is included here, given that the UK insurance sector and the Government have a long established agreement limiting its use. Well, if you read the agreement, it’s clear that it deals only with underwriting (more here). Claims is barely mentioned. As a result, suppliers of legal services to the sector have been exploring ways of using genetic data for structuring large injury settlements.
Two points stand out here. Firstly, these legal suppliers are clearly ignoring the underlying spirit of the agreement. Not a good tactic when the other party is the UK Government, lulled (one could say) by the sector into seeing the agreement as working as intended.
Secondly, the science behind using genetic data to judge future health outcomes is at best in its very early stages, at worst nonsense. So it’s really not something an insurer should be contemplating without first having done a lot of soul searching and a lot of reading of academic papers.
Performance Management
Incentivising staff in ways that result in lower settlement rates or amounts is unfortunately still a feature in parts of the market. I’m sure that several aspects of claims customer service are performance managed too, but they don’t offset the issues with the former.
Poor performance management practices in claims functions are a red flag for even poorer management of that most fundamental of ethical risks in insurance: conflicts of interest. The former wouldn’t happen if the latter was being managed properly.
The FCA have recently had some harsh words to say about performance management in relation to motor total losses (more here), so it’s likely that they’re keeping an eye on both the issue and the senior managers behind it. It’s also possible that this will extend to how claims decision systems are being designed and implemented.
Three Lines of Defence
Another reason why performance management issues have persisted is because the three lines of defence approach is not working properly (more here). Claims people being incentivised to lower settlements, fraud staff being incentivised according to the fraud they identify – these would not be happening if the 3LoD were being used properly.
Problems with the 3LoD are really problems with conflicts of interest management. So either they’re being ignored, underestimated or just not understood. Either way, the result is a widening gap between actual and reported risk.
And any risk as fundamental as conflicts of interest that is being ignored, underestimated or not understood is also signalling something about the organisation’s ethical culture. At best, people are being allowed to get away with what are called rationalisations (excuses for bad decisions); at worst, they just don’t care.
Smart Devices and Maintenance Warranties
And finally, a warning flag issue is starting to take shape around the use of smart devices and their linkage with maintenance warranties. The scenario is pretty simple: the customer installs a smart device, the insurer receives its data, the insurer judges the customer not to be taking reasonable care, and the customer receives a higher renewal or has their claim declined.
On the face of it, this seems only reasonable. If you’re not looking after your pipes enough, your escape of water claim will be turned down. The problem with this perspective is that it is too linear. The provision of smart devices complicates matters; it breaks up that linear aspect. How it does so, and what this means in terms of settlement decisions, is something that claims and underwriting functions need to understand and manage through their firm’s product and data governance arrangements. Is this on their radar?
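The linear logic being warned about can be sketched in a few lines. The telemetry fields, threshold and outcomes below are invented for illustration; real systems will differ.

```python
# Hypothetical sketch of the linear smart-device claim logic described above.
# Telemetry fields, the threshold and the outcome strings are invented assumptions.

def claim_outcome(telemetry: dict, claim_type: str) -> str:
    """Decide a claim outcome from smart-device telemetry.

    This is the overly linear reasoning the article warns about:
    device data -> 'low care' judgement -> decline. It ignores who supplied
    and maintains the device, and whether the data actually evidences a
    failure to take reasonable care.
    """
    if claim_type == "escape_of_water":
        alerts_ignored = telemetry.get("leak_alerts_unacknowledged", 0)
        if alerts_ignored >= 3:
            return "declined: reasonable care condition"
    return "accepted for assessment"

print(claim_outcome({"leak_alerts_unacknowledged": 5}, "escape_of_water"))
```

Once the insurer is the party providing the device, every line of that function becomes contestable: whose failure is an unacknowledged alert from a mis-installed or malfunctioning sensor?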
I said at the outset that claims is the function that is generating the most ethical risk for a typical UK insurer. These nine red flag issues are the chief reason why. There are other issues, but these nine put claims firmly in top place.
So what should a typical insurer do? The obvious first step is to review just where their claims function stands in relation to each of these nine issues. Yet the equally obvious outcome will be a ‘no problem’ conclusion to that review. That’s because practices such as these don’t just happen. They are designed, tested, trialled, signed off, implemented and monitored within a culture that unfortunately lacks enough challenge to ask that simple ethical question – ‘we can do this, but should we?’
The ‘should we’ part of that question is not going to lie still and keep quiet. It needs to be gauged against a dynamic background. In my keynote speech at a recent claims and fraud conference, I highlighted two dynamics that give shape and depth to the exposure it presents.
The first is forced transparency, caused by the appointment of a statutory consumer advocate for some or all of insurance (more here). The second is algorithmic destruction, in which an organisation is told to stop using an algorithm that has been designed to unfairly exploit consumers in some way (more here). And by ‘stop using’, I don’t mean a ban progressively introduced, as with price walking. I mean someone having to hit the kill switch.
So my advice is twofold – challenge yourself more (much more) and view the risk through a more informed lens (you don’t have full control).