Nov 13, 2023

Being Human - Insurance and the Right to Move On

Insurers operating in the digital world retain data about us for long periods of time. That practice is grounded in the influence of moral hazard and character. Yet part of being human is to evolve and move on, to leave behind who we used to be. I examine this dilemma and identify the challenges it raises.

Moving on - we all do it

Questions about who we were, and who we now are, matter because the role of character is central to the digital strategies of many insurers. Data about who we are, what we like, what we feel strongly about and what we do – all of this is brought together and fed into underwriting, counter fraud and claims models.

A strong reason for this is moral hazard – making sure that we are the type of person who will take reasonable steps to protect their health, their property, their car and so on. Signs of poor moral hazard should trigger higher premiums, less cover or being declined altogether.

Yet when people talk about character, it’s never as if it were in some way immutable. It’s about who we used to be, about who we are now, and about who we could become. This is vital to our development as both children and adults. We gather experiences, learn from them, move on and become better people. If that were not to happen, then hey, there’d be way too much stroppy teenage behaviour hanging around.

I’ve written before about how important it is for insurers to recognise an ethical issue like autonomy. I want here to look at this in a slightly different way, so less about who we are and more about who we no longer are. This is not so much about the right to be forgotten, but about being remembered for who we are, rather than who we were.

This is about how an underwriting model handles progressively conflicting data about who we are. For example, if we have cancer, but then no longer have cancer; if we have convictions, but then those convictions become spent; and if we shopped there, but then no longer shop there. It is about letting go, and also about failing to let go. The actuarial calculations may be highly sophisticated, but how useful is that if a growing amount of the data is out of date?

When does data become out of date then? In broad terms, when the actions and behaviours it is being used to track no longer represent who we now are. We move on. So should the data kept about us.
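The idea that data should fade as we move on can be sketched numerically. What follows is a hypothetical illustration only – not any insurer’s actual model – in which each data point about a person carries an exponential recency weight, so stale signals count for progressively less. The three-year half-life is an arbitrary assumption for the sketch:

```python
from datetime import date

# Assumed half-life: a signal's weight halves every 3 years (illustrative only).
HALF_LIFE_YEARS = 3.0

def recency_weight(recorded: date, today: date, half_life: float = HALF_LIFE_YEARS) -> float:
    """Exponential decay weight in (0, 1] based on the age of a data point."""
    age_years = (today - recorded).days / 365.25
    return 0.5 ** (age_years / half_life)

def weighted_risk_score(signals, today: date) -> float:
    """signals: list of (recorded_date, raw_risk_value) pairs.
    Returns a recency-weighted average, so recent evidence about a person
    dominates stale evidence, rather than the worst-ever signal persisting."""
    weights = [recency_weight(recorded, today) for recorded, _ in signals]
    weighted_sum = sum(w * value for w, (_, value) in zip(weights, signals))
    return weighted_sum / sum(weights)

signals = [
    (date(2015, 6, 1), 0.9),  # old adverse signal, e.g. a past condition
    (date(2023, 6, 1), 0.1),  # recent favourable signal
]
score = weighted_risk_score(signals, today=date(2023, 11, 13))
```

Under this sketch the recent favourable signal pulls the score well below the old adverse one, which is the opposite of a model that simply accumulates "forever data" and lets the worst historic entry dominate.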

In Insurers’ Interest

Yet why should the sector bother with this, if holding all this data is relatively inexpensive? One classic example relates to the resettlement of offenders. Without resettlement legislation, people with spent convictions would end up being penalised in all sorts of ways. That’s not what insurers want, for the resettlement of offenders makes it much less likely that they will return to their bad old ways.

This is therefore a right to move on that is very much in insurers’ interests, for resettlement reduces future claims. Yet I understand some insurers still trawl datasets for signals of past convictions (spent or otherwise), on the basis that past convictions point to a higher risk of fraud against insurers.

The same goes for medical conditions both physical and mental. Take a person who once suffered from depression, but no longer suffers from it. If they end up forever paying a higher premium rate for that past, then they will feel themselves unjustly penalised for their past problems. They will question why one piece of data (having depression) should have weight over another piece of data (not being depressed), and who decides this.

Will underwriting then end up as an accumulation of all the bad stuff, without equal recognition being given to the good stuff?

Once Bitten Twice Shy

Insurers will of course respond by pointing out that things like criminality and depression do return, that if someone has been there once, then they’re more likely to go there again. They’re just applying something like a precautionary principle in their particular risk environment, goes the narrative. It’s an argument that has legs, but one that doesn’t always stand up to scrutiny. For example, a great many of the people who have ‘been there before’ are all the more determined not to go there again, and so are alert and proactive around this. Once bitten, as they say.

The sector will end up being challenged around holding so much ‘forever data’. So for example, to what degree does the evidence back up insurers’ worries around relapse? And in public policy terms, is this blanket ‘you never know’ approach tenable, even from a private market?

Four Realities

Those seeking answers to these questions need to confront four realities:

  • insurers have the capabilities to build and exploit as much as they can around past events in our lives (think lots of data storage and analytics);
  • insurers have the strategic model and the professional justification for doing so (think personalisation and moral hazard);
  • the market believes that the risky and long-term nature of its business justifies, under data protection law, holding progressively more historic data about us;
  • and insurers have the incentive to keep on doing so (the returns to be earned from greater selection in a competitive market that also happens to be in the sights of big tech firms).

On the face of it then, the market is unlikely to move. They’ll point to signposting arrangements, but these were only ever going to be a sticking plaster. That’s because signposting is a distribution solution to a manufacturing problem. That problem is not just still around, but steadily getting worse, for reasons like those summed up in the four bullet points above.

That’s it then, some of you may be thinking. Well, not necessarily. There are a few things the market doesn’t know it doesn’t know. One of them relates to the AI skills gap. I’m told that people with those skills tend to be a rather neurodiverse lot. How many then will choose to work in a sector that is progressively making it more difficult for them to buy the life insurance needed for a house purchase? It doesn’t take a data scientist to work that one out.

Another thing the market doesn’t know it doesn’t know relates to their executive teams. How many of them have grappled in some way with mental health challenges in their lifetimes? About a fifth to a quarter would be a reasonable guess. Not the best of forums then to justify rating mental health risk out of the market, which is where it is heading.

What the market probably does realise now is that positions of certainty in relation to socially important issues (such as mental health) often turn out to be less robust when subject to challenge. The market no longer has the final word when it comes to fairness and moral hazard.

Personalisation Again

Now, some insurers will respond by asking how else they are to deal with the greater risk people with mental health challenges are said to represent? Yet it’s a question being asked the wrong way round. Reverse it and it becomes ‘what are we doing that makes it necessary for us to rate people with mental health challenges in this way?’ The answer of course is the drift towards ever greater levels of personalisation. The market is choosing to compartmentalise risk attributes and then label them as a growing problem. Yet the growth of such problems is driven by that compartmentalisation.

Take the Motability fleet as an example. I used to be its head of insurance. There you have 650,000 people with significant disabilities, on a scheme that in most years buys 10-20% of all new cars in the UK. Historically, these people with disabilities were individually seen as poor risks. Repackage them into one big fleet and they become a very manageable and desirable book of business. How you see risk depends on which side of the lens you look at it through.

I was on a panel last week at an ABI conference about responsible AI in insurance. At the end, I was asked what one thing I would encourage insurers to address around responsible AI in 2024. My response was that they should take mental health and explore all the different sorts of ways in which they’re handling it, in underwriting, marketing, claims and counter fraud. That will give them a cross-cutting understanding of the ripple effects being caused through the way in which AI models and big data are being utilised.

To Sum Up

The right to move on is something that we all value, for we all benefit from it in some way. Consider the very famous entertainer with a criminal conviction; the banking chief executive with mental health problems; the once feared political influencer with depression. The public has allowed each of them to move on and has benefited from doing so. The insurance sector has to learn to do something similar, otherwise it will fill up with out-of-date data used to infer something about characters we left behind long ago. Just because the data is big, and the analytics so sophisticated, doesn’t mean the end result is accurate.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.