May 11, 2022 4 min read

Machine Unlearning - a New Digital Capability for Insurers

Some serious exposures are looming on insurers’ reputational horizons. Most stem from how their digital technologies have been configured. If any of those exposures are realised, insurers will have to both delete ‘bad data’ and remove its influence on their models. It’s called machine unlearning.

Every machine needs a reverse gear

So what do I mean by ‘bad data’? It’s a simple term best explained by describing some attributes of good data. The right consent and the correct handling (directly or indirectly) of protected characteristics would be two such attributes.

Consent is a live exposure for insurers. This is illustrated by the Information Commissioner’s Office report on how the UK’s three main credit reference agencies were handling consent (more here). The findings were pretty damning.

And two recent cases in the US have shown just how serious the implications are for firms training algorithms upon data collected without consent. This has raised the spectre of algorithm destruction (more here).

Discrimination is another live exposure for insurers, illustrated by the ethnicity penalty research by Citizens Advice. That situation is still in its opening stages, but the exposure, should it turn pear-shaped for insurers, is clear.

Forms of Repercussions

Let’s jump ahead and find ourselves in the situation of an insurer being faced with the realisation that they’ve been using data in ways that they shouldn’t have been. Keep it as simple as that. The obvious question then is this: what regulatory repercussions might it face?

A fine is one obvious repercussion. They’re pretty common in UK financial services. Yet in a ‘bad data’ situation, I doubt that a fine on its own would be deemed sufficient. After all, the ‘bad data’ would still be out there, and the models trained upon that ‘bad data’ would still be influencing decisions. Something more than a fine seems inevitable.

The ‘keep you awake all night’ repercussion would probably be algorithm destruction. In other words, any models trained upon that bad data would have to be destroyed.

Let’s assume that the UK regulator would balk at that, especially if they only had the capacity to investigate and penalise one insurer, while knowing that others were probably doing something similar. What other options exist, of a more ‘middle ground’ nature?

Note that the middle ground would not just involve deleting the data. The models trained upon it (and so perpetuating its influence) would have to be addressed too.

Unlearning a Model

Enter the prospect of ‘unlearning a model’. This involves the insurer keeping the model but taking steps to ‘unlearn’ it in respect of the ‘bad data’. In other words, to reverse engineer the influence of the bad data on how the model has been trained and the decisions that then flow from that.

Now some of you may be thinking that this sounds little different from just destroying the model. Whether that is true will depend on how the model has been designed. In simple language, it is the difference between a model with both a forward and a reverse gear, and a model with only a forward gear. Which one the insurer has will depend on what it specified when drawing up the model’s design. The saying ‘you pays your money and takes your choice’ comes to mind.

More than a Struggle for Google

I’m reminded of a case from several years back, involving those experts in AI at Google. A case emerged of direct racial discrimination in relation to how a person’s photos had been tagged. Google held up its hands in horror and vowed to address it. Nearly three years later, they had to admit that the only way of resolving the situation was to delete the categorisation tags at the heart of the case. They had found it impossible to ‘unlearn their models’ of the discrimination.

So is there any difference between unlearning a model and having it destroyed? As I said, that depends on how it was designed. And I’ve seen evidence of Google and other big tech firms building levers and dials into their decision systems in recognition of the problems that AI has been found to be producing. Big tech is starting to become a bit precautionary.

So the obvious question for insurers is: what levers, dials and reverse gears did you build into your models? What levers can you engineer into them now? Do you know the answer to either of these questions? Given my earlier comments about the consent and discrimination exposures the sector is facing, I think it very likely that some bright spark on your board will ask the obvious question: if this happens to us, what can we do about it? Or to put it another way, if we’ve been so keen on machine learning, did anyone remember to incorporate some machine unlearning as well?

The Unlearning Algorithm

Research into unlearning is centred on the idea of an algorithm that takes as its input a trained machine learning model, together with the data whose influence needs to be removed. The output is a new model from which that influence has been stripped out. The unlearning algorithm starts by working out what influence the specified data had on the model. And because most machine learning is incremental, most unlearning is incremental too.
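To make that a little more concrete, here is a minimal sketch in Python of the crudest form of unlearning: retraining the model from scratch without the offending records. Everything in it (the function names, the use of scikit-learn’s LogisticRegression, the synthetic data) is my own illustration rather than any standard unlearning API.

```python
# A minimal sketch, assuming a scikit-learn style workflow.
# 'Unlearning' here is simply exact retraining without the bad records,
# so their influence on the fitted parameters disappears entirely.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, bad_indices):
    """Return a fresh model fitted on everything except bad_indices."""
    keep = np.setdiff1d(np.arange(len(X)), bad_indices)
    return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

# Illustrative usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

original = LogisticRegression(max_iter=1000).fit(X, y)
cleaned = unlearn_by_retraining(X, y, bad_indices=[10, 42, 99])
# The cleaned model's coefficients no longer reflect the removed rows.
```

The point of the research is to achieve the same end result without that brute-force retrain, which for a large insurer’s pricing or underwriting models could be prohibitively expensive.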

The challenge of course is to know when the unlearning is complete. Techniques such as sharding and slicing can help, but don’t ask me to explain them! What my reading in this area does make clear is that we’re not talking about absolutes here. Some compromises for practical computational reasons should be expected.
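For what it’s worth, here is a rough sketch of how sharding can make forgetting cheaper, loosely in the spirit of the ‘SISA’ approach described in the unlearning literature: split the training data into shards, train one sub-model per shard, and combine their votes. Unlearning a record then only means retraining the one shard that held it. Again, the names and structure below are illustrative assumptions on my part, not any particular library’s API.

```python
# A rough sketch of the sharding idea (SISA-style), not a library API.
# One sub-model per shard; forgetting a record only rebuilds its shard.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_SHARDS = 5

def train_shards(X, y, n_shards=N_SHARDS):
    shard_ids = np.arange(len(X)) % n_shards   # assign each row to a shard
    models = []
    for s in range(n_shards):
        mask = shard_ids == s
        # assumes each shard contains examples of both classes
        models.append(LogisticRegression(max_iter=1000).fit(X[mask], y[mask]))
    return models, shard_ids

def predict(models, X):
    votes = np.stack([m.predict(X) for m in models])  # one row of votes per sub-model
    return (votes.mean(axis=0) >= 0.5).astype(int)    # simple majority vote

def unlearn(models, shard_ids, X, y, bad_index):
    s = shard_ids[bad_index]                           # which shard held the bad record
    mask = (shard_ids == s) & (np.arange(len(X)) != bad_index)
    models[s] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    return models                                      # only one sub-model was rebuilt
```

The trade-off is the one hinted at above: you accept a somewhat weaker ensemble in exchange for cheap, targeted forgetting, and you still need a way to demonstrate that the forgetting actually happened.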

Here and Now

Some of you may be thinking that this is all a bit too nebulous, too early in the run of things, to start preparing for the worst case scenario. ‘We’re busy enough’, you all protest! That may be, but as I’ve mentioned on several occasions in recent years, discrimination represents a truly worst case scenario for insurer reputations. The last few months have seen it draw several steps nearer.

Those insurers who reacted to my warnings back in the mid-2010s will be familiar with the exposures I’ve outlined above. Their systems will hopefully now be more adaptable, more tuned into these types of risks. My point is that these risks have not landed in the last few months through the ethnicity report. They’ve been on the radar for several years.

Will some form of unlearning of models be on the regulator’s list of response options? Perhaps not in technological terms. It is much more likely to be framed in conceptual terms, such as ‘get that out of your model’, or ‘stop your model until it’s fixed’. In other words, push the means onto the sector, but still judge it by the ends. Insurers should start thinking about how they would handle this.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.