Mar 17, 2022

Algorithm Destruction – the Implications for Insurers

A regulatory trend is emerging that could have profound implications for insurers. We’ve seen regulators issue fines for obtaining and using data in ways that fall outside of the rules. Now something much, much more ominous is starting to be used by regulators. It’s called algorithm destruction.

I first noticed algorithm destruction being used back in January 2021, in a ruling by the US Federal Trade Commission (FTC) on a photo storage app that had been using its users’ photos to develop and train facial recognition technology (more here). The FTC told the firm not just to delete photos and videos of users who deactivated their accounts, but also to delete any facial recognition algorithms developed with those users’ photos or videos. Furthermore, the firm was also to delete all “face embeddings”, described as “data reflecting facial features that can be used for facial recognition purposes”, derived from the photos of users who hadn’t given consent for their use.

Now the FTC has acted again, in a ruling this month on WW International, formerly known as Weight Watchers (more here). The firm has to destroy the algorithms it built using personal information collected through its Kurbo healthy eating app from children as young as eight, without parental permission. The agency also fined the firm and ordered it to delete the illegally harvested data.

Two Ethical Triggers

This is algorithm destruction. In both cases, the ethical issue that triggered it was consent. It’s one reason why I’ve been calling on insurers to review their consent strategies, and how well their three lines of defence are working in relation to consent.

The other ethical issue lurking around cases like these is discrimination. If, as I have been reliably informed, some insurers are using data that equalities legislation doesn’t allow them to use, then serious repercussions are to be expected. Algorithm destruction could be one of them.

Sure, I know that some of you will be saying that both of the cases referred to above are in the US, which is different in many ways from the UK. Yet ‘many’ is not ‘all’, and consent, in my opinion, is one of the overlaps. And I know that UK regulators regularly exchange ideas and plans with their US counterparts, both inside and outwith financial services.

So while we have not yet seen a case requiring algorithm destruction here in the UK, I see this as more a case of ‘when’ than ‘if’. The reason is the pressure on the FCA to produce results, both directly through enforcement notices and indirectly through warnings to the market. Algorithm destruction is a regulatory warning par excellence.

What does it Encompass?

While the term ‘algorithm’ is often used with an air of sophistication, it simply refers to a list of rules to be followed in order to solve a problem. The code containing those rules is in effect a software application designed to perform a set of actions. Bring many such applications together and you’re then talking about a model.
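To make that concrete, here is a minimal, purely illustrative sketch in Python. The rules, names and figures are my own inventions for illustration, not anything drawn from an actual insurer’s pricing:

```python
# A tiny, invented pricing "algorithm": an ordered list of rules
# applied to the facts of a quote.
def risk_loading(age: int, claims_in_last_5_years: int) -> float:
    """Return a premium multiplier from two simple underwriting rules."""
    loading = 1.0
    if age < 25:
        loading += 0.30  # rule 1: load young drivers by 30%
    if claims_in_last_5_years > 2:
        loading += 0.50  # rule 2: load frequent claimants by 50%
    return loading

print(risk_loading(age=22, claims_in_last_5_years=3))  # prints 1.8
```

String many such rule sets together, feed them data, and you have a model.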

What the FTC sees as an algorithm encompasses everything from code to model, in full or in part. If the model is the product, then it encompasses the product too. Models are expensive to develop and the data and algorithms that make them up are part of a firm’s intellectual property. In some cases, a large part of their IP. This makes algorithm destruction so potent – it is a direct threat to a firm’s value.

What should Insurers Do?

Despite the danger of sounding like a stuck record, I urge insurers to review their consent strategies. I believe some firms could be doing things with data and algorithms that fall outside what the consent they’ve obtained allows them to do.

I also urge insurers to bring much more challenge into their three lines of defence. There are weaknesses in those three lines. In essence, I’m not convinced they're working in practice as they were meant to work on paper (more here).

Insurers should then apply test programmes and datasets to their data lakes to identify any data that directly references (or indirectly proxies for) race or gender, and evidence the results. For without that evidence, I believe the FCA is preparing to run something akin to those tests on your systems itself. A sketch of what such a test might look like follows below.
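As a rough illustration only, here is a sketch of such a test programme in Python. The file name, column names and correlation threshold are all assumptions of mine, and a real programme would need far more sophisticated proxy detection than simple correlation:

```python
import pandas as pd

# Terms that directly reference a protected characteristic.
PROTECTED_TERMS = ["race", "ethnicity", "gender", "sex"]

def find_direct_references(df: pd.DataFrame) -> list[str]:
    """Flag columns whose names directly reference a protected characteristic."""
    return [col for col in df.columns
            if any(term in col.lower() for term in PROTECTED_TERMS)]

def find_possible_proxies(df: pd.DataFrame, protected_col: str,
                          threshold: float = 0.5) -> list[str]:
    """Flag numeric columns strongly correlated with a protected attribute;
    such columns may be acting as indirect proxies for it."""
    numeric = df.select_dtypes("number").drop(columns=[protected_col],
                                              errors="ignore")
    codes = df[protected_col].astype("category").cat.codes
    corr = numeric.corrwith(codes).abs()
    return corr[corr > threshold].index.tolist()

df = pd.read_csv("policyholders.csv")  # assumed test dataset
print("Direct references:", find_direct_references(df))
if "gender" in df.columns:
    print("Possible gender proxies:", find_possible_proxies(df, "gender"))
```

The point is less the code than the output: a documented, dated record of what was found (or not found) is the evidence I’m suggesting insurers will need.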

Finally, I would urge insurers to look at their business continuity plan and test it against the scenario of a regulator requiring that a key AI model be destroyed. It may prove difficult, but then, that in itself should tell you a lot!

I foresee a case in which failings accumulate within a firm across the four points I’ve just mentioned, and as a result, a regulator tells it to destroy an algorithm or model, and perhaps any products derived from it as well.

Now is the time for a firm to make sure it’s not them.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.