Sep 27, 2023

Why This Approach to AI, Insurance and Regulation is Flawed

A leading insurance institution has set out its views on how insurers’ use of AI should be regulated. Its aim is to give insurers a ‘right to innovate’. Its case is cleverly constructed, but underlying it are some flawed interpretations. I critique the report and offer a more realistic view.

Will EU policymakers listen to the Geneva Association?

The Geneva Association is (in its own words) a members’ club for the chief executives of insurers and reinsurers. Its ‘Regulation of Artificial Intelligence in Insurance’ is a typical Geneva Association (GA) paper: well presented and argued, covering both opportunities and risks, but ultimately arranged very much to promote insurers’ interests. This is not research, but public policy advocacy.

Its audience is EU policymakers, in particular those associated with the forthcoming AI Act. What the GA wants is for AI in insurance to be regulated through insurance regulators; what’s called vertical regulation. At the moment, the EU wants this to happen through the AI Act; in other words, horizontal regulation.

This is something that EIOPA, the EU’s advisory and supervisory authority for insurance and occupational pensions, has previously called for (more here). I thought at the time that it was asking a lot and not presenting an especially strong case. As it turns out, EU policymakers weren’t convinced either and went ahead and labelled life insurance as high risk. This means that part of the market will face more onerous regulations around its use of AI.

Will the GA case have more of an impact? I doubt it. We’re past the time when the GA could argue for things to go into the wording of the AI Act and at the time when it has to argue for things to be taken out of the wording. This, as the saying goes, is a very different ball game. Perhaps then the GA is seeking to influence EU politicians who are questioning the AI Act’s scope and depth.

The Core Point

What the GA report attempts is a defence of insurers’ ‘right to innovate’. It argues that insurers are using AI in ways that support consumer protection, deliver more affordable products and result in more satisfied customers. At the same time, it argues that existing regulations work, and that risks like fairness and discrimination are, respectively, too subjective to regulate and something the sector has long complied with.

New horizontal regulation like the AI Act will put all of this at risk, goes the narrative. In short, more innovative uses of AI equals happier customers, while more AI Act-style regulation means less AI, less innovation and less satisfied customers.

Unfortunately this is a narrative built upon several flawed assumptions. Let’s begin with perhaps the boldest of them all.

Reversible Decisions

“While there are risks linked to the use of AI by insurers, including potential bias, discrimination and exclusion, the largely reversible nature of AI decisions in insurance mean they are of a very different nature to those in other domains.”
“This is in contrast to other, much less regulated sectors – such as the technology sector – where AI decisions are irreversible and have severe potential consequences.”

It will be a surprise to many outside the sector (and indeed many inside it) that AI decisions occurring within insurance are said to be ‘largely reversible’. And this to an extent that the GA says contrasts with other sectors like ‘big tech’. Here’s why this is said to be the case...

“If customers are denied coverage or disagree with a claims settlement, they have multiple redress opportunities. The fact that a decision was made by an automated system rather than a human does not make a difference in this regard.”

There’s more...

“There is also the ‘human in the loop’, which forms part of the industry’s tight risk management frameworks. Insurers need to ensure that AI-generated risk assessments and premiums are actuarially sound, which means that the algorithms and their outcomes are subject to human scrutiny and oversight. The human in the loop is a vital component of the extensive redress mechanisms that exist within the insurance sector, either internally or through external entities like an ombudsman.”

During forty years in insurance, I’ve never before heard of insurance decisions being uniquely reversible. For sure, legal action by policyholders can reverse decisions, but that involves insurers also defending their decision. And in the UK, ombudsman services can only be accessed after the policyholder has already had their complaint against the insurer’s decision turned down.

Human scrutiny and oversight are common across all business sectors. If anything, insurers are using AI to reduce them as much as possible, so that underwriting, counter-fraud and claims decisions can be made as quickly as possible. Recall though this article about how an AI-powered claims acceptance decision of two seconds needs to be seen against the AI-powered refusal of health claims in under two seconds.

Insurance decisions, AI-powered or otherwise, are neither more nor less reversible than in other sectors. For the GA to suggest otherwise borders on the surreal. After all, ombudsman services exist because of systemic problems with the decisions being made against consumers by firms in a sector. And the regulations for insurers that the GA is promoting exist because of conduct problems in the past.

AI Governance

The GA report puts much emphasis on the sector being able to rely on its “tight risk management frameworks” for the handling of AI. Yet only last month, insurers in Ireland were telling the regulator there that, outwith a data governance focus on data protection, little was being done around data ethics issues like fairness.

So on the one hand, the GA report talked about...

“All models within a company are registered in such a catalogue, and changes to existing models or the use of a new model are documented there. Such catalogues form a powerful risk management tool.”

...while insurers told the Central Bank of Ireland that...

“in relation to models, most firms did not have an enterprise-wide model inventory or enterprise-wide model risk procedures in place.”

The obvious question to raise here is whether insurers in Ireland are atypical of insurers across Europe. The Irish market is certainly a unique mix of national and international firms, so while not a huge market, it certainly experiences the big trends in the wider sector. What insurers told the Central Bank cannot therefore be discounted as ‘just Ireland’.

This picture of two stories – one big on accountability and governance, the other not so big on delivery – will cause policymakers to be reluctant to take the GA report at face value. This, I think, is a misjudgement by the GA. After all, the EU intends to bar insurers from access to secondary health data. If there have been doubts before about insurance, data and accountability, why be so bullish around the AI Act?

The Benefit of the Doubt

I think the GA wanted to present a bullish case on AI governance to policymakers because of a remarkable pivot that the sector has been engaged in for several years: the move from actuarial fairness to behavioural fairness. For sure, it’s still a work in progress – most behavioural fairness measures used so far simply result in an adjustment to the actuarial rate. That said, it has become a key narrative about the digital transformation that the sector is engaged in.
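To make that concrete, here’s a minimal sketch in Python of what ‘an adjustment to the actuarial rate’ typically amounts to in pricing terms. The function name, the plus-or-minus 20% band and the figures are all my own illustrative assumptions, not any insurer’s actual rating model.

```python
# A hypothetical sketch: a 'behavioural fairness' measure applied as a
# multiplier on top of an actuarially derived base rate. The band and the
# figures are illustrative assumptions, not any insurer's actual model.

def behavioural_premium(actuarial_rate: float, behaviour_score: float) -> float:
    """Adjust an actuarial base rate by a behaviour-based factor.

    behaviour_score runs from 0.0 (safest observed behaviour) to
    1.0 (riskiest) and is mapped onto a +/-20% discount/loading band.
    """
    adjustment = 0.8 + 0.4 * behaviour_score  # 0.8x to 1.2x the base rate
    return actuarial_rate * adjustment

# A careful telematics-monitored driver with a 500 base rate gets a discount:
print(behavioural_premium(500.0, 0.1))  # 420.0
# Note that the actuarial rate still does the heavy lifting; the
# 'behavioural' element is just an adjustment layered on top.
```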

What I’m picking up in reports like that from the GA is that the sector’s senior leaders appear to see the AI Act as a threat to that key narrative. Conditions applied to the sector’s ‘right to innovate’ might derail the shift over to behavioural fairness.

Or to put it another way: there’s evidence that insurers started leaning towards behavioural fairness in order to try to avoid mounting concerns about actuarial fairness and discrimination (more here). The AI Act could undermine that move, hence the robust case put forward in the GA report. It’s understandable, but that doesn’t mean it adds up!

Does AI Support Solidarity?

The GA report goes on to assert that AI safeguards the principle of solidarity. Its argument is based on the idea of behavioural fairness that I’ve just been discussing...

“AI allows for more detailed and individual risk assessments, and thus a shift away from solidarity-based risk pools. This has sparked debate on how far individualisation should go. It can be argued that individualisation is most fair in situations where the insured party can influence their risk level, for example by adopting safer behaviours. AI in personal insurance allows for a more distinct separation between unchangeable risk factors that are covered under the principle of solidarity and risk factors that can be influenced by behaviour, such as reckless driving in motor insurance or high-risk activities in life insurance. By creating a clear understanding of intentional adverse behaviour, AI can actually safeguard the principle of solidarity”

Two things stand out here. Firstly, the GA report felt it necessary to construct a counter-argument to the now widely accepted case that the individualisation of risk that AI enables acts against solidarity.

And secondly, their argument relies on insurers using behavioural fairness techniques solely to distinguish between unchangeable risk factors and changeable risk factors. I’m sorry, but that just does not happen. I’ve been told on a number of occasions of ways in which insurers and their supply chain partners have been seeking to exploit unchangeable risk factors.

Take genetic data as an example. Four years ago, at a lecture in the Old Library at Lloyd’s of London, I listened to a law firm talk about how it was helping its insurer clients use genetic data to adjust large injury settlements (more here). If that’s not unchangeable, what is?

The “more detailed and individual risk assessments” that the GA talks about are indeed ‘what it says on the tin’. The ever-growing individualisation of risk assessments that underpins the sector’s move to personalisation aims to treat the risks relating to each policyholder in ever-smaller pools. This is the opposite of solidarity, as the sketch below illustrates.
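Here’s a minimal sketch, with hypothetical figures and naming of my own, of why that is. The more weight pricing puts on the individual risk assessment, the less cross-subsidy remains between policyholders – and cross-subsidy is what solidarity amounts to in pricing terms.

```python
# Hypothetical sketch of individualisation versus solidarity. The figures
# and the simple blending approach are illustrative assumptions only.

expected_losses = [200.0, 300.0, 500.0, 1000.0]  # per-policyholder expected claims
pool_average = sum(expected_losses) / len(expected_losses)  # 500.0

def premium(individual_loss: float, individualisation: float) -> float:
    """Blend the pooled rate with the individual rate.

    individualisation = 0.0 -> everyone pays the pool average (full solidarity);
    individualisation = 1.0 -> everyone pays their own expected loss
    (pools of one, no cross-subsidy left).
    """
    return (1 - individualisation) * pool_average + individualisation * individual_loss

for w in (0.0, 0.5, 1.0):
    print(w, [premium(loss, w) for loss in expected_losses])
# 0.0 [500.0, 500.0, 500.0, 500.0]   <- one pool: lower risks subsidise higher
# 0.5 [350.0, 400.0, 500.0, 750.0]
# 1.0 [200.0, 300.0, 500.0, 1000.0]  <- no cross-subsidy, i.e. no solidarity
```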

To Sum Up

There’s more in the GA report that is open to question, but I’ll move on to consider how individual insurers might want to put the GA report into some form of context. Insurers know that the sector is facing a growing number of challenges, and that some of these strike at the core of the digital transformation that the sector is engaged in. No surprise then that one of the sector’s leading institutions puts up a forthright case for keeping AI regulation with insurance regulators.

Yet there’s a real danger that the GA report will hinder insurers rather than help them. EU policymakers will have judged life insurance to be high risk in AI terms on the basis of the evidence put to them. The GA report, in indirectly challenging that decision, delivers a lot of bold statements about insurers’ safe practices, but fails to evidence their efficacy. Should EU policymakers look at the evidence or accept the statements? It’s not difficult to decide which one they’re most likely to take forward.

Individual insurers should not take their eyes off the challenges being brought to the sector. The GA report will have little to no impact on those challenges. What insurers should do instead is undertake a serious review of the governance and risk management around their digital strategies. They will then be more informed about where they are, and where they need to get to soon, when the challenges now forming, at least in the UK, reach a head in about 18 to 24 months.

The EU is not going to give insurers a free hand to innovate. That’s because, to quote what the UK data protection regulator told insurers here several years ago, big data is not a game played by different rules. Insurers need to accept conditions on their right to innovate through AI, just as they’ve had to accept conditions on their right to underwrite. It would have been far better for insurers if the GA had chosen to work proactively within that reality, rather than challenge what EU policymakers have already made their minds up on.

If you’re holding an internal workshop about the EU AI Act, bring me in as an ethical and independent voice. This broadens the perspectives that decisions will be based around. Get in touch here.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.