May 15, 2023 · 8 min read

The EU AI Act is Full of Significance for Insurers

As the EU works towards enactment of its AI regulations, the impacts for insurers are becoming more apparent. Some markets have been labelled high risk, while some technologies are being closed off. What insurers need to do now is understand how ethics is changing the sector’s digital landscape.

The European Union - all aflutter about Artificial Intelligence

The AI Act’s ‘Compromise Text’, published last week, was hailed by the EU as a key step towards the first rules on artificial intelligence. And it contained four parts that have obvious implications for insurers. What’s interesting about these four parts is just how different they are. This reinforces my belief that insurers need to understand what I call the ethical landscape in which their digital strategies are operating, rather than just reacting to this event or that.

The four significant implications for insurers in the compromise text are...

  • the use of AI systems in life and health insurance markets is to be classified as high risk;
  • the use of AI systems for the detection of fraud in financial services is not in itself to be classified as high risk, but will still be affected by other requirements;
  • some uses of biometric data are to be classified as high risk, while other uses will be prohibited altogether;
  • a ban on social scoring is significant for underwriting in several markets.

What we have therefore is a market being addressed (life and health), a technology being addressed (biometrics), a function being addressed (counter fraud) and a practice being addressed (social scoring). Yet while each of these seems stand-alone, dig a little deeper and the interconnections become apparent. One example is the growing use of biometric data in counter fraud, which will now have to meet the high risk criteria.

I’ll now dig deeper into each of these four implications. They will of course directly affect insurers doing business in the EU, but I also expect them to have some form of knock-on effect for UK-only insurers over the next three years or so.

The Implications of being High Risk

Let's start with a quick look at the risk labels being used. There are just two - high risk and low risk - plus outright prohibition for certain uses. For a sector, the difference between having its AI systems labelled low risk and having them labelled high risk is significant. Here’s what I wrote about this last year...

“For low risk AI systems, only “very limited transparency obligations” are being proposed, such as flagging when an AI system is involved in interactions with consumers... For high risk AI systems, what’s being proposed are requirements for high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness. Some of this will of course be already covered in existing legislation. As a result, system designers and users will have to comply with new ‘horizontal’ regulations like the AI Act, plus relevant existing vertical regulations.”

So what sort of ‘risk’ is being referred to in ‘high risk’? The AI Act text has this to say in relation to life and health insurance...

“...AI systems intended to be used to make decisions or materially influence decisions on the eligibility of natural persons for health and life insurance may also have a significant impact on persons’ livelihood and may infringe their fundamental rights such as by limiting access to healthcare or by perpetuating discrimination based on personal characteristics”

You can think of this another way. For a life and health insurer, the case they’ll have to make for their AI system being acceptable under the high risk criteria needs to be based around people’s fundamental rights, as set out in key EU directives. So the bar is being set pretty high, and it’s got ethics at its heart.
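
To make that triage concrete, here’s a minimal sketch, in Python, of how an insurer might map an inventory of AI systems against the Act’s risk tiers. It is purely illustrative: the tier names, obligation lists and inventory entries are my own paraphrase of the compromise text, not an official schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. untargeted scraping of biometric data
    HIGH = "high"              # e.g. eligibility decisions in life and health
    LOW = "low"                # e.g. consumer chatbots

# Obligations paraphrased from the compromise text; not an official list.
OBLIGATIONS = {
    RiskTier.LOW: ["flag that an AI system is involved in consumer interactions"],
    RiskTier.HIGH: [
        "high quality data",
        "documentation and traceability",
        "transparency",
        "human oversight",
        "accuracy and robustness",
    ],
    RiskTier.PROHIBITED: ["withdraw the system from the EU market"],
}

@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier

    def compliance_actions(self) -> list[str]:
        return OBLIGATIONS[self.tier]

# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystem("life-uw-model", "eligibility decisions in life insurance", RiskTier.HIGH),
    AISystem("claims-chatbot", "consumer interaction", RiskTier.LOW),
]
for system in inventory:
    print(f"{system.name}: {', '.join(system.compliance_actions())}")
```

The point is less the code than the discipline it implies: every system in the inventory gets a tier, and every tier carries explicit obligations that someone has to evidence.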

Targeting Life and Health

The EU certainly has concerns about the life and health insurance markets. You may recall their recent proposal to ban insurers from access to secondary health data (more here). Add in a ‘high risk’ classification under the forthcoming AI Act and the message being conveyed is twofold.

Firstly, that insurers’ use of data and analytics in life and health has, to date, not been reassuring enough for EU policymakers. In particular, it has not been proportionate enough to the impacts that could result for individuals and society. So insurers in those markets now need to take a critical look back over their digital work and challenge themselves on whether they’ve been doing enough around the obvious ethical issues.

Secondly, the message being conveyed is that insurers also need to take a critical look at the digital innovations in their life and health pipeline. This is not just about ‘are they accurate enough’. It’s about whether the science underlying their performance is robust enough. Likewise with the quality of data: are the provenance procedures and criteria good enough, thorough enough, challenging enough, to earn that ‘high quality’ label?

The temptation for insurers in the life and health markets is to rely on the confidence with which their designers, suppliers and technology advisers handed over these systems. That would be a risky thing to do – those handovers are unlikely to have involved enough robust challenge. Certainly not as much as will be expected for a ‘high risk’ system.

Biometrics

The UK and the EU are now viewing biometric-based systems in a similarly critical light. Last October, the UK’s Deputy Information Commissioner said that the only permissible use he could think of for such systems in a business setting was as a game at the office party. Now the EU has taken a similar view. It is handling the issue in three ways.

Firstly, AI systems being used for one-to-one verification are to be classified as low risk...

“AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises...”

Then there’s the use that is to be classified as high risk...

“AI systems intended to be used for biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems... should... be classified as high-risk.”

And then there are the prohibited uses...

“The indiscriminate and untargeted scraping of biometric data from social media... to create or expand facial recognition databases... can lead to gross violations of fundamental rights, including the right to privacy. The use of AI systems with this intended purpose should therefore be prohibited.”
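
Pulling those three strands together, here’s an equally illustrative sketch of the triage logic for biometric use cases. The purpose labels and the function are hypothetical - my own paraphrase of the compromise text, not a legal test.

```python
from enum import Enum

class BiometricTier(Enum):
    LOW = "low risk"           # one-to-one verification / authentication
    HIGH = "high risk"         # identification, or inferring personal characteristics
    PROHIBITED = "prohibited"  # untargeted scraping for face databases

def classify_biometric_use(purpose: str) -> BiometricTier:
    """Toy triage of a biometric use case, paraphrasing the compromise text."""
    if purpose == "verify a specific person's claimed identity":
        return BiometricTier.LOW
    if purpose in ("identify a person", "infer personal characteristics",
                   "emotion recognition"):
        return BiometricTier.HIGH
    if purpose == "untargeted scraping to build a face database":
        return BiometricTier.PROHIBITED
    raise ValueError(f"unmapped purpose, needs legal review: {purpose}")

print(classify_biometric_use("emotion recognition").value)  # high risk
```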

Uses in the Market

Insurers have been looking at uses for biometrics for several years. Voice analysis was adopted early and widely, but under this AI Act, only its use for one-to-one verification escapes the high risk label. Deployed as a counter fraud measure, for example, it would be making inferences about personal characteristics, and so would have to clear the full high risk requirements.

Image analysis was being researched by a leading European insurer several years ago (more here). They went on in an annual foresight report to say that ‘emotions can in no way lie’. It’s a view that clearly holds no water with EU policymakers or UK regulators. An expensive investment that is now worthless.

And has there been an insurer or two looking at scraping biometric data for use in underwriting, claims, counter fraud and marketing? Yes, of course. Again, a digital project whose ‘best by’ date is about to expire.

To be rather brutal, it was obvious that the sector’s use of biometrics was always going to be massively disrupted by regulators and policymakers. I was researching it from late 2017 and published this article on it in 2019. The UK’s leading authority on emotional AI is Professor Andrew McStay, and in 2019 I submitted a joint paper with him to the UK Government’s Centre for Data Ethics and Innovation on the use of emotional AI in insurance.

Mixed News for Counter Fraud

As mentioned earlier, the use of AI systems for the detection of fraud in financial services is not to be classified as high risk. On the face of it, that looks like a green light to counter fraud initiatives by insurers et al. But is it?

The AI Act is to prohibit this use of AI systems...

“...the placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal or administrative offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons;”

Now, I’m no lawyer, but from my time negotiating the fine detail of risk transfer contracts, that reads to me as AI systems being prohibited from predictively assessing the fraud risk presented by a person, but allowed when it comes to assessing the fraud risk of a person with whom you’re engaging in a claims situation.

The big question then is just how this will impact application fraud strategies. These involve everyone seeking an insurance quote being assessed for fraud from the moment they start typing their details into a price comparison website. Is that going to survive the EU AI Act?
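
As a toy illustration of the distinction as I read it (and emphatically not legal advice), the gating logic might look something like this. The stage labels and the function are hypothetical:

```python
from enum import Enum

class Stage(Enum):
    QUOTE = "quote"  # prospective customer typing details into a comparison site
    CLAIM = "claim"  # live engagement on a submitted claim

def fraud_scoring_allowed(stage: Stage, uses_profiling: bool) -> bool:
    """My reading of the prohibition: predictive, profiling-based scoring of a
    person before any engagement looks prohibited; assessing the fraud risk of
    a claim you are actually handling looks permitted."""
    if stage is Stage.QUOTE and uses_profiling:
        return False  # looks like the prohibited predictive assessment
    return True

print(fraud_scoring_allowed(Stage.QUOTE, uses_profiling=True))   # False
print(fraud_scoring_allowed(Stage.CLAIM, uses_profiling=False))  # True
```

On that reading, the always-on application fraud screens that price comparison journeys rely on are exactly the kind of system that will need close legal scrutiny.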

Underwriting and Social Scoring

Outwith life and health insurance, how else might the sector be affected by the AI Act? Consider this practice, to be prohibited under the Act...

“the placing on the market, putting into service or use of AI systems for the social scoring, evaluation or classification of the trustworthiness of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;”

So what is social scoring? Here’s how it’s defined by the AI Act...

“‘social scoring’ means evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics;”

Again, the ethical implications of it have been clear for a long while. I wrote this article about it in 2014. Since then, I have to say that the sector has sailed on somewhat blind to the obvious ethical issues involved. Once fully enacted, the EU AI Act will change all that.

My point is that most insurers have arrived far too late to the world of data ethics. Few if any have grappled with social scoring. This means a lot of painful and expensive systems conversion work in the near future. Or to put it another way, all that ethical debt will soon become due for repayment.

So why has the sector been so blinkered about social scoring? I believe it's because an insurance business is built so much around differentiation of risk, adverse selection and moral hazard. Insurers see each of these as being influenced by aspects of character. So if AI could uncover some of those aspects of character early enough, then portfolios could become more profitable.

What they failed to take on board was not the question of why they wanted to uncover aspects of character, but of how they went about doing so. Again, this is a failure to properly recognise and assess the ethical risks involved.

To Sum Up

The EU AI Act will be mixed news for insurers when it enters fully into force. Overall, I believe that most insurers will experience more, and bigger, downsides than upsides. Life and health will most certainly not be the only market affected.

At the heart of what the EU is doing through its AI Act is levelling the playing field around fairness. It is seeking to rebalance fairness more towards the interests of the consumer. At least in the EU, policymakers are not going to allow digital success in insurance to be won at the cost of fairness.

And I’m afraid this was entirely predictable. I spent much of last summer researching fairness and how it could be handled better. Out of that work came a detailed paper for the Institute and Faculty of Actuaries called ‘Revolutionising Fairness to Enable Digital Insurance’ which you can find here. I’ll be discussing the paper’s key points at the main actuaries conference next month (say hello if you’re there).

Success in digital insurance will not be achieved through speed. It will be achieved first and foremost by going in the right direction. For EU insurers, the AI Act will cause some serious navigational adjustments.

If you’re holding an internal workshop about the EU AI Act, consider bringing me in as an ethical and independent voice. This broadens the perspectives on which decisions will be based. Get in touch here.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.