Sep 6, 2022

Asking the Wrong Question - EIOPA and High Risk AI

A tussle is emerging around whether the artificial intelligence systems used by insurers should be classified as high risk or not. It’s an EU thing, but of clear relevance to non-EU insurers too. The implications of a high risk designation are huge, so insurers need to track these developments.

Is EIOPA looking the wrong way? 

In 2021, the European Commission published a proposed framework for the regulation of artificial intelligence. Its intention was to ensure that the risks from the use of AI systems were considered alongside the benefits of such systems. At the heart of the proposal was a risk-based approach, differentiating between uses of AI that create a) unacceptable risk, b) high risk and c) low or minimal risk. And the risk in question is framed in relation to “…the safety or fundamental rights of citizens…”

The difference for a sector between having its AI systems labelled as low risk and having them labelled as high risk is significant. For low risk AI systems, only “very limited transparency obligations” are being proposed, such as flagging when an AI system is involved in interactions with consumers. Chatbots are an obvious example here.

For high risk AI systems, what’s being proposed are requirements for high quality data, documentation and traceability, transparency, human oversight, and accuracy and robustness. Some of this will of course already be covered in existing legislation. As a result, system designers and users will have to comply with ‘horizontal’ regulations like the AI Act, plus the relevant existing ‘vertical’ regulations.

The Tussle around High Risk

So where’s the tussle happening then? In June 2022, EIOPA, the European insurance regulator, wrote to the co-legislators of the AI Act asking for insurance not to be included amongst the use cases being developed to illustrate high risk AI systems. Instead, EIOPA is seeking that designation role for themselves. And their justification for this is twofold:

  • “…the specificities of the insurance sector and the need for insurance specific knowledge for such a technical area.”
  • “…the already existing sectorial governance, risk management, conduct of business and product oversight and governance arrangements.”

In other words: insurance is too complex, so leave it to us; and we’re doing a fine job, so don’t muddy the waters. Both are bold claims that EIOPA was, on the one hand, bound to make, and on the other hand, can now expect to have scrutinised.

So in the context of what is being expected from the proposed AI Act, how well might EIOPA come out of such scrutiny? Not brilliantly, I suspect. Sure, they convened a ‘consultative expert group on data ethics in insurance’ and published the related report in 2021. However, the group was dominated by insurer interests, and not much has come out of it since that report.

And while I expect EIOPA to be supportive of digital innovation in insurance, I find the balance of their output between the benefits and risks of AI in insurance to be weighted very much towards the former. Have they perhaps fallen victim to narrative capture by the lobbying of insurers and the Big Four consultancies? It sometimes feels that way.

High Risk or Worse?

I think EIOPA has a weak case for saying that existing legislative frameworks, at both EU and national levels, are sufficient to justify insurance not being classified as high risk. Insurance is no longer seen as so technical that it has to be left to specialists. And in particular, regulators and the market have been too slow to grasp the issues around fairness and discrimination arising from the capabilities of digital decision systems in insurance.

What EIOPA should focus on is not the argument around whether AI systems in insurance should be classified as high risk or low risk, but on the possibility that some current and emerging market practices might be classified as unacceptable risks.

It doesn’t take a rocket scientist to look at the three often-quoted examples of unacceptable risk in the proposed AI Act and find links to insurance practices. Let’s start with the example of “…subliminal, manipulative, or exploitative systems that cause harm.” There’s an argument that price walking was an AI-driven practice that exploited people’s relationship with their finances.

Then there’s the example of “…real-time, remote biometric identification systems used in public spaces for law enforcement”. Split out the public spaces and law enforcement aspects (not hard for a legislator to do) and you’re left with the real-time, remote biometric identification systems that the Californian Insurance Commissioner has specifically referenced in relation to claims settlement decisions.

And finally, there’s the example of “…all forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.” Given how much insurers have invested in data and systems to understand more about consumer behaviours, it wouldn’t be hard for a challenge to the sector to be raised around this example.

The Two Sides of Risk

Of course, some of you will immediately protest that such data and systems are necessary for risk purposes. Yet others are now challenging whether it really has only been about risk, and not about profit and loss. Given how the price walking challenge turned out, the sector seems to be on the back foot on that one.

I would mention that I researched social sorting in insurance back in 2014 and wrote this article warning of a lack of control over, and reflection on, its consequences. It’s part of the horizon scanning work I’ve been doing for quite a while now.

What’s Needed Now?

I would suggest three things are needed now.

Firstly, EIOPA should seek a more thought-through approach to the AI Act. Their present position is not as strong as they think – good talk but not enough walk. As is happening here in the UK at the moment, it’s not difficult for consumer groups to find practices to challenge.

Secondly, insurers with any footprint in EU member countries need to build their capabilities around ethical risk management in digital decision systems. In other words, get up to speed on data ethics and be better at assessing the risks that those systems produce. As I’ve said, there’s a lot of challenge happening and it’s only likely to become more sustained.

And thirdly, insurers in the UK need to watch these EU developments carefully. In terms of insurance regulation, the UK is falling behind its US and EU counterparts, both of whom seem to be taking a more robust position on the uses to which AI is being put. The FCA has backed away from addressing data ethics in insurance and will, I suspect, end up being forced to play catch up at some point. That will be painful for the unprepared UK insurer.

Will EIOPA Succeed?

Will EIOPA succeed in taking responsibility for developing the use cases that determine whether insurance AI systems are going to be labelled high risk or not? In one word, no.

I don’t think they will succeed because there are just too many questions circulating around the sector at the moment. And insurance is losing its ‘special case’ mystique.

Instead, I think the proposed AI Act will result in EIOPA having to raise their own expectations of how AI is used within the sector. In a world of horizontal legislation (like the AI Act) and vertical legislation (like that for insurance), part of the rationale for new horizontal legislation is to pull up vertical legislation that has insufficiently addressed the cross-sector issue of concern. That’s what I think will happen here.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.