Jul 7, 2023

Insurers’ Discrimination Exposure - Greater than Thought

Is the challenge to insurers around discriminatory pricing soon to become more serious? The ethnicity penalty campaign has been orientated around indirect discrimination, but what happens when the distinction between direct and indirect is revisited? I look at recent research on this.

An important paper from Oxford scholars

When the ethnicity penalty campaign emerged, most insurers would have turned to their data scientists, seeking reassurance that their models and data had been arranged so as to minimise discrimination wherever possible. That reassurance would have relied heavily on a variety of tests and metrics for what is called ‘algorithmic fairness’.

Most of those tests and metrics for algorithmic fairness relate to disparate impact, which, along with disparate treatment, forms the core of US discrimination law. In Europe, disparate treatment broadly corresponds to direct discrimination and disparate impact to indirect discrimination.
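
To make that concrete, here is a minimal sketch of the kind of disparate impact check a data science team might run: the favourable-outcome rate for each group compared against a reference group, read against the US ‘four-fifths’ rule of thumb. The column names, data and threshold are illustrative assumptions on my part, not anything taken from the paper.

```python
# Minimal sketch of a disparate impact check, assuming a pandas DataFrame
# with illustrative columns 'group' (protected characteristic) and
# 'approved' (1 = favourable pricing decision, 0 = not).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                           reference_group: str) -> pd.Series:
    """Each group's favourable-outcome rate divided by the reference group's rate.
    Values below ~0.8 are the usual red flag under the US 'four-fifths' rule of thumb."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Example usage with made-up data:
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})
print(disparate_impact_ratio(df, "group", "approved", reference_group="A"))
```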

Legal scholars have usually been comfortable mapping the US concepts of disparate treatment and disparate impact onto the European concepts of direct and indirect discrimination. Yet what if those two sets of legal concepts don’t map onto each other as completely as previously thought?

So what, some of you may ask, thinking perhaps that this sounds a bit technical. A recent paper by three Oxford law and computer science scholars is indeed a bit technical, but it is full of implications. It is being described as one of the most significant papers in the field of data ethics for several years.

Time to Tune In

There are two linked reasons why insurers need to tune into the points being raised in this paper.

The first relates to the problem of having tests and metrics for algorithmic fairness that were devised around a set of US legal concepts, but then using them in an EU / UK context. Insurers who have been relying on them could find that they’re not working as well as originally thought.

The second reason relates to the possibility that some cases that would have been treated as disparate impact in a US context could be treated as direct discrimination in a European context. What this does, of course, is expand the legal and reputational exposure that UK / EU users of algorithms have to charges of direct discrimination.

The authors point to two types of discrimination that would be labelled disparate impact in the US, but direct discrimination in the EU / UK. These are...

  • inherently discriminatory, where a decision maker uses variables in their evaluation that are indissociable proxies for a protected characteristic.
  • subjectively discriminatory, where a person’s protected characteristic influences the decision maker’s conscious or subconscious mental processes.

I’m not going to explore these concepts in detail (I’m no legal scholar), so please read the paper to find out more about them. My point in raising them here is to show that the mechanisms insurers have traditionally relied on for achieving some level of algorithmic fairness in their models may not be as thorough as they thought.

The Likely Exposures

So what are the likely exposures here?

The first is that an insurer’s algorithm could be doing things that are inherently discriminatory, but which are not picked up by the tests and metrics for fairness. The authors highlight failing to pick up on indissociable proxies as an issue here. These are proxies that cannot be regarded or treated as separate or unconnected from what they are standing in for (examples in the paper).
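
By way of illustration, here is a rough sketch (my own, not from the paper) of the sort of association check that outcome-level fairness metrics typically leave out: measuring how strongly a candidate rating factor tracks a protected characteristic. A high score would only be a prompt for human and legal review, not a legal test for an ‘indissociable’ proxy; the column names, data and threshold are all assumptions made for the example.

```python
# Hedged sketch: a crude screen for proxy variables, assuming you hold a
# candidate rating factor and a protected characteristic for the same records.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V association between two categorical series (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Made-up data purely for illustration:
df = pd.DataFrame({
    "postcode_band": ["N1", "N1", "S2", "S2", "S2", "N1"],
    "ethnic_group":  ["X",  "X",  "Y",  "Y",  "Y",  "X"],
})
score = cramers_v(df["postcode_band"], df["ethnic_group"])
if score > 0.5:  # illustrative threshold, not a legal standard
    print(f"postcode_band tracks ethnic_group closely (V={score:.2f}); review as a possible proxy")
```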

The second is that the insurer’s algorithm could be doing things that are subjectively discriminatory. Bear in mind that these are not things that those tests and metrics for algorithmic fairness are designed to pick up. Instead, this comes down to what I call the management and governance aspects of data ethics.

Now many of you will hold your hands up at this point and say ‘hey, that’s not something our sector would do’. I have met an awful lot of good people in the sector, yet on two separate occasions in recent years I’ve been told by very senior and reliable sources in the sector that this is happening. And what they said was unequivocal.

To Sum Up

What this adds up to, then, is that the sector could well be facing a much greater risk of direct discrimination than its executives ever thought possible. Everyone has been focussing on indirect discrimination. The reason they now need to look very carefully at direct discrimination is that their tests and metrics for algorithmic fairness have been devised around non-UK/EU concepts of discrimination, and their three lines of defence model of compliance has a serious flaw at its heart.

The paper's full title is 'Legal Taxonomies of Machine Bias: Revisiting Direct Discrimination' by Reuben Binns, Jeremias Adams-Prassl and Aislinn Kelly-Lyth, in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), June 2023.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.