May 10, 2023

Right Question; Wrong People – A Warning on Data Ethics

Insurers are working on ways to ensure that their digital decision systems deliver outputs that are fair and non-discriminatory. In doing so, they find that this involves some difficult choices. Yet how are those choices made? It's important that insurers don't approach them the wrong way round.

How you look at fairness determines what you see

Addressing fairness and discrimination in digital decision systems is one of the big challenges facing insurers at the moment. To meet it, insurers tend to turn to the wide range of mitigation techniques now available for algorithmic fairness. These are largely technical tasks that adjust the algorithm or the data in order to produce a fair, or more likely fairer, output.
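
To make that concrete, here is a minimal, purely illustrative sketch of the kind of check such technical work revolves around: measure the gap in outcomes between groups, then adjust the model or data until the gap narrows. The data, group labels and tolerance below are invented for illustration only.

```python
# Illustrative only: a toy check of one common fairness metric
# (demographic parity difference) on a model's approval decisions.
# The data, group labels and 0.05 tolerance are hypothetical.

def demographic_parity_difference(decisions, groups):
    """Difference in approval rates between groups (0 = parity)."""
    rate = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    return max(rate.values()) - min(rate.values())

# Toy example: 1 = approved, 0 = declined
decisions = [1, 1, 0, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.05:  # hypothetical tolerance
    print("Gap exceeds tolerance: a mitigation technique would adjust the model or data.")
```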

This work is important, but it is also secondary. Reading a recent paper by Boris Ruf and Marcin Detyniecki of AXA – “Towards the Right Kind of Fairness in AI” – reminded me why this is so. I’ll explain that here and put my own challenge to the sector: are you going about this the wrong way round?

The AXA paper seeks to find a way through the myriad of fairness mitigation techniques that can be applied within AI, in order to arrive at a universal ‘fairness compass’. There’s a lot of good stuff in the paper, but at the same time, it reinforces how important it is that fairness not be seen as merely a ‘technical choice’ to be left to AI people.

The Fairness Objective

Early on, their paper raises a key question:

“Besides the technical task of adjusting the algorithms or the data, an equally important philosophical question needs to be settled: what kind of fairness is the objective? Fairness is a concept of justice and a variety of definitions exist which sometimes conflict with each other. Hence, there is no uniformly accepted notion of fairness available.”

As I wrote in my ‘Revolutionising Fairness to Enable Digital Success’ paper for the Institute and Faculty of Actuaries, fairness is indeed a complex thing and there is no single way of looking at it. Insurers have tended to focus exclusively on merit, but are now starting to learn that it is wider than that. It is very much about justice and so, not surprisingly for something like justice, a range of sometimes conflicting interpretations exists.

The key mental shift to make with something like fairness is to see the variety of forms in which it can be expressed not as conflict, but as competition. And not as competition between individuals, but as competition within a team. How that competition plays out will be influenced by views and aspirations relating to the context in which the team is to perform. There is no universally accepted team, yet teams still perform. Something similar holds true for fairness.

The AXA paper presents a 'fairness compass' as the authors' way of answering that 'fairness objective' question (aka determining their best fairness team). It's a decision tree “...which outputs the best suited option for a given use case after settling a few crucial questions on the desired type of fairness.” And it’s a decision tree designed for use by ‘AI stakeholders’, whom I take to be those involved with an organisation’s AI project.
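
For readers who like to see the shape of such a thing, below is a deliberately simplified sketch of what a decision tree of that sort could look like in code. The questions it asks and the fairness notions it returns are my own illustrative stand-ins, not the authors' actual compass.

```python
# A hypothetical, heavily simplified sketch of a 'fairness compass'
# style decision tree. The questions and the notions returned are
# illustrative stand-ins, not the authors' actual compass.

def fairness_compass(ground_truth_reliable: bool,
                     base_rates_differ: bool,
                     priority: str) -> str:
    """Return a suggested fairness notion for a given use case."""
    if not ground_truth_reliable:
        # Without trustworthy labels, outcome-based notions are fragile.
        return "demographic parity (equal selection rates)"
    if base_rates_differ and priority == "avoid_missed_positives":
        return "equal opportunity (equal true positive rates)"
    if priority == "equal_error_burden":
        return "equalised odds (equal TPR and FPR)"
    return "calibration within groups"

# Example use case: reliable labels, differing base rates, and a
# priority on not turning away deserving applicants.
print(fairness_compass(True, True, "avoid_missed_positives"))
```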

Right Question; Wrong People

This approach, in my opinion, is secondary because it is putting the right question (what kind of fairness is the objective?) to the wrong people. That question can only be properly answered by the people who are affected by, or who could affect, the answer. These are not AI stakeholders but stakeholders in general, by which I mean consumers, policy makers, regulators, experts and so on.

It is they who will determine what mix of fairness types goes into your fairness objective (aka the best people for your team). And from that mix of fairness types – merit, access, need, time and crowds – will emerge the parameters that should then shape the decision tree questions and outputs that the authors were aiming for.

Now of course this range of stakeholders is not going to be convened every time you design or adjust an algorithm. The firm needs to adopt a process for questioning and listening to such stakeholders, and then incorporating their output into its approach to design, testing and implementation. This is not rocket science – there will be engagement protocols out there for firms to follow.

Looking Outwards

What does all this mean for your firm then? Well, if it has a team working on data ethics, they really should be spending more time looking outwards rather than inwards. To answer that right question properly – what is our fairness objective here? – they should first be weighing up how to strike an equitable balance between the different types of fairness that I mentioned earlier. That in turn involves the process of generating output from the right people – the stakeholders representing those most affected by the use to which the decision system is to be put.

Unwieldy, you may think, and to begin with, it may well feel so. Most things do first time round, but that will soon drop away. And of course it will feel somewhat political, social, even philosophical at times. That’s part of how justice-related matters can often feel, but remember, as your handling of them matures, they become pretty much part of ‘how we do things round here’. Just think of your third party liability teams – they handle justice matters every day as a norm.

SMF Hats

One audience that I expect to be asking more and more about how fairness has been configured within an about-to-be-launched AI system will be the board or executive-level sponsor. In effect, a senior management function under SMCR. They should now be recognising that fairness isn't something that the firm has 100% control over. If the history of UK insurance over the last five years tells them anything, it is that people and groups outside of the firm can, and are prepared to, influence events.

So their fairly obvious question will be around how those outside influences have been taken account of in designing and testing the AI system. A process orientated around just technical fairness mitigation techniques is unlikely to answer that.

Retro-fix Risk

It may feel tempting to forge on and hope that any changes can be retro-fitted later. That’s a big ask of any decision system, let alone several that are often closely interlinked – underwriting, claims, counter fraud and marketing. A tester’s nightmare.

It may feel slow and long winded, compared with a smart internal process that gives you what you need at the touch of a few buttons. Progress however is a combination of speed and direction. Going off at a fast pace in the wrong direction doesn’t help.

To receive articles like this every fortnight, subscribe to the free 'Ethics and Insurance' newsletter. Just your name and email are needed.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.