Dec 1, 2023

Data Ethics & Insurance: New Initiatives, Hard Decisions

Several initiatives on data ethics will be launched in the UK insurance sector in 2024. That’s great, but those commitments need to reflect what consumers are concerned about. This means a lot of risk assessment and implementation work. So what are the hard decisions that will inevitably be faced?

Signing a code of data ethics will create some hard choices

I’m going to look at four hard decisions that insurers will face in relation to these data ethics initiatives…

  • assessing the risks from the ethical issues involved;
  • building implementation around a moving topic;
  • judging time and expectations;
  • getting the leadership side right.

Codes and principles are only as good as the mechanisms to deliver on the commitments being made in them. That’s not to say that an insurer shouldn’t sign up to a code of ethics before it’s actively delivering on each of the commitments. That would sink most codes of ethics from the outset. Rather, the insurer should be signing up to a code of ethics with the firm intention of delivering on those commitments as soon as possible. All fine then, you may say, but to achieve that, some hard decisions will invariably be faced.

Hard Risk Decisions

The first such hard decisions relate to how the risks associated with those ethical issues are assessed. To be honest, I doubt that many insurers carry out a regular assessment of ethical risks. For sure, many assess for compliance risk, but that is quite different. One is about what you have to do, while the other is about what you should be doing.

Can a compliance risk assessment simply be extended to cover the ethical side as well? Not really, but I know many do it that way. The dynamics around ethical risks are quite different, so you end up with ‘compliance plus’ rather than ethics. In that difference lies the reason why functions like pricing just didn’t see the loyalty penalty coming. And to be honest, price walking was simple and straightforward compared with data ethics.

An ethical risk assessment is needed because when an insurer signs up to a code of ethics, it needs to know what exposures its present ways of working represent in terms of the commitments being made. If it turns out that it is carrying significant levels of under-managed data ethics risk, then it faces either the ignominy of signing up to something it then struggles or fails to deliver on, or a massive change programme to unpick those unethical practices from how it works.

Gross-Net Risk

This is a form of gross-net risk. In other words, the difference between what is said and what is done. And while an insurer may start off with the best intentions to contain that gross-net risk, it will face two challenges. The first is to control the rather natural inclination to let self-interest interpret ethical risks from too rosy a perspective. This has undermined the effectiveness of the three lines of defence and will undermine an ethical risk assessment as well.

The second challenge is that the insurer may be so committed to a digital strategy that it finds the notion of an ethical risk assessment just too disruptive. Yet another way to think of this is in terms of ‘better late than never’. And it’s a way of thinking that ‘has legs’ because of the emerging interest of investors and their advisers in the ethical side of AI. They will want to see how the big data ethics risks are being managed. And I emphasise ‘see’ – they’ll want to see the evidence.
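To make that gross-net gap a little more concrete, here is a minimal sketch (in Python) of how it might sit in a data ethics risk register. The commitments, field names and 1-to-5 scoring are invented for illustration; they are assumptions about how such a register could work, not a description of any insurer’s actual framework.

```python
from dataclasses import dataclass

@dataclass
class EthicalRiskEntry:
    """One line of a hypothetical data ethics risk register."""
    commitment: str       # what the code of data ethics says the insurer will do
    stated_score: int     # 1-5: strength of the public commitment being signed up to
    evidenced_score: int  # 1-5: what current practice can actually evidence

    @property
    def gross_net_gap(self) -> int:
        # The gap between what is said and what is done.
        return self.stated_score - self.evidenced_score

register = [
    EthicalRiskEntry("Claims data used only to settle claims fairly", 5, 2),
    EthicalRiskEntry("No genetic test data used in claims decisions", 5, 4),
]

# Entries with a wide say/do gap are the ones that point to either a change
# programme or the risk of failing to deliver on the commitment.
for entry in sorted(register, key=lambda e: e.gross_net_gap, reverse=True):
    if entry.gross_net_gap >= 2:
        print(f"Under-managed: {entry.commitment} (gap {entry.gross_net_gap})")
```

The point is simply that each commitment gets a ‘said’ score and a ‘done’ score, and the register surfaces the entries where the gap is widest.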

Implementing Around a Moving Topic

The second set of hard decisions will involve the challenge of implementing a set of commitments at the same time that some of the underlying issues are still ‘works in progress’.

Issues like discrimination are already pretty clear and set on the responsibilities side (albeit complex on the delivery side). After all, equalities legislation has been on the statute books for a number of decades and most, if not all, insurers employ DEI specialists.

Issues like fairness may seem relatively clear, but in reality, they still have a long way to develop. Remember that five years ago, the market, the regulator and the courts saw a fair price as being the one set by the market. How times have changed! And how they will continue to change. We are only at the end of the beginning with regard to fairness. This means that insurers need to be prepared to evolve their approach to fairness in line with expectations of what fairness means in relation to a sector like insurance.

And then there are ethical issues like autonomy. Insurers have placed their understanding of character at the heart of some significant digital transformations – in counter fraud, for example. Yet they have not been positioning their work on ‘character’ within a wider understanding of what scholars refer to as personhood. In other words, who we are, why we are and how that is understood. Autonomy and identity are very much part of that (more here and here). While scholars are still evolving their understanding of autonomy, insurers have, to be honest, barely got off the ground. So this is a data ethics issue on which insurers have a lot of learning to do, and rather quickly, as it shapes some pretty significant discourses around data, models and ethics.

Time and Expectations

Building and implementing mechanisms to enable an insurer to deliver on the commitments it has made around data ethics is not something that will happen overnight. It takes time and much testing. Yet some insurers will hesitate to sign up to a code of data ethics in case they are then immediately castigated for not meeting expectations. That would be premature criticism, and most consumer groups that I speak to would allow time for things to change, on the condition, of course, that the insurer was open about how things were progressing.

So this is a case for some strong expectation management. What insurers should then prepare for is a fairly obvious challenge, along the lines of ‘you have time to make change happen, but in the meantime, be open about practices that clearly fall outside of this code’. In other words, insurers will come under pressure to confirm that they do not engage in (or will now close down) practices that clearly fall outside of what a code of data ethics would deem acceptable.

Untenable Practices

As most of the ‘data ethics’ discussion in the market at the moment is around claims, I’m going to illustrate this with three examples of claims practices that are pretty ethically untenable.

Settlement walking – this involves large batches of similar claims being settled at progressively lower amounts, until a point is reached at which an increase in complaints is triggered (more here). FCA reports confirm that insurers have mechanisms in place to make this happen, and feedback I’ve received points to it already happening.
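To show the sort of pattern an ethical risk assessment could look for here, the sketch below (again in Python, with invented figures and column names) flags a batch of similar claims where average settlements drift steadily downwards while complaint rates drift upwards:

```python
import pandas as pd

# Invented monthly figures for one batch of similar claims (illustrative only).
batches = pd.DataFrame({
    "month": pd.period_range("2023-01", periods=6, freq="M"),
    "avg_settlement": [4200, 4050, 3900, 3775, 3700, 3680],
    "complaint_rate": [0.010, 0.011, 0.013, 0.016, 0.021, 0.028],
})

# The settlement walking pattern: settlements ratcheting down while complaints rise.
settlements_falling = batches["avg_settlement"].is_monotonic_decreasing
complaints_rising = batches["complaint_rate"].is_monotonic_increasing

if settlements_falling and complaints_rising:
    drop = 1 - batches["avg_settlement"].iloc[-1] / batches["avg_settlement"].iloc[0]
    print(f"Possible settlement walking: settlements down {drop:.0%}, "
          f"complaint rate up from {batches['complaint_rate'].iloc[0]:.1%} "
          f"to {batches['complaint_rate'].iloc[-1]:.1%}")
```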

Settlement optimisation – this is the individualised version of settlement walking and is sometimes referred to as ‘willingness to accept’. It involves data about individual claimants being used to determine whether they are likely to accept a quick but reduced settlement (more here). Credit record data plays a big part in claimants being offered these quick but reduced settlements.

Genetic test data – most people think that insurers are only allowed to use genetic data in extremely restricted ways, such as when underwriting someone who has Huntington’s Disease. Unfortunately, that’s wrong. The agreement between the ABI and the UK government only applies to underwriting, not claims. And some advisers to insurers have been open about their plans to use genetic data to structure large injury settlements (more here).

All three of these practices would fail to pass a data ethics risk assessment. Insurers signing up to a code of data ethics then need to ask themselves these questions:

  • is this something we do?
  • is this something we can manage to stop doing?
  • what do we do when someone challenges us to confirm that we don’t do them?

In a way, this is a sort of stress test for data ethics commitments. And tests like this create a dilemma for insurance executives. On the one hand, do we take the test, sign the code and make the changes, which then sends a message to internal and supplier audiences? Or on the other, do we not take the test and not sign the code, which then sends a message to external audiences, investors being one of them?

Getting the Leadership Right

There’s a lot of activity around AI ethics at the moment – announcements from the White House in the US, AI Safety summits in the UK and an AI Act on the horizon in Europe. There will be few insurance executives who have not watched this and wondered how it applies to their work.

This raised profile for AI / data ethics will move some of those insurance executives to want to have a say in it all from an insurance perspective, perhaps at a forum or a conference. This is great, of course, as this type of support sends signals down to middle management about what they should be paying attention to.

The hard decision here is perhaps an unexpected one, in that it involves the leadership team thinking carefully about the role they should play in their firm’s approach to data ethics. Some executives will want to act on their personal interest in data ethics, but that would be a mistake. There’s nothing wrong with ethical leadership, but it has a limited and relatively short-term impact on what their firm delivers on data ethics. What their firm needs (and it’s a subtle but important difference) is leadership on ethics.

The difference is that leadership on ethics involves the executive doing what they’re paid to do, which is to give leadership to something, in this case data ethics. Ethical leadership is an aspect of the character of leadership that the executive delivers. I explain this in more detail in this guide about five key aspects of leadership on ethics.

To Sum Up

It’s great to see that data ethics initiatives are about to emerge in the insurance sector. They will help insurers learn more about what data ethics involves, and about how to embed it into their digital strategies and decision systems. In making this happen, some hard decisions will be faced, and I’ve highlighted four of them here.

Perhaps the most important one is the one I’ve just covered – ‘getting the leadership side right’. It will be a big influence on the other three. The hardest of those other three will be ‘time and expectations’, for it will put some insurance executives ‘between a rock and a hard place’. Some form of external input (more here) for handling this will make a big difference.

If you’re holding an internal workshop on data ethics, bring me in as an expert and independent voice. This broadens the perspectives that decisions will be based around. Get in touch here.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.