May 30, 2024 5 min read

The Keystone for Delivering Effective Data Ethics

Data ethics has many dimensions. You may be tempted to think that expertise in each of those dimensions will reap the results being sought. Only to a degree though. Running through all of them is an issue that is age-old in insurance. If you don’t get this one right, all the others will suffer.


Imagine an old stone bridge. Each stone in the arch supports the others, but the central stone is the key that stops it all collapsing. The analogy with data ethics goes something like this. Each stone is a dimension of data ethics, interacting with and supporting the others, but the keystone in this case is that age-old ethical issue for insurance: the conflict of interest.

Think of it this way. There may be lots of dimensions to data ethics, but within each of them are judgements to be made. Take consent as an example. The different ways in which consent can be handled lie on a spectrum, at one end of which are the consumer’s interests, and at the other end the insurer’s interests. In between lie several judgements relating to how generic or specific a consent wording should be, and perhaps even still how explicit or implicit it should be. That’s the primary data side of things; the secondary data side adds many more consent judgements on top of this.

How you handle those judgements will be influenced, like it or not, by the conflict between serving your interests and serving those of the consumer. To put it one way, you may ‘get the issue’ around a particular dimension of data ethics, but what you then ‘do’, having ‘got it’, is what most determines the outcome. That is why how you manage conflicts of interest is the keystone to what you deliver for your firm around data ethics.

More Examples

I opened with consent as an example, largely because how it has been handled to date by insurers raises questions about how some of them have judged that conflict of interest. Here are some more examples to think about.

There’s innovation versus governance. At one end of its spectrum is delivering innovative solutions to ensure competitive advantage. And at the other end is the need for risk management and governance frameworks to ensure the solution is at least safe, robust, secure and effective.

Then there’s group data and inferential analytics versus personal data and known attributes. At one end of its spectrum there’s the ease and effectiveness of ‘knowing enough’ about a consumer to make a decision. And at the other end are issues like accuracy, transparency, quality and autonomy.

One issue that features in a lot of those data ethics dimensions is significance. In other words, how you set the levers of your data governance in relation to quality, completeness and relatedness. And with models, how you set the levers that determine the shape and strength of their outputs. All of these are influenced by judgements made around significance, and each of those judgements has a social, economic and political side. A lot of numbers are involved, but determining their significance will rest on how the conflicts of interest are handled.

I could go on, but I think you'll have got the point by now.

Sector Score Card

So how well have insurers (and let’s not forget reinsurers) been doing with regard to handling the conflicts of interest that permeate data ethics? Their score card isn’t reassuring. Let’s take each of the above examples and bring in some context.

Consent: the sector’s approach to date has been weighted very much towards insurers’ interests. We will see, over the next few years, a multi-front push back on this by legislators.

Innovation and governance: recent surveys point to a real tension developing between these two aspects (more here), and we will see, beginning in US insurance markets, a sustained move by legislators to unwind that tension and enforce better governance (more on this soon).

Group versus personal data etc: the EU AI Act is likely to set the tone for a legislative push on this, and insurers’ particular favouring of group data and inferential analytics means that, as a sector, they will be affected more than most.

Significance: there will be less evidence of this being overtly addressed by legislators, largely because it is a feature of so many dimensions at once. That said, it underpins many of the concerns on which legislators are focussed.

In short, the sector has a lot of work to do around handling the conflicts of interest inherent in delivering an effective data ethics programme.

Digital Unknowns

One retort to all this is that digital innovation often involves grappling with lots of unknowns. And if innovations took place under a precautionary principle, not a lot would prove to be successful. I get that, but only to a certain degree.

Innovation involves taking risks, and in taking a risk, one naturally leans towards one’s own interests. This is not the ‘be all and end all’ of it though. Insurance has certainly seen innovations that leaned far too much towards insurer interests. Take referral fees for instance. While they were successful in delivering short-term gains for insurers, they proved an utter disaster in the mid to long term, by helping to fuel TPA PI claims across the sector.

Innovative forms of analytics often serve the direct interests of insurers and only the indirect interests of consumers. That’s fine; behind-the-scenes improvements always matter. Yet what also matters is to understand not just those indirect gains, but the more direct losses. A piece of innovative analytics may deliver in the insurer’s interest only by downplaying or ignoring the long term interests of consumers. Insurers’ use of emotional and facial analytics is one example of this; that’s why legislators are moving to contain its impacts.

So the unknowns of innovation are not an excuse to treat consumer interests as an afterthought, or as something too complicated or sticky to deal with.

Think Layers of Risk

Every data ethics risk is influenced by how the conflict of interest inherent in it has been handled. Addressing that risk therefore means breaking down that conflict of interest into its component parts. For example, is it an actual, potential or perceived one? Is it a personal or organisational one? And so on.

Mapping that risk involves mapping those conflicts of interest. Some form of influence mapping or social network analysis can be useful here. The key to this is bringing in sufficient knowledge and challenge, otherwise it becomes a tick box exercise that harms the business in the long term.
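To make that a little less abstract, here is a minimal, illustrative sketch of what such an influence map might look like in code, using Python and the networkx library. The decision points, interest holders, sides and weights below are all hypothetical placeholders, not a prescribed method; the point is simply that once a conflict of interest has been broken into its component parts, you can map where influence converges and where the balance leans heavily one way.

```python
# A minimal, illustrative sketch of influence mapping for conflicts of interest.
# All node names, sides and weights are hypothetical placeholders.
import networkx as nx

# Directed graph: an edge (interest -> decision) means that interest bears on
# that decision point, with 'weight' as a rough strength of influence and
# 'side' marking whose interest it primarily serves.
G = nx.DiGraph()
influences = [
    ("insurer_revenue",   "consent_wording",      0.8, "insurer"),
    ("consumer_autonomy", "consent_wording",      0.3, "consumer"),
    ("insurer_revenue",   "pricing_model_levers", 0.7, "insurer"),
    ("consumer_fairness", "pricing_model_levers", 0.4, "consumer"),
    ("innovation_team",   "model_governance",     0.6, "insurer"),
    ("risk_function",     "model_governance",     0.5, "consumer"),
]
for interest, decision, weight, side in influences:
    G.add_edge(interest, decision, weight=weight, side=side)

# For each decision point, total up insurer-side vs consumer-side influence.
# A large imbalance flags where the conflict of interest most needs challenge.
decisions = [n for n in G.nodes if G.in_degree(n) > 0]
for decision in decisions:
    totals = {"insurer": 0.0, "consumer": 0.0}
    for _, _, data in G.in_edges(decision, data=True):
        totals[data["side"]] += data["weight"]
    print(f"{decision}: insurer={totals['insurer']:.1f} "
          f"consumer={totals['consumer']:.1f}")
```

The value here lies less in the numbers themselves than in the discussion needed to assign them, which is exactly where that knowledge and challenge comes in.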

To Sum Up

Data ethics is not a new thing. It brings together many ethical issues that have been around in some form for a good while. What it adds, though, is an understanding of how they interact in a digital context. The ethical issue that interacts most with all of this is the conflict of interest. How you address it will determine the success of your data ethics.

It’s time to take out that conflict of interest policy and its procedures, give them a dust down, and rework them for this digital age.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.