May 30, 2023

Why Data Ethics is at a Crossroads

Is data ethics working in insurance? Perhaps it’s just at an early stage, needing more time. Or is it always going to struggle to gain traction? Some see ethical guidelines for data and AI as the solution. Others see them as ineffectual. So what’s needed instead? How should a firm decide this?


Guidelines for data ethics and AI ethics are very much in vogue at the moment. Governments, international organisations, trade bodies, professional bodies and regulators – all have been releasing documents full of principles, promises and processes. For firms, it’s not a matter of finding one, but of choosing one.

The obvious question of course is... ‘do they work?’ A recent study found that most do not. That’s because such guidelines lack the mechanisms to reinforce their own claims. The study also found that such guidelines did not change the behaviour of professionals from the tech community.

“AI ethics is often considered as extraneous, as surplus or some kind of “add-on” to technical concerns, as an unbinding framework that is imposed from institutions “outside” of the technical community.”

So why are such guidelines produced in the first place? In the main, they’re produced as an exercise in internal self-governance. In other words, we’ll look after the ethics of AI ourselves, in our own way. The rationale relies heavily on the view that those producing the guidelines know best, because they are the ones who know most about AI, about their business sector and about the digital trends affecting it.

This amounts to what is called a ‘law conception of ethics’, in which ethical guidelines and principles are utilised as a replacement for regulation. As a softer version of the law, they are felt to be more in tune with evolving technologies, more linked to what is needed now. That’s the narrative, and it’s a narrative easily communicated to policy makers, consumer groups and others interested in how the problematic outputs from digital decision systems are being handled. It also makes them very suitable for marketing and public relations.

Avoiding Obligations?

Another narrative challenges this. The recent attention being given to AI ethics is said to be all about avoiding legally enforceable restrictions on controversial technologies. This is about avoiding the imposition of boundaries and hard lines. One way in which this is done is to make sure that sectoral interests dominate the process of creating guidelines, as illustrated below...

“during deliberations at the European High-Level Expert Group on Artificial Intelligence (EU-HLEG), industry was heavily represented, but academics and civil society did not enjoy the same luxury. And while some non-negotiable ethical principles were originally articulated in the document, these were omitted from the final document due to industry pressure.”

So what’s happening in insurance? Some non-industry participants in EIOPA’s consultative expert group on data ethics in insurance had similar experiences to those on the EU’s HLEG. The Bank of England’s AI Public Private Forum was full of representatives from industry and technology firms. Industry people dominate discourse more widely as well, as the former chair of the Financial Conduct Authority wrote last year...

“Financial businesses or their lobbyists met Treasury ministers nearly 200 times last year. In the same period Treasury ministers met consumer organisations fewer than a dozen times. UK Finance and the Association of British Insurers (banking and insurance lobby groups) were granted ten times more meetings by ministers than Citizens Advice.”

It’s Outcomes that Count

The real danger from all this is that these guidelines will fail to deliver the outcomes that those producing them have promised. And at the end of the day, it is outcomes that matter. The difference between, say, ten years ago and the present is that there are now organisations focussed on watching for those outcomes and waiting to hold people and organisations to account.

Some may question whether it is in fact the ethical side of all this that could fail, rather than the ‘guidelines’ approach. Answering that pivots on how you position ethics in the first place. Is it indeed just a soft form of law? Or is it more than that? It depends on what type of ethical approach you make use of. The deontological approach is based on rules and duties. On the other hand, virtue ethics avoids rules and codes, and instead focusses on the individual person and the relational context in which they are acting.

Gender has a role to play here. The development of many AI ethics guidelines is dominated by men, as is the application of those guidelines in technological settings. This results in a strong lean towards deontological ethics. Guidelines produced by women lean more towards virtue ethics.

Asymmetries have a role to play here as well. AI is often portrayed as technologically complex, capable of being understood only by those who work on it. That information asymmetry turns into a power asymmetry when applied to the development of ‘soft law’ guidelines for AI ethics. One thing to always look out for with such guidelines is the way in which those power asymmetries have been handled.

What is the Alternative?

If AI ethics guidelines aren't likely to work, and I suspect that will turn out to be the case, what should we do instead? I’m going to draw on an interesting paper by Anaïs Rességuier and Rowena Rodrigues called “AI ethics should not remain toothless! A call to bring back the teeth of ethics.” They see “the real teeth of ethics” as lying in a “constantly renewed ability to see the new”. They go on...

“Ethics is primarily a form of attention, a continuously refreshed and agile attention to reality as it evolves. Ethics in that sense is a powerful tool against cognitive and perceptive inertia that hinders our capacity to see what is different from before or in different contexts, cultures or situations and what, as a result, calls for a change in behaviour (regulation included). This is especially needed for AI, considering the profound changes and impacts it has, and is bringing to society, and to our very ways of being and acting.”

In short, rather than being codified into guidelines, ethics can instead help us to recognise and answer the questions emerging from the technological changes taking place. Ethics as an evaluative process is therefore about exploring, questioning and guiding those technological changes. At the moment, there’s not enough of that.

'Seeing What Happens' is not Accountability

Now some of you will be thinking that this is all a bit slow and cumbersome. Why bother asking questions when you can just get on with it and see what happens? There’s a bit of mileage in that, but not a lot. It’s not the best way to bring enough customers with you to build a sustainable business. And ‘seeing what happens’ doesn’t work when legal and regulatory accountability starts from day one, not the day after the outcome trends become clear.

What I do think we’re starting to see now, after all these principles and guidelines have been put in place, are attempts to work out what is happening and why. Of course, these principles and guidelines have had a role in spurring on such questions, but equally, I do not see them being able to deliver much of an answer. That needs to come not from a technological perspective (such as here), but from more of a social sciences one. In other words, the disciplines that help us “see what we do not otherwise see”. That’s why I’ve been engaging with social scientists for about eight years now.

The Example of Fairness

We know that fairness is important in insurance, from both a consumer and regulatory perspective. The events of the past five years have reinforced that. Yet at the same time, many in or associated with the market have seen fairness as too difficult or nuanced to handle. Perhaps they’re looking at it the wrong way.

Looked at from a principles and guidelines perspective, fairness can seem a bit of a challenge. Is that the right way of looking at it though? Perhaps instead, we should, to use Rességuier and Rodrigues’s words, draw on “...our capacity to see what is different from before or in different contexts, cultures or situations and what, as a result, calls for a change in behaviour...”.

That is exactly what I wrote about in my recent paper, published in January by the Institute and Faculty of Actuaries, called ‘Revolutionising Fairness to Enable Digital Insurance’. I explored the different dimensions that exist around fairness in relation to insurance, and challenged the sector to stand back from its sharp focus on the ‘fairness of merit’. Drawing on the relations between insurance and society, I put forward the mechanism of common pool resources as one way in which those ‘different contexts, cultures and situations’ could be brought together.

Tell Us What To Do

Insurance people often complain that regulators don’t tell them what to do in relation to the regulatory principles they have to work within. This helps explain the sector’s relative enthusiasm for principles and guidelines. They suggest to (rather than tell) people what to do – the processes to build and the metrics to report on.

Yet what if, as looks likely, these principles and guidelines don’t actually deliver the outcomes promised by those who draw them up? Isn’t the answer that insurance people need to learn new skills, need to strengthen their attentiveness, need to learn how to stand back and reflect on what they are doing in relation to things like consumer trust? In short, a new form of data ethics, one that is more bottom up than top down.

Some insurers are tuning into this. The data ethics lead at one leading UK insurer said recently that ‘fairness isn’t a statistical concept but a human value’. That’s spot on – fairness has always been, and always will be, a social thing, not a technological thing. The key step then is to work out how best to handle that human value.

I mentioned above that insurance people need to learn new skills. So what skills are those? In my opinion, these skills are primarily around ethical decision making and the handling of ethical dilemmas. For those in leadership positions, it’s about leadership of ethics (very different to ethical leadership). These three things are at the heart of the training programmes I provide to insurers. And professional bodies too – they form the core of the Business Ethics Programme I wrote for the Chartered Insurance Institute’s Fellowship qualification.

Looking Ahead

Insurers need to address fairness – the questions will not just go away. Principles and guidelines, no matter how comprehensive, will not be enough. That’s because they invariably lack the mechanisms to reinforce their own claims. It’s outcomes that matter in the end.

I’m not saying that principles and guidelines should be scrapped. They can help, but only with the more standard, less complex issues. Issues at the intersection of digital technology and fairness are not well suited to that approach. This means data ethics has to find a new way of addressing those questions.

That ‘new way’ relies on seeing data ethics differently – not as a form of extended compliance, but as a method for engaging with the sector’s challenges in a more sustainable way. In my ‘Revolutionising Fairness’ paper, I put forward common pool resources (CPRs) as one possible mechanism for bringing this about. It’s an idea that I’d researched, but hey, if it doesn’t look like CPRs will work in this way, let’s redouble our efforts and find some other way.

People will have to be trained in how to do this. Firms will need the support of new types of experts. Trade bodies will need to lend their support, as will regulators, consumer groups and policymakers.

The outcome for all this effort? Insurers will have digital systems that reflect and embed the values that matter to their firm. Consumer trust in digital insurance will be rebuilt. People will want to move closer to insurers, seeking their products and services to help with modern digital life.

At first glance, this may sound rather ‘pie in the sky’, but think about it. If your digital systems don’t reflect the values you have, if consumers don’t trust firms like yours, if they keep their distance from what you offer, then, to put it bluntly, why bother?

If you’re holding an internal workshop about data ethics, consider bringing me in as an independent voice. This broadens the perspectives upon which decisions will be based. Get in touch here.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.