Oct 21, 2020 · 6 min read

Data and Power pt2 – the two big threats to insurers’ digital strategies

It is clear that the two ethical issues over which insurers are going to find their digital strategies most challenged are discrimination and access. Many in the sector will feel this is manageable, for discrimination is never something they would contemplate, and working groups are helping the sector address access. Yet that view of these issues as manageable is a dangerous one. The sector has a long way to go on both issues, and some signals from the market point to gaps widening rather than closing. In this second post of the ‘Data and Power’ series, I examine where the problems lie and outline how progress can be made.

Let’s begin with why I’m addressing discrimination and access together. They’re often seen as two separate issues, but they actually share many underlying characteristics. This is particularly so when it comes to data and analytics. As this post progresses, you’ll see why.

A Long Way to Go

You may question why I see the sector’s management of these issues as having ‘a long way to go’. There are four reasons for that assessment. Firstly, a small survey I undertook a few years ago found the sector paying little to no attention to how it was handling discrimination in respect of consumers. Secondly, sources have signalled to me that parts of the market have been going backwards on discrimination, with data playing an influential role in this.

Thirdly, the issue was discussed by a Parliamentary committee only last year, and such committees tend not to convene for non-issues. And finally, questions emerge from how the sector tends to react to cases in which discriminatory outcomes are alleged. The ‘we are good people, and so would never do that’ response can signal a culture of disengagement with the issue.

Let’s look at that last point another way round. Do I believe that there are plenty of good people in insurance? Absolutely yes – I’ve worked with some great people during my time in the market. Do I believe then that some of those good people could design systems and products that bring about discriminatory outcomes? Unfortunately yes, it can happen, and how this could be happening is what I’ll look at now.

Not Neutral and Objective

Our starting point has to be two key realisations. The first is that systems built upon data and analytics are not neutral and objective. Discriminatory outcomes can emerge from them, and the evidence that this has been happening across many facets of life is now widespread and accepted. The second realisation builds on the first: the bias in our data and analytics resides not in the information, hardware and software, but in the people and processes that bring them into existence and give them their shape and purpose.

Whether or not this is a hard truth to take on board will, to a large degree, depend on your position in the world and the lived experience this has given you. Professional white males like myself can sometimes struggle to see the extent to which they have been favoured by life. Not so women, or people from black and ethnic minority communities, or people with disabilities, or people from disadvantaged communities, whose lived experiences have invariably been far more challenging.

This means that insurance people working out how to seriously get to grips with data ethics issues like discrimination and access must first examine the viewpoints they bring to that task. To what degree has the lens through which they frame this task been shaped by privilege or by challenges? As they explore how to get to grips with data ethics, they must first weigh up the risk of replicating the same approaches that could be said to have created some data ethics problems in the first place.

Examining Standpoints

Now you may think… “hey, this is data – where are the questions in that?” Think back to the example of facial recognition from the first post in this series. If the data shaping your facial recognition analytics comes largely from light-skinned males, then it is their position within those tech firms, and the power inherent in deciding what data to use and how to use it, that has given rise to the obvious discrimination. It was their standpoint on what mattered that gave rise to that problem, not the mathematics.
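To make that concrete, here’s a minimal sketch of what a per-group performance audit might look like. The model outputs, group labels and numbers are all hypothetical, invented purely to illustrate the point:

```python
# A minimal sketch of a per-group performance audit. All data, group
# labels and results here are hypothetical, purely for illustration.
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy of a model's predictions, broken down by group."""
    return (df["prediction"] == df["actual"]).groupby(df[group_col]).mean()

# Hypothetical evaluation results from a recognition model trained
# mostly on one demographic group.
results = pd.DataFrame({
    "group":      ["light_male"] * 6 + ["dark_female"] * 4,
    "prediction": [1, 1, 1, 0, 1, 1, 0, 1, 0, 0],
    "actual":     [1, 1, 1, 0, 1, 1, 1, 1, 1, 0],
})

print(accuracy_by_group(results, "group"))
# light_male scores 1.00, dark_female 0.50: the skew in the training
# data shows up as an accuracy gap, whatever anyone's intentions were.
```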

These two realisations will have all sorts of knock-on influences, and the first two we’ll look at concern what data is collected and how it is collected. To begin a process of addressing discrimination and access, we must be prepared to challenge ourselves on both. Asking ourselves questions about what data we are collecting leads to a recognition that this data is not raw, but has already emerged out of a complex set of decisions.

Take those with little to no digital footprint. The data we hold as insurers is partial because it includes little to nothing from what are called ‘internet non-users’. The UK’s Office for National Statistics estimates that 10% of the UK adult population fall into this category. Your digital data, in other words, is not raw but already pre-baked. Extend that thinking to the circumstances in which the collection of data from light-skinned people is favoured over dark-skinned, male over female, able-bodied over less able-bodied, well off over less well off, young over old, and you realise that the data you’re starting with is well baked before it lands on your servers.
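One way of surfacing that pre-baking is a simple coverage check against an external benchmark. In the sketch below, the 10% ‘internet non-user’ share echoes the ONS estimate just mentioned; the dataset shares and the flagging threshold are assumptions of mine, for illustration only:

```python
# A minimal sketch of a coverage check: compare the make-up of the
# data you hold against an external population benchmark. The 10%
# 'internet non-user' share echoes the ONS figure above; everything
# else here is hypothetical.
population_benchmark = {      # assumed shares of the UK adult population
    "internet_user":     0.90,
    "internet_non_user": 0.10,
}

dataset_composition = {       # assumed shares of records in your data
    "internet_user":     0.995,
    "internet_non_user": 0.005,
}

for segment, expected in population_benchmark.items():
    observed = dataset_composition.get(segment, 0.0)
    if expected - observed > 0.01:  # flag gaps of more than 1 point
        print(f"{segment}: expected {expected:.1%}, "
              f"got {observed:.1%} -> under-represented")
```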

Doing Data Science Correctly

I know I’ve quoted the following words in previous posts, but they are absolutely worth repeating here. In a lecture at the Alan Turing Institute in 2019, Professor Terras of Edinburgh University’s Futures Institute had this to say:

“All data is historical data: the product of a time, place, political, economic, technical, & social climate. If you are not considering why your data exists, and other data sets don’t, you are doing data science wrong.”

So to do your data science right, to do your data ethics right, you must recognise that your data is not raw but pre-baked, not clean but messy, not complete but full of holes; that it is an output first, before it becomes an input into your analytics. If you’re doing your data science right, recognising this should have a big influence on how you then put the data you have to use.

Let’s move on. The first thing that you will do with this data is to attach labels to it, in order to classify it for use by your analytics. And this is a process imbued with all sorts of ethical risks. To paraphrase Professor Terras and develop what she said a bit further – ‘if you’re not considering why certain classifications exist, and other classifications don’t, you’re doing data ethics wrong.’
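To illustrate what such questioning might look like in practice, here’s a minimal sketch that audits a hypothetical data broker’s segmentation. The segment names and records are invented:

```python
# A minimal sketch of a classification audit: before adopting a data
# broker's segmentation, look at who its categories describe and who
# falls into the residual bucket. Segment names and records are invented.
from collections import Counter

broker_segments = [
    "affluent_achievers", "affluent_achievers", "comfortable_families",
    "urban_professionals", "other", "other", "other", "other",
]

counts = Counter(broker_segments)
total = sum(counts.values())
for segment, n in counts.most_common():
    print(f"{segment}: {n / total:.0%} of records")

# If half your customers land in 'other', the scheme's authors were
# not thinking about them when the classifications were drawn up -
# a finding about whose standpoint shaped the data.
```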

The Shadows of Power

The process of classifying data is replete with all sorts of social, economic and political assumptions. And this becomes very evident when looking through the descriptors that data brokers use for the classifications their data is packaged up in. Just as with facial recognition, they betray what the philosopher Michel Foucault called the ‘systems of power’ under which they have been shaped.

What this adds up to is that your firm’s data, and the classifications into which it has been organised, and the way in which your analytics have been configured, have within them what can be called ‘shadows of power’. And it is those shadows of power that lie behind many of the discrimination and access issues that have to be addressed in data ethics. This means that a commitment to data ethics must recognise those shadows of power and involve a critical look at their influence and how it can be unpicked.

Now some of you might be asking whether this is all a bit overblown. The answer lies in how firms are currently addressing discrimination and access within their own diversity and inclusion programmes. How your firm approaches these issues in terms of employees should tell it what it needs to do to approach them in terms of customers.

A Lot of Involvement

What I can fairly confidently say is that those diversity and inclusion programmes are not being shaped and delivered by a bunch of white male insurance executives in a boardroom setting. What is happening is a lot of engagement with, and active involvement by, people within the firm from a diverse range of communities. There is a lot of engagement, reflection and critical thinking going on in respect of equality and employees. The same needs to happen in respect of customers and the underwriting, claims and counter-fraud systems that handle them.

From this will emerge different perspectives about why your data is messy, about what those classifications say about the data broker that wants to partner with you, and about the way in which the levers of significance are being set within your analytics. This will of course throw up a lot of questions, some of which will have no easy answer. Being good at ethical dilemmas, at critical thinking, at inclusive decision processes, can help with this.
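To give a concrete, if simplified, feel for those ‘levers of significance’, the sketch below lists the weights a hypothetical pricing model places on its inputs. The features and coefficients are invented, not drawn from any real insurer’s model:

```python
# A minimal sketch of inspecting the 'levers of significance': which
# inputs a model leans on hardest. The features and weights below are
# invented, not drawn from any real insurer's pricing model.
hypothetical_pricing_weights = {
    "claims_history":    0.42,
    "postcode_segment":  0.31,  # postcode can act as a proxy for
                                # protected characteristics
    "quote_channel":     0.15,
    "vehicle_age":       0.12,
}

for feature, weight in sorted(hypothetical_pricing_weights.items(),
                              key=lambda kv: -kv[1]):
    print(f"{feature}: weight {weight:.2f}")

# A heavily weighted proxy feature is a lever worth questioning: who
# set it, on what evidence, and who might it quietly disadvantage?
```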

Summing Up

So, to sum up with these four bullet points…

  • Your firm’s use of data and analytics will contain within it a series of data ethics issues relating to discrimination and access.
  • These two issues are serious enough to warrant careful examination of their impact. They represent the main reputational issues facing many insurers.
  • That examination must be built around active engagement with those impacted by those discrimination and access issues.
  • Your firm’s people must be skilled at handling the difficult and challenging questions that will arise from this.

Insurers are going to come under real scrutiny on discrimination and access issues over the next few years. Yet there is a danger that the response will too often be cursory, designed simply to make the regulator go away. If insurers want to tackle data ethics in ways that reduce the threat it poses to their digital strategies, then they need to think differently.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.