Feb 13, 2018

Gender bias in artificial intelligence: a vital trust issue for Insurers

Insurance is being transformed. Underwriting, claims and distribution are being remodelled to take advantage of digital tools like artificial intelligence. Yet this comes with ethical risks that don’t just lie ahead but are present now. One significant risk that researchers have confirmed is gender bias in artificial intelligence. In this post, we’ll explore how it arises and what insurance people can do about it.

The transformation of insurance is being powered by the vast amounts of data we are producing as part of our everyday lives. And the success of that transformation is being measured not by the size of that data, but by the insight that can be drawn from it. For this, insurers are turning to a variety of tools under the heading of ‘artificial intelligence’. Algorithms are one such tool: structured sets of mathematical steps for identifying relationships within data.

The data available to insurers has been growing ever more vast and varied – think of all that social media chatter and all those wearable devices. The capacity of humans to write algorithms by hand that can deal with so much raw data is slowly but surely being reached.

The solution that data scientists have come up with is ‘machine learning’. This involves algorithms learning for themselves where those relationships within data lie. Yet for machines to learn, they must first be trained on historical data. Think of all those recommendations we see on retail websites: they come from algorithms interpreting your purchase patterns and remembering them for next time.
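To make that concrete, here is a minimal sketch of the idea behind such recommendations: a model suggests products bought by the customer most similar to you. The purchase data, product names and nearest-neighbour approach are all illustrative assumptions, not any particular retailer’s system.

```python
import numpy as np

# Toy purchase history: rows are customers, columns are products.
# A 1 means that customer bought that product. All data is made up.
purchases = np.array([
    [1, 1, 0, 0, 1],   # customer A
    [1, 1, 1, 0, 0],   # customer B
    [0, 0, 1, 1, 0],   # customer C
])
products = ["kettle", "toaster", "blender", "juicer", "teapot"]

def cosine(u, v):
    """Cosine similarity: how alike two purchase histories are."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Recommend to customer A whatever their most similar customer bought
# that they haven't bought yet -- learned purely from past behaviour.
target = 0
others = [i for i in range(len(purchases)) if i != target]
nearest = max(others, key=lambda i: cosine(purchases[target], purchases[i]))
recommended = [p for p, theirs, mine in
               zip(products, purchases[nearest], purchases[target])
               if theirs and not mine]
print(recommended)   # ['blender']
```

Nothing here is told what a kettle or a blender is; the pattern comes entirely from the historical data. That is the strength of machine learning, and, as we’ll see, also its weakness.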

The Spread of Algorithms

Let’s step back for a second and look at the spread of algorithms across the insurance market. Underwriters have been using them to expand the risk factors that influence quotes, and to reduce the questions needed to provide a quote. Claims people have been using them to reduce the time to assess and settle claims, and to spot fraudulent claimants. And marketing people have been using algorithms to personalise marketing campaigns and predict consumers’ behaviours.

Three trends are worth noting. Artificial intelligence introduces a more automated level of decision making. It also pays attention not only to what we say and do, but also to the feelings and contexts that drive those words and actions. And finally, it is using machine learning to convert social media chatter into virtual identities for each of us.

The common denominator here is the vast amounts of historical data needed to create this automated dimension to the market. This data comes from the decisions we have been making and the preferences and opinions we have been expressing in our digital lives. While many of those will be conscious choices, the cleverness of artificial intelligence lies in identifying the unconscious element that often underlies them.

So an algorithm that looks at recruitment decisions will not only learn who got which job and who didn’t, but it will also pick up the underlying patterns that have been influencing those decisions. It does this by looking at the correlations woven into many millions of recruitment decisions.
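A deliberately simplified sketch shows the mechanism. The data below is fabricated, and scikit-learn’s logistic regression stands in for whatever model a real recruiter might use; the point is only that a model trained on biased decisions reproduces that bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Fabricated hiring history. Skill is distributed identically across
# genders, but the past "hired" decisions favoured gender == 1.
gender = rng.integers(0, 2, n)        # 0 = female, 1 = male
skill = rng.normal(0, 1, n)
hired = (skill + 0.8 * gender + rng.normal(0, 1, n) > 0.5).astype(int)

# Train on that biased history, as a naive pipeline might.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in gender:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
# prints two probabilities; the gender == 1 candidate scores markedly
# higher -- the model has quietly learned the bias in its training data
```

No one programmed the model to discriminate. It simply found the correlation in the history it was given.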

Good and Bad Decisions

This is a double-edged sword. If you train an algorithm on historical data, it will learn not only the good decisions we have made, but the bad decisions as well. It will learn the biases in society.

Research published in the April 2017 edition of the journal Science illustrates this. Researchers looked at a machine learning tool called ‘word embedding’ that is transforming the way computers interpret speech and text. What they found was that word embedding helped an algorithm learn how words for flowers were clustered around words linked to pleasantness, while words for insects were closer to words linked to unpleasantness.

They then looked at words like ‘female’ and ‘woman’ and found them more closely associated with arts and humanities occupations, while ‘male’ and ‘man’ were more closely associated with maths and engineering occupations.
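The underlying mechanics can be illustrated with a toy example. Real embeddings have hundreds of dimensions and are learned from billions of words; the four-dimensional vectors below are fabricated purely to show how ‘association’ becomes measurable geometry, in the spirit of the association tests used in the Science study.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Fabricated 4-dimensional "embeddings" standing in for the vectors a
# real model learns from text. Association shows up as geometry: words
# used in similar contexts end up with similar vectors.
vec = {
    "woman":    np.array([ 0.9, 0.1, 0.2, 0.0]),
    "man":      np.array([-0.9, 0.1, 0.2, 0.0]),
    "poetry":   np.array([ 0.7, 0.5, 0.0, 0.1]),
    "engineer": np.array([-0.7, 0.5, 0.0, 0.1]),
}

for person in ("woman", "man"):
    for occupation in ("poetry", "engineer"):
        print(person, occupation,
              round(cosine(vec[person], vec[occupation]), 2))
# "woman" sits closer to "poetry" (0.85) than to "engineer" (-0.72),
# and vice versa for "man" -- an association absorbed from training text
```

An algorithm that consumes these vectors inherits those distances, and with them the stereotype.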

The researchers also found that AI-powered translations into English from gender-neutral languages such as Turkish led to gender-stereotyped sentences. For example, Google Translate converted the Turkish sentences “O bir doktor; O bir hemşire”, which use a gender-neutral pronoun, into these English sentences: “He is a doctor; she is a nurse”.

What this, and other research like it, tells us is that artificial intelligence will learn the gender biases ingrained in historic data and then propagate them into the automated decisions that insurers will be making in underwriting, claims and marketing.

The Impact of Gender Bias in Artificial Intelligence

The propagation of gender bias in artificial intelligence across insurance decisions could result in women paying more in premiums, receiving less in claims settlements and facing a greater exposure to mis-selling. Such detriment will vary in scale from insignificant to significant, and in volume from occasional to widespread. That’s missing the point though. Gender bias in analogue decisions is illegal. It’s just as illegal in digital decisions.

Surely insurers already have controls in place to guard against such bias? This is true, but only to a degree. Two trends make this a real risk for insurers to address. Firstly, some of the insight that artificial intelligence generates will come from real historical data, while some of it will be ‘manufactured’ from the clusters of correlations that the algorithms identify. And secondly, both those real and manufactured insights will then be injected into automated decision making, the workings of which become more and more complex and opaque.

The challenge this presents is of ‘black box’ processes giving insufficient consideration to the ethical concerns that the public take for granted. Detriment blends into the normality of busy firms using complex systems. Complexity may be the new norm, but that doesn’t mean gender bias should be so too.

What should insurance do?

Insurance firms should work together on a structured response to the challenge of gender bias in artificial intelligence. Key principles should be adopted and business leaders should publicly emphasise the need to follow them. Tools for testing both historic data and algorithms for discriminatory features should be adopted (one simple example is sketched below). Responsibilities should be allocated and training provided. Suppliers and business partners should be told to follow suit.
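As one example of what such a test might look like, here is a minimal sketch of a ‘demographic parity’ check: comparing a model’s favourable-outcome rates across gender groups before signing it off. The sample data, function name and tolerance threshold are all illustrative assumptions, not a regulatory standard.

```python
import numpy as np

def demographic_parity_gap(decisions, gender):
    """Gap in favourable-outcome rates between two gender groups.

    decisions: 1 = favourable outcome (e.g. claim accepted), 0 = not
    gender: group label (0 or 1) for each decision
    """
    decisions, gender = np.asarray(decisions), np.asarray(gender)
    return abs(decisions[gender == 0].mean() - decisions[gender == 1].mean())

# Illustrative audit: take a sample of the model's automated decisions
# and compare outcome rates by gender before deployment.
decisions = [1, 0, 1, 1, 0,  1, 1, 0, 0, 0]
gender    = [0, 0, 0, 0, 0,  1, 1, 1, 1, 1]
gap = demographic_parity_gap(decisions, gender)
print(f"parity gap: {gap:.0%}")      # parity gap: 20%
if gap > 0.05:                       # illustrative tolerance only
    print("flag: investigate before deployment")
```

Checks like this are not the whole answer – a single metric can’t capture every form of detriment – but they turn a vague worry into something a firm can measure, track and act upon.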

Individual professionals should keep in mind their obligation to work in the public interest and to uphold the Code of Ethics. Understanding the challenge and how it might affect their particular responsibilities is the starting point. Pressing for positive change, both individually and collectively within a firm, is the next step. And providing leadership across the sector to address the issue is a natural continuation.

Think Trust

Many insurance leaders are “extremely concerned” about how trust could influence their firm’s growth. Tackling gender bias in artificial intelligence is an issue that needs to be part of every insurer’s trust agenda.

This blog post is based upon an article I wrote for the Dec.17/Jan.18 edition of The Journal of the Chartered Insurance Institute. I am grateful to the CII for their permission to reproduce it here.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.