Jan 11, 2023

Regulatory guidance on discrimination for Australian insurers

The Australian Human Rights Commission issued guidance last month about discrimination in insurance pricing and underwriting. Its strength lies in the ‘lines of thinking’ that it encourages insurers to follow, yet there are some curious interpretations that raise concerns.


The guidance was developed in collaboration with Australia’s professional body for actuaries, the Actuaries Institute. It’s great to see the two working together, although the guidance itself does have a strong regulatory feel to it. It is of course centred on Australian law, but there are things in it that will be of interest to a wider audience.

Those ‘lines of thinking’ I refer to are very much of the form ‘this is how insurers need to think about discrimination when considering how to use artificial intelligence’. The guidance has some useful worked-through case studies for motor, travel and life insurance, but again, these are largely about how equalities legislation would interpret certain rating approaches relevant to those lines of business.

This then is a guide to equalities law, with a particular focus on how insurers’ exemptions might be interpreted in the context of the use of AI and big data. Perhaps then it might more appropriately be called a ‘reminder’, rather than ‘guidance’.

Reminders like this are valuable for a market that is doing its best to digitally transform as fast as possible. Here in the UK, insurers are vulnerable to having their exemptions under equalities legislation questioned or curtailed as a result of discriminatory practices introduced through poorly controlled data and analytics practices (more here).

I’m not going to examine the guidance’s key strength here (those ‘lines of thinking’), for they’re very much based around the wording of Australian law. Instead, and at the risk of appearing rather ‘glass half empty’, I’m going to set out seven points that I think the Australian Human Rights Commission and the Actuaries Institute Australia need to think carefully about. I’m going to justify this by reference to both parties wanting to develop further and more detailed guidance on the back of this one. So, here we go.

1 - Just Pricing and Underwriting

The report is titled “Guidance Resource – Artificial Intelligence and Discrimination in Insurance Pricing and Underwriting”. So what about other insurance functions like claims and counter fraud? There is as much a risk of discrimination in claims and counter fraud as there is in pricing and underwriting. In fact, for the last few years, it’s been my view that the risk in claims and counter fraud will be bigger (more here).

There’s no harm in a report having a focus, but this seems a bit like going to buy a new car and only being allowed to look at the front of it, not the back. Given the influence of countermeasures for application fraud (more here) and the trend towards ‘underwriting at claims stage’ (more here), there’s a risk here of policy development being based upon only half the story.

The same happens here in the UK. The Government and insurance trade body are happy to re-sign agreements on the use of genetic information in insurance underwriting, while law firms explore using genetic data to shape large loss settlement agreements (more here).

2 - When does AI begin?

Consider this paragraph from the report…

“Insurers have long relied on data, statistical analysis, and models to determine risk and set prices for their policies, even prior to modern computing. Interpreted broadly, a wide range of technologies and techniques traditionally used by insurers in pricing or underwriting may be considered AI, or may otherwise form part of AI-informed decision making.”

I’ve heard this argument on several occasions, and by and large, it’s one I’ve been happy to go along with. Until, that is, I consider this subsequent paragraph…

“Discriminatory outcomes can be less obvious and more difficult to detect in AI-informed decision making, particularly where there are difficulties in providing reasons or explanations for AI-informed decisions. This problem is often referred to as ‘opaque’ or ‘black box’ AI.”

What this alludes to is the notion that discriminatory outcomes in AI-informed decision making have been around since before modern computing, and that, because such outcomes are difficult to detect, the present situation is just part of a long-running difficulty.

Except that it isn’t. Discriminatory outcomes have been a problem for insurance for a long time and, especially in the US, insurers have paid heavy penalties for that. The opaqueness lay in how the sector worked, not the tools it was using.

Move forward to the present day and the danger emerges of discrimination being labelled as a long-running, difficult-to-resolve problem that is unlikely to be sorted out any time soon. And this then leads to a ‘we may just have to live with it’ frame of mind.

AI-informed decision making in pre-computing insurance is a hugely overstretched notion that should be dropped quickly.

3 - Handling Complaints

Exemptions that Australian insurers enjoy under their equalities legislation are balanced with the expectation that, if a complaint of discrimination is made to the human rights commission, the insurer will disclose the source of the data that was relied upon. From one direction, that all sounds fine. But think of it the other way round. If an Australian citizen were to bring forward such a complaint, how would the insurer determine what data is relevant to that complaint? The answer is buried in their analytics, and even then, finding it would be much like looking for a needle in a very large haystack.

The digital decision systems now used by insurers pull together large numbers of databases and analyse them with ever more sophisticated analytics. They’ve got no more chance of isolating the data that may have caused a person’s discriminatory outcome than I have of scoring the winning goal in this year’s FA Cup final.

Regulators need to update their procedures and capacity to handle complaints, to take account of how underwriting and claims decisions are being taken. If people are to be accountable, then the framework which is to hold them to account must be able to do so in practical terms.  
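What would answering such a complaint actually take? As a rough sketch, consider recording data provenance at the moment each decision is made, rather than trying to reconstruct it afterwards from the analytics. This is a minimal illustration in Python; the DecisionRecord class, the field names and the data sources are all hypothetical, not drawn from any real insurer’s system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One pricing/underwriting decision, logged with enough provenance
    to answer a later discrimination complaint. Illustrative only."""
    policy_ref: str
    outcome: str
    model_version: str
    # Which databases fed the decision, and which fields were drawn from each.
    data_sources: dict = field(default_factory=dict)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_log(self) -> str:
        # A flat, searchable record: given a complaint about a policy,
        # the data sources relied upon can be retrieved directly.
        return json.dumps(self.__dict__, sort_keys=True)

record = DecisionRecord(
    policy_ref="POL-0042",              # hypothetical reference
    outcome="declined",
    model_version="pricing-model-3.1",  # hypothetical version tag
    data_sources={
        "application_form": ["age", "postcode", "occupation"],
        "external_credit_bureau": ["credit_score"],
    },
)
print(record.to_audit_log())
```

The point is less the code than the design choice: provenance captured at decision time turns the needle-in-a-haystack search into a simple lookup.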

4 - What is Accuracy?

On several occasions in the report, reference is made to AI being able to ‘analyse large amounts of data… accurately…’. That word ‘accurately’ sends out all sorts of messages, most of them linked with interpretations of truth. This is because accuracy is about correctness, and so is tied to a singular point around which things are referenced.

So if I say that my analysis is accurate, it implies that it is correct, a true picture of the thing I am analysing. However, while my analysis may present a picture of someone like you, that picture will not be you. It cannot even be said to correctly represent you. That is because the analysis will have been based on some primary data, an awful lot of secondary data and even more unstructured data scraped off internet platforms. And there will not be one you, but several, depending on the context you are in. I explore this more here in a post about personhood and identity.

My point here is that there’s a danger of thinking of digital decision systems as more accurate than the humans who are the subjects of the analysis. What this then feeds into is a view that insurers have to collect ever greater amounts of data about us, on the basis that they can no longer trust what consumers themselves say (more here). And this view then makes the extension of data collection seem like an absolute necessity, something that the sector relies on.

Is there an alternative term? Well, I'd give 'statistically accurate' a try, so long as how that is set is transparent.
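To see why the bare word matters, here is a toy illustration in Python; all the numbers are invented. The same set of predictions produces very different accuracy figures depending on how you slice them, which is precisely what transparency about how ‘statistically accurate’ is measured ought to surface.

```python
# Toy illustration: the same predictions, measured two ways. All numbers invented.
actual    = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = a claim occurred
predicted = [1, 1, 0, 0, 1, 0, 0, 1]   # model output
group     = ["A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy(pairs):
    pairs = list(pairs)
    return sum(a == p for a, p in pairs) / len(pairs)

print(f"headline accuracy: {accuracy(zip(actual, predicted)):.0%}")   # 75%
for g in sorted(set(group)):
    acc = accuracy(
        (a, p) for a, p, gg in zip(actual, predicted, group) if gg == g
    )
    print(f"accuracy for group {g}: {acc:.0%}")   # A: 100%, B: 50%
```

A headline figure of 75% says nothing about half of group B being mispriced; ‘statistically accurate’ only means something once the slicing behind it is disclosed.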

5 - Some unlawful discrimination?

Consider these words from the section on ‘Mitigating against algorithmic bias and discrimination’…

“AI systems should generally be designed, where possible, to avoid discriminatory outcomes.”

What struck me was the choice between “AI systems should generally be designed…” and “AI systems should be designed…”. The difference is significant (you can tell perhaps that I worked on insurance and risk transfer wordings for several years!). ‘Generally’ implies most but not all. And ‘where possible’ implies much the same. It feels like a double opt-out.

The law in Australia does allow insurers some leeway for lawful discrimination, but, while I am not a lawyer, I don’t recall the legislation there (or here in the UK) qualifying those exemptions with phrases like ‘where possible’ or ‘generally’. Why here? It doesn’t seem right.

6 - Blaming the Data

In that same section, it is said that ‘insurance pricing and underwriting decisions are driven by data’. Except they aren’t. Sure, such decisions make use of data, but they are entirely driven by people. These are the people who decide what data to collect, how to collect it, how to grade it and how decision systems make use of it.

Am I being a bit pernickety? No, because we’ve got to be careful with how these things are phrased. ‘Driven by data’ can mean two very different things, depending on the perspective you bring to the reading of it.

I’ve been told that significant underwriting decisions have been taken because ‘that is what the data told us’ and ‘we had to use a new database’. What this implies is that humans are on the side-lines and that what the data says determines the inevitable outcome for this or that insurance policy.

If insurers are to engage with other audiences to determine the best way of addressing discrimination risk in insurance decisions, then they need to move beyond this ‘driven by data’ narrative.

7 - The Cost of Missing Data

“Data sets may be inaccurate if affected by selection bias, such that the data is not representative of a population. Notably, individuals or groups that have faced systemic discrimination may be inaccurately represented, or under-represented in data sets.”

This paragraph makes an important point. You can’t examine discrimination in digital decision systems without a clear understanding of what data is missing or under-represented, and why that is so. This matters both at the ‘tree level’ and at the ‘forest level’.
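What might a check for that under-representation look like in practice? A short Python sketch, assuming you have some defensible benchmark for the population the data should represent (census figures, say). The groups, counts and shares below are all invented for illustration.

```python
# Compare how groups appear in a data set against a population benchmark.
# All figures are invented; a real check needs a defensible benchmark source.
dataset_counts = {"group_a": 8200, "group_b": 1500, "group_c": 300}
benchmark_share = {"group_a": 0.70, "group_b": 0.22, "group_c": 0.08}

total = sum(dataset_counts.values())
for grp, count in sorted(dataset_counts.items()):
    observed = count / total
    expected = benchmark_share[grp]
    # Flag any group appearing at less than 80% of its expected share.
    flag = "UNDER-REPRESENTED" if observed / expected < 0.8 else "ok"
    print(f"{grp}: observed {observed:.1%} vs expected {expected:.1%} -> {flag}")
```

Code like this only answers the ‘tree level’ question of where the gaps are; the ‘forest level’ question of why a group is missing still needs human investigation.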

If this is not done, then the problem will stick around for much longer than an insurer would want it to. And it is certainly something that someone external, such as a regulator or ombudsman, or someone internal, like an auditor or compliance officer, must take into account before they can confirm that the mitigation is working, that the problem has gone away, and the like.

Looking Forward

The real strength of this report lies less in what it says and more in what it represents going forward, which is a joint platform upon which some of the real challenges coming out of the digital transformation of insurance can be discussed and pathways to their resolution explored.

It is very much a starting point, in that it sets out what the law expects of insurers and guides them towards the important ‘lines of thinking’ in relation to how they design and deploy AI. Hopefully, out of the collaboration that the report promises will come an ironing out of the points I raise above.

We can see even at this early stage just how inter-connected some of these issues can be. The report references privacy and the regulations that apply to it. And let’s not forget fairness, something which I see as a tectonic plate underlying much of the ethical concern about AI in insurance. So one of the challenges for this collaboration will be boundary control: not too narrow a focus on ‘my back yard’, nor so wide a scope that complexity bogs things down.

Roll on the next stage and the commissioning of further guidance that the report promises.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.