Nov 24, 2023

Fairness as a Service: a Step Forward for Insurers

A data and analytics provider to the insurance sector has launched fairness as a service. It’s orientated around the US requirement that insurers do not unfairly discriminate, and it’s the sort of service that a recent UK Government report called for. I examine how it works and what it points to.

What impact will fairness as a service have?

Verisk describe their new product, FairCheck, as a fairness as a service (FAAS) solution for insurers to “test their personal lines models and variables to respond to regulatory changes”. In particular, it addresses the recent legislative changes in Colorado, which require insurers to test their rating models for signs of unfair discrimination.

The FAAS product grew out of analysis Verisk did on its own auto database for signs of unfair discrimination. They then had their methodology reviewed by insurance academics at Saint Joseph’s University. And they published a paper online about how they went about it. So there’s some level of transparency here, which is good. 

The Wider Background

Interest in reviewing datasets and algorithmic models for bias has blossomed in the US in recent years, largely because of regulatory trends in US states like Colorado. What insurers saw emerging was a scenario whereby regulatory approval for rate change applications would become increasingly conditional on evidencing that those rates weren’t unfairly discriminatory.

It’s worth remembering that US insurers have been under an obligation for many years to rate in ways that weren’t unfairly discriminatory. The change emerging now is for this to be evidenced. The presumption is that this would have been something they were already doing internally, and so the regulation would just spur them to share it externally.

The UK Government

A problem for insurers both in the US and elsewhere has always been the lack of direct personal data on protected characteristics like ethnicity and gender. Insurers often didn’t capture this, which obviously made it difficult to evidence their compliance.

The same problem was identified by the UK Government, which last month issued a report looking at how it could be addressed. Their Centre for Data Ethics and Innovation (CDEI) report looked at two key ways in which UK firms in general (not just insurers) could approach this problem. One was through data intermediaries and the other through proxies.

I think it’s worth raising at this point the obvious question as to why this ‘problem’ is only now receiving the attention it has long deserved. After all, in both the UK and US, as well as many other countries, equalities legislation is long-standing. Two trends, I believe, have brought about this attention…

  • the broad social trend that wants firms to not just tell consumers that they are complying, but to show them how they are, even to provide proof of this;
  • a principles-based approach to regulation that was not supported by enough analysis and evidence gathering by regulators to ensure that the principles were being actively applied.

As a result, questions started to be raised about insurers giving insufficient attention to equality obligations in relation to customers. My 2015 survey of six leading UK insurers illustrated why it seemed reasonable to raise those questions. In summary then, the present attention coming from organisations like the CDEI is valuable, but it is also long overdue.

The Proxy Approach

Verisk appear to have used what the UK CDEI report describes as the proxy approach to demographic bias detection. This is how the CDEI describes the proxy approach…

“In contexts where collecting this demographic data directly is hard, an alternative is to infer it from data you already hold, using proxies for the demographic traits that you are interested in.”

Verisk has some very large internal databases on auto rating and losses. To bring in demographic data on ethnicity, they accessed US census data and used name-based geocoding to impute ethnicity. Their analysis then found no unfair discrimination in their databases. Their FairCheck FAAS service on offer to insurer clients has been built around this approach.
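Verisk haven’t published every detail of this imputation, but name-based geocoding of this kind typically follows the pattern of Bayesian Improved Surname Geocoding (BISG): combine what a surname suggests about ethnicity with what a census area suggests, via Bayes’ rule. Here is a minimal Python sketch of that combination. The probability tables and group labels are hypothetical stand-ins, not Verisk’s or the Census Bureau’s actual figures.

```python
# Minimal sketch of BISG-style ethnicity imputation (illustrative only).
# All probability tables below are hypothetical stand-ins for census data.

GROUPS = ["white", "black", "hispanic", "asian", "other"]

# P(group | surname) -- in practice taken from US Census surname tables
P_GROUP_GIVEN_SURNAME = {
    "garcia": [0.05, 0.01, 0.92, 0.01, 0.01],
    "smith":  [0.71, 0.23, 0.02, 0.01, 0.03],
}

# P(group | census block) -- in practice from block-level demographics
P_GROUP_GIVEN_BLOCK = {
    "block_a": [0.60, 0.10, 0.20, 0.05, 0.05],
    "block_b": [0.20, 0.50, 0.20, 0.05, 0.05],
}

# P(group) -- national base rates, needed for the Bayes update
P_GROUP = [0.60, 0.13, 0.18, 0.06, 0.03]

def impute_ethnicity(surname, block):
    """Posterior P(group | surname, block), assuming surname and block
    are conditionally independent given group (the core BISG assumption)."""
    s = P_GROUP_GIVEN_SURNAME[surname.lower()]
    b = P_GROUP_GIVEN_BLOCK[block]
    unnormalised = [si * bi / prior for si, bi, prior in zip(s, b, P_GROUP)]
    total = sum(unnormalised)
    return {g: round(p / total, 3) for g, p in zip(GROUPS, unnormalised)}

print(impute_ethnicity("Garcia", "block_a"))
# The result is a probability distribution over groups, not a fact
# about any individual policyholder.
```

Note that the output is a probability, not a fact about any individual. That is precisely why accuracy features so prominently in the CDEI’s list of risks below.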

So how good is this proxy approach? The CDEI report (which is worth reading) lists several benefits and risks associated with it.

Benefits

Utility – in some cases, proxies can be more revealing about ‘perceptions of race’ and its influence on decisions.

Convenience for service providers – the proxy is a piece of data already held by the service provider or one of their suppliers.

User experience – consumers don’t have to provide their own demographic data, something which they often have misgivings about.

Data quality – consumers who have experienced discrimination in the past might provide inaccurate data or none at all, so making it difficult to work out whether the bias they experienced is still continuing. Proxies get round this problem.

Barriers and Risks

Legal – most demographic data inferred through the use of proxies is likely to be classified as personal or special category data under the UK GDPR. This may still be the case even when the data is held at ‘affinity group’ level.

Accuracy – proxies can generate inaccurate inferences which can obscure or even exacerbate bias in AI systems when used for bias detection. Research commissioned by the CDEI found that the reported accuracy rates for many proxy methods were unreliable.

Privacy – a public survey commissioned by the CDEI found significant concern about the use of individual level proxies, particularly with regard to the more sensitive demographic categories such as race.

Transparency and Autonomy – the use of proxies to infer demographic data is inherently less visible to people than collecting demographic data directly from them.

Public Trust – proxies are controversial and so the public are less trusting about their use. Trust depends a lot on the type of proxy, the demographic trait involved and the type of organisation using proxies.

Data Quality – poor quality data undermines bias detection, and can even introduce new bias, particularly when marginalised groups are poorly represented in the data.

How Good is FairCheck Then?

What the CDEI report makes clear is that firms have to be really careful when using the proxy method to detect bias. Yet it remains at present one of the two main methods for doing so. Verisk have invested both in applying the method themselves and in packaging it into a product.

On the face of it, this earns them a tick for initiative. So what about a tick for accuracy as well? That I can’t weigh up, as I’m not a data scientist and have no access to the detail. The review by Saint Joseph’s University is nice, but I note that the board of its Maguire Academy of Insurance and Risk Management is entirely drawn from the insurance sector. This leaves a lingering question about how independent and challenging the review really was.

The really important review will be the one undertaken by data scientists working for the National Association of Insurance Commissioners (NAIC). Individual US states will be looking to them for views on FairCheck, so as to weigh up the findings they encounter from Verisk’s insurer clients.

An Incomplete Picture

I would recommend the NAIC data scientists think carefully about a key feature of the Verisk FAAS product. FairCheck is based upon loss cost, not premium rates. This is not a problem so long as premium rates are actuarially derived. We know however from the price walking episode here in the UK that personal lines insurers have not always followed a purely actuarial approach. Factors associated with ‘willingness to pay’ were allowed to become part of the premium calculations. Those ‘willingness to pay’ factors could potentially be a significant source of bias.
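To make that gap concrete, here is a small Python illustration using made-up numbers. It shows how two groups with near-identical loss costs can end up paying very different premiums once ‘willingness to pay’ loadings are layered on top, and why a fairness test run on loss cost alone would pass while the bias sits in the premium.

```python
# Illustrative only: hypothetical policies showing why a loss-cost-only
# fairness test can miss bias introduced at the premium stage.

policies = [
    # (imputed_group, loss_cost, premium_charged)
    ("group_a", 500.0, 550.0),  # ~10% loading over loss cost
    ("group_a", 480.0, 530.0),
    ("group_b", 500.0, 650.0),  # ~30% loading over the same loss cost
    ("group_b", 490.0, 640.0),
]

def mean(values):
    return sum(values) / len(values)

for group in ("group_a", "group_b"):
    rows = [p for p in policies if p[0] == group]
    avg_loss = mean([r[1] for r in rows])
    avg_premium = mean([r[2] for r in rows])
    loading = avg_premium / avg_loss - 1
    print(f"{group}: avg loss cost {avg_loss:.0f}, "
          f"avg premium {avg_premium:.0f}, loading {loading:.0%}")

# group_a: avg loss cost 490, avg premium 540, loading 10%
# group_b: avg loss cost 495, avg premium 645, loading 30%
# Loss costs are near-identical, so a loss-cost test finds no disparity;
# the disparity only appears once the premium loadings are examined.
```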

Further bias can be introduced through expenses, claims service and practices, and counter fraud practices. This case in Illinois is an example of this. So for me, Verisk’s FAAS product is a good but incomplete start. Insurers using it (and I expect it to be adapted to the UK market before too long) need to understand those incomplete aspects in order not to misinterpret its outputs.

Data Intermediaries

The other method examined by the CDEI here in the UK was data intermediaries. And by this, the CDEI don’t mean data brokers and software houses, but an array of governance structures to facilitate greater access to or sharing of data…

“a (demographic) data intermediary can be understood as an entity that facilitates the sharing of demographic data between those who wish to make their own demographic data available and those who are seeking to access and use demographic data they do not have.”

Examples would be data trusts, data custodians, data cooperatives and personal information management systems. Their role could go beyond simply facilitating data sharing, by harnessing several types of technical expertise and leveraging cost efficiencies.

I won’t go into any detail about data intermediaries here, other than to point out their obvious difference from the proxy method: they use data provided directly by individual members of the public. As a result, they tend to be more trusted by the public.

The present day problem, however, is that no substantive ecosystem of data intermediaries has yet emerged. This quote from the CDEI report sums up why…

“Given the relative immaturity of the market for data intermediaries, there may be a lack of awareness about their potential to enable responsible access to demographic data. In addition, the incentives driving data sharing to monitor AI systems for bias are primarily legal and ethical as opposed to commercial, meaning demand for demographic data intermediation services relies on service providers’ motivation to assess their systems for bias, and service users’ willingness to provide their data for this purpose.”

Start with Proxies

This means proxies have to be the way forward at least for the moment. And this is unlikely to be controversial. After all, the analysis Citizens Advice undertook for their ethnicity report used the proxy method (and the CDEI report explains how they did this). The key challenge then is to make sure that it is being used in a carefully considered way. Here are some steps from the CDEI report that insurers could use in a ‘carefully considered’ approach to using proxies...

  • establish a strong use case for the use of proxies as opposed to other alternatives
  • select an appropriate method and assess associated risks
  • design and develop robust safeguards and risk mitigations

The CDEI expands upon each of these steps in their report. They also quite rightly point out that regulators need to engage with firms to ensure that steps like these are applied in a fair and robust way. Sharing good and poor practice would be one way for a regulator like the FCA to send the right signals to the market. Another way would be to forcefully address cases where the use of proxies for bias detection ignored those three steps.

Whose Fairness?

A weakness in the CDEI report is the rather ‘afterthought’ way in which the role of civil society is brought in. The danger here is that bias detection is thought of as a technical solution to a social problem, to be applied by governmental and sector experts. Tackling bias, and fairness in general, needs to be a joint enterprise, organised through engagement with a range of stakeholders, of which civil society groups are one of the most significant.

A similar weakness can be found in the Verisk approach. What they are using FairCheck to look for is actuarial fairness, the premise of which can be less technically described as the fairness of merit. While making sure that loss costs are not actuarially unfair is important, the fairness of merit cannot be weighed up on its own.

As I explored in some detail in this paper for the Institute and Faculty of Actuaries (IFoA), fairness has several dimensions which need to be considered. This wider perspective would allow what I call an equality of fairness to be more carefully approached. Insurers and their suppliers need to recognise that this significant stakeholder – civil society groups – will be thinking in terms of those different dimensions of fairness.

Isn’t that all too complicated though, I hear some of you mutter. And I agree, it is complicated, but it is also better to move forward on fairness and bias in ways that recognise the wider landscape, not just the immediate surroundings. It’s easy to get lost otherwise.

Structural Fairness

Let’s look forward a bit and assume that a lot of good technical solutions have been found to the challenge of bias in data and models. Does that mean ‘problem solved’ then? It doesn’t because of something called structural fairness at the market level. This means that even if issues of fairness in data and models are resolved, the problem still remains of fairness in relation to how that data and those models are used. Insurance is an example of this par excellence, which is why researchers are looking closely at the implications of how insurance is digitally transforming itself.

To Sum Up

To achieve really transformational momentum in its digital journey, the insurance sector needs to tackle the question of bias in data and models. Remember that Citizens Advice chose to orientate their ethnicity penalty work around insurance because they saw it as the sector that best illustrated the problem. Neither the problem nor the questions are just going to go away.

Verisk’s move on FairCheck is a positive one. Yes, there are shortcomings and questions, but sometimes these are best tackled when something is on the move, rather than switched off and waiting. The key thing is to build a robust learning loop into the product’s evolution. That is what I would like to see them doing next, and being open about it.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.