A Significant Development on Bias in US Insurance

I’ve been emphasising to clients and in conference speeches recently that insurers need to pay particular attention to their use of secondary data. This is data collected by someone else for some other purpose, but which is then bought in by an insurer for insurance purposes.

Legislators in the US state of Illinois are seeking to pass two bills. If passed, House Bills 4611 and 4767 would significantly change how auto insurers doing business in the state rate their policies. The impact on insurers falls into five areas, which I’ll explain in turn.

Protected Characteristics

These two bills would prohibit auto insurers from...

“unfairly discriminating based on age, race, color, national or ethnic origin, immigration or citizenship status, sex, sexual orientation, disability, gender identity, or gender expression”.

That is a wide range of characteristics. Here in the UK, and in the EU, the use of race and gender is specifically prohibited, while factors like age and disability are allowed so long as a number of actuarially related exemption conditions are met. So Illinois’s bills would go way beyond what insurers here and in many other countries are used to.

And the Illinois bills are clear about this prohibition extending across an auto insurer’s operation, covering what happens in “marketing, underwriting, rating, claims handling, fraud investigations, and any algorithm or model used for those business practices.” It’s not just rating, as can sometimes be the case.

Remember also that Illinois is where State Farm are facing a significant legal challenge relating to discrimination in household claims service and settlement (more here).

Secondary Data

The proposed legislation also seeks to stop the use of external consumer data and information sources in a way that unfairly discriminates against policyholders. External data is a legislator’s way of referring to secondary data.

The Illinois bills don’t stop insurers from using secondary data, but they explicitly link its use with the requirement not to unfairly discriminate. In so doing, they push insurers to up their game on due diligence of data brokers and software houses. Insurers are being told to be much more active and direct in ensuring that their data sources meet the ‘not to unfairly discriminate’ rule.

A recent survey found that a surprisingly large number of US insurers are not proactively monitoring compliance with that ‘not to unfairly discriminate’ rule. These bills seek to address that.
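What might that proactive monitoring look like in practice? The bills don’t prescribe a method, so what follows is only a minimal, hypothetical sketch of a group-level premium comparison; the column names, the example figures and the flagging threshold are all assumptions of mine, and any real monitoring programme would need proper actuarial and legal design behind it.

```python
# Illustrative only: a minimal, hypothetical check an insurer might run to
# monitor quoted premiums for group-level disparities. The Illinois bills do
# not prescribe any method; column names and thresholds here are assumptions.
import pandas as pd

def group_premium_ratios(quotes: pd.DataFrame,
                         group_col: str = "protected_group",
                         premium_col: str = "quoted_premium") -> pd.Series:
    """Average quoted premium per group, relative to the overall average."""
    overall = quotes[premium_col].mean()
    return quotes.groupby(group_col)[premium_col].mean() / overall

if __name__ == "__main__":
    # Hypothetical quote data, as it might look after being joined with an
    # external (secondary) data source.
    quotes = pd.DataFrame({
        "protected_group": ["A", "A", "B", "B", "B", "A"],
        "quoted_premium":  [620, 640, 780, 760, 790, 610],
    })
    ratios = group_premium_ratios(quotes)
    print(ratios)
    # Flag any group paying materially more than the overall average for
    # actuarial and legal review (the 1.2 threshold is arbitrary).
    print(ratios[ratios > 1.2])
```

A check like this only surfaces a disparity; explaining it, and deciding whether it amounts to unfair discrimination, is the harder part of the job.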

Credit Scores

A long-running issue influencing these two bills is auto insurers’ use of credit scores. Campaigners have long been challenging the sector on this (more here) and several states already ban them. Again, credit scores are a form of secondary data, and these bills put the onus on insurers to ensure that their use is not unfairly discriminatory. From what I’ve been told by both UK and US legal scholars, that would be pretty difficult to do.

Examine and Investigate

Another interesting angle to these Illinois bills is the power they could give the state’s insurance regulator to “examine and investigate an insurer's use of external consumer data and information sources, algorithms, or predictive models…”.

This is the flip side of the obligation on insurers not to unfairly discriminate. While in the past insurers operated in a ‘tell me’ world, they are now on the cusp between a ‘show me’ world and a ‘prove to me’ world.

So while in the UK the regulator is still at the stage of asking some leading motor insurers how they ensure that their rates aren’t discriminatory (more here), in the US legislators want regulators to have the power to get to the actual data, to get inside the actual models being used.

This is the move I’ve been talking about for several years now (more here). It is also one that US legislators and regulators are more actively addressing than their counterparts in the UK and EU. Some of you may find this surprising, expecting it to have been the other way round. Yet US state regulators and their membership organisation, the NAIC, have always been more proactive.

Personalisation

A narrative that has clearly settled in the minds of Illinois’s state legislators is illustrated by this remark from the state’s Secretary of State:

“…an individual’s driving record should serve as the primary factor that’s analyzed when setting rates”

The logic here is that it is only fair for your own driving record to determine your rates, not your personal characteristics or the driving records of others. It is a logic that needs to be handled with care.

There’s a broad narrative underpinning much of the digital innovation happening in insurance markets at the moment: that behavioural fairness should take precedence over actuarial fairness. In other words, how you drive should matter more than what your age, disability, gender or race is.

This narrative comes across as attractive in terms of fairness – why should you pay for less careful drivers? Yet it has some fundamental flaws when it comes to fairness, and back in 2020 I examined those problems in this article.

Be Careful What You Wish For

If you take the personalisation of auto rating to its logical conclusion, it means that we each end up paying for our own losses. Because your premium would then track your own claims experience rather than that of a wider pool, this would introduce much more price volatility into the premiums paid by consumers than most observers of the sector’s digital transformation anticipate. Factor in the third-party injury exposures that drive a lot of the auto premium calculation and the auto insurance market could veer towards collapse.
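To make that volatility point concrete, here is a minimal sketch comparing a pooled premium with a fully ‘personalised’ one that simply recovers each driver’s own losses. The claim frequency and severity figures are hypothetical assumptions of mine, chosen only to illustrate the spread; real auto rating is of course far more complex.

```python
# A minimal sketch of the volatility point above: compare a pooled premium
# with a fully 'personalised' premium that simply recovers each driver's own
# losses. All numbers are hypothetical; real auto rating is far more complex.
import numpy as np

rng = np.random.default_rng(seed=1)
n_drivers, n_years = 10_000, 5

# Hypothetical severity-heavy losses: most years nothing, occasionally a
# large third-party injury claim.
claim_happens = rng.random((n_drivers, n_years)) < 0.05      # assumed 5% claim frequency
claim_size = rng.lognormal(mean=9.0, sigma=1.5, size=(n_drivers, n_years))
losses = claim_happens * claim_size

pooled_premium = losses.mean()           # everyone pays the average loss
personal_premium = losses.mean(axis=1)   # each driver pays their own average loss

print(f"Pooled premium (same for all): {pooled_premium:,.0f}")
print(f"Fully personalised premiums: min {personal_premium.min():,.0f}, "
      f"median {np.median(personal_premium):,.0f}, "
      f"max {personal_premium.max():,.0f}")
# The spread of the personalised premiums is the price volatility that full
# personalisation would pass straight on to individual consumers.
```

Most drivers in a run like this have no claims at all over the period and would pay next to nothing, while the unlucky few would face premiums many multiples of the pooled figure – which is precisely the pooling that insurance exists to provide.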

Some form of balance between pooling and personalisation is needed to accommodate society’s interest in both personalisation and solidarity (more here on the latter). So the question turns to how that balance is achieved (more here). It will be a tricky one to answer, but not impossible. What I’m finding is that people are starting to understand that it needs to be thought about, which is a step forward from even a few years ago.

To Sum Up

The Illinois bills are close to the EU’s proposed AI Act in that they address secondary data and social scoring head-on (more here). Insurers need to have this hugely significant trend firmly on their radars. Unfortunately, not many do at the moment.

One angle that both of these pieces of legislation touch on is what I call structural fairness. This looks beyond how fairness in model design can be ensured and instead addresses fairness issues arising from the sheer capacity of insurers to collect and analyse data like never before.

Legislative moves like those in Illinois are examples of how the tectonic plates of insurance have been set in motion by issues around fairness. As one regulator said eleven years ago…

“…for leaders today – both in business and regulation – the dominant theme of 21st century financial services is fast turning out to be a complicated question of fairness.”