Sep 2, 2022 3 min read

A Warning Shot on Data Ethics from California

A warning shot has been fired across the bows of US insurers by the California Insurance Commissioner. His department's recent investigations into "…potential bias and alleged unfair discrimination in many lines of insurance…" seem to have unearthed problems.

A Warning Shot from California

In late June, Ricardo Lara, California's Insurance Commissioner, issued a bulletin to all insurers operating in the state. Its message was pretty blunt…

Insurers “…must avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’…”
“…when insurers use complex algorithms in a declination, limitation, premium increase, or other adverse action, the specific reason or reasons must be provided.”
“…before utilizing any data collection method, fraud algorithm, rating/underwriting or marketing tool, (insurers) must conduct their own due diligence to ensure full compliance with all applicable laws.”

You could of course say that points like these have been raised before, but this time it’s from the state insurance regulator, which carries a lot more weight than academics, journalists and commentators. It’s a line in the sand.

Claims is Very Specifically Included

Here are some interesting features of the announcement…

  • it very explicitly covers marketing, pricing, underwriting, counter-fraud and claims practices. So pretty much everything insurance-related that an insurer does.
  • it covers both residents and businesses, so personal as well as commercial lines.
  • it specifically references cases where “insurers are unfairly flagging claims from certain inner-city ZIP Codes and referring these claims to their Special Investigative Unit. Many of these claims are then denied or the claimant is offered unreasonably low settlements.”
  • it specifically references cases where “…insurers are using biometric data obtained through facial recognition technology to influence whether to pay or deny claims.”
  • it specifically references cases where “…insurers and insurance marketing institutions are collecting biometric and other personal information unrelated to risk in the marketing and underwriting of insurance policies.”

Note those two very specific references to claims practices. This is not just about price and marketing.

The commissioner associated two types of harm with these three specific practices…

  • they “may intentionally or unintentionally result in a disproportionate number of unfair claims delays and denials to claimants from socioeconomically-disadvantaged communities”
  • they create “a risk that eligibility could be denied based on race, gender, disability, or other protected classes.”

Not a Game Played by Different Rules

What Ricardo Lara is saying to insurers is something along these lines: investigate and document these practices yourself, be prepared to justify what you’re doing to my investigators, and face the clear consequences if you’re outside the law.

It reminds me of a warning that the UK’s Information Commissioner gave to insurers several years ago: “big data is not a game played by different rules.” In other words, insurers can’t hide behind the complexity and opaqueness of big data and analytics.

Lara goes on to address that ‘data unrelated to risk’ issue in more detail.

“Many external data sources used by insurers …utilize geographical data, homeownership data, credit information, education level, civil judgments, and court records, which have the strong potential to disguise bias and discrimination. Other models and algorithms purport to make predictions about a consumer’s risk of loss based on arbitrary factors such as a consumer’s retail purchase history, social media, internet use, geographic location tracking, the condition or type of an applicant’s electronic devices, or based on how the consumer appears in a photograph. The use of these models and data often lack a sufficient actuarial nexus to the risk of loss and have the potential to have an unfairly discriminatory impact on consumers.”

Worth Listening To

So how big an impact will this announcement have? Well, California’s insurance industry is worth about $371 billion. Compare that with the UK’s insurance industry (Europe’s largest), which comes in, according to the ABI, at $283 billion. In essence, when California talks, insurers need to listen.

And it won’t just be insurers listening. Consumer groups and regulators will be taking notice too. The FCA keeps in regular contact with its US counterparts. And it’s interesting that in the Californian announcement, specific reference is made to research by the Consumer Federation of America and the Center for Economic Justice. They’re already being listened to.

While California may feel like the other side of the world to UK insurers, this particular stone will send ripples out across many insurance markets, as they wait to see what comes next. If there was one piece of advice I’d give to insurers at the moment, it would be to develop a more questioning, more challenging narrative for their engagement with third party providers of data and analytics. Beyond that one step... well, get in touch.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.