Mar 9, 2023

Our Digital Footprint – a Pandora’s Box of Data?

I gave a keynote speech at a claims and fraud conference this week, on whether insurers’ exploitation of our digital footprint could turn into a Pandora’s box. Insurers see these new lines of insight as creating competitive advantage, but could they instead cause real problems?

Are there lessons for insurers in Greek mythology?

Pandora was a character in Greek mythology. She found a box, thought it was a gift, opened it and unleashed a host of curses upon the world. So could certain technologies, to paraphrase Greek mythology, bring a curse down upon the insurers who use them?

It’s a big statement, which I will justify in a minute, but first, let’s look at the three technologies I’m talking about.

Analysing Character

The first is sentiment analysis. This looks at what we write and say online, and what meaning can be read into that.

The second is emotion analysis, which looks at what our online activity signals about our emotional state, past, present and future. It purports to tell insurers what we are feeling, the moods and emotional attachments we have, and what that might say about us (more here).

And thirdly, there is behavioural biometrics, which brings together a variety of data sources to track how we are likely to behave and, again, what meaning can be read into that.

Analysis like this aims to establish what sort of character we are, and how we feel about certain things, in certain times and in certain contexts. And most importantly for insurers, what we might then do as a result.
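
To make the first of these a little more concrete, here is a minimal sketch of sentiment analysis in Python, using the freely available VADER model from the NLTK library. The two example posts are invented for illustration; the tools insurers buy are far more elaborate, but the underlying idea, scoring text for positive or negative sentiment, is the same.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-off download of the lexicon that the VADER analyser relies on
nltk.download("vader_lexicon", quiet=True)

analyzer = SentimentIntensityAnalyzer()

# Invented examples of the kind of online posts such tools might scan
posts = [
    "Can't believe my claim was rejected again. Total joke.",
    "Really happy with how quickly my claim was settled!",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # 'compound' runs from -1 (very negative) to +1 (very positive)
    print(f"{scores['compound']:+.2f}  {post}")

Even in this toy form, the output is a confident-looking number built from very thin evidence. That gap between apparent precision and actual reliability is exactly what the regulator's comments below pick up on.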

These technologies are at various stages of development and use within the UK insurance market. Clearly though, some, like voice analytics, have been around for several years and are well embedded in functions like claims and counter fraud.

The purpose of course is to protect the honest customer by identifying and dealing with the dishonest customer as early in the insurance lifecycle as possible.

Damning Comment

Let’s now bring in some recent commentary relating to these technologies. These quotes are drawn from a report published in October 2022 by the ICO, the UK’s data protection regulator.

“Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever.”
“While there are opportunities present, the risks are currently greater. …we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.”
“As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency…”

That in my opinion amounts to a very big flashing warning light about how business uses these technologies. And in a particularly damning comment at the time of the report’s launch, the Deputy Commissioner of the ICO said that the only use he could envisage for these technologies in a business setting was as a game at the office party. Nothing more than that would be appropriate.

Show Your Workings

So what is the ICO going to do about it?

“The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance…”

What they’re saying here is that they intend to use their powers to get firms using these technologies to “show their workings”. They will want to see how the firm has set the scope and depth of the purpose they’re using these technologies for.

They will want to see how the risk assessment has been carried out, again in terms of scope and depth, but also what controls have been created as a result, how they're being used, how this is being overseen, and what evidence there is for the controls having the intended impact.

This showing of workings shouldn't just cover privacy. Data protection regulations cover more than that. You can see from the quotes above that they also encompass things like fairness and discrimination.

Insurers need to remember that while they can use special category data in certain circumstances, conditions apply, and they need to show how they're managing this.

One aspect the ICO will be interested in is whether the insurer has looked into the science underlying these technologies. Some of that science is highly contested. Has this been factored into the insurer's risk assessment?

Clearly then, the ICO will want to see that the insurer's use of these technologies (the opportunity side) is related in some way to the risk side, through the risk assessment carried out. If that risk assessment hasn't been thorough, will the opportunity side have to be pulled back?

A Classic Ethical Dilemma

What we have here is a classic ethical dilemma. These occur when two ethical values are in some form of tension. Here we have honesty on the one hand (tackling fraud) and fairness on the other hand (using technologies that work, in ways that are fair).

The ICO wants to see how well the insurer has been managing that ethical dilemma. Are the judgments being made balanced and based upon the right information? Are the conclusions being drawn reasonable? Have they been documented so that others can understand how they’ve been made and so work out the implications for their own work?

Let's now put the ICO to one side and look at situations that might cause the use of these technologies to bubble up into the public domain. There are three that I think insurers need to watch for.

Current Situations

The first and most obvious is a media situation like that of Lemonade a few years ago. Proud of their data science capabilities, they talked in a social media post about how they were using the videos submitted by claimants to check for fraud. Within a few hours, they were saying that they'd been misunderstood, that they wouldn't do something like that, that they were good people. Unfortunately, not many were convinced.

The second situation that could bring attention to the use of these technologies within the insurance sector is discriminatory pricing. This is in what I call phase 2. Phase 1 saw Citizens Advice publish their research and engage with regulators and the sector. Phase 2 involves Citizens Advice going along with the FCA's view that the consumer duty is the best way to address the issue.

So what will happen in phase 3? I think Citizens Advice will carry on their research, watching for evidence that the consumer duty is having some impact. If that evidence isn't there, or isn't strong enough, they will, amongst other things, seek to highlight some of the technologies that are contributing to discriminatory pricing being sustained.

Tech in the Spotlight

Then there is discriminatory claims service and settlement. The one to watch here is in the Illinois state courts, where a case is being brought by a consumer group and a law firm against State Farm, the biggest insurer of homes in the US, alleging discriminatory practices in claims and counter fraud. The plaintiffs hope that the detailed research they've done in the lead-up to the lawsuit will result in it being given class action status, whereupon they will seek to expose the influence of certain technologies. Two leading technology firms working in the insurance sector are involved in the case (more here).

Exposures

Let’s move on to look at what might happen as a result. A fine is always possible, but may not be the only penalty used by the regulator or court. Here are two measures that insurers should watch for.

The first involves what I call 'forced transparency'. This could come from the appointment by the UK government of a statutory consumer advocate (SCA) to strengthen attention to consumer interests in the market concerned. SCAs exist in a number of markets, but not yet in financial services. The difference they make comes from the powers they have to demand data from firms, to conduct their own research and to push regulators into action.

It is possible that, should the consumer duty fail to address discriminatory pricing to the satisfaction of Citizens Advice or the Treasury Committee in Parliament, an SCA could be appointed (more here). One of the measures it could take would be to look behind the outcomes and seek to highlight not just questionable practices but questionable technologies too.

The second measure that insurers should watch out for is algorithm destruction. This has happened a number of times in the United States, but so far not in the insurance sector. A situation developing in the state of Washington, in the north-west US, may change that (more here). It began with the state insurance commissioner challenging a leading auto insurer over issues of discriminatory pricing. The state Attorney General has since become involved, in what seems to be an increasingly combative legal case.

If the state wins its case, there are signs that algorithm destruction could be one outcome. This would involve the court or regulator telling the insurer to immediately stop using the relevant algorithm(s). That would have enormous repercussions for that firm.

Upcoming Developments

Here are some developments closer to home.

The ICO will soon be publishing their report on what firms can and cannot do with biometric technologies.

Next year we will see the FCA coming under pressure to show how well the consumer duty has tackled discriminatory pricing, most notably from the Treasury Committee. I think they will struggle to convince the Committee on this, and we could well see particular data and technologies being brought into those challenging discussions.

In a year or two's time, we will see the EU finalising its AI Act. I expect to see some reference to sentiment and emotion analysis in it, either as a restriction or a ban. This could be business-wide, but it's important to remember that the EU has concerns about how insurers are using data and analytics, such as in its proposed ban on insurers' use of secondary health data (more here). It may well pick out insurance for some form of special treatment.

Useful Skills

I was asked in the Q&A that followed my speech about the skills that would help insurers understand and respond to these situations. There are several skill sets involved, but I focussed on the skills needed to tackle the ethical dilemma (honesty versus fairness) that is at the heart of where the counter fraud community finds itself.

The skills that are needed are the ones that will allow insurance people to see the situation from a wider perspective than just their immediate one, and to see the situation over more than just the immediate point in time they’re having to deal with it. These skills, of which I teach several, are designed to avoid the person feeling they have to make a decision now, based upon what they know now, in relation to the people in front of them now. By avoiding that immediacy, even for a short while, those wider perspectives of time and people can help arrive at a better decision.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.