Nov 8, 2022

A Big Red Flag has been attached to Emotional AI

The UK’s data protection regulator has raised a big red flag over the use of emotional AI. Not only do such systems fail to meet data protection requirements, but they also come with other serious ethical issues. So what does this mean for insurers, and how should they respond?

Parties are the only acceptable use of emotional AI

The Information Commissioner’s Office has been cutting in its condemnation of emotional AI. Emotional AI is fine, it says, so long as it’s not used for anything other than entertainment…

“if you’ve got a Halloween party and you want to measure who’s the most scared at the party, this is a fun interesting technology. It’s an expensive random number generator, but that can still be fun.”

Such is their concern that this is the first time the ICO has issued a blanket warning on the ineffectiveness of a new technology. Their announcement went on…

“…if you’re using this to make important decisions about people – to decide whether they’re entitled to an opportunity, or some kind of benefit, or to select who gets a level of harm or investigation, any of those kinds of mechanisms … We’re going to be paying very close attention to organisations that do that. What we’re calling out here is much more fundamental than a data protection issue. The fact that they might also breach people’s rights and break our laws is certainly why we’re paying attention to them, but they just don’t work.”

As regulatory announcements go, that is about as blunt as they get.

What This Means for Insurers

Some insurers have been exploring the use of emotional analytics in underwriting, claims and counter fraud. They’ve been turning to image and audio data for such things as sentiment analysis, gaze tracking and facial movements and expressions. I explored the implications of this in a detailed article back in early 2019.

The problem with emotional AI is that it rests upon weak and contested science. The interpretation of that image and audio data is very likely to result in “…meaningful decisions (being) based on meaningless data.” So why have insurers been engaging with emotional AI? There are two immediate and obvious reasons.

Firstly, the outputs from emotional AI systems are well tailored to the data and analytics decision systems used by insurers. That’s why data brokers and software vendors have been pushing their use for a variety of insurance tasks. The fact that the science upon which these emotional AI systems are based is weak and contested sits pretty low down the priority list for those vendors. The scientific debate has simply been ignored.

Secondly, emotional AI promises to create value for insurers from online images, call centre conversations and remote discussions on platforms like Zoom. That selfie you posted online recently – it tells insurers something about your life and health risk. That Zoom call with a claims department yesterday – it could influence your settlement.

This has led to confidence in emotional AI systems taking hold across the market. As one leading European insurer wrote in 2019, “emotions can in no way lie”. Plenty of people disagreed with that at the time, and the ICO now clearly does too.

Insurance Has Problems

The insurance sector has provided some good examples of why there are concerns about emotional AI systems.

In 2021, Lemonade found out how the public felt about emotional AI. The insurer posted on Twitter about how its AI analyses videos of customers for ‘non-verbal cues’ to determine if their claims are fraudulent. It immediately had to fend off accusations of discrimination and bias, as well as what this report described as “general creepiness”. Lemonade withdrew the tweet and posted again, apologising for what it described as ‘the confusion’.

Here in the UK, as far back as 2016, Admiral tried to use sentiment analysis for underwriting purposes, based on what we write in Facebook posts. Facebook took exception and the whole project collapsed within hours of launch.

EU-Wide Concerns

Last year, the EU published proposals to introduce new transparency obligations on firms using systems to detect emotions or to assign people to categories based on biometric data. Firms would be required to notify anyone exposed to such systems. Compliance would rely on regulators being granted access to the source code and the training and testing datasets for such systems. And remedies such as system shutdown and fines of up to 6% of total worldwide annual turnover were suggested.

These are of course just proposals from the EU, yet the drift of sentiment is clear. Regulators may be talking primarily about privacy, but they clearly have other ethical issues, such as fairness, discrimination and transparency, on their agendas.

I want to look now at two developments that this type of regulatory positioning by the ICO will produce. The first is the response from insurers, the outline of which I set out below. The second is the set of hurdles that the regulator itself will encounter.

Five Steps

UK insurers have no choice but to take heed of the ICO's views on emotional AI. Their response will, however, face two challenges. Firstly, to do this with sufficient independence of thought to avoid a near enough 'business as usual' attitude. And secondly, to work out how the ICO will turn its obvious concerns about emotional AI into specific regulatory expectations, particularly outwith its core data protection remit.

Let's look now at five steps that insurers can start building their response around.

The first step insurers should take is to understand the issues associated with emotional AI. And the best way of doing that is to read this book – Emotional AI by Professor Andrew McStay. He is an influential figure in public policy around emotional AI.

The second step is for the insurer to map out where they are already using emotional AI or have plans in the pipeline to do so. This needs to cover the functional context, the type of analysis being undertaken and the implementation pathway. If such systems are being used for anything other than just the office party, then the insurer has a problem.

The third step is for the insurer to revisit their data protection impact assessments for those systems and plans, and check the judgements, assumptions and conclusions used in them. If those don’t reflect the key points highlighted by the ICO, then the insurer has a problem.

The fourth step is for the insurer’s legal team to weigh up the implications of decisions that have already been made in relation to customer policies, and which have been influenced by existing uses of emotional AI. If, as the ICO says, it has “…yet to see any emotion AI technology develop in a way that satisfies data protection requirements”, then an exposure is likely to already exist. And given that legal would have already signed off on those existing uses of emotional AI, it’s best that this review is done by an independent legal expert.

The fifth step is for the insurer’s risk management and compliance teams to incorporate the firm’s use of emotional AI into their assessments and controls. This needs to be done with a lot of reflective thinking, given the acceptance of such emotional AI systems up until now. Those teams also need to think about how the ICO might move forward on its recent announcement. Let’s now consider that.

The ICO’s Dilemma

There was nothing surprising about the ICO’s announcement on emotional AI. For sure, its tone was unusually damning, but concerns such as these have been around for several years now, including in this article I wrote in 2019. So what are the next steps for the ICO?

They’ve announced that they will soon be publishing guidance for firms on the use of biometric technologies. These remarks show that they don't consider all uses of biometric technologies to be bad…

“To enable a fairer playing field, we will act positively towards those demonstrating good practice, whilst taking action against organisations who try to gain unfair advantage through the use of unlawful or irresponsible data collection technologies.”

Note those last few words. The ICO will take action where the use of data collection technologies may be lawful but is, in its view, irresponsible. That’s why firms should not be guided solely by legal ‘can we’ considerations. ‘Should we’ considerations are just as pertinent, and in answering that latter question, the reasoning needs to be clear, whether the answer is yes or no.

Words and Actions

What interests me is what lies beyond this guidance. The ICO acknowledges that emotional AI systems raise a number of ethical issues: privacy of course, but also fairness, discrimination and autonomy. With privacy, we can expect the ICO to rely on data protection legislation. With discrimination, it should rely on equalities legislation, but will it be able to?

The Equality and Human Rights Commission told the Treasury Committee a few years ago that it didn’t have the expertise or capacity to address discrimination in complex digital decision systems. To that same committee, the Financial Conduct Authority promised that its resources and expertise could get inside insurers’ pricing models. Since then, those bullish words from the FCA seem to have resulted in little of substance actually happening.

Unless regulators can build a clear path from general problem (a discrimination risk) to specific resolution (firms told what and what not to do), firms will take this as an amber light to proceed with as much care as their corporate culture deems necessary. This encourages the leveraging of regulatory indecision for competitive advantage, which in turn holds back everyone in the market.

And if we then consider fairness and autonomy, the regulatory pathway is even less clear. Within what framework for, say, fairness is the ICO going to weigh up the different uses being made of emotional AI systems within a market like insurance? The FCA had to knock one up pretty quickly in relation to the fairness of insurance pricing, but it has failed to turn heads. How can a data protection regulator fare any better?

The DRCF

One way it might try is through the Digital Regulation Cooperation Forum, of which both the ICO and FCA (but not the EHRC) are members. And I think the ICO is the better placed of the two to lead on this. As a horizontal regulator, it is well positioned to take forward the type of ethical issues it has raised in relation to emotional AI. It is less concerned with a particular market (e.g. insurance) and more concerned with an ethical issue (in its case, privacy). The FCA, as a vertical regulator, has a culture and perspective too orientated around a rationalist market economy outlook. As they often like to remind us, they don’t do social or ethical issues. It took a super-complaint to force them to look at fairness.

So, problem solved? Not yet. A regulator delving into an area of contested science (such as emotional AI) will have to tread carefully. It will not be able to engage neutrally, however hard it tries. My Master’s degree thesis looked at exactly this issue of regulators, risk and public policy. What is needed, more than intervention by a regulator, is a forum facilitated by the regulator, at which different stakeholders can arrive at a consensus on acceptable uses of emotional technologies in markets like insurance.

What This Points To

All of this points to the next few years being uneven ones in relation to the regulation of emotional AI and the various ethical issues it throws up. It’s possible that the ICO will end up blowing hot and cold, like the FCA has done on data ethics. That said, I think it will do so less than the FCA, for horizontal regulators tend to think and act differently to their vertical counterparts.

For insurers, I think the direction of regulatory travel, at both EU and UK level, on emotional technologies is now very clear. An abundance of caution is needed, and it needs to be applied both to what the insurer is already doing and to what it has in the pipeline. Projects to gauge fraud risk in consumers and policyholders, to assess and predict mental health issues in group schemes, to assign character traits to a consumer, or to assign that consumer to an emotional category – these will all now need to be designated as projects of high or unacceptable risk. Some hard decisions await insurers.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.