Jul 20, 2023

The FCA’s Approach to AI is to be Audited

The National Audit Office is to review the FCA’s approach to artificial intelligence. A subsequent speech by the FCA’s chief executive set out the regulator’s thinking on AI. It sounded very much like ‘have cake and eat it’ thinking. Will the NAO call this out? If so, what will that mean for insurers?


The National Audit Office (NAO) review will...

“examine how the FCA is working with others, particularly HM Treasury, to take action to meet aspects of the challenges and take advantage of the opportunities posed by recent changes.”

This will cover internal developments such as the consumer duty, but also...

“Externally, technological innovations such as crypto assets and artificial intelligence provide challenges and opportunities for regulation of financial services.”

Their review will take place over the winter of 2023/24, so we’re talking about a report around the second or third quarter of 2024. Their reference to “how the FCA is working with others, particularly HM Treasury” points to questions having arisen about just how much the FCA is in tune with the department in Whitehall to which it is responsible.

There’s a cascade here. Members of Parliament are tuned into voter concerns, which result in questions to Government. The Treasury handles these, as well as media concerns about particular developments in financial services. It would then look to the regulator for feedback on what is happening with regard to those voter and media issues.

What This Points To

For the NAO to find space for a review like this in the midst of their very busy post-Covid workload points to something in this cascade raising particular concerns. One such concern could be the ‘all eggs in one basket’ situation around the FCA’s consumer duty. Is too much riding on it having the hoped-for impact? Have consumer groups used their influence to exert pressure on politicians to nudge HM Treasury into having the NAO check things out?

Citizens Advice have positioned insurance as the exemplar sector for how the use of digital decision systems needs to be managed so as to avoid discriminatory outcomes. This means that they will pull out all the stops they can to bring their challenge to a conclusion. And their second ethnicity report, while acknowledging the FCA’s positioning of the consumer duty as the preferred way to address discriminatory pricing, also came with a warning that it had better deliver. I suspect they’ve taken steps to ‘up the ante’ on this.

Bias Thinking

While it doesn’t directly mention the NAO review, the speech given a few weeks later by the FCA’s chief executive, Nikhil Rathi, is clearly a response. It was a pretty wide-ranging speech, so I’m going to focus on four key aspects, starting with Rathi’s references to bias in AI systems.

Rathi very firmly positions AI bias alongside human bias and then asks...

“...can we really conclude that a human decision-maker is always more transparent and less biased than an AI model? Both need controls and checks.”

This can easily be read as ‘we have lived with human bias, so we have to live with AI bias’. I’m not saying that AI bias is ever likely to be completely removed, but Rathi seems to be saying that we shouldn’t get more worked up about the latter than about the former. For politicians, this would be a highly questionable position to take. For people in Big Tech, it would be a much less questionable one.

Rathi’s speech brought together two important concerns about artificial intelligence, consent and explainability, and positioned them in an interesting way. Let’s start with these two statements...

“We still have questions to answer about where accountability should sit – with users, with the firms or with the AI developers? And we must have a debate about societal risk appetite.”
“What should be offered in terms of compensation or redress if customers lose out due to AI going wrong? Or should there be an acceptance for those who consent to new innovations that they will have to swallow a degree of risk?”

He then brings in the ICO...

“We are open to innovation and testing the boundaries before deciding whether and what new regulations are needed. For example, we will work with regulatory partners such as the Information Commissioner’s Office to test consent models provided that the risks are properly explained and demonstrably understood.”

What he is saying here is that consumers should expect to live with some degree of societal risk from AI. Given that bias is one of those risks, he’s in effect saying that consumers will have to live with AI bias just as they’ve had to live with human bias.

The question this then clearly raises is ‘how do you determine and manage that risk?’ He orientates his response around explainability and consent. Firms need to explain their use of AI, but without overwhelming people with detail. Consumers who engage with firms that use AI then have the option to read each firm’s explanation of how it is using that AI. And all of this happens within a consent model that the firm uses for that continued engagement.

Redrawing Accountability

That’s his narrative. Does it stand up? The answer depends on your perspective. From a big tech or big firm perspective, it does. From a consumer, public or small firm perspective, it doesn’t. And the reason it doesn’t stand up is that it redraws the lines of corporate accountability far more narrowly than public sentiment and academic research would deem acceptable.

The reason he does so is that the FCA is prioritising the opportunities of innovation over the risks from innovation. So what’s wrong with that, you may ask? Doesn’t the FCA support London being the fintech capital of Europe?

Well, the problem for the FCA is that politicians, policy makers and consumer groups will not see this as something within the regulator’s sole purview. Indeed, some will think the FCA has a relatively secondary role in deciding it, certainly when it comes to determining the level of risk that society has to accept from AI.

Hyper Thinking

Let’s end with a particular point in his speech that caught my eye. In a section about embracing the opportunities of AI, he includes reference to...

“The ability to hyper-personalise products and services to people, for example in the insurance market, better meeting their needs.”

Well, as you know from several previous articles, the benefits of personalised insurance products are contentious (more here), and hyper-personalised insurance policies even more so (more here). If this is what he wants the public to gain from trading off AI risks like bias, then he would seem to have got his calculations seriously wrong.

The debate about the opportunities and risks of pooling versus personalisation has been building over the last few years. There’s now a growing acceptance that the level of personalisation Rathi means by ‘hyper-personalisation’ would have serious consequences for the structural integrity of the market. Does the FCA not realise this? In my experience, some within the FCA do, but others don’t. Scary!

Sources have reassured me that the NAO are a ‘clued in’ lot. They will need to be, in order not to fall for some of the narratives that the FCA has so clearly absorbed from big tech. The risk to the FCA is that should the NAO raise questions about this around mid-2024, it would seriously undermine the consumer duty initiative. The duty wouldn’t be holed below the waterline, but there would be a serious recalibration of what is expected from firms using AI.

Preparation Needed

That is what insurers need to prepare for. It won’t emerge as a ‘throw out what you have done so far and start again’. Rather, it will mean having the dials and levers of their consumer duty implementation moved more towards handling the risk side of AI than the opportunity side. In one or two cases, this could trigger some levers being moved to ‘off’: in other words, algorithm destruction.

Insurers need to see and think beyond the FCA. It may be their main regulatory focus, but as financial services markets like Korea’s have found, a regulator sits within a wider accountability landscape. The NAO review could well bring that wider landscape to the fore.

If you’re holding an internal workshop about AI and the consumer duty, consider bringing me in as an ethical and independent voice. You can get in touch here.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.