Oct 11, 2023 5 min read

Why the FCA’s Views on AI Regulation are a Gamble

The FCA has sent a strong signal to insurers that it will not be pushing for any new regulations relating to AI. Confidence in the impact of existing regulations is clearly high. Yet what matters now is speed of impact. Is the FCA taking an almighty gamble? And will it pay off?

The FCA say that the AI coin is in the air

Earlier this month, the FCA’s chief data officer, Jessica Rusu, gave a speech on AI regulation at a big financial services conference. In that speech, curiously titled ‘AI: flipping the coin in financial services’ (more here), she sent a clear signal to the market that the FCA wasn’t going to be generating new regulations around AI. Instead, the regulator would be relying on its big initiatives like the SMCR and the Consumer Duty to deliver the right outcomes to consumers.

Her speech contains a number of big statements, which I’ll outline below.


Given that the PRA, Bank of England and FCA are currently engaged in a consultation around how AI should be regulated, it appears to all intents and purposes that the FCA has already made up its mind. Rather than wait for the Feedback Statement due out before the end of this year, the FCA is saying that in relation to AI, existing regulations are doing just fine (more on that in a minute).

What we have then is a consultation process as window dressing. The regulator’s mind is already made up. Now, whatever your views on the adequacy of existing regulations in relation to AI, most people, when asked for their opinion, have some expectation that their input will be listened to. Otherwise, why should they bother?

I’m not surprised that the FCA doesn’t want to create new regulations for AI, but I am surprised that they embarked on a consultation having apparently already decided what will, or will not, be happening. Is this now how the FSMA's consultation requirements are to be met? If it is, then it's wrong to call it consultation.

Speed of Impact

By nailing its colours firmly to the mast of existing regulatory initiatives like the Senior Managers and Certification Regime (SMCR) and the Consumer Duty, the FCA is taking a risk. That risk lies in the speed with which both initiatives will have the impact intended.

Yet as I outlined here, we have only just seen the first enforcement action against an individual under the SMCR. Having been live for seven years now, the SMCR’s speed of impact would appear to be little better than glacial. There is a view, though, that this was the first of a pipeline of SMCR related cases. That may well be so, but seven years is too long for a regulatory pipeline to produce results.

What are the implications of this then for the Consumer Duty? Well, I’m concerned, especially in the context of a key risk to consumers from AI, which is bias. The FCA have positioned the Consumer Duty as the means by which the issues raised in Citizens Advice’s Ethnicity Penalty reports will be addressed. That may happen, but not within a timescale that Citizens Advice are going to find acceptable.

So, to use Rusu’s analogy, if the FCA has flipped a coin, it is around the speed of impact of the SMCR and Consumer Duty. From having tracked discrimination in insurance pricing and settlement for ten years now, I believe that coin will not land well for the regulator.

What insurers should therefore consider is the political impact from that coin not landing well. The risk to them lies chiefly in the regulator’s political masters being far less tolerant of levels and types of bias than the regulator itself. Expect this to land in around 15 to 21 months and the FCA to find itself backed into a difficult corner.

Drawing Lines

Jessica Rusu talked about the risks that AI can produce and the beneficial outcomes that regulatory initiatives like the SMCR and the Consumer Duty can produce. What she didn’t do is indicate how those two things will causally interact. We’re left to assume that this will just happen.

Now as some of you will have noticed from past articles, I’m not one for just accepting that things will happen because we want them to. What I like to see is some evidence of how such and such a risk is going to be addressed by this or that initiative. I like to see how cogs connect and how things happen when they turn. I’m not seeing that in this speech or from the FCA in general; not in terms of what is seen as the big risk around AI at the moment, which is discrimination.

Should I just trust the FCA on this? Well, I struggle there, chiefly because of how the whole issue of price walking was handled. A super complaint should not have been necessary; it happened because the regulator had not taken on board the immensity of what was happening and the outcomes it was generating.

Supporting Innovation

What stands out in Rusu’s speech is the regulator’s strong support for innovation. Now, some of you may be thinking along the lines of ‘what is the problem with that?’. And I agree – innovation can be great, and lots of it even greater. However, from my time studying innovation at post-graduate level, I know that innovation can also be problematic, sometimes disastrous. It is not a given that innovation per se will have the beneficial impact expected. Innovation needs to be both encouraged and managed.

It looks from this speech that the regulator wants as much innovation as possible, with the SMCR and Consumer Duty dealing with those innovations that are, well, let’s just say, less beneficial. The problem is that this then bears all the hallmarks of the financial crisis of 2007/8. Back then, the consumer was left to pick up the pieces from a market that was regulated with too little foresight and too much hindsight.

Dealing with problems when they arise is fine so long as the regulator is able to ‘get on top of them’ very quickly (seven years?). To be honest, there seem to be more cases of this not happening than of it happening. The regulator has not set out a convincing case that it has the wherewithal to act alongside the market, dealing with issues as they arise, rather than behind the market, picking up the debris.

Aside from it being a bad analogy, Rusu is, I think (and others agree), wrong to present this as a two sided coin. In the digital world, there’s a lot of greyness between the risks and the benefits. On the one hand, that greyness does give insurers the space to innovate. On the other hand, it’s a greyness in which they can at best become disorientated, at worst hopelessly lost. Referral fees are an example of the latter from more analogue times – they helped stoke the explosion in personal injury claims.

In the absence of a clear and firm regulatory steer, insurers must learn to navigate the world of AI with at least one hand on their ethical compass. And they need to make sure that they know how to actively use an ethical compass. The Irish regulator found that in their market, the compass being used was far too narrow. That’s what was behind their call for Irish insurers to take their data ethics ‘far beyond compliance’. It’s obvious but it needed to be stated and needs to be taken on board.

The other challenge from navigating that greyness is that not everyone will see it that way. In its two reports on the ethnicity penalty, Citizens Advice see little greyness in the impacts of discriminatory pricing. Their research tells them that the problem is pretty clear cut. What I believe they’re still waiting to see is how the sector is actually dealing with it and how much impact the Consumer Duty is having.

To Sum Up

Insurers can listen to speeches like that of Jessica Rusu and feel reassured. The EU route on AI is not for the UK, goes the thinking. The danger is that this provides a false sense of security, because those challenging the sector on the AI risks now being realised have the technical capabilities to evidence those concerns and the political connections to drive home their case.

Insurers need to prepare for this, both in terms of their risk radars and their digital strategies. The risks and benefits of this or that innovation are not going to be judged by them alone. The regulator may be backing away, but others aren’t.

If you’d like a more detailed analysis of AI and regulation in UK insurance, please get in touch here.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.