Is the FCA drifting off course on data and analytics? A recent speech seems to point that way. For insurers, the ‘false sense of security’ this creates matters less in the short term than in the mid to long term, because it introduces a systemic risk into the digital transformation happening in insurance.
The speech was by Jessica Rusu, the FCA’s Chief Data, Information and Intelligence Officer, and took place at the Alan Turing Institute’s FAIR conference. FAIR is the ATI’s framework for the responsible adoption of artificial intelligence in the financial services industry. So, a natural platform for sharing regulatory intentions.
What struck me most about Jessica Rusu’s speech last week was just how comprehensively it focussed on the firms the FCA regulates. Very little mention was made of civil society and consumer groups, the people on whose behalf the regulator regulates.
What worries me is that the three regulators, the Bank of England, PRA and FCA, often show a strong preference for engaging with regulated firms and their advisers, and a general reluctance to engage with civil society and consumer groups. Of course, the regulator and the regulated need to engage with each other to build a strong working relationship. All too often, however, the signs are that this is the relationship they much prefer to foster, the one they are far more comfortable with.
Why this matters
Does this matter? Surely a regulator who listens more to insurers than to consumer groups is, well, good for the sector? It may feel like that in the short term, but in the mid to long term, it is not in insurers’ best interests. The pricing super-complaint is a classic example. The regulator had held the relevant data for two and a half years before a civil society group forced it to act. A repeat of challenges like that is not something the sector can afford. The issues don’t go away.
In Rusu’s speech, there’s regular mention of the diversity and inclusion strategies being adopted across the sector, by both regulated and regulator. These are important of course, but so is the evidence of questionable outcomes found in Citizens Advice’s ethnicity report, about which the regulator has said next to nothing. D&I strategies within insurers do help build cultures that make discriminatory pricing less likely, but they do little to address the ‘here and now’ problem evidenced in the ethnicity report.
The FCA have on occasion said that their hands are tied by the remit given to them by the Financial Services and Markets Act. “We don’t do ethics” became a famous quote from an early CEO of the regulator, and it’s a line they haven’t departed much from since. So they seem to struggle with fairness and discrimination as two fundamentally ethical issues, preferring instead to encourage firms to build the cultures that reduce the risk of unfair and biased outcomes being experienced.
This is a strange position for a conduct regulator to place itself in: push culture and accountability, but stand back from the outcomes (like unfairness and bias) of cultures that have failed in some way; support the responsible use of data and analytics, but stand back from data ethics.
It feels like the regulator has staked out the territory (‘we regulate culture, with principles on fairness and honesty’), but doesn’t actively manage all of it. Little then happens (regulatorily speaking) in those ‘unmanaged’ parts, except that civil society groups still find issues with what some regulated firms are doing.
Just one of these things?
Is this a remit thing? Only to a degree, I think, and a minor one at that. It is more of a culture thing, shaped by how issues like fairness are interpreted and reinforced by the ‘market mindsets’ recruited into the regulator.
Is this just an ‘artificial intelligence thing’? After all, consumer groups seem (in comparison with insurers and software firms) relatively weak on technology, so surely it’s natural for the regulator’s tech people to focus on firms? In my opinion, that would be a significant under-estimation, one capable of introducing a systemic risk into the heart of the sector’s transformation. If you study why some technological transformations fail and others succeed, as I did at post-graduate level, who you listened to (and who you did not) are classic ingredients.
The sector and its regulator should have a positive working relationship, but if it becomes too cosy, which I believe it has, then in the mid to long term that is bad for insurers. The regulator needs to build stronger and more positive relationships with consumer groups and civil society. Input from both sides leads to more sustainable markets.
What can insurers do about this? These two fairly straightforward steps come to mind…
Firstly, the firm’s risk management policy and procedures need to pick up both ethics and compliance issues. Conduct issues do not equate to ‘what the regulator wants from us’. What I refer to above as the part unmanaged by the regulator needs to be managed by the firm. There could well be several years of only partially addressed issues there.
And secondly, the firm’s public policy people need to review who they are listening to, about which issues and on what terms. If the regulator has, to put it one way, ‘one ear that’s better than the other’, the firm needs to take steps to compensate for that.