A key question for insurers over the next 5 years will be how regulators handle the issues raised by their increasing use of algorithms. The message being signalled in a recent paper by the Digital Regulation Cooperation Forum is clear. And the position they’re taking will be an uncomfortable one.
It will be uncomfortable for the regulators, not the insurers. Sitting on the fence for any period of time is bound to be. The DRCF’s ‘The Benefits and Harms of Algorithms’ may provide a good overview of what it says on the tin, but for insurers, it signals little more than ‘we’re not sure what to do next – please tell us’.
Given how much work has already been done on algorithmic benefits and harms in relation to the US insurance market, and in other significant insurance markets around the world, the slow and cautious approach being taken by the four regulatory members of the DRCF (ICO, CMA, Ofcom and FCA) is more than surprising.
Their report says very clearly on its cover that it is intended to foster debate, yet that is a debate which has been going on for several years now. Is it the case, then, that they only now feel able to join this debate by working together, rather than addressing it within their individual specialisms?
Take the FCA. Back in 2019, it had data ethics as a cross-sector priority, and then went quiet on it. Sure, data ethics is complex, but people have been researching it for at least a decade. There’s plenty of material out there for the FCA to have started working on. And given their partnership of some seven years with the Alan Turing Institute, they weren’t short of accessible expertise.
Do four regulators working together make the complexity of data ethics more manageable? It might, to a degree, but I don’t see it as any sort of magic solution. It may potentially take out as much as it adds in.
Lots of people see bias in algorithms as a significant issue needing to be addressed. Yet the regulator strikingly absent from the DRCF is the Equality and Human Rights Commission. Without its involvement, I fail to see how the DRCF can make progress on bias. And without bias being addressed, the validity of what the DRCF does actually address will be open to question.
One danger that could well emerge is that this initiative of the DRCF might be used by one or other of its individual members to delay any actions of their own. So, for example, the FCA might dilute or hedge its response to Citizens Advice’s ethnicity penalty report, citing the debate being initiated by the DRCF. That would be a mistake, given a) the report’s recommendations were pointed very specifically at the FCA, and b) the EHRC is not part of the DRCF.
More Fence Sitting?
The problem with seeking in 2022 to foster debate and discussion on algorithmic benefits and harms is that, to a large extent, the debates have already matured: positions have been taken, opinions have been firmed up. So in calling for input and opinion while sitting on the fence, the DRCF is very likely to find itself getting feedback that reflects the two sides of that fence. Nothing wrong with that, you might think, other than its likely outcome: that the DRCF’s members will feel the best place for them is to continue sitting on that fence. The Grand Old Duke of York comes to mind.
The missed opportunity that the DRCF report represents is a clear position that the benefits of algorithms must not be gained at the price of harms under equalities and data protection legislation. In other words, less talk about regulatory sandboxes and more clarity around regulatory lines in the sand. There was little to no sense of this in their report.
I said in late April 2022 that insurers should not expect much from the DRCF just now, but instead should look at what they’re producing at a circa six month horizon. I still stand by this, but suspect now that towards the end of the year, they may well be wrapped up in defending their slow and circumspect approach.
Insurers might be tempted to think “fine, let’s wait until something specific happens”. And while waiting for clarity is not a bad thing, the more ethical approach is to start framing what harms might be associated with their own particular use of algorithms. In other words, start populating their risk radars with the harms associated with algorithms and weighing up their impact on those all-important digital strategies.
If, as appears to be happening, consumer concerns are ahead of regulatory reactions, the sector needs to track and be responsive to both. Keeping their eyes only on the regulator could well result in insurers tripping over the consumer issues being pushed in front of them just now.