Last week, I said that insurers should not expect much from the DRCF just now, but should instead look at what they'll be producing over a circa six-month horizon. And I was pretty much spot on. Firstly, their work plan for 2022/23 may be new, but it has very similar goals and priorities to their previous one. And secondly, their two algorithm papers are calls for input, with 8th June as the deadline. This points to more from the DRCF in the autumn.
Benefits and Harms
I will be reviewing the two algorithm papers in some detail for the next newsletter, but for the moment, here are the key points in the ‘benefits and harms of algorithms’ paper:
- algorithms offer many benefits to individuals and society, and these benefits can increase with continued responsible innovation;
- harms can occur both intentionally and inadvertently;
- those procuring and/or using algorithms often know little about their origins and limitations;
- there is a lack of visibility and transparency in algorithmic processing, which can undermine accountability;
- a “human in the loop” is not a fool-proof safeguard against harms;
- there are limitations to DRCF members’ current understanding of the risks associated with algorithmic processing.
And here are the key points covered in the ‘auditing algorithms’ paper:
- some core background to understand the key issues associated with algorithmic audit and in particular the role regulators could play;
- the different types of audit and the potential outcomes from an audit;
- the existing landscape for audit, covering the parties involved in the audit ecosystem and the limitations and issues identified with the current landscape;
- the potential shape of a future landscape for algorithmic audit, including the potential role of a market for third party audits;
- some hypotheses the DRCF has developed in the course of researching this discussion paper, on which it invites stakeholder feedback.
Here’s a key paragraph from the conclusions section of the ‘auditing algorithms’ paper:
There is a potential role for regulators including DRCF members in supporting the healthy development of the auditing landscape, which is likely to need to bring together different tools and approaches to surface harms and enable responsible innovation. For some regulators this might include stating when audits should happen; establishing standards and best practices; acting as an enabler for better audits; ensuring action is taken to address harms identified in an audit; and identifying and tackling misleading claims about what algorithmic systems can do. Industry will likely have an important role to play as the landscape develops, in some cases through self-governance to complement any regulator activity, and potentially through working together with regulators to ensure that the external audit ecosystem can deliver effective, relevant audits where these are required.
Scope and Culture
While that sounds pretty much as we would expect, it does raise the question of scope. Who exactly has responsibility for these algorithms, such that they are then accountable for the outputs those algorithms produce? And are those firms regulated by the FCA or not?
Because the FCA is a vertical regulator, there will be firms running algorithms for the insurance sector that do not fall within its regulations. However, the DRCF members include some powerful horizontal regulators, so such firms will be picked up by them instead.
End of problem? Not at all, for the nature of FCA regulations can be quite different to the regulations of the horizontal members of the DRCF. The CMA is interested in competition, and the ICO in data protection. Both are important of course, but far from covering the ethical issues that algorithms can throw up. So a key question I'll explore in the next newsletter is just how equipped the FCA is to pick up those other issues. Current signs are not that positive.