The Insurance Fraud Bureau (IFB) is a market-level organisation owned by UK insurers to address the sector’s fraud needs. It has recently published a strategy for 2023-25 that is full of data and analytical ambitions. It’s noteworthy on three counts, which I’ll outline just below, with the problems this creates a little later.
Firstly, it shows just how much the IFB has developed since its foundation in 2006. It now has a “360 degree view of the industry’s transactional data set”. In essence, that means it sees all the quotation, underwriting and claims data of its members (most of the sector) across their product lines (a growing number).
That allows it to identify networks of organised crime that no insurer could detect in isolation. At the same time though, it also means the IFB is handling vast amounts of data covering virtually the entire UK adult population. That said, its plan to “analyse billions of claims each year” is rather over-ambitious, given that there aren’t anywhere near that number of claims in the UK, even over the seventeen years since its foundation.
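To see why “billions” looks implausible, a back-of-envelope check helps. The claims-per-year figure below is an illustrative order-of-magnitude assumption of mine, not a statistic from the IFB or from this article; UK general insurance claims are counted in the millions per year, not billions.

```python
# Back-of-envelope check on the claim "analyse billions of claims each year".
# ASSUMPTION: claims_per_year_assumed is an illustrative order-of-magnitude
# figure, not a sourced statistic.
claims_per_year_assumed = 10_000_000   # hypothetical: millions, not billions
years_since_foundation = 17            # IFB founded 2006; strategy dated 2023

total_claims_assumed = claims_per_year_assumed * years_since_foundation
print(f"Assumed claims over {years_since_foundation} years: {total_claims_assumed:,}")
```

Even on that cumulative basis, the total sits well short of one billion, let alone billions each year.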
Secondly, the strategy emphasises the IFB’s ambitions around application fraud, through its IFiHub and IFB Exploration products. The positioning of both at the heart of the IFB strategy means that we will all, from the minute we start using a quote engine, be scored for fraud risk. This is where all the energy over the next three years is being focussed.
And thirdly, the more data you gather together, the more sophisticated the analytics you need to manage it and draw insight from it. This is all very expensive, so it is no surprise to find the IFB talking about automated decision making deployed in real-time situations. It’s one way of enhancing the value side of its membership levy review.
A Different Threat Landscape
The IFB likes to talk about the ‘threat landscape’: in other words, the big picture of where fraud is being tracked, by product line, volume and value. This is of course the external threat landscape. No mention is made in the IFB strategy of how it is going to handle the internal threat landscape, which is pretty significant. Here are three ethics-related issues that I think the IFB needs to watch on its internal threat landscape.
As I’ve explained on several occasions in the past, I have significant concerns about organisational accountability at the IFB. Weigh up these two things. On the one hand, the IFB is handling vast amounts of data about virtually all the UK adult population and positioning its unregulated services at the heart of one of the UK’s most regulated sectors. On the other hand, it is directed and managed exclusively by insurance people, with limited reporting about how it does this.
This means that all sorts of important decisions about how data is collected and analysed, and what conclusions its systems draw from it, are not subject to any form of external scrutiny. One might go so far as to say that the IFB is able to act as police, judge and jury, not just on cases of detected insurance fraud, but on suspected fraud too.
Take the volume of suspected motor application fraud, for example (more here). It was reported to have increased from 260,660 cases in 2018 to 631,631 cases in 2019, a rise of more than 140%. No explanation has been given for this immense increase, yet it raises all sorts of questions.
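The scale of that jump can be checked quickly, using only the two figures quoted above:

```python
# Suspected motor application fraud cases, as reported (figures quoted above)
cases_2018 = 260_660
cases_2019 = 631_631

absolute_increase = cases_2019 - cases_2018
relative_increase_pct = absolute_increase / cases_2018 * 100

print(f"Absolute increase: {absolute_increase:,} cases")
print(f"Relative increase: {relative_increase_pct:.0f}%")
```

That is an increase of roughly 142% in a single year, which is exactly the sort of step change that ought to come with an explanation attached.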
The biggest risks on the IFB’s internal threat landscape come from what I believe to be poor accountability within the organisation. I understand that some people have urged the IFB to make improvements, but have received little response.
Exemptions within Exemptions
A huge amount of what the IFB does is dependent on an exemption in the Data Protection Act 2018, relating to the prevention and detection of crime. Without that exemption, all those flows of data between the IFB and its members would be impossible.
The exemption means that the IFB does not have to comply with many of the DPA’s key provisions, such as the right to be informed and the purpose limitation. So what’s the issue here then? Surely this is all very necessary for insurers to effectively tackle insurance fraud.
And it is, absolutely. Except that the exemption is itself subject to an exemption, which seems to be very significant. Here’s what the Information Commissioner’s Office says about it…
“It exempts you from the UK GDPR’s provisions on the right to be informed; all the other individual rights, except rights related to automated individual decision-making including profiling;…”
In other words, the prevention and detection of crime exemption does not apply in relation to the DPA’s provisions on automated individual decision-making and profiling. Now, I’m not a legal expert, but that seems to point to the IFB’s ambitions on application fraud, using automated, real-time systems, facing a pretty significant privacy problem.
To me, application fraud measures at the time of quote are all about profiling. And to be effective in real time, they must be automated, exactly as the IFB intimate.
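The exemption-within-an-exemption logic above can be sketched as a simple rule. This is a simplified model of my own reading of the ICO wording, not legal advice, and the provision labels are names I’ve made up for illustration:

```python
# Simplified model of the exemption-within-an-exemption described above.
# The provision labels are illustrative, not statutory references.
CARVED_OUT = {"automated_individual_decision_making", "profiling"}

def crime_exemption_lifts(provision: str) -> bool:
    """True if the crime prevention exemption disapplies this provision."""
    return provision not in CARVED_OUT

print(crime_exemption_lifts("right_to_be_informed"))  # lifted by the exemption
print(crime_exemption_lifts("profiling"))             # NOT lifted: still applies
```

On that reading, the one place the exemption gives no cover is precisely where the IFB’s application fraud ambitions sit.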
Sure, the IFB has on its website output from a CII New Generation group on counter fraud data sharing, published in 2013 and updated in 2018. Yet it contains no mention of automated decision making or profiling, nor does it talk about application fraud. As a ‘best practice guide’, it seems to have gone well past its ‘best by’ date.
Lifting the Lid
The problem with black box arrangements is that they don’t always stay that way. In the US, the country’s biggest homeowners insurer is being taken to court on allegations of discrimination in claims and counter fraud. One of the leading providers of counter fraud analytics has been included in the action, which is seeking class action status. Should this happen, it will be like a crowbar being used to lift the lid on how counter fraud decision systems work.
As I noted in this article about the US legal case, the message it sends out to insurers and their partners is ‘do not rely on people not knowing’. This is especially true in the context of insurers no longer being the sole holders of data about ‘insurance related events’. Consumer groups have moved very firmly into that space as well.
In this article back in October 2022, I described the organisation of counter fraud in the UK market as too much like a house of cards. As I said then, I believe that the nudge that could trigger a challenge is already in play. I expect to see it emerge more clearly in 2023.
I then asked: what might a typical insurer do? The three things I outlined are worth repeating here…
- frame your own expectations of what you need from how market level counter fraud operations are organised. Look to your own organisation as an example, or to what the investor arms of large insurers expect of the firms they put your money into. How similar are those expectations?
- look at how your enterprise risk assessment has weighed up the reputational risks associated with a serious challenge to counter fraud accountability. Has it been considered, and has it been undertaken with the right level of independence and challenge?
- those two earlier points should give you enough to evaluate whether some form of audit needs to be carried out. Whether you carry one out or not, set down your reasoning and workings for this decision, for you will be asked about it at some point over the next year or two.