Sep 4, 2019

The radical new approach to tackling insurance mis-selling

Mis-selling has been a perennial problem for the insurance market. And over the years, it’s affected all lines of business: personal, commercial, life and pensions. So tackling it will always be a priority for any supervisory authority. In the last couple of years, the UK regulator’s approach has changed radically. Intermediaries and representatives should prepare for a new regulatory experience.

In tracking the broad landscape of mis-selling, it’s become clear that the UK’s Financial Conduct Authority has been experimenting on a number of levels with algorithmic techniques to predict the probability and location of an adviser mis-selling financial products. And the keyword here is ‘predict’.

This is the supervisor exploring ways to move from being behind the market to being not just alongside it, but ahead of it: searching for problems as they arise and addressing them before they take root and spread. And finding the best way to do so at increasingly granular levels.

So where is this heading? Is it the advent of predictive supervision in financial services?

Well, yes and no. It is all about prediction, but prediction is a two-sided coin. Yes, it can be used to pinpoint the baddies and hold them to account. Yet is that ultimately what the regulator wants? Their strategic objective of securing market confidence will forever be a struggle if it focuses on just the bad guys.

Changing Bad Habits

The side of this predictive coin that the regulator is more likely to adopt is the one that seeks to change those bad habits. In other words, to show advisers and intermediaries that mis-selling is simply not worth going anywhere near.

Their approach will involve using these new predictive powers to identify pockets of significant mis-selling risk, and then signalling to agents in those pockets a variety of messages about the right standards to uphold. This would then be followed up with targeted investigations and their associated penalties. The aim is to make agents in those pockets think it might just be them next.

Some of you might find that the word ‘stifling’ comes to mind. Will this involve the regulator forever looking over the market’s shoulder, checklist in hand, rulebook ready to be thrown? It shouldn’t, so long as the algorithmic techniques being used are good enough to confine that stifling effect to pockets of high mis-selling risk.

Those pockets of high mis-selling risk have been much researched by the regulator over the years. So if those algorithms are well trained and tested on such research, the stifling risk should be manageable, especially if the regulator’s internal monitoring of its deployment of algorithmic techniques is up to scratch.

Monitoring the Regulator

So, is that monitoring up to scratch? I don’t know, and to be honest, if there’s one risk of this new regulatory approach to mis-selling going pear-shaped, then this is it.

The pool of data science talent working in regulation is small: highly skilled but busy, engaged in the very projects that need monitoring. Who, then, might provide them with independent, informed challenge? Luckily for the FCA, that challenge is close by, in the form of the Alan Turing Institute (ATI), with which the FCA has had a business partnership for some three years now (recently renewed, too). The ATI is full of people with the knowledge to act as monitors, as critical friends.

Let’s turn now to the practicalities of predicting mis-selling. It relies on a combination of quite different datasets. After all, mis-selling is an outcome of a misalignment of premium, claims, cover and buyer. This means that a regulator’s algorithmic techniques need to be trained on each of those datasets, individually and in combination.

Premium data will be a mix of all sorts of gross and net varieties, reflecting the services provided and the negotiating power within the distribution chain. Claims data could be short tail or long tail, depending on the line of business. Cover data has to be drawn from the analysis of machine-readable policies. And buyers need to be profiled, in order to establish their capacity, risk appetite and vulnerability. That’s a lot to pull together.
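To make that concrete, here’s a minimal sketch of what pulling those datasets together might look like: joining them on a shared policy identifier, engineering a handful of features, and training a model to output a probability of mis-selling. Every file name, column and feature below is a hypothetical illustration, and the model choice is mine; the FCA has not published the details of its techniques.

```python
# Hypothetical sketch: combine premium, claims, cover and buyer data into
# one training table and fit a probability-of-mis-selling model.
# All names and features here are illustrative assumptions, not the
# regulator's actual data or methods.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Four hypothetical datasets, each keyed by a shared policy identifier
premiums = pd.read_csv("premiums.csv")   # gross/net premium, commission
claims = pd.read_csv("claims.csv")       # claim counts and paid amounts
cover = pd.read_csv("cover.csv")         # features from policy wordings
buyers = pd.read_csv("buyers.csv")       # capacity, appetite, vulnerability

# Mis-selling is framed as a misalignment across premium, claims, cover
# and buyer, so the model needs all four datasets joined together
df = (premiums
      .merge(claims, on="policy_id")
      .merge(cover, on="policy_id")
      .merge(buyers, on="policy_id"))

features = ["net_to_gross_ratio", "claims_frequency",
            "cover_breadth_score", "buyer_vulnerability_score"]
X = df[features]
y = df["mis_sold"]  # label assumed to come from past investigations

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Predicted probability of mis-selling for each hold-out policy
df.loc[X_test.index, "p_mis_sold"] = model.predict_proba(X_test)[:, 1]
```

Aggregating those predicted probabilities by adviser, firm or distribution channel is what would turn policy-level scores into the ‘pockets’ of risk described earlier.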

Scope or Depth?

Let’s jump ahead a little and assume that issues like these have been sorted. Again, where’s this heading? Towards greater depth, or greater scope? I’m certain it will be towards greater scope. After all, distribution in a digitising market is only ever going to become more complex. To oversee that effectively, the regulator needs to follow all those tendrils, if only to then trace them back to a named individual on a manufacturer’s responsibility map. You only have to track the issues with appointed representatives to see a harbinger of all this.

Let’s end with one important consideration that the regulator needs to come clean about, sooner rather than later. All of its investment in algorithmic techniques will count for little unless they are deployed within a clear set of principles. After all, why should the regulator not expect of itself what it expects of others? Such principles should reflect the issues often uniquely associated with artificial intelligence. And finally, they should be shared with the public, open to the scrutiny that allows the wheat to be separated from the chaff.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.