Is big data steering insurance towards a cliff or a superhighway?

It was great to see the panel I was on last week, debating the above question, attract a full house at Europe’s main conference on computers, privacy and data protection (CPDP). And it’s clear that the tensions that big data is creating for insurance are being thought through in both academic and policy-making circles. That’s good for insurance: revolutionary changes like this should attract debate.

The CPDP conference also debated two topics that should be on any ‘tuned-in’ insurer’s radar: the ethics of artificial intelligence, and the ethics of automated emotional surveillance. If the latter puzzled you, then think of how Admiral made the news recently (more here). I’ll be blogging on both of these topics soon.

In the meantime, I thought you might like to read my short speech (each panellist is allowed 8 minutes max) in the ‘big data and insurance’ debate. It’s also on YouTube.

Insurance is a market that has always relied on data, so the advent of this thing called big data was always going to interest insurers. And it’s produced some interesting outputs. When I first started working in insurance, there was one premium rate charged for flood risk across the whole of the UK. Now, data is being used to measure flood risk on a house-by-house basis. And insurers are moving beyond medical reports for life cover, to contemplating the use of facial recognition software as a quick and accurate way of assessing you for life and health cover.

The result is that big data is allowing insurers to personalise a policy and a premium just for you, built around who you are and what you might become. And the message coming out of the insurance market is that this personalisation is fair, for you’ll no longer have to pick up the cost of all those claims made by your accident-prone neighbours.

Yet the flip side of this drive towards personalisation is that the idea of insurance as a device for smoothing risk, for the sharing of risk, is being lost. In the past, each of us relied on insurance to smooth the cost of our exposure to risk, paying relatively steady premiums as a result. Personalisation, however, has the effect of increasing price volatility.

And so we have the rather bizarre situation in which, on the one hand, big data provides insurers with vast amounts of information about our lives, our interests and our emotions, and on the other hand, this then increases the price volatility of the insurance on offer.

And this will bring with it, for many people less well off than ourselves, a series of financial micro-crises, as they struggle to pay hugely increased premiums. And I’ve seen this happen, when the insurance renewal arrives and the policyholder finds that the premium has gone up from €500 to €5,000 and their self-funded excess from €250 to €10,000. And the policyholder hasn’t made any claims, nor has their risk changed in any way. It’s just that the insurer has acquired new data about their type of risk.

And insurers feel obliged to act in this way. There’s an insurance principle called adverse selection, and what it says is that if an insurer acquires data about one subset of policyholders that gives it new insight into the risk that subset presents, then the insurer is duty-bound to use that data to price that subset of risk differentially. You can think of adverse selection as providing the intellectual spark to all that ‘big data fuel’.

Take life and health insurance as an example. The most influential insurer in the European life insurance market has been talking about a fast-approaching time when life and health cover will no longer be offered unless you agree to wear a health-tracking device on your wrist. No wearable = no life insurance.

What this represents is an insurance market seeking unconditional access to personal data in return for offering insurance cover. OK, you may say, insurers can’t be interested in that many things about me. Well, you’d be surprised! They’re interested in what you shop for, what you eat, how you drive round corners, even how many exclamation marks you use on Facebook.

So how do regulators keep up with this revolution in insurance? The General Directive gives the regulator access to the premises and data equipment of insurers, yet to be honest, that would still only leave the regulator chasing the insurance industry from behind, feeding off any tell-tale signs of misconduct.

A few years ago, regulators in the United States experienced just this problem when an insurer confirmed that its underwriting model was now so big and complex that it was having difficulty providing the required rating report. As a result, some state insurance regulators are now preparing for direct and ongoing digital access into insurers’ rating models.

What this represents is a first step towards the regulator moving from behind the market to being right up there alongside it, dealing with ethical concerns as they arise, before they escalate. This is a step towards Panoptic Regulation. Just as insurers are seeking unconditional access to consumers’ personal data, so the regulator could seek unconditional access to all of the insurers’ data.

If you think of the Panopticon, what this represents is a tower within the tower. And it would be quite easy to put a team of behavioural scientists and algorithm builders into that inner tower, giving you the ability to monitor the insurance market in real time, for equal treatment, for fairness, for non-discrimination, as it enters one of its most transformative periods.

It also tackles one of the biggest concerns of financial regulators watching out for the next financial crisis – that of access to sufficient data about what is happening now, as opposed to last year. And you never know, Panoptic Regulation might even cause insurers themselves to become more robust in tackling the seeds of misconduct before they put down roots.

So, to sum up, I think that insurance is veering towards a cliff, but there is still time for it to regain the superhighway. Insurance is too important an industry for us to lose.