Data is the key challenge facing the regulation of UK financial services.
So said a representative of the Prudential Regulation Authority at a recent CSFI seminar, encompassing in his words both the prudential and conduct branches of oversight. And the seriousness of what he said was underlined by the evidence he presented. The recent financial crisis was caused in large part by the inability of banks to fit a raft of questionable financial products within the management systems that their boards relied upon. The transactional data underlying those products had become so complex that the language with which the firm normally managed its business had been left behind.
So what does this have to do with the direction in which the UK insurance market is travelling? Lots, and in some pretty fundamental ways.
Insurance underwriting is changing. In the past, it relied upon neat, well-structured concepts such as age, gender and address. That structured data is now being surrounded by increasing amounts of unstructured data, drawn from social media, credit scores and shopping preferences, and then linked by algorithms. Over the next five years, the underwriting of many lines of business will move from a reliance on predominantly structured data to a reliance on predominantly unstructured data.
In 2008, the banks lacked a common language with which to catalogue and control the deals they were doing, and look where it landed them. While Sainsbury's had a unique code for each of the thousands of different products it sold, the banks' products were "…obscured in packages that had misleading labels and incomplete provenance" and then siloed into systems that couldn't talk to each other.
In 2014, I fear that insurers may be no better positioned. Do they have a common language with which to oversee the myriad risk characteristics through which their underwriting algorithms are accumulating exposures for them? If not, could confidence in the insurance market end up sharing the fate of confidence in banking? This may look like a data question, but it has trust at its heart.
If insurers don’t get this right, some scary consequences could result. Here are five:
- boards won’t be able to hold management to account for the way in which big data and algorithms have been weighing up risks and accumulating exposures within the business;
- investors won’t be sure of the underlying value of the assets and liabilities that their firm is holding;
- regulators may find it difficult to accept an insurer's Solvency II calculations if they're unable to test data provenance and accuracy;
- boards will find it difficult to approve mergers if both firms lack a common language for one of their most prized assets: their underwriting and claims data;
- reinsurers may become highly cautious and picky about the exposures being ceded to them if they are unable to reliably accumulate the underlying data.
The insurance sector is nearing an ontological crossroads: adopt a language for your data so that it can be aggregated, analysed and monitored in real time, without the need for human intervention, or find that your data is not just big, but resembles a dinosaur as well.
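To make the idea of a common data language concrete, here is a minimal sketch of how agreed risk codes would let exposures from separate books be accumulated mechanically, with no human translation between silos. The books, field names and codes (e.g. "FLOOD-UK-SE") are invented for illustration; a real market would need to agree such a vocabulary.

```python
from collections import defaultdict

# Hypothetical records from two underwriting silos. Because both use the
# same agreed risk codes, a machine can aggregate them without intervention.
motor_book = [
    {"risk_code": "FLOOD-UK-SE", "exposure": 1_200_000},
    {"risk_code": "THEFT-UK-LON", "exposure": 450_000},
]
property_book = [
    {"risk_code": "FLOOD-UK-SE", "exposure": 3_100_000},
    {"risk_code": "SUBSIDENCE-UK-SW", "exposure": 800_000},
]

def accumulate(*books):
    """Sum exposure per shared risk code across all books."""
    totals = defaultdict(int)
    for book in books:
        for record in book:
            totals[record["risk_code"]] += record["exposure"]
    return dict(totals)

totals = accumulate(motor_book, property_book)
print(totals["FLOOD-UK-SE"])  # combined flood exposure: 4300000
```

Without the shared codes — if one book wrote "flood, south-east" and the other "SE flood risk" — the same accumulation would require exactly the kind of manual reconciliation that left the banks' boards in the dark.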