Insurers want to share claims data in order to improve the accuracy of their new analytics systems. It sounds quite straightforward, but lift the lid and it’s a proposition with several ethical implications.
Claims teams are spending a lot of time and money introducing various artificial intelligence tools into claims processes, in anticipation of benefits such as faster and more accurate processing, a more responsive service and reduced costs. And AI techniques like machine learning do offer opportunities, but before those benefits can be delivered, the tools need to be trained first.
Such training involves feeding those AI tools with lots and lots of data. The more data they’re trained on, the greater the accuracy they achieve. And the greater the accuracy, the faster and more cost effective they become. It’s a virtuous circle. Yet turn that ‘more = better’ lens around and what you see through it is a series of unintended consequences.
Hold on though – doesn’t the insurance sector already share claims data through a series of joint initiatives? There are indeed several of them, organised in the UK primarily through the Motor Insurers’ Bureau. Yet the rationale behind setting those up was the combating of fraud. That’s quite different to sharing data in order to improve the effectiveness of individual insurers’ processing systems.
Achieving critical mass in useful and relevant data is not something new to insurance. The last hundred years have, however, seen initiatives to coordinate data often mutate into centralised analysis and prescribed rules. The end result looked more like collusion and monopoly than efficiency and progress. It would be ironic if this new era of big data and artificial intelligence were to tempt the sector in that direction again.
Drawing insight from claims data has always been a core skill in insurance, and insurers have in the past hired people with the knowledge, skills and experience that they felt were right for the judgements their firm wanted to be known for. It was part of how the market worked, and so long as risk exists, those judgements, in the form of either human or artificial intelligence, will remain a key attribute and differentiator.
And over time, some insurers gained a reputation for being fair on claims, and others a reputation for, well, not being so fair. The former became more trusted and earned more business as a result. Such levels of trust have been, and always will be, a key part of how markets like insurance work.
Sharing claims data for the reasons being discussed would undermine market competitiveness. It would introduce barriers to entry and homogenise how the market thinks about what many insurers tell me is their raison d’être – the paying of claims.
Another consequence of such efforts to share claims data is commonly referred to as ‘mission creep’. You start out using the shared data to tackle fraud, and then to train your AI tools for better claims decisions. Yet the decisions you make from claims data are not just claims ones. Insurers draw underwriting and marketing insight from claims data as well.
So that shared claims database will almost certainly be used for purposes much wider than claims. And at that point, the risk of the ‘sky falling in’ on the market leaps, as the South Korean market experienced in 2012.
Insurers need to calibrate their ambitions for artificial intelligence to the scale of their resources and capabilities. And they need to shape those ambitions within the boundaries of accountability and trust. After all, they can’t on the one hand talk about it being harder to sustain trust in a digitised market, and on the other hand, fail to give sufficient attention to the ethical levers of trust.