May 17, 2022

Insurers are on a learning curve for Responsible Innovation

The Geneva Association held an online conference earlier this month on “New Technologies and Data”. One session gave some interesting insight into sector thinking on responsible innovation. There were some good points, but also blind spots.

Learning to innovate responsibly needs more than just a lightbulb moment

The session was called “Using Data Responsibly for Innovation – how can insurers strike the right balance?” You can see the speakers here and listen to the session in full here, starting at 1:57:00.

Here’s how the session was presented in the agenda:

“New possibilities in data collection, analysis and usage are allowing insurers to offer risk prevention and mitigation services as well as adjusted premiums to their customers based on more granular risk and behavioural profiles. However, this is also bringing up questions about the responsible use of data in insurance. This panel will discuss how insurers can continue to foster innovation without compromising customer trust by using data fairly and transparently.”

The importance of the relationship between innovation and ethics is now widely recognised. Yet that relationship is often thought of as acting in one direction only, as in this GA session. In other words, how you do innovation ethically. There’s another way of looking at it though – how does ethics contribute to an innovative culture in a firm? It’s now recognised (but not always talked about) that the nature of a firm’s ethical culture can add to, or subtract from, their success at innovation (more here).

This conference looked only in the ‘how do you innovate ethically/responsibly’ direction. Sure, there’s only so much you can put into an agenda covering a big overall topic, but it’s worth noting that there is more to innovating responsibly than most people think.

So what did the speakers address? A lot of what they covered in relation to the responsible use of data was orientated around privacy. There were occasional references to fairness, but these tended to be in relation to machine learning fairness. This reflects the widespread tendency to think much more about the fairness of the system (such as ML), and much less about the fairness of the structure which the system contributes to and sustains (more here).

It was also fairness as determined by insurers, seen largely in terms of educating and incentivising consumers on behavioural change so that they would pay a fairer price (their words, not mine). The other side of the coin is that the fairness of a product or service is judged by the user, not the producer. The difference is significant, given the conflicts of interest and power relations involved.

Governance of Sorts

The speakers were in broad agreement on the need for artificial intelligence governance frameworks, so that insurers could track and assess for themselves the data and analytics being put to use. Yet at the same time, it was said that the sector wouldn’t regulate itself. On the face of it, this seems rather contradictory – we want to self-assess but won’t regulate ourselves. Dig a little deeper however and what this means is that insurers want to self-assess but not to the extent that regulations would expect.

This is a common pattern in insurance. For example, with counter fraud, insurers work to their own definitions of ‘fraud’ and ‘proven’, not those of the courts. Some will ask whether this type of gap matters. It does, because it shapes the narratives around fairness, bias and fraud, and such narratives can influence decision makers.

The speakers called for more collaboration between the sector and academics. This was particularly so in relation to bias, where the call was for more data to allow greater insight into the factors that influence it. It was felt that such research as has been produced so far is just the tip of the iceberg.

Is more collaboration the answer? Some thought so, although interestingly, it was collaboration in terms of sharing data with other insurers, not in terms of dialogue with consumers.

Bias was referred to as being addressed through GDPR, which is an unusual thing to say of data protection legislation. Again, this reflects the tendency for insurers to see data almost exclusively through a privacy lens.

So overall the conference addressed many of the key points about innovation and responsibility, but often in terms of one side of a complex coin. It was as if they recognised what issues mattered, but not really why they mattered.

What Insurers Should Know

I want to end by referencing the moderator’s concluding comment that “we don’t know what we don’t know”. That’s an important acknowledgment, for it signalled that opportunities to learn are still out there. Indeed they are, and the good news is that there are academics and consumer groups waiting to work with the sector on this.

The places to start learning are pretty clear – the fairness of structures and outcomes, the perspective of consumers, data issues beyond privacy, better governance and wider collaboration. That should be enough to get working on!

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.