The insurance industry is changing. New technologies and new business models are presenting strategic opportunities that firms are pursuing with enthusiasm. Yet this reshaping also presents new challenges to the accountability of insurance firms.
Every insurer in the UK has its accountability lens focused on the Senior Managers and Certification Regime (SMCR) at the moment. That’s important, but leaders need to focus just as much on the bigger picture. The danger is that a firm becomes preoccupied with the shape of individual trees while the forest starts to smoulder. In this post, I will give shape to the accountability challenge that insurance executives need to build into their broad, strategic vision for their firm.
This accountability challenge matters for two reasons. Firstly, it is shaping the expectations of how firms should conduct their business. And secondly, how well those expectations are met then determines the trust that consumers have in firms. This is what Professor Luciano Floridi of Oxford University means when he refers to ethics coming before the rules, during the rules and after the rules.
The context for this accountability challenge is the data transformation of the insurance market. As insurance leaders reshape their business models around tools such as artificial intelligence and data lakes, so they need to understand the implications that this reshaping has for the accountability of their firm. I’m going to break this accountability challenge down into seven components, drawing on Professor Floridi’s work on the ethics of information.
- Accountability blindness. It’s very likely that artificial intelligence (AI) will produce consumer detriment that is not picked up by the firm. It will therefore continue unrecognised and unaddressed, yet remain significant to those experiencing it. Such accountability blindness could stem from a firm not bothering to look for the detriment, or from looking but simply not seeing it.
- The accountability gap. The complexity of AI can open up a veritable gulf between the decisions of individual people, and the effects that those decisions produce. Even though the detriment is evident, no one in the insurance firm sees it as their responsibility.
- Diluted accountability. That complexity of AI can also result in individuals feeling that their input, their decision, is so marginal as to absolve them of any responsibility for the consequences that collectively result. The view would be that ‘I couldn’t have done anything wrong because my input has been so marginal’.
- Siloed accountability. That complexity also results in greater compartmentalisation of actions and decisions on AI projects. It’s all too easy for individuals, unable to see how an outcome could have resulted from their particular silo, to ignore it altogether.
- Blinkered accountability. Some people think of ethical issues only in relation to the behaviours of real, physical people. As a result, they fail to see, or even understand, the implications of the decisions they make in relation to ‘artificial’ intelligence.
- The accountability dynamic. The ever-evolving nature of AI can make it difficult to tie particular impacts to particular decisions. It can be all too easy to assume that a decision system in constant flux cannot be held directly accountable for anything other than fleeting, insignificant micro incidents of detriment.
- The accountability imbalance. The reach and depth of AI makes it an empowering technology for those utilising it. This can cause insurance firms to see their decisions as perfectly rational, while seeing the decisions of policyholders as much less so. The danger is that such empowerment could cause insurance people to just not see ethical issues associated with AI.
So what do these seven components of the accountability challenge add up to? They point to a material disconnect emerging between what firms are actually accountable for and what the executive team think their firm is accountable for. It’s a disconnect brought into being by the introduction of automated decision making into the heart of the firm’s processes. And it tells insurance executives that just as their firms are evolving in innovative ways, so must the accountability mindset of those firms evolve in similar fashion.
Some Harsh Lessons
Some of the most powerful users of artificial intelligence are learning some harsh accountability lessons at the moment. Facebook have had to admit that they just didn’t expect their algorithms to distribute advertisements promoting antisemitism and sexual violence, despite having been warned about this for several years.
There’s an ironic mismatch happening in insurance at the moment. Many insurance people think that AI and big data are bringing their firms ‘closer to consumers’. That’s confusing proximity with intimacy. And it’s also mistaking proximity to the customer as an informational object for proximity to them as a person. Consumers will want to get closer to insurers because of the outcomes they experience. AI has the potential to deliver outcomes that push consumers away.
Trust falls through the floor when consumers see insurers as just not recognising, let alone accepting, their accountability for the decisions their businesses are making. And it’s not just consumers: investors and business partners will also expect insurance firms to keep a firm handle on their accountability, seeing it as an indicator of performance, confidence and ownership.
Regulatory initiatives like the SMCR may move accountability more to the centre of individual and corporate mindsets in the sector. That’s fine, but it will be of only marginal value if insurance executives fail to recognise and address the accountability challenges that their changing business models are creating. The risk is that in the UK, Parliament will force them to do so, as reported here.