As more and more decisions about our premiums and claims come to be influenced by all sorts of clever algorithms, one small but significant question appears to have been overlooked. In markets like Ireland and Canada, there is a requirement for insurance people to undergo compulsory ethics training. What happens then when many of their decisions are digitised? Will the algorithms have to meet some equivalent standard? If not, are standards going to fall?
In many insurance markets, an hour or so of ethics training is compulsory for various categories of insurance people. Examples include insurance brokers in parts of Canada and just about everyone working in Ireland’s market. This is about competency: making sure these people have the right knowledge and skills to bring fairness, honesty and respect into the decisions they make on a day-to-day basis.
And in some cases, those individuals take such training in order to retain their licence to perform those particular insurance tasks. So what happens when some of those decisions are then handled by algorithms? Does the licensing system just become irrelevant, like rolls of film became when digital cameras came in?
This is not a Kodak Moment
That’s progress, some of you may say. Out with the old and in with the new. After all, those algorithms are very clever pieces of work. Yet such ‘Kodak moments’ don’t always hold true. Camera film was just a commodity replaced by another, in the form of a digital image file. Ethics training is not about a commodity; it’s about a competency. And that difference is significant.
Take the definition of ‘business ethics’ that I often use: the application of ethical values to business decisions. There’s no mention of person A or algorithm B in it. Whether those business decisions are being made by a person or an algorithm, ethical values still need to be applied.
And while ethics training is not always brilliant at evidencing a person’s ability to apply ethical values to the decisions they’re making at work, I often find that the training event itself is the most valuable component: the discussions, games and challenges allow attendees to share experiences and work out the best ethical steps for their particular role.
What’s the way forward then? Personally, I don’t think it means putting our blind faith in the algorithm to get it right. There’s too much evidence of unacceptable outcomes for us to take that step. And while some point to the culprit being the data fed into the algorithm, others point to algorithms sometimes learning things that humans would easily recognise as unacceptable.
Not all Algorithms are the Same
The way forward will lie in us addressing the algorithm itself. Remember that not all algorithms are the same. How an algorithm is designed involves a lot of choices, some of which will enhance the quality of its abilities, others not. Recall this post from earlier in the year, asking ‘how good are your algorithms, and can you prove it?’ The range is quite surprising, as is where the algorithms of some well-known apps are said to lie.
How good your algorithm is can usually be gauged through some form of testing regime, made up of special datasets and a range of expected outputs. This should of course have taken place during the implementation phase of your digital project – your firm did do that, didn’t it? And don’t forget that this needs to be repeated at regular intervals: has your algorithm been learning the right things?
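To make that testing regime a little more concrete, here is a minimal sketch of what such a check might look like in practice. Everything in it is illustrative: `quote_premium` is a hypothetical stand-in for an insurer's pricing algorithm, and the test cases and acceptable ranges are invented, not drawn from any real book of business. The point is the shape of the exercise: curated inputs, a range of expected outputs, and a pass/fail verdict that can be re-run at regular intervals.

```python
def quote_premium(age: int, prior_claims: int) -> float:
    """Hypothetical stand-in for the pricing algorithm under test."""
    base = 500.0
    age_loading = 1.0 + 0.02 * max(age - 25, 0)
    claims_loading = 1.0 + 0.15 * prior_claims
    return base * age_loading * claims_loading

# Curated test dataset: inputs paired with the range of premiums
# judged acceptable for each case. These bounds are illustrative.
TEST_CASES = [
    # (age, prior_claims, min_acceptable, max_acceptable)
    (30, 0, 500.0, 600.0),
    (30, 2, 650.0, 800.0),
    (60, 0, 750.0, 950.0),
]

def run_test_suite():
    """Return the cases where the algorithm falls outside its bounds."""
    failures = []
    for age, claims, low, high in TEST_CASES:
        premium = quote_premium(age, claims)
        if not (low <= premium <= high):
            failures.append((age, claims, premium))
    return failures
```

Run on a schedule, a suite like this gives you an audit trail: an empty failure list today, and evidence of drift the day a retrained model starts quoting outside the bounds your firm judged acceptable.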
Is this perhaps straying into wider territory than just ethics? I don’t think so: remember that trustworthiness is also broad-based. Its four core components are widely recognised as competency, reliability, honesty and goodwill. There’s something to be learnt about every algorithm for at least the first two of those components, and from all four when it comes to those accountable for how the algorithm is used.
The Name on the Map
So who in a typical insurance firm is going to run with organising all this? This needn’t be a ‘one size fits all’ approach – every insurer is different. Yet at the same time, they all share one vital characteristic, which is that there is one person named on their SMCR responsibility map as accountable for underwriting and claims decisions, be they algorithmic or human. It is that person who has to take the lead, for ultimately it is their career that is on the line. It will be a good test of their leadership on ethics as well.
It’s worth remembering the real point of all that compulsory ethics training. It wasn’t actually to have everyone learn about ethics, although in analogue times, that was the natural route to get where the regulator wanted you to be. The real point is actually about better, more ethical decisions. So this is about outcomes, not inputs. The move towards algorithmic decision making across underwriting and claims doesn’t change that.
This then brings me to the real question I want to raise with this blog post: are we going to see, in the not too distant future, compulsory testing of the algorithmic decision systems being used in insurance? I believe that’s a distinct possibility.