The importance of ethics training for insurance people is going to soar over the next three years. And when I say soar, that’s no exaggeration. Insurance people are going to be weighed up and tested like never before.
There are three drivers to this. Firstly, new regulations in the UK will reinforce the importance of ethics in how individuals and firms operate in regulated markets. Secondly, those same regulations will introduce both much clearer individual accountability within regulated firms and an organisational map detailing which individuals are responsible for particular regulated activities. And thirdly, the FCA is moving forward with its use of data-driven tools like machine learning to identify trends and pockets of misconduct.
The level of detected misconduct is going to increase by 25%
What this adds up to, and what lies behind my use of the word ‘soar’, is the ability both to discover misconduct and to hold individuals to account. The FCA estimates that the level of detected misconduct is going to increase by 25%, and perhaps more. Think of it like this: the machine learning identifies misconduct, draws a line from it to an individual on the responsibility map and sends a recommendation for a review of that person’s certification to a supervisor, all in a day’s work.
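That misconduct-to-individual data flow can be sketched in a few lines of code. This is an illustrative sketch only: the data structures, names and activities are my own assumptions for the purpose of the example, not any actual FCA or firm system.

```python
# Illustrative sketch: how a detected issue might be traced through a
# responsibility map to a certification-review recommendation.
# All names and structures here are hypothetical.

# Responsibility map: regulated activity -> accountable senior manager
responsibility_map = {
    "claims handling": "Senior Manager A",
    "retail pricing": "Senior Manager B",
}

def route_misconduct_alert(issue, activity):
    """Trace a detected issue to the mapped individual and raise a
    certification-review recommendation for the supervisory team."""
    individual = responsibility_map.get(activity)
    if individual is None:
        # No one mapped to this activity: itself a finding worth escalating
        return {"action": "escalate", "reason": "no accountable individual mapped"}
    return {
        "action": "recommend certification review",
        "individual": individual,
        "issue": issue,
    }

alert = route_misconduct_alert("pattern of mis-selling detected", "retail pricing")
print(alert["action"], "->", alert["individual"])
```

The point of the sketch is how little glue is needed once the responsibility map exists in machine-readable form: the lookup and the recommendation are trivial.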
The implications for the human resources and learning and development functions of insurance firms are clear. They’ve got to ensure that their people are learning about ethics, acquiring the right knowledge, using the right skills, and gaining the confidence to put them into practice. That’s why the regulations include a requirement for firms to carry out integrity assessments, to check that this is happening and that their people know the right things. And this blends neatly with the language of regulators on ethical culture, with the capabilities of key people on ethical issues being one of the four determinants of a firm’s ethical culture.
It’s the FCA’s interest in ‘suptech’ that is behind all this. The use of artificial intelligence tools by supervisory authorities is emerging as just as significant for insurance markets as the use of ‘insurtech’ tools by the insurance firms themselves. If you’re in any doubt about this, read the 2017 Beesley lecture by Stefan Hunt, the FCA’s (now former) head of data science.
It is the ambition of every conduct regulator to move from forever chasing the tail of sector misconduct to being alongside the market as problems emerge. The artificial intelligence (AI) in ‘suptech’ makes that a possibility like never before. And one of the principal uses that I foresee regulators putting AI to is to search out clusters of misconduct amongst executives and managers.
Let’s put this into the context of a situation that I have come across on a number of occasions. An insurance firm buys in some training elements around ethics and integrity. Assessment units are added, results recorded and the integrity box ticked. End of problem? Not at all.
A decent ‘suptech’ algorithm can take an ethical risk that the regulator has identified for that particular type of insurance firm and analyse that training and those assessments for relevance, scope and depth. Conflicts of interest at intermediaries are one obvious ethical risk, as are pricing practices across both insurers and intermediaries. If the firm isn’t joining the dots in relation to that ethical risk, then the algorithms will track down the relevant individuals through the responsibility map and signal all this to the supervisory team to take forward.
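In essence, this is a gap analysis: compare the risks the regulator has flagged against the topics the firm’s training actually covers, and trace any shortfall to the mapped individuals. A minimal sketch follows; the risk labels, training topics and responsibility map are invented for illustration, not drawn from any real regulator data feed.

```python
# Hypothetical gap-analysis sketch. All labels and mappings are
# assumptions made for illustration.

# Ethical risks the regulator has flagged for this type of firm
regulator_risks = {"conflicts of interest", "pricing practices"}

# Topics actually covered by the firm's bought-in training and assessments
training_topics = {"conflicts of interest", "data protection"}

# Risks with no matching training coverage
gaps = regulator_risks - training_topics

# The responsibility map traces each uncovered risk to an individual
responsibility_map = {"pricing practices": "Head of Pricing"}
flags = [
    {"risk": risk, "individual": responsibility_map.get(risk, "unmapped")}
    for risk in sorted(gaps)
]
print(flags)
```

A real system would match topics by relevance, scope and depth rather than exact labels, but the shape of the analysis, set difference plus a lookup, is the same.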
This is not science fiction. An analogue version was running in school oversight when I was a governor six years ago. The trigger for an intervention was a set of generic plans disconnected from the issues that the regulator’s data judged a school to be facing. It’s not hard to look forward a year or two and see the insurance regulator running gap analysis and capacity analysis around integrity. The new regulations will provide the machine-readable evidence.
How should learning and development people respond? First and foremost, by making sure that their ethics training has the right relevance, scope and depth, and that the integrity assessments associated with it are sufficiently challenging.
Six problems to look out for
So what is meant by ‘…the right…’? It’s something I’ve touched on from time to time in previous posts, but here is a quick run-through of six problems to look out for in your firm’s ethics training:

1. the ethics training is not orientated around the key ethical risks that the firm is facing;
2. it’s being applied on too broad a basis. It needs to be focused on the people who most often make decisions that touch on those key ethical risks;
3. it shows people what ethics looks like, but doesn’t give them the knowledge, skills and practice to go on and confidently address those ethical challenges themselves on a regular basis;
4. it is too top down. It should be based on what people want to know about the situations they face. Get them involved so as to deliver more resonant training;
5. it’s too individualistic and silo’ed. Good ethics training feeds off discussions and the sharing of challenges and solutions;
6. the integrity assessments give feedback that is too generic or vague. Feedback loops can work wonders with training effectiveness.
2018 sees the Senior Managers and Certification Regime and the General Data Protection Regulation come into force for insurers here in the UK. It will also see more ‘suptech’ moves from the regulator and, without a doubt, those will be the most significant of the three for the sector’s reputation and the public’s trust in the market.