Jun 6, 2024 · 3 min read

Can an Algorithm Monitor for Unethical Behaviour?

Can an algorithm monitor for unethical behaviour? After all, codes of ethics are often presented as a set of rules. Can’t the rules be programmed into an algorithm, which then monitors for infringements? Might ethics itself be transformed by a digital revolution? I explore the issues involved.

What algorithms understand: numbers and blanks

Insurers use algorithms to monitor not just what customers do, but also what is happening within their firm. They’re used to support all sorts of processes and to automate them to varying degrees. In the main, this is orientated around performance and compliance.

What a firm views as ethical or unethical behaviour is invariably set out in a code of ethics. The challenge with such codes is, of course, to give them traction in the everyday decisions that the firm’s people make.

So what better, some may think, than to use some form of digital decision system to monitor for this? Insurance is typical of many business sectors in tending to have codes of ethics set out as a list of things to do and not do. Why not build an algorithm to track, perhaps even prompt for, the ethical aspect of decisions?

An Ethical System?

For some, this may sound a bit futuristic, but for others, it’s old hat. A speaker at this week’s AIRMIC conference was talking about building ethics and morals into a digital system so that it can learn to adapt behaviours to achieve goals. Let the intelligent algorithm take over from the stupid human, went the narrative.

This raises the obvious question: can you actually build ethics and morals into a digital system? It would mean that the algorithms that together make up the system were capable of moral understanding and, in turn, able to act as moral agents. In other words, able to recognise what is right or wrong, to act accordingly, and to be accountable for those acts.

To do this, some form of autonomy would be needed. Yet, as Professor Carissa Véliz of Oxford University’s Institute for Ethics in AI reminds us:

“…algorithms are neither self-governing, because they need external input to set themselves goals, nor reasons-responsive, as no reasons can ever ‘convince’ them to change the goal for which they have been programmed.”

What can Codes of Ethics Achieve?

Let’s come at this from another angle. Ethics is often talked about in terms of what you should do, while compliance is talked about in terms of what you have to do. So where do codes of ethics sit? They express what the firm wants you to do in terms of ethics, and so on the face of it they seem to be orientated towards compliance.

Yet there is more to a code of ethics than what you should or shouldn’t do. Here are some thoughts to consider:

  • a good code includes just as much aspiration as compliance.
  • it should be worded so as to encourage employees to think about the ethical side of a decision. In other words, to follow a thought process, not just a process.
  • at the heart of that thought process should be the ethical values that the firm wants its people to respect.
  • the final judgement as to whether the outcomes are ethical or not is shared with consumers, for whom the code has ultimately been written.
  • the firm’s ethical values may not correspond with the ethical values of consumers.  

What does this add up to? A code of ethics illustrates the firm’s ethics; it doesn’t define them. It can be written as rules, but the good ones often aren’t. In short, a code is a sort of halfway house between what the firm wants its people to do ethically, and how it needs its people to think ethically.

Ethics and Judgements

Let’s bring these two strands together. An algorithm is incapable of acting as a moral agent and so cannot act ethically, nor judge the actions of others in ethical terms. A code of ethics may be presented as a set of rules or expected behaviours, but the firm’s ethical responsibilities are not scoped or defined by it.   

That’s it then? Not really. Professor Véliz again…

“At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness.”

In other words, mathematical calculations can identify patterns and clusters, and be programmed to respond at certain levels of significance. Extremely sophisticated algorithms just do this, as the name suggests, more sophisticatedly. Those patterns and significances may point to something by being scored against the system’s design criteria, and from that score may flow actions in other systems, or signals to the humans positioned in that loop.
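
To see Véliz’s point in concrete terms, here is a minimal sketch, in Python, of what ‘values’ amount to inside such a monitoring system. Everything in it (the value names, the weights, the threshold, the function names) is hypothetical, invented purely for illustration:

    # A sketch of how "values" appear inside a monitoring algorithm.
    # All value names, weights and thresholds here are hypothetical.
    ETHICAL_VALUES = {
        "honesty": 0.5,       # "values" reduced to weighted list items
        "fairness": 0.3,
        "transparency": 0.2,
    }
    FLAG_THRESHOLD = 0.6      # design criterion: when to signal a human

    def score_decision(indicators: dict[str, float]) -> float:
        """Combine per-value indicator scores (0.0 to 1.0) into one number.
        The algorithm 'feels nothing': it only multiplies and adds."""
        return sum(
            weight * indicators.get(value, 0.0)
            for value, weight in ETHICAL_VALUES.items()
        )

    def monitor(decision_id: str, indicators: dict[str, float]) -> None:
        """Flag decisions whose score falls below the threshold.
        The output is a signal for a person to interpret, not an
        ethical judgement in itself."""
        score = score_decision(indicators)
        if score < FLAG_THRESHOLD:
            print(f"{decision_id}: score {score:.2f} - refer to a human reviewer")

    # Example: a claims decision with a weak honesty indicator gets flagged.
    monitor("claim-0042", {"honesty": 0.2, "fairness": 0.9, "transparency": 0.8})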

This is not ethics or morals. These are actions and signals to be interpreted by a person who does have moral agency, to then reach conclusions and take steps in response. What that represents is ethical decision making by that person.

Insurers need to focus on training people, not algorithms, in ethical decision making. Only the former works. As my learning resource on leadership and ethics makes clear, it takes experience and feelings to form the judgements that deliver an ethical vision for a firm.

To borrow the language of the AIRMIC speaker, when it comes to ethics, it’s algorithms that are stupid, not humans.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.