Feb 8, 2022

Predictions, Insurance and being Human

An Oxford professor’s recent article in Wired sets out the ethical issues associated with predictive analytics. One of her key examples centres on insurance and the move towards personalisation.

Here’s an early paragraph that sets the scene…

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people - beings who are supposed to be infused with agency and free will.

Carissa Véliz is an Associate Professor at the Institute for Ethics in AI at the University of Oxford. She goes on…

Defying the odds is at the heart of what it means to be human. In addition to improving everyone’s baseline, we want a society that allows and stimulates actions that defy the odds. Yet the more we use AI to categorize people, predict their future, and treat them accordingly, the more we narrow human agency, which will in turn expose us to uncharted risks.

She then turns to insurance and the prediction of risk, exploring the trend from pooling to personalisation. This sets up one of her key paragraphs…

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

She raises two key ethical issues. Firstly, if “we decide that we know what someone’s future will be before it arrives, and treat them accordingly, we are not giving them the opportunity to act freely and defy the odds of that prediction.” In other words, we would be reducing people’s agency and their capacity to change.

And secondly, “by treating people like things, we are creating self-fulfilling prophecies. Predictions are rarely neutral. More often than not, the act of prediction intervenes in the reality it purports to merely observe.” So a post that Facebook predicts will go viral, guess what, goes viral.
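To make that feedback loop concrete, here’s a minimal, purely illustrative sketch (not a description of anything Facebook actually does). Two posts are assumed to be equally appealing; the only difference is the ranker’s initial “virality prediction”, which decides how much exposure each post gets. The post predicted to go viral ends up with far more engagement, which then appears to confirm the prediction.

```python
import random

# Purely illustrative: two posts with identical underlying appeal.
# The ranker's initial "virality prediction" is the only difference.
posts = {
    "post_a": {"predicted_score": 0.9, "views": 0, "engagements": 0},
    "post_b": {"predicted_score": 0.1, "views": 0, "engagements": 0},
}
TRUE_APPEAL = 0.05  # same chance that any viewer engages, for both posts

random.seed(42)

for _ in range(10_000):  # 10,000 feed impressions to allocate
    # Exposure is allocated in proportion to the predicted score,
    # so the "prophecy" decides which post gets seen.
    total = sum(p["predicted_score"] for p in posts.values())
    r = random.random() * total
    name = "post_a" if r < posts["post_a"]["predicted_score"] else "post_b"
    post = posts[name]
    post["views"] += 1
    if random.random() < TRUE_APPEAL:  # identical appeal either way
        post["engagements"] += 1

for name, p in posts.items():
    print(name, "views:", p["views"], "engagements:", p["engagements"])
# post_a ends up with roughly nine times the engagement of post_b, not
# because it was better, but because the prediction determined its exposure.
```

The asymmetry in the outcome is created entirely by the prediction itself, which is exactly Véliz’s point: the act of prediction intervenes in the reality it claims merely to observe.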

She concludes: “None of those social practices that are so fundamental to our way of life would make any sense if we thought or behaved as if people’s destinies were sealed.” Do we really want to go down such a road, as we appear to be doing?

It’s an important article, dealing with key aspects of what it means to be human and how predictive analytics could affect them. I would recommend reading it – here’s the link again.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.