May 24, 2022

The Influence of Mental Health on Decisions in Insurance - Managing the Risks

Insurers are interested in your mental health because it’s seen as influential in all sorts of underwriting and claims decisions. The sector is moving from researching this, to putting it digitally into operation. So what are the ethical issues associated with digital psychiatry? There’s plenty.


A research paper published a few years ago gives a good'ish round up of the ethics of digital psychiatry. I say good'ish because most of the paper is really insightful, but the section on financial services is, well, let's just say, less than insightful. Much of this analysis piece is built around key points from the paper, to which I'll add my own points in relation to insurance.

The paper is “Digital Psychiatry: Risks and Opportunities for Public Health and Wellbeing”, published in 2020 in the IEEE Transactions on Technology and Society, 1(1), pp. 21–33. You can read it here. The authors are Christopher Burr, Jessica Morley, Mariarosaria Taddeo and Luciano Floridi, all at the Oxford Internet Institute, University of Oxford.

So what do they mean by digital psychiatry? They define it as digital technologies that use artificial intelligence to infer whether users are at risk of, or currently suffering from, mental health disorders. So we’re talking here about both assessing the present and predicting the future.

Digital psychiatry is being used to infer whether an individual is suffering from depression, anxiety, autism spectrum disorder, post-traumatic stress disorder or suicidal ideation. Note the word ‘infer’, for it signals that these technologies operate with varying degrees of reliability and validity. That’s because the data and analytics work on proxies for mental health, not mental health itself.

Their paper focusses on the use being made of digital psychiatry outside of formal healthcare systems. In other words, the ethical risks created by its use outside of an environment…

“…typically governed by deeply entrenched bioethical principles, norms of conduct, and regulatory frameworks that maintain professional accountability.”

Generating Health Care Risks

A key point made in the paper is that the deployment of digital psychiatry in non-clinical settings could lead to significant health care risks. Those risks are shaped by four influences, which I’ll outline below.

The first influence is down to the nature of digital systems…

“Automated decision-making systems may be well-suited to tasks like classification, but they are poorly suited to perform tasks involving, for example, therapeutic, interpersonal conversation. Hence, this participatory process of sense making between a patient and psychiatrist… is also currently poorly reflected in digital psychiatry.”

This is significant. On the one hand, digital psychiatry is said to lead to greater engagement and empowerment. On the other hand, it could also introduce, for people with mental health disorders, the risk of failing to engage with health information at the right time (in other words, sense making).

You can look at this from another direction. Digital psychiatry relies on the passive collection of a lot of data about you. Yet doesn’t this also raise the question of whether the user is able to participate in the decision making process around its interpretation? In other words, is digital psychiatry turning mental health support into something done to you, rather than something done with you? When it comes to treatment, that difference really does matter.

A Duty of Care?

The second influence relates to a duty of care in the relationship established by digital psychiatry. In formal healthcare systems, such duties are clearly set out and monitored. Outside of such systems, the situation is less clear. Should providers of digital psychiatry be under a duty of care? If there is to be a difference, how does the member of the public understand that difference? And how does that duty extend into the future, in the context of digital psychiatry seeking to predict one’s future mental health?

The third influence relates to the growing reliance of digital psychiatry upon, firstly, unlicensed therapists and, secondly, the often unsubstantiated efficacy of the science on which support is based. Legally of course, this will all be hidden behind strongly worded disclaimers. Ethically and medically, the impact on the confidence and trust of people seeking support could be meaningful.

This matters, because…

“…they may create a psychologically distressing situation where individuals must live with the knowledge of a potential diagnosis for an extended period of time without any support.”

The fourth and final influence relates to ‘model generalisability’. This refers to, say, a machine learning algorithm being trained on one dataset and then used in a new context. This matters in psychiatry…

“…where there is widespread acceptance that different demographics experience mental health in varied ways (e.g. different social attitudes towards a particular disorder impacting severity of symptoms)”

Digital psychiatry must be alive to these differences, in both how people present and how they seek support.
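To make that concrete, here is a minimal, hypothetical sketch of the generalisability problem. It uses Python with scikit-learn and purely synthetic data; the groups, numbers and variable names are my own illustration, not anything drawn from the paper. A classifier is trained on one ‘demographic’ and then applied, unchanged, to another in which the proxy signal relates to the label differently.

```python
# A minimal, hypothetical sketch of 'model generalisability'. The data is
# entirely synthetic: we pretend a single proxy signal (e.g. a behavioural
# measure harvested from an app) relates to a mental health label differently
# in two demographic groups, then train a model only on group A.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)


def make_group(n, slope):
    """Synthetic group: the proxy-to-label relationship is set by `slope`."""
    proxy = rng.normal(size=(n, 1))
    # The probability of the label depends on the proxy, but the strength
    # (and even the direction) of that dependence varies between groups.
    p = 1 / (1 + np.exp(-slope * proxy[:, 0]))
    labels = rng.binomial(1, p)
    return proxy, labels


# Group A: strong positive relationship. Group B: weak, reversed relationship,
# standing in for demographics experiencing mental health in varied ways.
X_a, y_a = make_group(2000, slope=2.0)
X_b, y_b = make_group(2000, slope=-0.5)

model = LogisticRegression().fit(X_a, y_a)  # trained only on group A

print("accuracy on group A:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy on group B:", accuracy_score(y_b, model.predict(X_b)))
# Typically well above chance on A, and at or below chance on B: the same
# model, simply moved into a context it was never built for.
```

On the training group the model does well; on the second group it does no better, and often worse, than chance. That gap is what an insurer would be importing if it dropped such a model into new markets or decisions without asking whose data it was built on.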

The Insurance Context

Let’s turn now to insurance. It doesn’t feature in the paper, but financial services does. Most of the FS section talks about duties of care and the tailoring of support to people who may be vulnerable. That’s fine, but in 80/20 thinking, that’s the 20 part. What is rather glaring in its absence is any reference to the use of data and analytics for business decisions, in, say, bank lending or insurance underwriting.

This omission is significant. Take the insurer who has funded research into how to predict a person’s future mental health from how they smile in an online selfie. It doesn’t take a rocket scientist to work out how that could be used to influence underwriting decisions in almost any line of insurance business: personal, commercial, life, health and protection. Will they though? Well, adverse selection gives sector professionals a pretty hefty nudge to do so.

Adverse Versus Access

Data ethics academics have a tendency to see such insurance decision making in a ‘glass half full’ way. Insurers are able to differentiate high risks from low risks, leading to a fairer premium, so the narrative goes. Yet it doesn’t take much to flip the situation over and see people with mental health challenges (now or predicted) finding cover hard to come by, let alone at an affordable price.

So if, as the UK Government accepts, one in four people in the UK will experience some form of mental health challenge at some point in their lifetime, then insurers using analytics to predict such challenges means that virtually every family will find it hard to access insurance for someone affected. Hardly surprising then that digital insurance is sometimes referred to as a political technology.

The paper does pick up on a little of this, when it talks about the assessment of any new service (in relation to digital psychiatry) not taking place in isolation from the rest of the system in which it will be embedded. So if insurance uses digital psychiatry, then that wider context is clearly decisions relating to underwriting, claims and counter fraud.

Some Steps for Insurers to Take

  • Read the paper on which this article is based.
  • Review where and how your firm is using digital psychiatry, and establish how its governance arrangements support this. And don’t just think of governance in terms of privacy or SMCR.
  • Consider how your firm’s use of digital psychiatry sits within firstly, the FCA’s new consumer duty, and secondly, the duty of care normally associated with mental health. In doing so, it’s important that the firm challenges itself on this.
  • Weigh up the implications of how granularly your firm collects and uses data in relation to digital psychiatry. What impact does this have on ethical risks like autonomy and privacy?
  • Look at how your firm is tracking the outcomes associated with its use of data and analytics in relation to mental health. What do those outcomes say about the nature of the support your firm is giving? Find a way of weighing this up from a user’s perspective.
  • Make sure that clear standards and processes are being used wherever support for mental health is being offered. Remember that access is only good if it feeds into the right treatment.
  • Examine how your firm’s decisions in underwriting, claims and counter fraud are making use of data relating to mental health. And look at how your firm’s analytics are weighing up the significance of that data. Make sure that all of this is based upon peer reviewed science.

A Summing Up

Some of you will wonder why all this is needed, when insurers are just trying to help people. The key point here though is that people with mental health challenges don’t need ‘any old help’. They need the right help, delivered in a clear and professional way and based around tried and tested treatments.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.