May 31, 2022

Sentiment Analysis – Treat with Caution

The increase in remote working triggered by the pandemic made Zoom a popular tool for holding meetings and speaking with clients. Now Zoom plans to allow account holders to carry out sentiment analysis on the people they are engaging with. I explore what this means for insurers.

One face or many; one voice or many?

Sentiment analysis uses artificial intelligence to scan facial movements and speech in order to draw conclusions about a person’s mood and engagement. It’s also referred to as emotional AI and forms part of human state sensing. Image and voice data are what it feeds upon.
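To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch of the text-only flavour of sentiment scoring, written in Python. The word lists, the scoring rule and the sample transcripts are all my own assumptions; the facial and vocal analysis Zoom describes relies on far more involved, trained models, and the caveats later in this piece apply to those even more strongly.

```python
# Toy text-only sentiment scorer - an illustrative assumption, not Zoom's method.
# Real "emotional AI" combines facial, vocal and textual signals via trained models.

POSITIVE = {"great", "happy", "agree", "helpful", "clear", "thanks"}
NEGATIVE = {"confused", "unhappy", "disagree", "slow", "unclear", "frustrated"}

def sentiment_score(transcript: str) -> float:
    """Return a crude score in [-1, 1]: +1 if all matched words are positive, -1 if all negative."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# Hypothetical meeting snippets
print(sentiment_score("Thanks, that was really clear and helpful."))      # prints  1.0
print(sentiment_score("I'm confused and a bit frustrated by the delay."))  # prints -1.0
```

Even this caricature shows how fragile the inference is: a sarcastic “great” scores as positive, and anything outside the lexicon scores as nothing at all. That fragility sits at the heart of the concerns below.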

Zoom says that it will use its sentiment analysis tools to measure the emotional tone of conversations, in order to help salespeople improve their pitches. It’s a neat example, but not an altogether representative one. Here are some examples of how else it could be used:

  • to increase the effectiveness of meetings, by looking for people who talk too much, listen too little, go on and on, or appear less than engaged.
  • to assess how satisfied a client is with the service being provided, such as in claims
  • to assess the person for fraud
  • to gauge how much someone has understood what you’ve been telling them
  • to assess the person’s physical and/or mental health
  • to assess the person for signs of vulnerability

Reasons to be Cautious

Putting sentiment analysis to use in such circumstances needs to be done with a lot of caution, for a number of reasons. Firstly, the science behind sentiment analysis is controversial. The scientific community is divided on how emotions present. This introduces a significant risk of the analytics simply not telling you what you think they are telling you (more here).

Secondly, testing of sentiment analysis has uncovered a lot of evidence that it produces unacceptable levels of bias. It’s good at analysing white male faces, but much less good at, for example, the faces of black women.

Thirdly, the data handled by sentiment analysis will, in many jurisdictions, require the firm to tell users what is being collected and how it is being analysed. Given insurers’ preference for very generic forms of consent, sentiment analysis introduces a well above average privacy risk.

And finally, let’s not forget consumers’ attitudes to the data insurers collect and how they put it to use. A ‘double lens of mistrust’ is how sector-funded research described those attitudes in a 2020 survey. So an insurer cannot detach the use of sentiment analysis from the type of relationship it wants with its customers. Using it would not be a neutral move.

A Use in Two Ways

I was giving a talk a few years ago to a group of young insurance professionals, about the different types of analysis that insurers were using. When I outlined how insurers could use sentiment analysis in claims, counter fraud and customer service, their reaction was very positive. When I outlined how that same technology could be used to assess them personally, their reaction was one of shock. In other words, they saw the former use as ethical and the latter use as unethical.

What this tells us is that weighing up ethical issues like fairness and privacy cannot be detached from the position from which you’re weighing them up. Counterbalances like the three lines of defence are inadequate at best. Subject matter experts might help, so long as one for data ethics is closely involved.

A Force for Good?

A recent article in the trade press presented sentiment analysis as an underwriting tool for carriers in the reputational risk market, perhaps even as a way for an insurer to track its own reputation. On the face of it, it sounds neat and clever: streams of data feeding neat graphs in management information packs.

And so it could be, if it wasn’t for the underlying science being hotly contested, if it wasn’t for issues around privacy and consent, if it wasn’t for the greatly increased risk of bias it introduces, and so on. You probably get my drift by now.

Are insurers using sentiment analysis? I believe some are, with a few in what they might feel is a pretty advanced state. Have they done an informed analysis of the ethical risks involved? I doubt it. Some boxes will have been ticked – little more.

Reputation and Risk

What we have then is a use of data analytics that comes with a high reputational risk. And in most insurers that risk will be managed by legal advice and the three lines of defence, both of which are weak when it comes to ethics. The problem for insurers is that over the next three years, they can expect to be challenged on their use of sentiment analysis. Better then to challenge themselves now, rather than wait for the more destructive challenge that is coming.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.