Jun 24, 2022

Is the Use of Emotional AI Swinging from the Market to the Workplace?

Many insurers use Microsoft systems, not just for office work and communication, but increasingly in workplace management and business decision systems. Microsoft’s recent ‘Responsible AI Standard’ is therefore relevant to people working in insurance, both as underwriting and claims practitioners and as employees.

Analysing real-time meeting performance using facial analytics – how well are you doing?

When I say ‘relevant’, I mean in more ways than most will expect. This was evident in the New York Times article highlighting the standard’s launch. Its focus was on one of the most controversial aspects of AI work at the moment: the sensing of psychological and physiological states through the analysis of facial data, also known as emotional AI (more here).

So why is emotional AI controversial? The NYT article sums that up in its opening paragraph:

“For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.”

The facial analytics being implemented in business settings at the moment is based on the view that there are a number of primary basic emotions hard-wired into our brains, ones that can be universally recognised. It has been developed around a taxonomy of human emotions and facial expressions. And this systematisation of facial expressions has proved attractive to business, for it fits neatly with all the clustering, categorising and correlating at the heart of data and AI.

All this is hotly disputed in academic circles. The alternative dimensional approach rejects the idea of basic emotions. Instead, it sees emotions as linguistically and socially constructed. Take smiles, for example. In Japan, smiles depend very much on the social context and are driven by unique and complex display rules. Some can even indicate negative emotions. What we’re seeing is that the dimensional approach has progressively gained wider recognition and acceptance (for more, read this).

I know that for some years, insurers have used voice data to gauge the caller’s psychological state – I’ve experienced it myself when making a household claim. And I know that some insurers have been researching facial analytics, with obvious underwriting and counter fraud purposes. Yet most people working in insurance would not expect such analytics to now be entering their own workplaces.

Measuring People in Meetings

Big tech firms like Microsoft, Google and Zoom are working on systems to analyse the effectiveness of meetings. Facial analytics are at the heart of this work. And it’s only a small step to move such systems from the meeting room into the wider office.

Some years ago, in a London market discussion with insurance professionals about data ethics, I raised the prospect of the regulator assessing the trustworthiness of significant management function holders at their firm, using emotional AI. I asked the question because there had been a recent report of investment management firms using that technology to assess what executives were telling them about their strategies.

The reaction of those insurance professionals was swift and hugely negative; they labelled it as very unethical. Yet some of their firms were in the process of doing something very similar in relation to customers. They struggled to reconcile the two in that meeting, but at least understood there was a question to be addressed.

The Microsoft standard is not as thorough as it would like people to believe. Microsoft is retiring facial analytics such as Azure Face, but only reviewing their use in its huge range of workplace management systems (more here). And that means that insurance people may well experience their use on themselves. Will their annual appraisals henceforth contain discussion items such as ‘how well you engage in meetings’ and ‘how easily distracted you are in the office’? I can be like that myself – it usually means I’m thinking.

Firms Revisiting Emotional AI

We’re seeing a rising tide of firms revisiting their use of emotional analytics, and rightly so. It may give them results, but those results will be detached from reality. And so too, then, will the decisions based upon them.

In my experience, the culture around data and analytics in the insurance sector is less susceptible to, and more detached from, public criticism such as that in the NYT article. Now though, that culture is just as likely to find itself in a ‘done to you’ situation as in a ‘done by you’ one.

At the moment, I believe that unless an insurer specifically says that it does not use emotional AI, then it will be using it to some extent. It could, for example, be in underwriting, claims, counter fraud, operations and/or human resources. One meeting will probably not be included though: the board meeting, unless of course investors push for it.

Now, some of you will say that how you smile in meetings has always in some way influenced your bonus, presumably along the lines of smile = nice person = popularity = bonus. It’s just that now, it’s the genuineness of that smile, and the mental health issues that the facial analytics infers from that smile, that will affect not just your bonus, but perhaps your job as well.

Accountability for Emotional AI

If, like Microsoft’s, your board reads the winds of change and asks for information about how the firm is using facial analytics and what controls are in place around it, there are clear and obvious benefits in having that information prepared in advance, in both scope and depth, and set in the context of the controversy that surrounds the technology.

Are they likely to ask for this? Well, they are accountable for its use, from a variety of regulatory angles. So that likelihood is growing.

In the meantime, take the response to Microsoft’s standard as indicative of a changing landscape for emotional AI. Are firms thinking that its use in the public domain is attracting too much controversy, while its use on employees in the workplace provides a useful training ground? It could well be.

Acknowledgment – some key points in this article were based on Andrew McStay’s post on LinkedIn about the Microsoft standard.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.