Apr 20, 2022 11 min read

Addressing Fairness in Insurance: a Review of the MAS approach

It is said that fairness will define financial services in the 21st century. Early attempts to address it have proved ineffective though. In digital times, the need for something that will generate momentum is becoming urgent. Can a Singapore initiative achieve this?

Can this Singapore initiative advance fairness in financial services? 

The Singapore initiative I’m referring to has been organised by the Monetary Authority of Singapore (MAS). It supports their ‘Fairness, Ethics, Accountability and Transparency’ (FEAT) principles, published a few years earlier. Much of the work around this is being done by a consortium made up of firms like Swiss Re, AXA and Accenture.

Last month, the consortium produced a ‘Fairness Principles Assessment Methodology’. It is a detailed work that covers a great many of the key points I would want to see addressed. So while a demanding read, it is also substantive in the better sense of the word.

The MAS Approach

In broad terms, the methodology draws on four principles from FEAT, as listed below. AIDA stands for artificial intelligence and data analytics.

  • Individuals or groups of individuals are not systematically disadvantaged through AIDA-driven decisions, unless these decisions can be justified;
  • Use of personal attributes as input factors for AIDA-driven decisions is justified;
  • Data and models used for AIDA-driven decisions are regularly reviewed and validated for accuracy and relevance, and to minimise unintentional bias;
  • AIDA-driven decisions are regularly reviewed so that models behave as designed and intended.

The methodology then follows a five-step process to embed “fairness checkpoints in a typical AIDA system development lifecycle”:

  • a foundational set of practices and standards
  • define system objectives and context
  • prepare input data
  • build and validate
  • deploy and monitor
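
The methodology itself stays at the level of process and documentation, but to give a flavour of what a fairness checkpoint might look like once embedded in the ‘deploy and monitor’ step, here is a minimal sketch. The metric (a simple approval-rate ratio), the threshold and the field names are my own illustrative assumptions, not anything drawn from the MAS documents.

```python
# Illustrative only: a simple 'deploy and monitor' style fairness checkpoint.
# The metric (approval-rate ratio), the 0.8 threshold and the field names are
# assumptions made for this sketch, not part of the MAS methodology.
from dataclasses import dataclass


@dataclass
class Decision:
    group: str      # a personal attribute whose use the firm has justified
    approved: bool  # the AIDA-driven decision being monitored


def approval_rates(decisions: list[Decision]) -> dict[str, float]:
    """Approval rate per group."""
    totals: dict[str, int] = {}
    approvals: dict[str, int] = {}
    for d in decisions:
        totals[d.group] = totals.get(d.group, 0) + 1
        approvals[d.group] = approvals.get(d.group, 0) + int(d.approved)
    return {g: approvals[g] / totals[g] for g in totals}


def fairness_checkpoint(decisions: list[Decision], threshold: float = 0.8) -> bool:
    """Flag the model for review if the worst-off group's approval rate falls
    below `threshold` times the best-off group's rate."""
    rates = approval_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    print(f"approval rates: {rates}, ratio: {ratio:.2f}")
    return ratio >= threshold


# Made-up monitoring data: group A approved 80% of the time, group B 55%.
sample = ([Decision("A", True)] * 80 + [Decision("A", False)] * 20
          + [Decision("B", True)] * 55 + [Decision("B", False)] * 45)
if not fairness_checkpoint(sample):
    print("Checkpoint failed: review for systematic disadvantage and justification.")
```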

I’m not going to explore these five steps here, for the detail they’re built upon is too extensive and systemised to be informative outside of the original. Instead, I’m going to explore the patterns that emerge from their approach and output, from the broad perspective of a critical friend.

This kind of critique matters because, before the methodology enters anything like widespread use, it needs to be examined so that gaps can be filled, tilts righted and rough edges smoothed.

Some of you may well ask something along the lines of “what is in it for me? Aren’t things like this just for big companies?” Well, all insurers, from small to large, were impacted by the ban on price walking. And that ban came about largely because the market didn’t think that fairness had anything to do with pricing. The widespread view was that the fairness of a price was determined by the market.

An Early Influencer

What all insurers can therefore expect to see over the next few years are frameworks and methodologies put forward for them to adopt, so that the need for further ‘big reviews’ of digital decision systems goes away. It’s a bold ambition, but worth aiming for.

There will be comprehensive approaches like this MAS one, as well as smaller-scale ones for more averagely sized insurers. What distinguishes the MAS methodology is that it is one of the early ones, and so likely to influence, in varying ways, those that follow. Critiquing it should therefore help the frameworks that follow learn from its strengths and weaknesses.

In overall terms, I was impressed by the methodology being proposed. A lot of ticks and some double ticks were annotated onto my reading copy. And it got me thinking. If firms had adopted this methodology, say, eight years ago and reflected it in their digital strategies for underwriting, claims, counter fraud and marketing, would a lot of the problems I’ve been seeing over that time never have happened?

This could be brilliant, I thought, until a sense of realism kicked in. Would the loyalty penalty, the ethnicity penalty, claims optimisation, manufactured vulnerability and the like simply have never emerged? Would the reasons that drove their emergence not have happened? When I turned again to the MAS fairness methodology, I was not so sure.

Eleven Forest-Level Issues

What I’m going to set out here, then, is a forest-level critique of the methodology, not a tree-level one. I will look at eleven systemic issues that I feel will determine the extent to which a methodology like this will succeed or fail.

Before doing so, I would like to mention that this article of mine about behavioural fairness was referenced in the methodology as an example that there is still a debate to be had around fairness and insurance. This analysis seeks to inform that debate.

Commitment

A lot of what is in this fairness methodology is common sense. While it feels complex, that’s just because it brings a lot of things together. It is set out in the detailed, big-systems way that big consultancies like big firms to adopt, but there’s little rocket science here.

What makes detailed systems approaches stand or fall when addressing ethical issues is commitment, investment and motivation. Commitment is addressed in an accompanying ‘Ethics and Accountability’ document. Investment was something I could find little about, but clearly, implementing a methodology like this requires a lot of long-term investment of people, money and time.

Will firms be willing to make such an investment? The problem is that the methodology is voluntary at the moment and there’s probably not enough ‘regulatory threat’ on the horizon to help generate the return on investment (one exception being this recent step in the US).

And motivation? Sure, there’s a growing volume of noise about the implications for fairness and discrimination in digital decision systems, but I’m not seeing a lot of that talk turn into walk.

In conclusion, the methodology seems to lack the spark that would kick-start a firm into implementing it. I suspect firms will instead pick bits from it to address more immediate and more localised issues.

Values

The starting point for the fairness methodology is for the firm to determine its organisational principles, values and standards relevant to the digital decisions being made by AIDA systems. It then has to operationalise those values alongside a framework of accountability.

The problem is that while many insurers have put together statements on purpose and values, far fewer have put them into operation to the extent that they have a material influence on strategic and operational decisions. This is what I call the gross/net gap – the difference between what firms say on paper that they are doing, and what they are actually doing in practice.

I believe that there are only a handful of UK insurers whose ethical values are actively influencing digital strategies. Sure, many insurers recognise that AIDA systems create problems around discrimination, but many are responding reactively rather than proactively. Others are just playing a waiting game so as to be told what to do.

In conclusion, this focus on ‘organisational principles and values’ should be a strength of the fairness methodology, but in practice it will act more as a weakness. To be brutally honest, many insurers are just not used to working in this way, to making hard decisions based around those values and principles.

Culture

A considerable hurdle that this fairness methodology will face is that of culture. To paraphrase a well-known saying, ‘culture beats methodology, every time’. And surprisingly, the methodology barely recognises culture, let alone addresses it. This is the only reference of any substance to culture:

“Ensuring the working environment, culture and incentives do not drive behaviours that do not align with fairness best practice (e.g., reducing decision fatigue by ensuring appropriate working practices and timelines during development)”

It’s good that the methodology does recognise undue pressure as an issue to be addressed, but overall, in terms of the different ways in which culture can drive forward or hold back the tackling of an ethical issue like fairness, the lack of attention to culture is a significant flaw.

Both the fairness and the ethics and accountability methodologies see culture as a feature of an organisation ready to serve the strategy to be delivered. What they don’t recognise is how it can just as often influence the delivery of a strategy in ways that suit and sustain that culture.

In conclusion, even for a methodology of this scope and depth, not addressing culture is a curious omission that needs to be addressed.

Perspectives

This methodology has fallen into the trap that big firms and big consultancies tend to fall into, of assuming they know what customers’ problems are and what solutions are needed. This gives the methodology a top down feel – something along the lines of ‘we know what is best and just let us get on and do it’.

Thinking you know what customers want may feel like an easier option than going out and asking them what they want, but it is also the much poorer option. I found that out when reviewing customer service KPIs for Europe’s biggest motor fleet. It turned out, after asking users what they wanted, that the main KPI that the fleet wanted to adopt was ranked 22nd by its customers, while their first and second priorities were not yet being measured.

A methodology put together without meaningful input from consumers and/or their representatives will always be at considerable risk of fitting the needs of business more than the needs of consumers. And with issues like fairness, ethics and accountability, that risk could undermine a lot of the good intentions behind the MAS project. After all, fairness principles only do what they say on the tin if they emerge out of a joint venture.

In conclusion, the methodology is in great danger of delivering for business but not for consumers. Without the latter’s input, it will struggle to find acceptance amongst the very people it seeks to benefit.

Costs

This imbalance in perspectives also comes across in how costs are treated. Take this typical statement…

“It is important that the level of FEAT fairness assessment is proportional to the fairness risk of any use case as these assessments will need to be implemented with additional costs, which ultimately get passed on to consumers.”

The focus here is on the costs to the business of doing the assessment. And while I’m not advocating that an assessment should be undertaken at any cost, I am saying that the costs to the consumer of an assessment not being done should also be included in such a weighing up.

Estimates of such costs to the consumer are often available for this or that fairness issue. We’ve seen them here in UK insurance in both the loyalty penalty and ethnicity penalty campaigns by Citizens Advice. And remember that these are often only the obvious costs. The financial impact from any diminution in trust should also be considered, even if only in the round.

In conclusion, the methodology needs to factor in both sides of the ‘costs coin’ in order to reach balanced conclusions.

Flexibility

The methodology carries surprising levels of flexibility for users. Even fairness can be redefined on a case-by-case basis. Terms like ‘minimise’ and ‘reasonably necessary’ raise the question of what exactly is going to happen.

Of course, such flexibility avoids the risks of a ‘one size fits all’ approach. On the other hand, it lends the methodology a vagueness about what will actually be delivered and what its users can actually be held to account on. In other words, how will one know if the output is good, medium or poor? The danger is that firms will cut the cloth of their fairness programme to suit their own needs, rather than the consumer’s needs.

With discrimination, there’s some track record from court cases for what things like ‘minimise’ and ‘reasonably necessary’ mean. With fairness, there’s a lot less. In conclusion, it’s up to methodologies like this to take the initiative and be clearer on such terms. It feels like that opportunity has been missed here.

Not a Neutral Landscape

While the methodology is carefully structured and broad in its scope, it also lacks any perspective on dealing with existing consumer attitudes to how insurers collect and use data. Those attitudes are not very positive at the moment, at least here in UK insurance, so methodologies like this are never going to be introduced into a neutral landscape.

I suspect that insurers would find a methodology that was smaller in scope and depth, but more able to settle into and around existing issues, to be more useful.

More than One Fairness

I was quite astounded by this paragraph early on in the methodology:

“There are many notions of fairness, including basing this on equality or need. Notions of fairness continue to be debated in both moral philosophy and wider society. As one notion of fairness may conflict with another, fairness is an “essentially contested” concept. Therefore, although the general nature and importance of fairness are widely understood, its precise definition and what constitutes fair or unfair outcomes, should be defined in a given context (i.e., fairness is use-case specific).”

Sure, fairness is a complex thing, but it is not something that should be defined by a firm on a case-by-case basis. The conflicts of interest are obvious.

The trap that the methodology has fallen into is thinking that there is only one fairness. And by thinking that only one fairness exists, it’s not a big step to thinking of it as having to be redefined on a case-by-case basis.

Fairness is not a unitary thing. It has several dimensions. There’s the fairness of merit that actuaries and insurance people are familiar with. Then there’s the fairness of access and the fairness of need that many of those different perspectives I mentioned earlier are familiar with.

Two more can be added into this mix. Firstly, the fairness of time, that recognises that the timeframe over which you look at fairness influences the conclusions you draw from it. And secondly, what I call the fairness of crowds, that recognises that we often don’t judge fairness on the individual basis that economists would like us to (and this is what I’m researching at the moment).

Standing back and seeing how these different dimensions of fairness interplay is called the equality of fairness. And it is the mix of dimensions within that equality of fairness that the methodology should have recognised. How that mix is derived happens, firstly, on more of a case-by-case basis and, secondly, through ethical dialogue with those different perspectives I mentioned above.
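
To make that concrete, here is a small, hypothetical example of my own (it is not taken from the methodology). The same set of decisions can look fair on one common statistical notion, equal approval rates across groups, and unfair on another, equal approval rates among those who merit approval.

```python
# A hypothetical example of my own, not from the MAS methodology: the same
# decisions satisfy one statistical notion of fairness and breach another.

# Made-up applicants: group, whether they 'merit' approval, and the decision made.
applicants = (
    [{"group": "A", "merits": True,  "approved": True}]  * 40 +
    [{"group": "A", "merits": True,  "approved": False}] * 10 +
    [{"group": "A", "merits": False, "approved": True}]  * 10 +
    [{"group": "A", "merits": False, "approved": False}] * 40 +
    [{"group": "B", "merits": True,  "approved": True}]  * 50 +
    [{"group": "B", "merits": True,  "approved": False}] * 30 +
    [{"group": "B", "merits": False, "approved": False}] * 20
)

def approval_rate(records):
    """Share of the given records that were approved."""
    return sum(r["approved"] for r in records) / len(records)

for g in ("A", "B"):
    in_group = [r for r in applicants if r["group"] == g]
    merited = [r for r in in_group if r["merits"]]
    print(f"Group {g}: overall approval {approval_rate(in_group):.2f}, "
          f"approval among those who merit it {approval_rate(merited):.2f}")

# Both groups are approved 50% of the time (equal access), yet among applicants
# who merit approval, group A gets 0.80 and group B only 0.62 (unequal on merit).
# Optimising one notion alone quietly sacrifices the other.
```

Neither notion is the ‘right’ one. The point is that the mix has to be chosen consciously, and with those different perspectives around the table.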

Beyond Privacy

Another way in which this methodology feels uneven is in the focus it puts on privacy. Sure, privacy is important, and so is how personal data and attributes are collected. What is, in my opinion, of equal importance is how all that personal data is interpreted, and how that interpretation is then gauged against the context in which it is being made.

So for example, the interpretation of data in a claims or counter fraud situation is a very different thing to the interpretation of data in an underwriting or marketing situation. We can bring in here the debate about causality and correlation, and the question of how the power of interpretation is being handled. It’s obviously a big topic, but it’s also one central to an understanding of fairness.
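
As a toy illustration of my own (again, not something the methodology contains), a variable can correlate strongly with claims outcomes purely because it tracks something else entirely, and whether it is fair to act on that correlation depends heavily on the context in which the data is being interpreted.

```python
# A toy illustration of my own, not the methodology's: a scraped 'lifestyle
# proxy' correlates with claims cost only because both are driven by mileage.
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical setup: annual mileage causes claims cost; the proxy merely tracks mileage.
mileage = [random.uniform(2_000, 20_000) for _ in range(1_000)]
proxy   = [m / 1_000 + random.gauss(0, 2) for m in mileage]    # no causal effect on claims
claims  = [0.05 * m + random.gauss(0, 150) for m in mileage]   # driven by mileage alone

print(f"proxy vs claims correlation: {pearson(proxy, claims):.2f}")  # strong, despite no causal link
# In underwriting, such a proxy may be a tolerable (if debatable) predictor; in a
# claims or counter-fraud decision about an individual, treating it as if it said
# something causal about them is a very different interpretation of the same data.
```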

Big firms and big consultancies tend to lean towards privacy because of the legislative framework that surrounds it. In legislative terms, fairness is a very poor relative. That however doesn’t mean it deserves any less thorough a treatment. This article on logical relatedness is a good place to start.

Fixing Inclusion

There’s a growing tendency in initiatives like this to talk of data driven approaches bringing…

“…benefits to society through the creation of innovative products and services that support affordability, ease of access and the quality of the end service, while also helping to bridge financial inclusion and protection gaps.”

That’s a bold claim and while there are cases where inclusion and affordability have been improved, I believe they are more patchy than the above quote would have us believe. There is certainly a tendency to pay far less attention to cases where inclusion and affordability have suffered.

What we need then is independent research of the right quality, to give us a better picture of what is going on. One example that this could draw upon is the findings of the FCA’s review of price walking in UK general insurance. In their final report, in the context of industry claims that price walking widened access, the FCA stated that their research had found no evidence of this.

An Uncertain Future

Looking forward, the success of a methodology will depend on take-up, recognition and reputation. Firms will feel more positive about adopting a detailed methodology like this if they are then able to share their success with clients, investors, staff and partners. Such recognition will be an important part of the return on investment being sought.

Yet a look back at similar initiatives finds that within a few years, they reach a point where take-up needs to be encouraged, but the reputation of the brand needs to be maintained. Pressures emerge to loosen the reins a bit, to bring in more adopting firms. Existing firms then feel aggrieved, not wanting the reputation of the brand to be lessened, especially after having done the hard work to earn it.

What helps a lot in such situations is the availability of two things: a ‘getting started’ version, and a ‘dealing with problems’ toolkit. This improves the throughput of applicant firms and the retention of those finding it hard work. The analogy here is that the height of the ladder is only as good as the rungs that will get you up there.

Summing Up

The MAS fairness methodology is an attempt to address fairness in a comprehensive way. It suffers however from dropping too soon into the level of trees, and from not having spent enough time understanding the shape of the fairness landscape and the forests that define it. This makes it a work full of useful detail for firms to dip into, but lacking the overall grasp needed to deliver the big differences that were hoped for.

Too harsh perhaps? I hope not. I’ve met several of the authors and the first thing I would do on next meeting them would be to congratulate them on this work. It’s just that I would be equally attentive to their plans to refine it to overcome the issues that I have outlined here.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.