Medical underwriting is in the midst of a digital transformation. Data and analytics are opening up opportunities that underwriters are keen to exploit. Yet alongside those opportunities lie ethical risks. Handle those ethical risks well and that transformation will secure public confidence in medical underwriting. Handle them badly and social and political pressures could kick the legs out from underneath that transformation.
The challenge for medical underwriters is twofold: first, gauging the extent and complexity of those ethical issues; and second, transforming their businesses in ways that deliver the outcomes that secure that social and political support.
It is important to understand that what might be ethical risks in the short to mid term can become business opportunities in the long term. So while many insurers see digital transformation as central to their competitive differentiation, some are looking beyond that, to the time when digital transformation is delivering only marginal advantage. Those insurers want to know what they can do now to drive their competitive advantage through and beyond digital. The answer their research has produced is simple: it’s to transform in ways that secure public trust.
As a result, those insurers are looking now at how their digital strategies should be reconfigured. Instead of trust being an assumed output, they are positioning trust as the central outcome that their digital strategy has to deliver. Hardly surprising then that some insurers now have Chief Data Ethics Officers.
And the thinking of those insurers is gaining traction. Consider what PwC’s 2020 Global CEO survey had to say: “…insurance CEOs are increasingly concerned about public trust. Without winning this trust, it will be difficult for insurers to bring their expertise to bear.”
So all that expertise in medical underwriting and in digital transformation will count for little if it does not secure public trust. Leaving this to chance seems like a risky move, and even more so given the questions starting to be asked within the investment community.
The reality is that the digital transformation of insurance is not taking place against a blank canvas. There is a clear social and political backdrop, on which ethical questions are being raised and expectations taking shape. Engaging with those ethical questions will help secure trust in medical underwriting.
This analysis will help medical underwriters to understand more about the ethical landscape on which their portfolios are located. It is not a map of that ethics landscape, but more like a travel guide, highlighting the main issues and giving examples of the challenges that will need navigating.
The analysis is in three parts:
- key ethical risks
- core data ethics elements
- structural ethical issues
Part two (the core elements of data ethics) includes examples of some of the significant ethical challenges that medical underwriting is likely to encounter.
Part 1 - The Key Ethical Risks
The following four ethical risks are recognisable landmarks on that ethical landscape. They should as a matter of course be recognised not only within any ethics programme, but within digital strategies as well. How insurers then navigate in relation to them will influence the trust that consumers will have in the products and services on offer.
While ethical risks like privacy may seem familiar, it is important that medical underwriters ‘think outside of the box’ on them. These are ethical risks for which assumptions and boundaries are evolving, and underwriters would do well to challenge their thinking from time to time.
Privacy
While the GDPR transformed the attention paid to privacy, it also focussed attention onto the regulation’s specific requirements, and increased the cost of handling health data. This spurred interest in its regulatory boundary, and saw law firms guiding clients on how to stay inside, or stay outside, of it.
One response has been the greater use of inferential analytics, in which personal data is avoided and affinity profiles used instead. For medical underwriting, this means that decisions about a sensitive matter like health are being made without use of sensitive data about health. This raises obvious ethical questions.
And as health programmes become more holistic, the data they seek to draw in becomes much wider. This makes consent a more complicated matter. For example, how generic or specific should consent be to access holistic data upon which to base decisions about sensitive health issues? The danger here is of drift into unethical practices.
The ICO’s auditing framework for AI will put practices like these under close scrutiny. Three obvious questions will be raised:
a) were ethical questions recognised;
b) how were those questions resolved;
c) were the outcomes fair?
Add in the accountability requirements of the SMCR, and the importance of weighing up those questions properly only mounts.
The sensitivity, both at law and in the public’s minds, of health data is something that medical underwriters have to match with careful consideration not just upon what can be done, but what should be done.
Identity and Autonomy
Insurers are using data and algorithms to assemble identities for customers, drawing on myriad sources: shopping, travel, and social media, for example. As all data is historic and a product of its time and context, those decision systems are therefore using not accurate data about the insured person, but historical data for a proxy of that person. This raises ethical questions about the outcomes that those decision systems can produce in the real world.
The wellness trend in life and health portfolios, and insurers’ access to proliferating data sources, means that a tension is forming between an insurer’s right to know and the person’s right to autonomy. So when someone exercises in a non-data way, or shops in a non-data way, does this result in consequences that lead to higher costs or lower cover? Are people being penalised for living the way they want, rather than how an insurer would prefer them to?
The market argues that insurance is a free market with lots of choice, including not to participate. That however sits uncomfortably with the extent to which medical underwriting has embedded itself into important life events. This then attracts ethical and political questions about what the market is there to deliver, and how well it is meeting the public interest.
Alignment of Interests
Insurance has at its heart a permanent and significant conflict of interest, between the interests of the insurer and the policyholder. How that conflict of interest is managed influences many of the ethical challenges outlined in this paper. Unless those two interests are maintained in some form of alignment, it will be nigh impossible to secure public trust.
Regulations require that insurers put the interests not of ‘customers per se’ first, but ‘each customer’ first. Yet anticipated group level benefits can sometimes be used to ‘under address’ that permanent conflict of interest at an individual level. In between those group and individual level perspectives lie many grey areas, in which medical underwriters need to find a consistent and ethical way forward.
It’s important then that data and analytics are used in ways that support rather than undermine the alignment of those insurer and policyholder interests. This should form part of product and delivery design, and then be embedded into the decision systems that make this happen.
Conflicts of interest can sometimes feel like a wearying issue to deal with. This period of digital transformation is however the moment when fully engaging with them will reap the most reward.
Discrimination
The issue of algorithmic bias is now well known, and every insurer should be taking steps to address it on a systematic basis. This means that the usual corporate focus on discrimination as a workplace issue must widen into one that encompasses consumers too, on both an individual and a group basis.
For medical underwriters, there are particular challenges to this. Doctors have been following strict guidelines around ethnicity and race for many years, and medical underwriters, especially those seeking access to electronic health records, will be expected to do the same. This means data and algorithms being designed, trained and tested to similar standards. Delivering on that is the challenge, especially against the backdrop of SMCR.
A discrimination prevention strategy with the scope to identify and control this issue is now the norm, as is a governance structure that gives it strong support even if it were to disrupt digital strategies. Consider the FCA’s reassurances to the Treasury Select Committee in February 2019 that it had the resources and expertise to identify where discrimination in insurance was occurring. What it was signalling were supervisory technologies that draw on data and analytics every bit as capable as those used by underwriters. The monitoring of discriminatory outcomes is going to be a regulatory priority over the next few years.
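Systematic monitoring of outcomes can start very simply. The sketch below, using invented group labels and decision data, computes approval rates per group and flags any group falling below four-fifths of the best-performing group’s rate. The four-fifths threshold is a common first screen for disparate impact, not a legal test, and real monitoring would need far more than this:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold`
    times the best-performing group's rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Invented illustrative data: group A approved 80/100, group B 50/100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
rates = approval_rates(decisions)
print(rates)                          # {'A': 0.8, 'B': 0.5}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

A screen like this only surfaces a disparity; judging whether the disparity is justified is exactly the governance question the paragraph above raises.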
Part 2 - The Core Data Ethics Elements
The three core data ethics elements are data, algorithms and practices. Each are outlined below, with examples of some significant ethical challenges that medical underwriting is likely to encounter.
Data
This element focuses on the ethical issues raised by the collection of large datasets and the uses to which they are put. It covers themes like bias, anonymisation, minimisation and proportionality. Outlined below are four scenarios where these types of data ethics risks will be prominent.
Training and Testing Data
Data is often presented as factual and objective, but in reality, it rests upon a series of choices. What data you collect is a choice, as is the extent to which you recognise the context from which that data emerged. The way in which one dataset is then aligned and connected with other datasets involves further judgements. And how you then weigh up the significance of a piece of data involves subjective assessments about what is high or low, acceptable or excessive. Then there’s the question of how representative a piece of past data is for your present-day decision. Data is therefore far less objective than people think.
An increasing proportion of data is unstructured. The processes for cleansing, sorting and labelling such data involve judgements about what attributes and variables will be counted and which ignored. This makes such datasets partial, delineated by many subjective decisions. And as for much of the content that people post on social media, objectivity is not the word that first springs to mind.
The data you use for the training and testing of your underwriting algorithms is full of ethical dimensions. How those are recognised and factored into design processes is important.
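A minimal illustration of that subjectivity, using invented records and two equally defensible labelling rules: the same applicants come out as ‘higher risk’ or not depending purely on which rule the dataset’s designer happened to choose. Everything here, fields and thresholds alike, is hypothetical:

```python
# Invented applicant records; the fields are illustrative only.
records = [
    {"bmi": 31, "smoker": False, "gp_visits": 1},
    {"bmi": 24, "smoker": True,  "gp_visits": 6},
    {"bmi": 27, "smoker": False, "gp_visits": 4},
]

def label_a(r):
    # Labelling choice 1: 'higher risk' means BMI over 30, or smoker.
    return r["bmi"] > 30 or r["smoker"]

def label_b(r):
    # Labelling choice 2: 'higher risk' means frequent GP visits.
    return r["gp_visits"] >= 4

print([label_a(r) for r in records])  # [True, True, False]
print([label_b(r) for r in records])  # [False, True, True]
```

Train a model on labels from rule A and you get a different ‘objective’ system than one trained on rule B, even though the underlying records never changed.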
Data from Genetic Testing
For 23 years, the UK Government has had some form of agreement with the insurance sector on access to and use of genetic testing results. There are signs however that the current ‘Code’ could soon be ‘re-engineered’. Its scope now looks imbalanced against the digital realities of today’s market, and the exploration of proxy data is undermining its relevance. Over the next few years, some form of change is likely.
Such change will undoubtedly come under political and social scrutiny. The debate that took place 23 years ago was about possibilities. Now, the debate will be about actualities. What we can say about how that debate will roll out now is twofold: firstly, the insurance sector has changed a lot over those 23 years, and secondly, public attitudes towards the use of genetic testing data will probably have changed very little.
It will not be an easy debate. The problem for the sector is that significant aspects of its case are based on market attitudes and sentiments – ‘this is how we do things, this is the way the market is going, and this is what we need’. Independent research and scrutiny of those concerns and expectations is patchy. This means that the forthcoming debate about genetic data will be tough, and the market needs to be prepared.
Data on Mental Health
One in four people in the UK will experience some form of mental health issue in their lifetime. The impact of this on families, jobs and life outcomes is therefore significant. Greater awareness of this from within the insurance sector has resulted in two things: firstly, support frameworks for both staff and customers, and secondly, research into data sources for how and when mental health issues present.
Such research encompasses both current and predicted episodes: for example, what does the way in which you smile say about your future mental health? This is hugely controversial, not just at the operational level, but at the scientific level too.
Mental health initiatives enjoy huge political support. Insurers exploring how to digitise the identification and interpretation of mental health within medical underwriting models must therefore recognise and prepare for the social and ethical concerns that are so obviously associated with this. Market imperatives are unlikely to carry weight in the anticipated debate. Insurers need to challenge themselves before they experience the wider challenge that lies ahead.
Access to Electronic Health Records
Giving insurers greater access to electronic health records will benefit both consumers and insurers, at the point of both inception and claim. Yet it also feels like a double-edged sword. How competent will medical underwriting algorithms be at handling such data? Can underwriting people (or systems) match the level of skill and competence of the medical professional who wrote those records?
Insurers could point to how AI systems can sometimes be better at diagnosing than doctors, but those are often point events, not an all-encompassing record of someone’s health journey. And insurer systems may have sophisticated mathematical models requiring lots of high quality data, but not all health situations and not all people can offer up such data. How will their records be judged?
Will the insurer’s handling of those electronic health records be done with the patient’s interests foremost? It’s both the basis upon which the doctor created such records in the first place, and what regulations require of insurers. And this is required on a per insured basis, not at some generic group level of interest. The problem is that public trust in their interests being put first is pretty different between the medical and insurance professions at the moment. Unless that gap can be bridged, the case for access to electronic health records will meet resistance. And given that most digital transformations will need such access, the need to properly address the ethical issues is only going to build.
Algorithms
This core element of data ethics covers the ethical issues relating to algorithms and the uses to which they are put. It covers themes like training, transparency and explainability, and has implications for topics like discrimination, vulnerability and inclusion.
Transparency and Explainability
That transparency and explainability are core ethical concerns for how data analytics are used in financial services is amply demonstrated by it being the first joint research project to be initiated by the FCA / Alan Turing Institute partnership. Their findings will address both the ‘what and how’ and the ‘to whom’ questions that transparency and explainability raise.
That research will produce findings designed to meet the expectations of external stakeholders like customers, partners, investors and regulators. But what the FCA will also be interested in is how transparency and explainability work internally. How does a leadership team, or a firm’s board, know that the data and algorithms are working within regulatory expectations, and how is this being evidenced? The FCA’s expectations around this are going to be pretty unequivocal.
Data science researchers at the Alan Turing Institute will be telling the FCA that pure transparency and explainability are neither possible nor desirable. Instead, the focus will be on the knowledge and skills that underwriters can demonstrate both having and applying, to establish the appropriate level of transparency and explainability in the products they’re responsible for.
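One practical route to an appropriate level of explainability is to favour decision systems whose outputs decompose into per-feature contributions. Below is a minimal sketch of an additive score that explains itself; the feature names and weights are entirely hypothetical, and a real underwriting model would be far richer:

```python
# Hypothetical weights for a transparent additive risk score.
WEIGHTS = {"age_band": 0.5, "smoker": 1.5, "bmi_band": 0.25}

def score_with_explanation(features):
    """Return (total score, per-feature contributions) so that every
    output can be traced back to the inputs that produced it."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation({"age_band": 3, "smoker": 1, "bmi_band": 2})
print(total)  # 3.5
print(parts)  # {'age_band': 1.5, 'smoker': 1.5, 'bmi_band': 0.5}
```

The design choice being illustrated is that explanation is built into the model’s structure, rather than bolted on afterwards, which makes the ‘to whom’ question easier to answer for boards, regulators and customers alike.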
The Quality of Algorithms
Some people think of algorithms as mathematical tools that produce objective results. What those people are missing are the very human aspects of how algorithms are designed, trained and applied. It is for such reasons that the increasing use of algorithms in medicine has led to the adoption of evaluation frameworks for ensuring that their qualities are understood and managed. Such frameworks can look very simple, but their strength lies in how thoroughly systems have to be weighed up against them.
The algorithms upon which medical underwriting is progressively relying need to be subject to similar evaluation frameworks. I’m sure many underwriting algorithms will be subject to due diligence, but such processes are usually orientated around corporate rather than customer interests, and rarely touch on ethical issues. To be trusted, such algorithms need to be assessed to a standard equivalent to those being used in medicine itself.
The tasks to which those algorithms are put can be broken down into four categories: descriptive (what happened?); diagnostic (why did it happen?); predictive (what will happen?); and prescriptive (judging what should happen and making it happen). Each of these categories sits within an ethical context, with the ethical questions predominantly arising in the predictive and prescriptive categories. For example, what are the criteria upon which predictions are being generated by those algorithms? How are they respecting ethical concerns such as autonomy and fairness? How is the appropriateness of their use in different contexts being assessed and controlled? And who is asking these questions, and judging the responses?
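Those questions can be made operational by attaching them to each task category in an algorithm register. The register structure below is purely illustrative; the categories and the questions themselves come from the discussion above:

```python
# Illustrative algorithm register: each task category carries the
# review questions that should be answered before deployment.
REVIEW_QUESTIONS = {
    "descriptive":  ["What happened, and is the data behind that answer representative?"],
    "diagnostic":   ["Why did it happen, and are the inferred causes sound?"],
    "predictive":   ["On what criteria are predictions generated?",
                     "Do those criteria respect autonomy and fairness?"],
    "prescriptive": ["Who judges what should happen?",
                     "How is use in different contexts assessed and controlled?"],
}

def review_checklist(category):
    """Return the review questions for a task category (empty if unknown)."""
    return REVIEW_QUESTIONS.get(category.lower(), [])

print(len(review_checklist("Predictive")))  # 2
```

The point of such a register is less the code than the discipline: someone is named to ask the questions, and someone to judge the answers.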
There’s nothing wrong with algorithms being used to make predictions. The ethics lie in how you configure them, deploy them and manage them. They are powerful instruments that need to be carefully managed against ethical as well as operational and financial criteria.
Emotional AI
Much of the data being used by insurers tells them what, when and where we do something. AI is now using voice and image data to tell the insurer why we did what we did, and how we felt when we were doing it. This is opening a window into our emotional lives.
And that is only the beginning. Insurers are funding research into how AI can use voice and image data to predict our future emotional states. In other words, the direction in which our mental health is heading. Great if that’s used to guide and support that person to a better mental place. Controversial if it’s used to influence medical underwriting decisions.
Bear in mind as well that research into emotions is hugely problematic. The academic community is split on the fundamental question of how emotions present. Yet the insurance community is seen to have firmly nailed its colours to just one side of that debate, the one most easily quantified and processed through data and algorithms.
The use of emotional AI is an inevitable development for those insurers orientating their portfolios around wellness. Yet that inevitability also holds true for the ethical questions that are already emerging, which medical underwriters need to both understand and address.
Practices
A lot of the ethical issues associated with data and algorithms are framed by the practices within a firm. This covers, for example, the leadership given, the capabilities built and the culture that develops. And from these come the influences that shape the design of products and services. Out of this have emerged some important themes, two of which are examined here.
Accountability
Demonstrating accountability during periods of digital transformation has been the subject of much academic research. And as one of the main researchers is now the FCA’s lead adviser on data ethics, we can expect this to feature at some point in the FCA’s SMCR monitoring.
Given the sensitive nature of health data and the complexity of decision making systems, such accountability will be a challenge to organise and maintain. Yet it will so obviously be a matter the FCA will want to see firms address. They will expect heads of life and health underwriting to be working on two levels. Firstly, knowing the issues, asking the right questions and judging the answers received. And secondly, recognising the accountability challenges often found in digital, and demonstrating how they’re being addressed.
Of all a (re)insurer’s lines of business, life and health is the one that is going to be pushed hardest on this, for its influence and impact are so widespread. Yet perhaps the most fundamental issue here is understanding what exactly the underwriter is accountable for. This is not just what they’re familiar with, but what is expected of them, from communities through to investors.
Underwriting at Claims Stage
It’s good to make it as easy as possible for consumers to take out policies. The sector is seeking to ask consumers just a few identifying questions and leave the rest to big data and clever analytics. Yet such an approach raises ethical questions. Leaving the questions out of underwriting means that they’ll then have to be addressed at the claims stage. This may seem efficient, but it could also undermine other things that insurers need to achieve, such as positive, trustworthy engagement with customers at the point of claim.
Consider what Professor Terras of Edinburgh University’s Futures Institute said at an Alan Turing Institute lecture in 2019: “All data is historical data: the product of a time, place, political, economic, technical, & social climate. If you are not considering why your data exists, and other data sets don’t, you are doing data science wrong.”
What this points to are ethical questions about weighing up underwriting data at the point of a subsequent claim. The health related data that a typical insurer may gather about a typical consumer is not only incomplete, but by the time a claim is made, out of context and out of date as well. Furthermore, gathering data in relation to a ‘one-to-many need’ (underwriting) at a ‘one-to-one event’ (a claim) feels like trying to fit a square peg into a round hole.
Accelerating underwriting in this way may seem like a positive step to achieve consumer engagement and reduce underinsurance. It needs however to be weighed up against an equally significant downside, that ‘if you have to claim, we will then seek out lots of detailed information we haven’t looked at before’. The ethical tensions this raises are obvious.
Part 3 - Structural Issues facing the market
The digital transformation underway in insurance has, by the inherent nature of any big change, resulted in more than just new products and services. It has caused many long established practices to be questioned as well. This is good, so long as the ethical knock-on effects are addressed too. Here are four areas in which ethical concerns are being raised.
Personalisation
Many of the ethical questions relating to data and algorithms have their origins in the sector’s move towards personalisation. And the personalisation narrative is seen to now define the future of insurance. Yet it is also a narrative that sits increasingly uncomfortably within a wider debate about fairness and inclusion.
What insurers may not realise is that that wider debate about fairness and inclusion is now no longer confined to consumers traditionally viewed as vulnerable. It is a debate now orientated around what the CMA’s CEO referred to in 2019 as ‘the new vulnerable’: “This is not just people who are vulnerable on well-understood indicators: those who might be old, or on low incomes. It includes millions – perhaps even the majority – of the population, many of them ‘time poor’. They - us - are the “new vulnerable”. We are all vulnerable now.”
Personalisation is bringing about fundamental changes in insurance, but unless the sector orientates those changes to respect the fundamental concerns falling under the broad heading of ‘fairness and inclusion’, it will find itself on a collision course with the regulatory and political establishments. That is what the CMA was signalling.
An Actuarial Lens
Actuarial thinking has been hugely influential in the reshaping of products and services happening in life and health insurance at the moment. Yet there is also a danger that this trend could end up making insurance too rigid, too distant, too inexplicable. Yes, products may seem more flexible, insurers may feel closer to their customers and user experience may look enhanced, but in reality, the lens through which the sector sees its customers is ever more one of data and analytics.
This actuarial lens will inevitably be incomplete (humans do not live by data alone) and obscuring. It is a lens that emphasises proximity (the insurer moving closer to the customer), but ignores intimacy (the customer moving closer to the insurer). The difference between proximity and intimacy is trust, and without trust, the ultimate success of any digital revolution will be in doubt.
Data and analytics are not inherently trustworthy. For the transformation of medical underwriting to succeed, actuaries must explore how to build up that trustworthiness. There are signs of this happening, but unfortunately, the wrong way round. The emphasis has been more on what the public should do, and less on what the profession should do. Digital revolutions in insurance are neither natural nor inevitable. To make success more likely, actuaries must question what they still need to rethink, and where they still need to do more.
Retreat from Insurance
The language of loss prevention embodied in wellness products and services strikes many good chords. Why wait until the loss, when you might prevent it in the first place, goes the narrative. The danger in this narrative is that it also signals a restructuring of the market, away from being a provider of indemnity products and towards being a provider of life services. Reinforced by the atomised levels of selectivity that data and analytics provide, wellness then resembles a pre-qualifying system for insurance.
What this feels like is a retreat from insurance. So while data and analytics are often talked about as ‘disrupting’ insurance to new levels of provision, they could well achieve, for many people, quite the opposite. Provision could become hyper selective, as insurers focus instead on automated wellness services. For some, this will feel less like being nudged towards wellness and more like being coerced away from life and health insurance.
The problem for the sector will be that losses still happen, for life largely carries on as before. An ‘insurance’ sector doing less insurance will inevitably attract more and more questions of a political and social justice nature.
Behavioural Fairness
A key rationale underlying the wellness trend is behavioural fairness. In other words, we’ll support you making healthier choices, but if you don’t make them, then you pay more, perhaps even lose cover. Yet if you think ahead to where this trend takes medical underwriting, the narrative of individual responsibility is a selective one. The individual may be taking on financial responsibility for their own health, but that will be influenced by the extent to which that responsibility can actually be exercised. Where that’s not possible, then the selectivity in that narrative could actually lead to more exclusionary outcomes.
Some academic researchers into behavioural fairness have suggested that it could put insurance on a dangerous trajectory. Just as society is increasingly embracing difference, could life and health underwriting be emphasising it?
The sector’s use of behavioural fairness has its origins in the difficulties around actuarial use of protected characteristics. The irony however is that the sector has essentially leapt from the frying pan into the fire. Behavioural fairness relies heavily on devices, data and analytics. In doing so, it picks up a lot of the ethical questions about what is tracked, how it is interpreted and how it is leveraged. Such questions are just as full of ethical issues around protected characteristics as ever before.
Challenges create Opportunities
The digital transformation of medical underwriting does create ethical challenges. Many of them are understood and acknowledged. How best to address them, and to what extent – that is often less clear and agreed.
Underlying these ethical challenges are structural issues like personalisation and behavioural fairness that raise fundamental questions about what insurance is there to achieve. As debate both within and outside of the sector explores these issues, the personal nature of medical underwriting will make it the case study par excellence.
Is medical underwriting approaching a crossroads? On the one hand, its digital transformation is already underway. The threat of tech competitors entering the market is already being realised. The logic of personalisation and behavioural fairness is firmly embedded into digital strategies. Yet on the other hand, regulatory requirements are unchanged. If anything, fairness and inclusion are more firmly embedded into regulatory strategies than ever before, and regulators better equipped to supervise them. So what should medical underwriters do?
The starting point is to look beyond that crossroads to where medical underwriting wants to get to, and to then work back to the choices that will shape that journey. Those choices are not ones for underwriters to make on their own though. If trust in the sector is to be secured, consumers will want to have their voice heard too, and to see that voice having an influence. I’ve seen some insurers start along this path, looking at digital as a step rather than a goal.
Remember that while personalisation and behavioural fairness may be central to many a digital strategy, they are in large measure orientated around an underwriting agenda. Nothing wrong with that, except that they will be configured to reflect the sector’s needs. As the earlier quote from the PwC 2020 CEO survey noted, they will count for little unless the public trust insurers enough to engage with them.
The sector’s engagement with ethical issues like those outlined in this analysis has to be done in ways that build trust. In other words, it must be done in ways that demonstrate trustworthiness, the four components of which are competency, reliability, honesty and goodwill.
Get that engagement right and the public will trust insurers. That trust will allow customer intimacy to replace digital proximity. And from this will come greater adoption, greater utilisation and better outcomes. The widest possible number will benefit to the widest possible extent, and that is a yardstick absolutely central to regulatory thinking.
From Looking Forward to Moving Forward
The move from ‘thinking about’ an ethical situation, to ‘doing something about’ an ethical situation, is more significant than most people think. Hurdles like cost, competition and complexity deflate expectations and stall momentum. How then can this be avoided or reduced in relation to the ethical challenges facing medical underwriting?
Two things will help with this ‘thinking to doing’ step. Firstly, be clear about where you want medical underwriting to get to. And by clear, that means a mapped out scenario built around the factors that will deliver future business success. This gives you a purpose to which people can sign up, both within and outwith the sector, and to keep in sight when those hurdles crop up.
And secondly, have a platform upon which medical underwriters can have open and honest debate about how to get to that future. Perhaps AMUS is that platform, or perhaps a special purpose platform set up to weigh these considerations.
Given the ever onward nature of digital transformations, what matters now is for medical underwriting to embrace a future that encompasses all of these social, ethical and technological complications. That after all is what distinguishes a true paradigm shift from just a lot of disruption.