The loyalty penalty was already well established when FCA research in 2015 found that it produced an average price differential between new and existing customers of 70% at five years. And remember, that was the average – some insurers were much higher, some much lower.
So when Citizens Advice delivered its super-complaint in 2018, there were already signs of the market beginning to move on to more sophisticated techniques. This meant that the regulator ended up looking into something that was approaching its use-by date, and not the more modern pricing techniques that were being introduced.
Now that those new pricing techniques seem to be gaining wider traction, it’s worth looking at the implications they have for insurers and their customers. So we’re talking here not about the end of the beginning, but the beginning of the middle. What the beginning of the end phase looks like is something I’m researching at the moment!
Pricing around the Ban
What I find interesting about the new pricing techniques is that they seem capable of circumventing the pricing ban currently being rolled out across renewals. Indeed, I believe the pricing ban has both speeded up the implementation of those new pricing techniques and resulted in their being fine-tuned in response to this new regulatory environment.
Some will of course say something along the lines of “what’s the problem - insurance is a private market”. And yes it is, but it is also a market that needs to secure and sustain the trust of the insurance buying public. And I’m not sure this current phase in its development will do that. Sure, it’s clever, but that’s always been only a partial measure of success.
So in this piece of analysis, I’m going to look at these new pricing techniques and how they would be applied to personal lines household business. I’ll then touch on the implications those same techniques could have for life and health markets, including group schemes. And then I’ll look ahead to, say, five years’ time, when their use will have matured. And I’ll end with some thoughts that insurers can use to weigh up how these techniques align (or not) with their strategies for customers, digital and so on.
Enrichment at Scale
Price walking was a pretty simple technique. The longer you were with an insurer, the more they would nudge the premium up at each renewal, at first to recover their introductory discount and then to build up portfolio revenue. This took time, which is why it was called lifetime value pricing.
That simplicity sat on top of some sophisticated profiling, used to establish whether you were the type that would respond or not to that annual nudge up of price. Nevertheless, the technique’s primary axis was time.
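To make that mechanic concrete, here’s a minimal sketch of how a walked premium compounds over renewals. The figures (a 20% introductory discount, a 12% annual walk) are invented for illustration and are not drawn from any insurer’s actual rating model:

```python
def walked_premium(base_premium, intro_discount, annual_walk, years):
    """Illustrative price walk: an introductory discount is offered in
    year one, then the premium is nudged up by a fixed rate at each
    renewal -- first recovering the discount, then building revenue."""
    premium = base_premium * (1 - intro_discount)  # year-one teaser price
    for _ in range(years):
        premium *= (1 + annual_walk)  # the annual nudge up
    return round(premium, 2)

# A 300 policy discounted to 240, walked up 12% a year, passes the
# undiscounted price within two renewals and keeps climbing.
print(walked_premium(300, 0.20, 0.12, 0))  # 240.0
print(walked_premium(300, 0.20, 0.12, 2))  # 301.06
print(walked_premium(300, 0.20, 0.12, 5))  # 422.96
```

The point of the sketch is the shape of the curve, not the numbers: the loyal customer ends up funding both the discount they once received and the portfolio’s margin.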
The new pricing techniques have pivoted away from the simplicity of time, towards the complexity of you as the policyholder and of your property as the asset being insured. Remember we’re using household business as the example here.
This pivot has been supported by the availability of a great many datasets which tell insurers something (perhaps significant, perhaps not) about your property. So this is data enrichment at scale (more here). We’re talking here not just about the fact that your property exists, in place X, but that it is a property with numerous features, used in such and such a way.
Examples of the features being collected in these many datasets include the property’s floor area, its rebuilding cost, what type of heating system it has and whether it’s listed.
Then there’s when it was last sold, whether it has an extension, or a new bathroom, kitchen or bedroom. Add to that the parking arrangements, the number of rooms, how they are used and past claims on the property (from prior owners) and on neighbouring properties. And let’s not forget information about construction, outbuildings, trees, watercourses and when it was built.
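To illustrate what a record enriched with features like these might look like, here’s a hypothetical sketch. The field names and the merge rule (external data only fills gaps the customer left) are my assumptions for illustration, not any vendor’s actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PropertyRecord:
    """Hypothetical enriched property record; field names are
    illustrative examples of the datasets described above."""
    address: str
    floor_area_m2: Optional[float] = None
    rebuild_cost: Optional[float] = None
    heating_type: Optional[str] = None
    listed: Optional[bool] = None
    has_extension: Optional[bool] = None
    prior_claims: list = field(default_factory=list)

def enrich(record: PropertyRecord, external: dict) -> PropertyRecord:
    """Fill gaps in the quote record from an external dataset,
    without overwriting anything the customer supplied."""
    for key, value in external.items():
        if getattr(record, key, None) is None:
            setattr(record, key, value)
    return record

# The customer gave almost nothing; the external dataset fills it in.
quote = PropertyRecord(address="1 Example Street", floor_area_m2=90.0)
enrich(quote, {"floor_area_m2": 92.0, "listed": True, "heating_type": "gas"})
```

Note that in this sketch the customer-supplied floor area survives while the listed status and heating type arrive silently from the external dataset, which is exactly the consent question explored later in this piece.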
So why are insurers buying data like this? There are two main reasons, which I’ll explore now, beginning with acquisition.
The Acquisition Conundrum
Insurers have faced a conundrum. The more ‘sophisticated’ their underwriting, the more data it relied on. Obtaining this from the customer would be a lengthy and incomplete business. Having to ask new customers dozens of questions wasn’t a good way to attract their business. Instead, insurers worked on obtaining that data (or something similar to it) from sources in the public domain and sources collated collectively by insurers.
This can be illustrated by Aviva’s ‘Ask it Never’ product, the aim of which was to never ask customers any questions. Presumably they’d need to ask who you were, but perhaps their voice recognition software took care of that! Don’t be too quick to consider that a jest!
This allowed marketers to position this new pricing trend as providing customers with a much improved service. Key benefits are described as a reduced quote time, improved accuracy and a better understanding of needs. As a result, the policy was said to be fit for purpose, with no more misunderstandings at time of claim. Their words, not mine.
There’s more than the usual level of wishful thinking in this marketing. After all, the accuracy of quote and the ‘no more claims misunderstandings’ are both premised on the data gathered by insurers being complete, representative and correctly interpreted. Not an easy thing when you’re talking about correlations and clusterings, especially in relation to claims.
A Tale of Two Perspectives
What it does do though is make acquisition much quicker and simpler. And while this matters to insurers, I’m not so sure it’s a top priority for consumers. Some surveys have found that consumers are concerned if they’re not asked what they consider to be enough questions about what they’re seeking insurance on. In other words, will they pay my claim if they don’t ask me questions about what I want insured? Here you can see the shadow of mistrust that consumers have with regard to insurers’ handling of data (more here).
What we can say here is that reducing the friction of taking out a new policy is something that all policyholders would warm to. And the same applies to making claims decisions more certain. Yet how this is done is more nuanced than insurers often like to think. Policyholders see claims certainty as linked to underwriting accuracy, with the latter meaning more than just lots of data. So what we have then is thinking on the part of insurers that has too much wishfulness in it. We need better research, conducted independently, to explore those nuances so that consumer trust is not eroded.
And what also needs to be addressed is the increasingly heard assertion that policyholders get a better service because all this data allows insurers to understand their needs. Here the insurer has to be careful to differentiate between the needs of the customer, the risk attached to those needs and their underwriting strategy. Insurers need to be more honest here, for the reality is that those needs will be very much weighed up in relation to all three of those things, not just the first.
The Pricing Opportunity
It is around pricing that these new ‘enrichment techniques’ present insurers with some intriguing opportunities. Clearly, gathering more and more data gives the underwriter more decision points around what that data represents in terms of risk. So for example, if the underwriter finds that your property is listed, then that might nudge them to up the premium or limit certain elements of cover. Like it or not, the reality is that underwriters do look at things in this way.
What this represents of course is a change of risk, and under the new pricing rules, a change of risk allows the underwriter to price the policy at renewal in a different way to what they did a year ago. Of course, in reality it may not actually be a change in risk, but rather a change in the underwriter’s understanding of the risk. Either which way, the rate, premium or cover can be changed.
What then makes this (let’s call it) enrichment pricing so intriguing is the question around how all that data is brought to bear on a particular policy’s actual renewal. If the underwriter gathers together say 20 new data points about your house, need they apply them all at once? Not at all.
In fact, the underwriter could apply them progressively and thereby give themselves the opportunity to adjust price as often, and to the extent, that they want. All they need is a narrative that combines changes in risk and changes in understanding of risk. And hey presto, the pricing ban becomes irrelevant.
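The drip-feed described above can be sketched in a few lines. The data points, loadings and cadence below are all invented for illustration; no real rating engine is being described:

```python
# Hypothetical 'newly discovered' risk factors, each with an invented
# premium loading, applied a couple at a time rather than all at once.
new_data_points = [
    ("listed building", 1.08),
    ("nearby watercourse", 1.05),
    ("prior claims at address", 1.10),
    ("outbuildings", 1.03),
]

def renewal_premiums(start_premium, data_points, per_renewal=2):
    """Return the premium at each renewal, applying a fixed number of
    enrichment loadings per cycle -- each one framed as a 'change in
    risk' (or in the understanding of risk) to justify the new price."""
    premium = start_premium
    premiums = []
    for i in range(0, len(data_points), per_renewal):
        for _, loading in data_points[i:i + per_renewal]:
            premium *= loading
        premiums.append(round(premium, 2))
    return premiums

# Four data points held back and released over two renewals.
print(renewal_premiums(250.0, new_data_points))
```

The sketch shows why the sequencing matters: the same four data points produce one repriceable ‘change of risk’ per renewal instead of a single one-off adjustment, which is precisely what makes the technique hard for the renewal pricing rules to catch.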
Dealing with Consent
Hold on, some of you may say. If the insurer collected all sorts of data about me and my property, most of which I knew nothing about, then when I had a problem with a claim, couldn’t I just say “I didn’t give you that data!”? In other words, if some aspect of the insurer’s data triggered a rejection of your claim, that wouldn’t be fair if that piece of data wasn’t known to you.
It's a valid point, but one which I believe insurers using enrichment pricing will address through consent. This will take the form of some transparency around the data points being used (such as ‘click here to find out something about them’), which is then linked with a ‘click to consent’ button tied directly to renewal. I don’t expect there to be much transparency, but rather enough for corporate lawyers to see it as not sitting outside of data protection legislation. And the insurer’s consent strategy will continue to be as generic and wide as possible.
That’s why I think the sector needs to think seriously about how it handles consent, for I expect it to be challenged at some point (more here). And should that challenge to some extent succeed, insurers caught up in that debacle would then have to delete the affected data and withdraw the algorithms trained upon it. That’s why I ask my clients about their business continuity plans.
Pot Half Empty?
Now some of you will be thinking that perhaps I’m being a bit ‘pot half empty’ here. Perhaps I am, but over the years, I’ve usually found that pricing opportunities are rarely left untapped, at least by part of the market. So, yes, I’m not describing all of the market, but certainly a chunk of it. Some will not consider doing this, some will always do it, and in the middle will be those wondering if they should do it. And it is in relation to that latter ‘middle’ group of insurers that I’ll offer suggestions later in this analysis about how they might weigh up their options.
Implications for Life and Health
Let’s move now to quickly look at the implications of ‘enrichment pricing’ in relation to life and health insurance, including group schemes. After all, data about your property is one thing, but data about your person is something else altogether.
The data for this will be collected in relation to your shopping, travel, work, leisure, social media, fitness, family and friends. It will come from devices, sensors, search history and eventually perhaps, medical records. The insurer will end up knowing more about you than you yourself do!
What’s key to making full use of this data will be a move to annual underwriting of what have traditionally been long term policies, and then to near real time underwriting. The framework of partnerships for delivering this is currently being assembled.
With group schemes of course, the data will come through the work environment and the devices that are increasingly being integrated into that. How that data is then handled depends on the extent to which the risk in group schemes is balanced between the insurer and the employer. Both usually have some interest in it and so may share the data in the context of their own interests.
The overall impact of enrichment pricing on customers in life and health markets will be that, over time, their premiums vary more and their cover is trimmed or tailored more, as employers and insurers seek to earn a return from the cover they provide.
Five Years’ Time
Let’s say that enrichment pricing is widely adopted across the personal lines market, just like lifetime value pricing was. So what will the market look like in, say, five years’ time? I’ll return to household business to illustrate this.
We’ll see a relatively steep upswing in the data being collected and then a progressive plateauing out. After all, there’s only so much about your house that can be datafied. At the same time, the sophistication of analytics will progress, but I suspect then plateau as the results and returns from ever more complex analytics fail to convince finance directors. And this will be an experience shared across many insurers in the market, not just one or two.
As a result, the competitive advantage from enrichment pricing will diminish. In response, those insurers will put their thinking caps on and recognise that three options exist. The first option is to play around with perils, limits, excesses, terms and endorsements. The delivery of online personalised policy wordings should allow this by that point in time.
The Internet of Things
The second option is to delve into even deeper data, in the form of household devices such as phones, virtual assistants, doorbell cameras and the like. In other words, the internet of things. And this data will provide insurers with what we might call hyper granularity. So for example, if your doorbell camera or virtual assistant picks up that you often have friends with toddlers around, then your accidental damage premium could go up or cover go down.
With this second option, insurers will in essence extend the competitive advantage of enrichment pricing for another (let’s say for simplicity) five years.
The third option is that the insurer pulls back from enrichment pricing. And their reason for doing so will rest on two factors. One is that the competitive advantage of ever more granular pricing and cover falls, to the extent that further investment is curtailed. The other involves a reconsideration of where this enrichment pricing is taking their business.
Are these three options realistic? I think enough so for insurers to put them on their radar. I know that some insurers have already done so. So why have they? Firstly, because they suspect that ever more granularised pricing can only go so far. Secondly, because data and analytics will become ubiquitous. Thirdly, because margins will turn very thin. And fourthly, because of their customers. I’ll now explore that last point in more detail.
Two Ways to Get Closer
Enrichment pricing is what I refer to as a proximity strategy for your pricing. It involves using ever increasing amounts of data to get closer and closer to the customer, pricing them at ever more granular levels. Unfortunately for the insurer, it’s not a strategy that many customers are comfortable with. And unfortunately for the customer, it’s a strategy that generates price volatility (as life goes through its ups and downs) for something that people buy for price stability.
The alternative to a proximity strategy for your pricing is what I refer to as an intimacy strategy. This involves using data and analytics to cause the customer to want to get closer and closer to you the insurer. What this does is put the customer relationship at the heart of your firm’s overall strategy. So what data you collect, how you use it, what analytics you develop and how you use them – all this and more is orientated around the relationship you want with your customers.
And what emerges as a defining difference between these two pricing strategies is trust. The insurers who have moved towards an intimacy strategy for pricing, who have stood back from forever enriching their underwriting data lakes, have decided that trust will be their competitive advantage.
They’ve decided that they don’t need to know what you use a particular room in your house for, how you move around your house, who visits you, who lives with you. They need to know what you want from your insurance, and are using data and analytics as part of a wider plan to give you that. And in doing so, this encourages you to want to do more business with them.
Looking at the Middle
When I talked about the end of the beginning and the beginning of the middle, in relation to the future of insurance pricing, I was envisaging that middle being a phase in which firms start to use data and analytics to differentiate themselves along the lines of the proximity and intimacy strategies I’ve outlined above.
And so the obvious question this raises is what will define the end of that middle period, in the same way as the current pricing ban is sort of defining what I’m calling the beginning period. I have my views on this, but here is not the best place to set them out. This analysis is about the end of the beginning.
Yet in considering the end of the beginning, I promised to provide some questions that insurers could use to weigh up how they might approach what I’m calling the beginning of the middle. Here are four I think should be considered:
- What type of firm are you? So, for example, do you want to remain an insurer or evolve into a tech service provider?
- What defines you as a firm? So, for example, what do you say about yourself in something like your purpose? How do you reflect this in your organisational structure?
- How have you framed what success means to your firm? For example, this is our purpose and we achieve it by doing… what? How do you actually evidence this in practice?
- How do you want to differentiate yourself from others in the market? And what do you do to embed this into how you work? So, for example, how does your pricing strategy and digital strategy support this?
These questions may seem rather philosophical, but they are in fact both normal to ask and important to ask during times of change, such as is happening at the moment through digital transformation. Insurance is unique in terms of how data and analytics could change it. Insurers need to decide how to turn this into something that makes them unique.