Sep 4, 2023

Why the Social Scoring Ban is Seismic for Counter Fraud

There are many significant things in the EU’s AI Act, but none will be more disruptive for insurers than the proposed ban on social scoring. It will force a serious reconfiguration of the sector’s core digital strategies. So why has this happened, and what do insurers need to look out for?

Coming soon in the EU - a big red stop sign for social scoring in counter fraud

Let’s start by looking at what exactly the EU means by social scoring. Here’s the definition to be used in the AI Act:

“‘social scoring’ means evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics;”

And this is how the EU has scoped its proposed ban:

“the placing on the market, putting into service or use of AI systems for the social scoring, evaluation or classification of the trustworthiness of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;”

Four Key Aspects that Shape the Ban

Let’s break this down a bit.

Firstly, this is about trustworthiness. The ban is on AI systems that socially score, evaluate or classify people in relation to their trustworthiness. Counter fraud is the insurance function that will most obviously be affected, but given that counter fraud systems work through underwriting, claims and marketing, it’s fair to say that all key functions will be affected.

Secondly, the ban is both specific and generic. It covers both individuals and groups of individuals, which makes it wider than traditional data protection, with its focus on personal data. This non-personal aspect is reinforced through the references to known behaviour, inferred behaviour and predicted behaviour. So this is about you, people like you, and people who could be like you, now or in the future.

Thirdly, note how the first of those detrimental sub-clauses takes direct aim at secondary data: data generated in one context that is then used in another context. This is in contrast to primary data, which is used in the same context in which it was generated. This questioning of secondary data by policy makers is a growing trend.

Fourthly, note how the second detrimental sub-clause appears to throw a spanner into the works of insurers’ favourite lobbying phrase, their ‘right to underwrite’. Insurers could now be called upon to justify that right, for example by showing how their application fraud strategies are proportionate to the social behaviour they affect.
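Pulling those four aspects together, here’s a minimal sketch in Python of the two-pronged test as I read it. Everything in it is hypothetical: the `ScoreUse` record, its field names and the example values are my own illustration, not anything prescribed by the Act.

```python
# Hypothetical illustration of the ban's two-pronged test. The record and
# field names are invented for this sketch; nothing here is prescribed by
# the AI Act itself.

from dataclasses import dataclass

@dataclass
class ScoreUse:
    scores_trustworthiness: bool      # does the system score/classify trustworthiness?
    collection_context: str           # context in which the data was generated
    use_context: str                  # context in which the score drives treatment
    treatment_is_detrimental: bool    # does the score lead to unfavourable treatment?
    proportionate_to_behaviour: bool  # can that treatment be justified as proportionate?

def is_banned_social_scoring(use: ScoreUse) -> bool:
    if not (use.scores_trustworthiness and use.treatment_is_detrimental):
        return False
    # Sub-clause (i): detriment in a social context unrelated to the one
    # in which the data was originally generated or collected.
    cross_context = use.collection_context != use.use_context
    # Sub-clause (ii): detriment that is unjustified or disproportionate
    # to the social behaviour or its gravity.
    disproportionate = not use.proportionate_to_behaviour
    return cross_context or disproportionate

# Example: social media data reused to score application fraud risk.
application_screen = ScoreUse(
    scores_trustworthiness=True,
    collection_context="social media",
    use_context="application fraud screening",
    treatment_is_detrimental=True,
    proportionate_to_behaviour=False,
)
print(is_banned_social_scoring(application_screen))  # True, on both prongs
```

Note the ‘or’ at the end of that function: a system only has to fall foul of one of the two sub-clauses for the ban to bite.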

What the Ban Doesn’t Do

It’s worth turning this around and looking at what the proposed ban on social scoring does not stop insurers from doing. It doesn’t stop them basing their strategies on primary data – in other words, on data they collect directly from insureds.

It doesn’t stop insurers taking proportionate steps to counter fraud, although they now have to be prepared to evidence and justify that proportionality. So this is not stopping insurers from addressing fraud; it is all about how fraud is addressed.

Why Has the EU Done This?

The EU had originally talked about its AI Act banning social scoring by public bodies only. It then reworded the ban to include private bodies as well. This is partly down to pressure from the European Economic and Social Committee (EESC). It is telling that the EESC illustrated its views with examples relating to an individual’s eligibility for a loan or mortgage. Financial services were clearly a concern for it.

The EESC’s views would then have been reinforced by the situation in the Dutch city of Rotterdam. The city council there had one of the big consultancy firms create an algorithm to identify cases of potential benefit fraud. Initially described as a “sophisticated data-driven approach”, the algorithm was subsequently found to discriminate based on ethnicity and gender. It was also found to have “...fundamental flaws that made the system both inaccurate and unfair.” The city dropped it.

What the Rotterdam case showed was that a ban only on public bodies like Rotterdam city council would be ineffective where most of the systems were being developed within the private sector.

There are a lot of issues explored in this article about the Rotterdam case. It shows how an apparently 'sophisticated' piece of analytics turned out to be rather simple and very flawed, with significant repercussions for the many people affected.

The development by the Chinese Government of a social credit system will also have played a part in the AI Act’s wording. On the face of it, this feels far away, until you think in terms of ‘suspicion machines’. That’s exactly what the Rotterdam fraud algorithm was, it’s how the Chinese system has been described, and it’s why counter fraud is the part of insurance likely to be most impacted by the social scoring ban.

Secondary Data, Again

You’ll recall from this article that the EU intends to ban insurers’ access to secondary health data, under its European Health Data Space. And in the proposed AI Act, the EU is taking a similar stance in relation to social scoring.

What we have then is a business sector increasingly basing its core activities around extracting insight and value from secondary data, and a powerful policy making institution increasingly introducing limits on how secondary data can be used. Some of you will say that this just shows how big institutions like the EU are against innovation. I think that is missing the point. What the EU is saying is that the way in which it is finding secondary data being used is so full of flaws that a strong push back on such trends is necessary to protect fundamental rights.

What are the Implications for Insurers?

The implications of the proposed AI Act for insurers, and for the overall trend in digital insurance, are significant. Think of it this way. The ban will impact all counter fraud systems, but those on the application side in particular. It puts the sector’s reliance on secondary data at odds with the thrust of two big EU policy initiatives: health and artificial intelligence.

It also puts the sector at risk of the ban being extended beyond counter fraud. That’s a possibility because of the fundamental purpose of social scoring, which is to undertake social sorting. In social sorting, people are classified according to measurable similarities and differences and then exposed to different things as a result. In insurance terms, some policies become less visible to some consumers, and so less likely to be taken up. Services become less available, or accessible only through some form of automated system. For growing numbers of people, claims processes become extended and take longer.
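To show the mechanics of social sorting in operational terms, here’s another hypothetical sketch. The segment names, thresholds and treatments are all invented; the point is how a single trust score sorts customers into differently treated service paths.

```python
# Hypothetical illustration of social sorting: one score, three invented
# segments, each exposed to a different claims experience.

def sort_customer(trust_score: float) -> str:
    if trust_score >= 0.8:
        return "fast_track"       # straight-through, automated settlement
    if trust_score >= 0.5:
        return "standard"         # normal claims handling
    return "enhanced_review"      # extended checks, longer settlement times

treatments = {
    "fast_track": "automated settlement, same-day payout",
    "standard": "handler review, payout in days",
    "enhanced_review": "investigation queue, payout in weeks",
}

for score in (0.9, 0.6, 0.3):
    segment = sort_customer(score)
    print(f"score {score}: {segment} -> {treatments[segment]}")
```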

This results in stratification, where provision of a product across the market begins to coalesce around certain types of consumers and disappear for other types of consumers. No market is perfect, but should stratification become too wide and too firm in a market, then it would be deemed a failing market. That’s not what politicians and regulators want to see happen, especially in a market as socially important as insurance.

So these are not implications that can be dealt with by a tweak here or there. These are implications that affect the engine house of the digital transformation happening across the sector. If, as would seem to be happening, the sector has been innovating without sufficient regard for public and political sentiment, the outcome is a policy making reaction designed as an expression of power.

The Stuff of Nightmares

To be honest, this feels like one almighty failure of foresight on the part of the sector. The proposed ban on social scoring has not come out of nowhere. I highlighted the implications for insurers in this article back in 2014 – nine years ago. I described it as ‘the stuff of nightmares’ for insurers, and I can’t help feeling that I’ve now been proved right.

Why weren’t sector insight people seeing this? Or those in the big consultancies? Why didn’t public policy experts have this on their radars? I have my opinions on this, but what I will say publicly is that ‘seeing’ is not just about recognising that something is there, but about interpreting it against a ‘world view’ of what you expect, and want, to encounter. It’s a form of self-selection bias.

The sector will of course pull out all the lobbying stops it can think of, seeking an exemption from this ban for insurance. Will this succeed? After all, it’s worked before. I don’t think it will work this time though. On secondary data, the EU has already fixed its views. On social scoring, the ban ties in too strongly with fundamental rights for exemptions to seem justified.

What the EU will say is that insurers can still underwrite, still market, still tackle fraud, and still settle claims. They just have to find a way to do these things that doesn’t clash with what the EU sees as core tenets.

Steps to Take Now

I think first and foremost insurers need to build a deep understanding of social scoring and the social sorting that results. And they need to do so with a strong dose of independent thinking and challenge in how they go about this. This is needed because preconceptions have too often got in the way of clear thinking.

And I would also urge insurers to look carefully at their counter fraud strategies and consider how capable they are of having, to put it one way, certain cogs in their assessment machines geared down or removed.

Finally, I would urge insurers to resist the temptation to approach this by threatening to stop underwriting some segments or layers of business. It’s a move that could really backfire.

I undertake detailed research for clients on emerging trends like social scoring. My independent insight gives them a wider perspective on developments. The result is better decision making and planning. Email me at duncan@ethicsandinsurance.info to find out more.
Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.