Jun 7, 2023 4 min read

A Big UK Survey on AI Shows Both Support and Concern

A major survey into public attitudes and experiences of AI has just been published in the UK. It found ‘highly nuanced views’ on the advantages and disadvantages of different types of AI. So what can insurers learn from this survey?


The survey was conducted in late 2022 by two leading UK institutions: the Alan Turing Institute and the Ada Lovelace Institute. It was a nationally representative survey of over 4,000 adults in Britain and the results were published this week. This is what was covered:

“We asked people about their awareness of, experience with and attitudes towards different uses of AI. This included asking people what they believe are the key advantages and disadvantages, and how they would like to see these technologies regulated and governed.”

They looked at 17 different uses of AI:

“We asked the British public about their attitudes towards and experiences with 17 different uses of AI. These uses ranged from applications that are visible and commonplace, such as facial recognition for unlocking mobile phones and targeted advertising on social media; to those which are less visible, such as assessing eligibility for jobs or welfare benefits; and applications often associated with more futuristic visions of AI, such as driverless cars and robotic care assistants.”

Insurance wasn’t one of those uses, but there was one pretty close to insurance: the use of AI to determine the risk associated with repaying a loan. I will touch on some of the other uses, but I’ll give most attention to the survey findings for loan repayment.

Positive but Nuanced

For the majority of AI uses that the survey covered, people held broadly positive views. Across many uses, they saw the main advantages as speed, efficiency and improved accessibility. However, people also noted concerns about the potential for AI to replace professional judgement, its inability to account for individual circumstances, and a lack of transparency and accountability in decision-making.

With regard to loan repayment in particular, people saw the use of AI as, on balance, a broadly positive thing. Their top five benefits were...

  • applying for a loan will be faster and easier
  • the technology will be less likely than banking professionals to discriminate against some groups of people in society
  • there will be less human error in decisions
  • the technology will save money usually spent on human resources
  • the technology will be more accurate than banking professionals at predicting the risk of repaying a loan

And the top five disadvantages people saw were...

  • the technology will be less able than banking professionals to take account of individual circumstances
  • banking professionals may rely too heavily on the technology rather than their professional judgements
  • it will be more difficult to understand how decisions about loan applications are reached
  • if the technology makes a mistake, it will be difficult to know who is responsible for what went wrong
  • the technology will gather personal information which could be shared with third parties

Clearly, there’s a good deal of scepticism about how loan repayment decisions are made, whether by a human or a machine. At the same time, explainability and accountability mattered to respondents.

The Possibility of AI Regulation

The majority of people in Britain support the regulation of AI. When asked who should be responsible for ensuring that AI is used safely, people most commonly chose an independent regulator, with 41% in favour. In second place, 26% wanted this done by the companies developing the AI technology, and 24% wanted an independent oversight committee with citizen involvement.

What this points to is a divide in public opinion between external regulation and self-regulation. My question about the latter is, of course, who would then check whether the companies developing the AI technology are actually upholding their responsibilities. Some form of regulation seems to be more on the cards than not.

Lessons for Insurers

As I mentioned earlier, there are no direct references to insurance in this survey. However, I think there are certain lessons that insurers can draw from the survey findings.

The first is that the public expect and broadly support the use of AI in determining risk. That insurers are using AI should therefore be no surprise to consumers.

There is concern about how judgements are made and what can be done when a judgement goes wrong. Here, explainability and accountability are important. What is missing are the steps in between the two. In other words, if you felt a judgement on your case was unreasonable, what steps could you take to actually experience that explainability and accountability? Add to this the question of how to judge whether that explanation, or that accounting, has been reasonable in the circumstances.

That’s where the third-placed method of delivering accountability should be noted. At 24%, there’s pretty strong support for ‘an independent oversight committee with citizen involvement’. If the UK Government think (or are advised) that an independent AI regulator would not have enough traction (or would be too expensive), and if at the same time (despite sector lobbying) they conclude that self-regulation by tech firms wouldn’t satisfy public concerns, then suddenly that third option could find itself being chosen.

How might this work then? I just can’t see it working on either an ‘all of the UK’ basis or a ‘per firm’ basis. Sector-level oversight committees would be much more effective. Indeed, consumer groups might themselves lobby for involvement at this level, in much the same way as they do at present when acting as statutory consumer advocates.

Bear in mind that as the UK Government weighs up the possibility of AI regulation, the main issue that such regulation would be called upon to address is discrimination in digital decision systems. There’s already a regulator for that, though, as there is for privacy and autonomy.

That makes me think that we will not see an AI regulator, but instead the beefing up of AI expertise within existing regulators. Into this mix would then be added consumer advocates where the risk of discrimination is highest, to ensure that the level of challenge matches the impacts that are happening. Given Citizens Advice’s campaign on the ethnicity penalty, this doesn’t bode well for an insurance sector used to doing its own thing.

The real exposure here for insurers is that they have previously relied on lobbying to carry their case with regulators and policy makers. Once consumer representatives have a permanent seat at that decision-making table, the effectiveness of that lobbying diminishes rapidly. That will be a game changer for explainability and accountability, the two big concerns the public has about the use of AI as a risk assessment technology.

To sum up, insurers need to be prepared for closer scrutiny and more challenge in relation to what they’re doing with AI.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.