Times change and we change with them — so the saying goes. Yet this is certainly not true of our health and disability benefits systems.
Our latest report at Policy Exchange, Sickfluencers and AI, shows how our health and disability benefits systems are being undermined by recent technologies such as social media, online communities and AI. These developments are leading to the emergence of a “grey area” of claims which pose an unsustainable cost to the taxpayer and a threat to the legitimacy of the system as a whole.
The UK’s health and disability benefits, such as Personal Independence Payments (PIP) and Access to Work, rely on static assessments, binary judgements, and paper-based assumptions about disability, ill-health, work and support. Because these systems are so rigid and predictable, they are increasingly vulnerable to exploitation by claims that many people would question — claims that might not clearly warrant the level of support currently available.
Fifteen years ago, people who believed that they might be eligible for support and needed help to make an application would have been most likely to go to a charity such as their local Citizens Advice or a support group linked to their condition. This put the role of the adviser front and centre and meant that people would be more likely to be pointed to benefits which were clearly relevant and appropriate for them.
This has changed over recent years.
Our research found evidence of large-scale peer-to-peer coaching emerging in recent years in online groups — communities with thousands of active members, dominated by posts asking how to describe symptoms so as to maximise awards, what to include in forms, and what benefits people might be able to claim.
People were often explicitly advised to make strong statements about their conditions and to “lay it on thick.”
Alongside these we have seen the rise of “Sickfluencers” — content creators who attract online audiences with explicit and eye-catching figures about the substantial financial payments they can receive (e.g. “up to £62k”). This content gets proactively pushed out by algorithms to people who might not have previously regarded themselves as eligible and raises expectations about how much can and should be claimed.
Some of these “Sickfluencers” produce detailed “walk-through” content and model answers for people to use in assessments with eye-catching titles such as “PIP Example Answers!!! Mental Health” which attracted large numbers of views. In some cases this content includes advice for people to request particular products or services which the influencer is promoting.
The next stage in this evolution of where people seek advice is, naturally, AI.
Commonly, AI tools will, if asked, help to strengthen applications for health and disability payments. Naturally, AI optimises for outcome and lacks the human constraints and insights which would have operated with an advisor in the past. AI aims to be as helpful as possible to the user regardless of what many may see as the merits (or lack thereof) of their case.
In addition, the communities and influencer content described above serve as excellent source material on which AI models can draw when seeking to be as helpful as possible in strengthening claim applications.
For our health and disability benefits system to continue to command public confidence it needs to be robust, respected and recognised as legitimate. These developments, along with the rise in successful claims we have seen alongside them, cast doubt on this.
Economic inactivity due to ill-health costs £212 billion per year, equivalent to 7 per cent of GDP. 16.8 million people, or roughly one quarter of the population, considered themselves disabled in 2023-24, up from about 11.9 million in 2013-14.
There are now over 4.2m people on Universal Credit with “no work requirements”, over half of those on that benefit. 1.5m people are now claiming PIP for mental health conditions, and that is an increase of over 100,000 in the space of a year.
This is clearly entirely unsustainable — and far from ideal even for many of those in receipt of these benefits. In some cases, people with genuine challenges would be better supported with short-term, targeted, or non-financial assistance, rather than through ongoing payments.
We cannot of course resist the march of technology — and nor should we try to. Instead, our report calls for more human input and oversight in the assessments system. It calls for making in-person assessments the default for health and disability benefits. We also propose introducing stronger requirements for medical evidence to support all claims. Finally, we are calling for far greater use of practical support in place of cash payments to support people to be independent and to help people back into work.
Most people strongly support a compassionate and well-functioning benefits system for those who need it most. The best way to secure its future is to protect it against the distortions which are undermining its credibility.