The incentive structure in a democratic welfare state makes economic realism impossible
The government’s last-minute U-turn over what were relatively modest welfare cuts has turned into a political rout. Tax rises now seem inevitable. But with 60 per cent of the tax burden falling on just 10 per cent of the population, the country is more sensitive than ever to a capital flight that could wreck government finances. On the other hand, shredding the fiscal rules will poison the government’s relationship with the gilt markets, undermining the conditions for economic stability.
Reeves, for all her faults, was clearly trying to impose some fiscal order and make the national finances sustainable. Regardless of the merits of her approach, the fact that she was left weeping on the front bench after her defeat makes for poignant symbolism. She was crying for her political career, and the project that she’d staked her chancellorship on — growth.
Growth that would fix everything: boost tax receipts, bring down the debt-to-GDP ratio, and bring an end to nearly twenty years of stagnant incomes. A necessary (though insufficient) part of this project was fiscal and financial stability. These two prerequisites needed to be met for anything else to happen, and that meant halting the explosive growth of wealth transfers via the welfare state.
After all, just as only Nixon could go to China, maybe only Labour could have trimmed the welfare state. Only they could convince parliament, the press, and the electorate that trade-offs needed to happen to preserve social democracy.
But this isn’t to be. Every party, from Reform to the Tories to Labour, has in some way promised more wealth transfers in recent months, whether on winter fuel payments, the child benefit cap, the triple lock, or personal independence payments. There is no constituency for a broad-based reduction in the scope of the welfare state.
This is a structural crisis facing the country. It is also a profound anomaly in historical terms. Until recently, the idea that a state would sacrifice anything and everything to preserve its welfare entitlements would have been absurd. To understand how it got this way, it is worth tracing how the welfare state evolved.
The parish and manor
During the Roman period, there existed a proto-welfare state in the grain dole and the system of public games. This arrangement kept the urban poor of the Republic and Empire from starvation and squalor, but it was not philanthropic: it was nakedly a form of mass bribery. Rather than being based on values, this “proto-welfare” was about buying the allegiance of the masses and avoiding civic strife.
With the rise of the Christian church and the fall of Western Rome, such proto-welfare was carved away from the state. The Church — partly out of moral conviction, and partly to build a base of support and legitimacy independent of secular authority — became responsible for alms to the poor and needy, and for trying to even out the world’s natural unfairness. In an agrarian economy characterised by low mobility and low population density, local parishes knew their flocks intimately and could distribute resources accordingly.
Secular rulers, on the other hand, were largely freed from engaging in redistribution. The role of the new states that emerged in the centuries following the Western Roman collapse was simple: to improve their society’s material conditions. The parish managed spiritual and ecclesiastical matters, while the manor was responsible for administration and law. The state protected commerce, provided mechanisms to settle legal disputes, and built key infrastructure such as roads and fortifications.
For states, the improvement of overall material conditions was driven by a simple rationale: war. National development was instrumental to supporting ever-larger armies and navies, and to securing an edge in logistics and weaponry. In his magisterial Coercion, Capital, and European States, the sociologist Charles Tilly captured this with the aphorism: “War made the state, and the state made war.”
The poor laws and the modern welfare state
Gunpowder warfare changed this. Until then, the state engaged in welfare only indirectly, by donating titles and assets to the Church as part of the ongoing social contract. But as it became clear that guns tilted warfare in favour of large standing armies with heavy logistical needs, the manpower and resources states required to remain competitive increased dramatically.
In 1543, England became the first nation capable of casting reasonably safe and efficient cast-iron cannon, which cut the cost of artillery by two-thirds. It should not be regarded as coincidental that this technological breakthrough was followed in the coming decades by the bulk of the Tudor Poor Laws, the state’s first dabbling in welfare.
Coming alongside the merging of church and state after the break with Rome, the Tudor Poor Laws mandated the payment of alms to the deserving poor. They also required parishes to keep stores of raw materials, so that there was always some work available for the poor. Consciously or unconsciously, these laws were critical to sustaining the growth in population and output needed to feed and arm much larger military forces.
England was first, but the process repeated itself to varying degrees across most European nations throughout the early modern period. It was truly turbo-charged by the industrial revolution, with the enormous logistical, manpower, and resource requirements for warfare that came with it. It was in this context that the New Poor Law was passed in 1834: one of the key justifications for the workhouses was that they provided a mechanism to move unemployed rural workers into urban economies, supplying critical labour for industrial development.
Just fifty years later, Germany took this historical sequence to its natural next step through the creation of a true welfare state: the provision of health insurance, accident insurance, and old age pensions. Along with seeking to defang the nascent socialist movement, Bismarck also saw it as a method of improving productivity and thus industrial and economic output.
By then, it was well recognised that such output could be converted directly into military force when necessary, and was thus a key component of ensuring defence and the state’s monopoly on force. When Britain followed Germany with the Liberal welfare reforms of the early 20th century, it was with an eye to this truism — along with fending off its own strain of socialism.
The new incentive structure
Throughout the twentieth century, however, the idea of the welfare state as a means of ensuring societal stability and continuity was supplanted. In its place came the idea that the provision of welfare — and the alleviation of as much economic suffering as possible — was itself one of the central objectives of the state.
A key reason for the change in the meaning of the welfare state can be found in Article 2 of the UN Charter, which mandated all member nations to refrain from “the threat or use of force against the territorial integrity or political independence of any state”. Membership of the UN — and thus access to the global economic system — depended on rejecting the right of conquest and aggressive war.
In one move, the incentive structure for states was transformed. If war made the state, what would peace do to it? Without the evolutionary selection mechanism of war forcing constant adaptation in economic and industrial efficiency, the welfare state was no longer about enabling general economic and social development. The state’s legitimacy no longer came from its ability to maintain its monopoly on force over the long run, but from its ability to minimise suffering to the greatest degree possible.
In and of itself, this might have been a welcome development, given the ever-increasing destructive potential of war. However, this change in the national incentive structure interacted unsustainably with another trend: universal suffrage.
Universal suffrage gave direct political power to the beneficiaries of the welfare state, regardless of how much they had paid into it. So, what was stopping them from voting to give themselves more money and services ad nauseam? For some time, the question was moot. Low life expectancy, limited medical technology, cultural taboo, and (gradually dwindling) long-termist elites prevented it from being an immediate concern.
But then life expectancy rose to the point where the majority could expect to draw their pensions. Complex medical conditions that were untreatable in the mid-1900s became addressable. The taboos and intuitions of the generations formed by the pre-1945 world order slowly receded. And nothing gave the net beneficiaries of the system any incentive to price in long-term state sustainability and societal continuity.
The system’s sustainability
About 52 per cent of Britain’s population are now net recipients of state benefits over taxes. Since they do not shoulder the burden of paying for the welfare state, expanding its provision generally improves their livelihoods. In other words, most of the electorate now has an incentive to expand the welfare state — and, with societal stability and defence no longer demanding much attention from elites, many politicians are happy to oblige.
This high share of net recipients also means that the ability to raise taxes is further restricted, since a smaller tax base is more sensitive to emigration and capital flight. As a result, states are continually encouraged to divert their budgets away from the traditional staples of providing law and order, infrastructure, and “basic” public services.
In 1955, the share of government expenditure on social protection (13.3 per cent of the budget) was roughly equal to net investment (11.4 per cent). This year, the IFS expects 26.7 per cent of the government budget to be spent on social security payments alone, while net investment stands at just 5.7 per cent. The welfare state has cannibalised the traditional responsibilities of government.
Electorally, this makes sense — after all, if net recipients are most of the electorate, doesn’t improving their net welfare take precedence over that of everyone else? And cuts to investment and to services like the courts take far longer to be felt, meaning that by the time the pushback against deteriorating living standards arrives, the political actor is safely distanced from the consequences of their decisions.
The current choices
The “liberal democracies” that have acceded to international law and universal suffrage have created an unsustainable system. Without the plausible deterrent of state failure and conflict, a plurality or majority of their populations will continually gravitate towards voting for leaders who will give them more. The lobby of net recipients, consciously or unconsciously, drives an internal arms race without regard for fiscal credibility.
But what are the alternatives? The first is a collapse of the international system and the return of aggressive warfare, which would give populations a clear deterrent against failing to prioritise economic and industrial growth. However, since the continued existence of the international system and the UN is largely beyond any individual state’s control, this is not an actionable alternative (quite apart from being an unpleasant one).
The second, then, is a rejection of universal suffrage in some form. So far, we have seen experiments with this model in non-European nations like Russia and China. There, the ballot is treated less as a competitive policy contest than as an exercise in testing legitimacy and the viability of the state’s current course. Actual control of discrete policy decisions falls to an oligarchy whose interests are closely aligned with the state and with social continuity.
However, this repudiates the historic mode of social organisation of European states, and typically requires some degree of repression to enforce. That raises another potential alternative to universal suffrage: a return to restricted suffrage.
While historically gated behind property qualifications, restricted suffrage in the modern day could be tailored directly to the self-payment problem inherent in universal suffrage systems. Prima facie, the most logical choice may be to restrict the franchise to net contributors to the state.
The benefits of this system are obvious. Net contributors are the share of the population whose overall welfare stands to be improved by shifting government spending into areas that promote long-term stability and growth, like infrastructure and basic services. They have a baked-in desire to balance their tax burden against the provision of services, meaning that fiscal credibility becomes far more important for governments. At the same time, they have every incentive to maintain the “safety net” conception of a welfare state that they themselves may one day need.
The current historic anomaly, where the state itself treats redistribution as its central goal, is unsustainable, but given the incentive structure we have at present, it can’t be reversed by popular vote. Like a train without brakes, this phenomenon has been slow to accelerate but has acquired too much momentum for any external party to stop. Unfortunately, we’ll probably need to wait for a derailment.