Who gets replaced first — the makers or the makers’ myth?

How the “permanent underclass” argument collides with a shaky AI business model — and what we might do about both.

Artificial intelligence is no longer a distant hypothetical: it's a growth industry, a cultural meme and a political headache all at once. Last week The Guardian ran a sharp column arguing that the worry about AI producing a "permanent underclass" is real, but that the people inflating the bubble around AI may themselves be the most replaceable. That provocation brings together three things we can no longer ignore at once: techno-pessimism about mass job loss, mounting economic skepticism about AI's returns, and the hard politics of labour protection. (The Guardian)


The new worry: “permanent underclass”

The phrase "permanent underclass" entered mainstream conversation after a recent New Yorker piece and follow-ups that summarized a strand of Silicon Valley thinking: if AI reaches or exceeds broad human capabilities in the next few years, many forms of paid work could vanish or radically shrink, leaving people without the kinds of jobs that supported middle-class life. That scenario draws on striking forecasts from AI researchers and investors who say some forms of general-purpose AI could radically outpace humans on tasks previously thought safe from automation. (The New Yorker)

What's notable in public coverage is the emotional response: the fear isn't purely technical; it's social. People, from tutors and journalists to junior engineers, are asking whether jobs that once served as entry points into stable careers will still exist. That fear fuels political conversations about retraining, social insurance, and workplace protections. (The New Yorker)


But wait — is the AI industry itself economically sound?

Here's the crucial counterpoint: many of the companies pitching AI as an unstoppable job-replacing force are sitting on fragile economics. Central banks and market analysts have flagged the soaring valuations of a handful of AI bets; surveys of fund managers show broad concern that AI hype has created a speculative bubble. The Bank of England, among other authorities, has warned that a sharp repricing could happen if sentiment sours. (Reuters)

Concrete signals of trouble have been piling up. Several studies and reports suggest that a very large share of enterprise AI pilots are not delivering measurable returns, that firms that rushed to replace humans with automation have often regretted the decision, and that the capital and infrastructure costs of AI (especially data-centres and specialised hardware) may be much higher, and depreciate much faster, than investors assumed. Those are not metaphors; they are balance-sheet problems. (Fortune; Orgvue)


Two paradoxes, one tightrope

This leaves us walking a tightrope between two very real risks:

  1. Social risk is real. Global reports find employers planning workforce reductions and a wave of roles that will require reskilling; projections show millions of task shifts through the decade. For many workers the transition will not be seamless. (reports.weforum.org)
  2. Commercial risk is also real. The AI build-out has enormous upfront costs (chips, power, cooling, engineering) and may not produce reliable ROI across the board, leaving investors exposed and making some business models unsustainable. If the industry stumbles, the scramble to "replace people with AI" could prove to have been a wasteful detour, damaging livelihoods and leaving societies to pick up the pieces. (Futurism)

Put another way: you can be worried about automation and skeptical that the people selling it have thought through the long-term math.


Lessons from history — and recent wins

We've lived through disruptive technology before. When automation reshaped car manufacturing and other industries, unions and coordinated policy responses often determined whether workers were protected or discarded. In 2023 the Writers Guild of America (WGA) won contract language that treated generative AI as a tool, not a writer, and secured protections such as disclosure requirements, limits on using writers' work to train models, and measures to preserve credits and employment conditions. That agreement is a concrete example of a sector using collective bargaining to fence off the worst outcomes of automation. (Brookings)

Other industrial precedents show what "cooperative modernisation" looks like in practice: negotiated retraining, job security guarantees, and institutional commitments to workforce transitions rather than mass redundancies. Those blueprints matter because they're realistic: they don't rely on technocratic inevitability. (The Guardian)


What the evidence suggests companies are getting wrong

A sampling of empirical signals worth bearing in mind:

  • AI pilots often fail to deliver ROI. Recent studies report a high failure rate among enterprise generative-AI pilots, a reminder that promising demos don't always scale into durable productivity gains. (Fortune)
  • Regrets after layoffs. Surveys show many companies that laid off employees in the name of automation later admitted they made the wrong call; hidden costs of layoffs (lost knowledge, churn, retraining) can outweigh apparent wage savings. (Orgvue)
  • Infrastructure is costly and fast-moving. Some analysts estimate that rapid hardware and design churn in AI data-centres shortens useful lifespans and produces steep depreciation, a commercial vulnerability for capital-intensive build-outs; a quick worked example follows this list. (Futurism)
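
To make the depreciation point concrete, here is a minimal sketch with purely hypothetical numbers; the $10bn build-out cost and the six-year versus three-year lifespans are illustrative assumptions, not figures from the analysts cited. Under simple straight-line depreciation, halving an asset's useful life doubles the annual expense it has to earn back.

```python
# Minimal sketch: straight-line depreciation of a hypothetical AI data-centre.
# All numbers are illustrative assumptions, not figures from the cited reports.

def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Spread the purchase cost evenly over the asset's useful life."""
    return capex / useful_life_years

capex = 10e9  # hypothetical $10bn build-out (chips, power, cooling, engineering)

for life_years in (6, 3):  # hoped-for lifespan vs. faster hardware churn
    expense = annual_depreciation(capex, life_years)
    print(f"{life_years}-year useful life: ${expense / 1e9:.2f}bn depreciation per year")

# Output:
# 6-year useful life: $1.67bn depreciation per year
# 3-year useful life: $3.33bn depreciation per year
```

The toy numbers make the point: if churn halves the useful life, each year of operation must recover roughly twice the depreciation, which is exactly the balance-sheet strain the analysts flag.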

None of this argues that AI won’t transform work — only that transformation won’t be smooth, and that market hype masks concentrated risks.


Policy and workplace prescriptions that actually matter

If your aim is to minimize the social harm while allowing useful AI to spread, the evidence and recent experience point to several practical steps:

  • Industrial rules, not just voluntary ethics. Contracts and labour rules like those the WGA negotiated can be extended: mandatory disclosure when AI is used, limits on using worker output for model training without compensation, and explicit rights for affected workers. (wga.org)
  • Collective bargaining + sectoral retraining funds. Unions and employer coalitions can negotiate retraining pipelines, sectoral hiring guarantees, and phased automation timetables (cooperative modernisation). Historical examples show it's feasible. (The Guardian)
  • Careful market oversight. Regulators should watch concentrated valuations and systemic exposures (e.g. data-centre financing, hedge funds' leverage) so that a sudden repricing doesn't trigger cascades that harm ordinary workers and pensioners. The Bank of England's warnings about a sharp correction are a reminder here. (Reuters)
  • Public investment in human capital. If 41% of employers foresee AI-driven workforce reductions, governments must not defer reskilling and safety-net investments; they are part of social infrastructure. (reports.weforum.org)


The rhetorical pivot we need

Two debates often run together and confuse policy: (a) the technological question (how capable will AI become?) and (b) the economic question (who benefits, and who pays for the transition?). We should treat them separately. Even if AI continues to improve, a fair and resilient society is not guaranteed. Achieving that outcome is a political choice: we can let markets and venture capital decide who wins, or we can insist on social buffers, accountability, and bargaining power for workers. The WGA fight in Hollywood showed that a sector can win meaningful protections; the question is whether other industries will follow. (Brookings)


Bottom line

Yes, the anxiety about a "permanent underclass" is real in political terms. But it's not a foregone conclusion. At the same time, the people selling an always-on AI future are not immune to market realities: surveys, central bank warnings and infrastructure math suggest the AI economy may be brittle. If we want an AI future that doesn't create a permanent underclass, we'll need to combine labour power, regulation, realistic scrutiny of AI business models, and public investment in retraining: fast, deliberate, and democratic. (The New Yorker; Financial Times)
