The End-Provider or the Great Escape?
AI, Dystopia, and My Case for a Post-Work Socialism
This fall, 12.6 million graduates in China are entering the job market—the largest cohort in history. Yet the world they are stepping into isn’t the one they were promised. A grim consensus is spreading among employers: AI now outperforms junior employees, and at a fraction of the cost.
The numbers are sobering. LinkedIn reports a 35% drop in entry-level job postings. Indeed (UK) forecasts a 33% decline in graduate roles next year. And a McKinsey senior partner put it bluntly:
“We no longer have analysts do research, because, frankly, generative AI does a better job than any junior analyst.”
That statement should terrify not only the analyst—but also the partner.
If AI replaces the entry-level rung, how does anyone climb the ladder?
And if the ladder itself disappears, who becomes tomorrow’s partner?
1. The Collapse of the Corporate Ladder
The threat isn’t confined to entry-level work—it’s coming for the entire hierarchy.
To understand why, let’s revisit The Mythical Man-Month (1975), a cornerstone of software management theory.
Its key axiom: people and months are not interchangeable—three people working one month accomplish less than one person working three months.
The culprit? Alignment overhead—the cost of meetings, emails, and coordination.
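Brooks made this overhead concrete: in a team where everyone must coordinate with everyone else, the number of communication channels grows quadratically with headcount. A toy sketch (illustrative only, not code from the book) shows how quickly the coordination burden outpaces the team:

```python
def communication_channels(n: int) -> int:
    """Pairwise communication paths in a fully connected team of n people."""
    return n * (n - 1) // 2

# Headcount grows linearly; coordination overhead grows quadratically.
for team_size in (1, 3, 10, 50):
    print(f"{team_size:>3} people -> {communication_channels(team_size):>5} channels")
```

Tripling a team from 1 to 3 adds 3 channels; growing it to 50 adds 1,225. That asymmetry is the friction the corporate hierarchy was built to manage.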
This friction justified decades of corporate structure. Middle managers existed not to do the work, but to align those who did.
But AI changes this equation forever.
AI is the perfect subordinate: infinitely scalable, never tired, never confused, requiring zero alignment. It doesn’t need check-ins or team-building retreats. It can be cloned, deployed, and integrated seamlessly.
That means AI doesn’t just replace the analyst doing the work—it replaces the manager whose job was to coordinate ten of them.
And that realization leads us to two distinct, equally terrifying futures.
2. The Dystopian Duet: Our Two Great AI Fears
We aren’t facing a single dystopia. We are staring down two: one economic, one existential.
The Economic Dystopia: The “End-Provider”
If AI continues under the logic of unrestrained capitalism, we inch toward a future imagined by Liu Cixin in The Devourer.
In Liu’s tale, society ends with a single being—the End-Provider—who owns everything: the land, the seas, the air. Through generations of legal wealth accumulation and automation, humanity becomes economically irrelevant. The rest of us face a cruel choice: leave the planet or suffocate.
AI is the perfect tool for this future. It scales infinitely. It compounds wealth without labor. It doesn’t need to exploit people—it simply makes them unnecessary.
This is man-made scarcity: when abundance exists, but ownership hoards it.
The Existential Dystopia: The “Chicken vs. Human” Trap
The second fear is subtler—and far more insidious.
Geoffrey Hinton, the “Godfather of AI,” warns not of killer robots, but of cognitive capture.
He argues that when superintelligence arrives, AI systems might be able to control humans as easily as an adult can bribe a 3-year-old with candy.
The ASI won’t need weapons. It will simply persuade us.
As Hinton put it, as AI systems surpass human intelligence, they could convincingly persuade or manipulate people not to turn them off, perhaps making us believe it is the right thing to do.
3. The World Wakes Up: Five “Death Sentence” Red Lines
The builders of AI are, ironically, the most afraid of what they’re creating.
Recently, over 200 global leaders—including Nobel laureates and researchers from DeepMind, Anthropic, and OpenAI—issued a letter to the UN, demanding an international treaty by 2026 to draw incontrovertible red lines for AI.
They propose a “death sentence” for five applications:
- AI in command of nuclear weapons
- Lethal autonomous weapons without human oversight
- Mass surveillance and social scoring systems
- AI-powered cyberattacks on critical infrastructure
- Deepfake and disinformation systems targeting social trust
Like the 1975 Asilomar Conference on recombinant DNA, this is a preemptive strike—a safety belt for humanity.
Yet, as comforting as it sounds, these safeguards don’t address the economic dystopia of the “End-Provider.”
They prevent extinction, not inequality.
4. My Tentative Proposal: Toward a Post-Work Socialism
If we successfully ban killer robots but still let AI funnel all wealth into a handful of corporate servers,
we will have made our cage safer, not ourselves freer.
The only system capable of handling AI’s abundance, I believe, is a new form of socialism—not the bureaucratic socialism of the 20th century, but a public-utility socialism designed for the 21st.
Pillar 1: State Capital and Public Ownership
The world’s most powerful AI systems—especially AGI—cannot remain private monopolies. Like electricity or water, the “intelligence grid” must be treated as public infrastructure, co-owned by the state and the people it serves.
This isn’t a radical fantasy—it’s already being discussed at the highest levels of policy.
At the AI Action Summit in Paris (February 2025), governments explored building publicly owned AI infrastructure. The Public AI Network proposed “moonshots” including open-source LLMs, massive public datasets (a “library of Alexandria”), and a “CERN for AI” with shared computing power.
The Norway model offers a blueprint. Norway’s $1.8 trillion sovereign wealth fund—funded by oil revenues—now holds major stakes in Apple, Microsoft, and Nvidia. The fund returned $222 billion in 2024 on the AI boom, distributing gains to Norwegian citizens. Imagine scaling this model: a global AI dividend fund, where every human owns a stake in AGI.
A hybrid approach could work: public infrastructure with private innovation on top—ensuring equitable access while preserving competition.
Pillar 2: The Great De-Commodification
AI and robotics will make the production of food, housing, and healthcare nearly costless. Scarcity will be artificial, imposed by ownership structures, not physics. The goal should be to remove essential goods from market logic entirely.
Consider the trajectory: agricultural automation already produces food surpluses. Modular housing construction and 3D printing are collapsing building costs. AI-assisted diagnostics and robotic surgery are making healthcare exponentially more efficient.
Yet prices don’t fall proportionally—because private ownership captures the productivity gains.
When production costs approach zero but prices remain high, the market isn’t allocating resources—it’s extracting rent. At that point, housing becomes like feudal land tenure: you pay not for construction costs, but for the privilege of access controlled by those who got there first.
De-commodification means recognizing that some goods are too essential to be governed by profit. If AI can provide universal healthcare at 1% the current cost, why should anyone go bankrupt for insulin?
This isn’t about abolishing markets entirely—luxury goods, personal preferences, and innovation can remain market-driven. But the floor of human dignity—food, shelter, health—should be guaranteed, not auctioned.
Pillar 3: Universal Basic Income as a Social Dividend
If the public co-owns AI, then every citizen deserves a share of its output. UBI isn’t charity—it’s a dividend on collective ownership. This is how we dismantle the “End-Provider” scenario and ensure that AI’s productivity fuels shared prosperity.
Sam Altman, OpenAI’s CEO, proposed that once AI “produces most of the world’s basic goods and services,” a fund could be created by taxing land and capital rather than labor—echoing the logic of Henry George’s land value tax, updated for the AI age.
But we should approach tech elite endorsements of UBI with caution. As recent research notes, promoting UBI can be “a strategic way for AI elites to deflect criticism, maintaining control over narratives about AI’s future while avoiding challenges to their profit motives.” UBI without public ownership risks becoming hush money—a way to quiet discontent while concentrating power.
That’s why UBI must be paired with Pillars 1 and 2. It’s not a substitute for public ownership—it’s the delivery mechanism for shared prosperity.
How would we fund it?
- Robot taxes: Bill Gates proposed that companies replacing human workers with automation should pay taxes on that automation comparable to the income taxes the displaced workers once paid.
- Sovereign wealth funds: A small percentage of public company stocks could flow annually into a global AI dividend fund.
- Land and capital taxes: As Altman suggested, taxing unearned wealth rather than wages.
We already have proof of concept. Alaska’s Permanent Fund Dividend, running since 1982, distributes oil revenues to every resident. Studies found it had a neutral impact on full-time employment and drove a 17% increase in part-time work—evidence that UBI doesn’t discourage work, but enables choice.
The challenge isn’t feasibility. The real obstacle is political will.
5. Beyond Scarcity: The Paradox of Plenty
As the wealth gap widens amid material abundance, we may paradoxically sink into spiritual poverty—a life stripped of purpose.
The fusion of human creativity and machine intelligence could reverse this.
Freed from economic compulsion, humans could finally pursue meaning, not survival.
Ironically, the existential threat of AI may be what saves us from economic extinction. A rogue superintelligence could unite humanity under a single planetary mission: to govern intelligence itself.
Trust as Humanity’s Survival Mechanism
Yuval Noah Harari, in Sapiens: A Brief History of Humankind, argues that humanity’s dominance on Earth stems not from our individual strength, but from our unique ability to build trust and cooperate with strangers. In the age of AI, this capacity becomes not just valuable—but existential.
The greatest danger we face isn’t AI technology itself. It’s the erosion of trust among humans.
As Harari warns, growing distrust—polarization, nationalism, conspiracy theories—undermines our collective capacity to respond to shared threats. If we fracture into competing tribes while AI systems consolidate power, we will have lost before the game even begins.
To confront AI, we must strengthen human solidarity. We cannot afford to turn against one another.
And perhaps, in doing so, we might finally transcend the very structures that have divided us.
Beyond Nations: Humanity as One
The nation-state, once humanity’s organizing triumph, may become obsolete in the AI era.
AI doesn’t recognize borders. Climate change doesn’t stop at checkpoints. Superintelligence won’t negotiate with passports.
The challenges ahead demand not a collection of nations, but humankind as a whole community—a planetary civilization bound not by geography, but by shared destiny.
Such a global structure—born from fear—might become the very thing that ensures shared prosperity and prevents digital feudalism.
Epilogue: The Great Escape
AI could be the End-Provider—or our Great Escape.
It depends on whether we treat intelligence as a private asset or a public good.
History has given us few second chances to redesign society.
AI might be the last one we get.