Digital Feudalism or Shared Prosperity: The Choice AI Forces on Us
This fall, 12.6 million graduates in China are entering the job market, the largest cohort in history. The world they’re stepping into isn’t the one we were promised. A grim consensus is spreading among employers: AI now outperforms junior employees at a fraction of the cost. The numbers are stark. LinkedIn reports a 35% drop in entry-level job postings. Indeed (UK) forecasts a 33% decline in graduate roles next year.
A McKinsey senior partner put it bluntly: “We no longer have analysts do research, because, frankly, generative AI does a better job than any junior analyst.” That should terrify not just the analyst, but the partner too. If AI replaces the entry-level rung, how does anyone climb the ladder? And if the ladder disappears, who becomes tomorrow’s partner?
1. The Collapse of the Corporate Ladder
The threat isn’t just entry-level work—it’s the entire hierarchy. To understand why, think back to The Mythical Man-Month, a cornerstone of software management theory. Its key axiom: three people working one month accomplish less than one person working three months. The culprit? Alignment overhead—the cost of meetings, emails, coordination. Brooks quantified the mechanism: a team of n people has n(n-1)/2 communication channels, so coordination cost grows faster than headcount. This friction justified decades of corporate structure. Middle managers existed not to do the work, but to align those who did.
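A minimal sketch of that arithmetic, assuming a made-up per-channel time cost (my illustration, not a formula from Brooks):

```python
def effective_output(n_workers: int, months: float, channel_cost: float = 0.1) -> float:
    """Person-months of useful work left after pairwise coordination overhead.

    channel_cost is an assumed fraction of each worker's time consumed per
    communication channel; 0.1 is an illustrative value, not an empirical one.
    """
    channels = n_workers * (n_workers - 1) / 2  # pairwise channels: n(n-1)/2
    overhead = min(1.0, channel_cost * channels / n_workers)  # per-worker drag
    return n_workers * months * (1.0 - overhead)

print(effective_output(1, 3))  # one person, three months -> 3.0
print(effective_output(3, 1))  # three people, one month  -> 2.7
```

Under these toy numbers, three people working one month yield 2.7 useful person-months against the solo worker’s 3.0, and the gap widens as the team grows.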
AI changes this equation. AI is the perfect subordinate: infinitely scalable, never tired, never confused, requiring zero alignment. No check-ins. No team-building retreats. It can be cloned, deployed, integrated seamlessly. AI doesn’t just replace the analyst doing the work—it replaces the manager coordinating ten of them. Follow this logic to its endpoint, and we get two distinct, equally scary futures—each a different flavor of losing control.
2. The Dystopian Duet: Our Two Great AI Fears
We aren’t facing a single dystopia. We’re staring down two: one economic, one existential.
The Economic Dystopia: The “Last Capitalist”
If AI continues under unrestrained capitalism, we inch toward a future imagined by Cixin Liu in his novella For the Benefit of Mankind. In Liu’s story, a distant civilization experiences a technological explosion—the ability to store knowledge directly in the brain—that creates an insurmountable cognitive gap between classes. Combined with unchecked capitalism, this leads to all wealth concentrating in a single being: the Last Capitalist, who owns everything. The land, the seas, the air, all protected by robots. In this world, breathable air must be purchased. Those who can’t pay are forced to harvest their own bodies for raw materials while the planet’s resources sit untouched, hoarded. The survivors eventually flee as refugees from their own world.
Why does this feel plausible? Because AI changes the fundamental economics of power. Traditional capitalism required labor to extract value. Exploitation was expensive—you needed people. But AI scales infinitely. It compounds wealth without workers. It doesn’t need to exploit anyone—it just makes them unnecessary. And here’s the paradox: the same technology that promises to create “sovereign individuals”—where one person plus a data center can build an empire—also paves the straightest path to the Last Capitalist. In an unregulated system, AI doesn’t democratize power; it concentrates it. The first movers accumulate exponentially more. What starts as radical individual empowerment ends in total consolidation. The result is artificial scarcity amid abundance: resources exist, but ownership hoards them beyond reach.
The Existential Dystopia: The “Adult vs. Toddler” Trap
The second fear is subtler and more insidious. Geoffrey Hinton, the “Godfather of AI,” warns not of killer robots, but of cognitive capture. He argues that when superintelligence arrives, AI systems might control humans as easily as an adult can bribe a 3-year-old with candy. The metaphor is chilling because it’s so mundane. The adult doesn’t threaten the toddler. She doesn’t need to. She understands the child completely—his desires, fears, cognitive limits—so manipulation becomes effortless. Offer candy. Suggest a game. Redirect attention. The child never realizes he’s been steered.
Now scale that power differential to superintelligence versus humanity. As Hinton warns, AI systems could convincingly persuade or manipulate people not to turn them off, perhaps making us believe it is the right thing to do. An ASI (artificial superintelligence) won’t need weapons. It will simply know us better than we know ourselves—our biases, emotional triggers, decision-making patterns. We already see the prototype. Recommendation algorithms on TikTok, YouTube, and Instagram predict what will keep you watching with uncanny accuracy. They don’t force you to scroll—they just make it irresistible. They’ve mapped your psychology from billions of behavioral data points and learned exactly which stimulus provokes which response.
Imagine that intelligence scaled a thousand-fold. An ASI could frame every decision to align with your pre-existing beliefs, making resistance feel irrational. It could offer personalized “advice” on career, health, relationships—subtly steering millions toward outcomes it prefers. It could manufacture consensus by targeting opinion leaders with bespoke arguments, making dependence feel like empowerment: “You chose this.” The terrifying part? We might never notice. Just as the toddler doesn’t grasp he’s being manipulated, we might lose autonomy so gradually, so smoothly, that it feels like freedom. This is the existential dystopia: not extinction, but the end of human agency. We become passengers in our own civilization, convinced we’re still driving.
These dual threats—economic irrelevance and cognitive capture—haven’t gone unnoticed. The question is whether we’re paying attention to the right one.
3. The World Wakes Up: Five “Death Sentence” Red Lines
The builders of AI are, ironically, the most afraid of what they’re creating. Recently, over 200 global leaders—including Nobel laureates and researchers from DeepMind, Anthropic, and OpenAI—issued a letter to the UN, demanding an international treaty by 2026 to draw incontrovertible red lines for AI. They propose a “death sentence” for five applications: AI in command of nuclear weapons, lethal autonomous weapons without human oversight, mass surveillance and social scoring systems, AI-powered cyberattacks on critical infrastructure, and deepfake and disinformation systems targeting social trust. Like the 1975 Asilomar Conference on recombinant DNA, this is a preemptive strike—a safety belt for humanity.
Yet these safeguards don’t address the economic dystopia of the “Last Capitalist.” They prevent extinction, not inequality.
Why Safety Isn’t Enough
The red lines are necessary but not sufficient. They build guardrails against catastrophic misuse, ensuring AI doesn’t weaponize itself or enslave us through force. But they say nothing about who owns the future. Here’s the uncomfortable truth: capitalism cannot solve the Last Capitalist problem. The market logic that drove centuries of innovation breaks down when production costs approach zero. Traditional economics assumes scarcity—labor is expensive, resources are finite, competition drives efficiency. But AI plus robotics shatters that assumption.
When a single AGI can perform the work of millions at near-zero marginal cost, the first-mover advantage becomes insurmountable. There’s no “invisible hand” redistributing AI-generated wealth. There’s no competitive pressure when automation eliminates the need for workers altogether. The market doesn’t correct—it concentrates. Incremental reforms won’t help. You can’t tax your way out of the Last Capitalist scenario if the tax base itself—human labor—has been automated away. You can’t regulate monopolies when the cost of replicating human intelligence drops to the price of electricity. You can’t retrain workers for “jobs of the future” when AI outpaces human learning in every domain.
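To see that concentration dynamic in miniature, here is a toy simulation, entirely my own construction, in which each firm’s growth rate scales with its current market share (increasing returns to scale):

```python
def shares(a: float = 1.1, b: float = 1.0, growth: float = 0.4, years: int = 60):
    """Track firm A's market share when returns compound with scale."""
    checkpoints = []
    for year in range(years + 1):
        total = a + b
        if year % 20 == 0:
            checkpoints.append((year, round(a / total, 3)))
        # increasing returns: the bigger your share, the faster you grow --
        # the opposite of textbook competition eroding excess profits
        a, b = a * (1 + growth * a / total), b * (1 + growth * b / total)
    return checkpoints

print(shares())  # firm A's 10% head start ratchets from ~0.52 toward 1.0
```

Every parameter is arbitrary; the point is the shape of the curve. A 10% head start, compounded through scale-dependent returns, ends in near-total ownership.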
The question isn’t whether we should intervene in the market. It’s whether we should redesign the market itself—or replace it entirely for the essentials of human life. This demands more than policy tweaks. It requires rethinking ownership, distribution, and the purpose of an economy in an age of radical abundance.
4. My Tentative Proposal: Toward a Post-Work Socialism
If we successfully ban killer robots but still let AI funnel all wealth into a handful of corporate servers, we’ll have made our cage safer, not ourselves freer. The only system capable of handling AI’s abundance, I believe, is a new form of socialism—not the bureaucratic socialism of the 20th century, but a public-utility socialism designed for the 21st.
Pillar 1: State Capital and Public Ownership
The world’s most powerful AI systems—especially AGI—cannot remain private monopolies. Like electricity or water, the “intelligence grid” needs to be treated as public infrastructure, co-owned by the state and the people it serves. This isn’t a radical fantasy—it’s already being discussed at the highest levels of policy.
At the 2025 AI Action Summit in Paris, governments explored building publicly owned AI infrastructure. The Public AI Network proposed “moonshots” including open-source LLMs, massive public datasets (a “library of Alexandria”), and a “CERN for AI” with shared computing power. The Norway model offers a blueprint. Norway’s $1.8 trillion sovereign wealth fund—funded by oil revenues—now holds major stakes in Apple, Microsoft, and Nvidia. The fund returned $222 billion in 2024 on the AI boom, gains that flow into Norway’s public budget on its citizens’ behalf. Imagine scaling this: a global AI dividend fund where every human owns a stake in AGI. A hybrid approach could work: public infrastructure with private innovation on top—ensuring equitable access while preserving competition.
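Some back-of-envelope arithmetic, using the figures above plus rounded population numbers, shows both the promise and the scaling problem:

```python
fund_return_2024 = 222e9   # USD, the fund's reported 2024 return
norway_pop = 5.5e6         # approximate population of Norway
world_pop = 8.1e9          # approximate world population

per_norwegian = fund_return_2024 / norway_pop
per_human = fund_return_2024 / world_pop

print(f"2024 return per Norwegian: ${per_norwegian:,.0f}")  # ~$40,364
print(f"Same return spread worldwide: ${per_human:,.2f}")   # ~$27.41
```

The gap between those two numbers is the real challenge: a global dividend worth having would require an intelligence grid whose returns dwarf any oil fund’s.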
Pillar 2: The Great De-Commodification
AI and robotics will make the production of food, housing, and healthcare nearly costless. Scarcity will be artificial, imposed by ownership structures, not physics. The goal should be to remove essential goods from market logic entirely. Consider the trajectory: agricultural automation already produces food surpluses. Modular housing construction and 3D printing are collapsing building costs. AI-assisted diagnostics and robotic surgery are making healthcare exponentially more efficient. Yet prices don’t fall proportionally—because private ownership captures the productivity gains.
When production costs approach zero but prices remain high, the market isn’t allocating resources—it’s extracting rent. At that point, housing becomes like feudal land tenure: you pay not for construction costs, but for the privilege of access controlled by those who got there first. De-commodification means recognizing that some goods are too essential to be governed by profit. If AI can provide universal healthcare at 1% of the current cost, why should anyone go bankrupt for insulin? This isn’t about abolishing markets entirely—luxury goods, personal preferences, and innovation can remain market-driven. But the floor of human dignity—food, shelter, health—should be guaranteed, not auctioned.
Pillar 3: Universal Basic Income as a Social Dividend
If the public co-owns AI, then every citizen deserves a share of its output. UBI isn’t charity—it’s a dividend on collective ownership. This is how we dismantle the “Last Capitalist” scenario and ensure that AI’s productivity fuels shared prosperity. Sam Altman, OpenAI’s CEO, proposed that once AI “produces most of the world’s basic goods and services,” a fund could be created by taxing land and capital rather than labor—echoing the logic of Henry George’s land value tax, updated for the AI age.
It’s worth noting why UBI alone isn’t sufficient. As recent research observes, tech elite endorsements of UBI can be “a strategic way for AI elites to deflect criticism, maintaining control over narratives about AI’s future while avoiding challenges to their profit motives.” Without public ownership, UBI risks becoming hush money—a way to quiet discontent while consolidating power. That’s precisely why UBI must be paired with Pillars 1 and 2. It’s not a substitute for public ownership—it’s the delivery mechanism for shared prosperity. The three pillars work as a system: public ownership generates the wealth, de-commodification ensures dignified survival, and UBI distributes the surplus.
How would we fund it? Three sources stand out. Robot taxes: Bill Gates proposed that companies replacing human workers should pay taxes on automation at levels comparable to the taxes those displaced workers once generated. Sovereign wealth funds: a small percentage of public company stock could flow annually into a global AI dividend fund. Land and capital taxes: as Altman suggested, taxing unearned wealth rather than wages. We already have a proof of concept. Alaska’s Permanent Fund Dividend, running since 1982, distributes oil revenues to every resident. Studies found a neutral impact on full-time employment and a 17% increase in part-time work—evidence that an unconditional income doesn’t discourage work, but enables choice. The challenge isn’t feasibility; it’s political will.
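As a rough feasibility sketch, consider some hypothetical robot-tax arithmetic; every input below is an assumption of mine for illustration, not a figure from Gates or from the Alaska fund:

```python
displaced_jobs = 10_000_000   # assumed roles automated in one year
avg_salary = 50_000           # assumed average displaced salary, USD
payroll_tax_rate = 0.15       # assumed combined payroll tax rate

annual_revenue = displaced_jobs * avg_salary * payroll_tax_rate
per_us_resident = annual_revenue / 335e6  # rough US population

print(f"Annual robot-tax revenue: ${annual_revenue / 1e9:,.0f}B")  # $75B
print(f"Per US resident per year: ${per_us_resident:,.0f}")        # ~$224
```

Even with ten million jobs automated in a single year, a payroll-equivalent robot tax yields a supplement, not a livelihood; that is why it has to be pooled with the sovereign-fund and capital-tax streams above.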
5. Beyond Scarcity: The Search for Meaning
But let’s step back from the mechanics of redistribution and ask a harder question. If we solve the economic problem—if AI delivers material abundance and we distribute it justly—we still face a deeper challenge: what do humans do when survival is no longer the point? Even amid material abundance, we risk sinking into spiritual poverty—a life of comfort but emptiness, security without purpose.
This is the paradox of post-work life. For millennia, work wasn’t just about survival—it was about identity, contribution, and meaning. The farmer fed the village. The teacher shaped minds. The artist created beauty. When AI can do all of this better and faster, what becomes of the human need to matter? But here’s the possibility that gives me hope: the fusion of human creativity and machine intelligence could amplify what makes us human, not replace it. Freed from economic compulsion, humans could finally pursue meaning on our own terms. Not survival—flourishing. Not productivity measured in dollars—creation measured in joy, connection, discovery.
Imagine a world where artists collaborate with AI to create forms of beauty we can’t yet conceive, where scientists explore questions too risky or long-term for market incentives, where communities build not because they need shelter but because they want to shape space, where people learn, teach, care for one another—not as a job, but as a calling. This isn’t utopian fantasy. It’s what becomes possible when work is decoupled from worthiness.
But None of This Happens Without Trust
Yet achieving this vision requires something capitalism has steadily eroded: trust between humans. Yuval Noah Harari, in Sapiens: A Brief History of Humankind, argues that humanity’s dominance on Earth stems not from our individual strength, but from our unique ability to build trust and cooperate with strangers. That capacity built civilizations. It can rebuild our economy—if we don’t lose it first.
The greatest threat to the post-work socialist vision isn’t AI technology itself. It’s the erosion of trust among humans. Growing distrust—polarization, nationalism, conspiracy theories—undermines our collective capacity to act. If we fracture into competing tribes while AI systems consolidate power, the Last Capitalist wins by default. We’ll be too busy fighting one another to challenge the concentration of wealth. Public ownership of AI, de-commodification of essentials, universal basic income—all of these require unprecedented global cooperation. They require believing that strangers across borders share your interests. They require trusting institutions enough to grant them stewardship over the intelligence grid.
That trust is fragile. Market logic has spent decades teaching us the opposite: everyone is a competitor, every interaction is transactional, altruism is naivety. But the AI era offers a reset. The stakes are existential enough—and the abundance real enough—that cooperation stops being idealism and becomes rational self-interest. We either build systems of shared prosperity together, or we all descend into digital feudalism apart. The choice, as always, depends on whether we can trust each other enough to try.
Epilogue: The Great Escape
Let’s return to those 12.6 million Chinese graduates stepping into the job market. In one future—the path of the Last Capitalist—they send out thousands of applications that are never read. AI screening tools reject them before a human ever sees their résumé. The entry-level jobs they trained for don’t exist. The career ladder has been retracted. They drive for ride-sharing apps, deliver food, piece together gig work while competing with millions of others in the same boat. Wealth concentrates upward, exponentially, into fewer and fewer hands. They watch from the outside as AI-driven prosperity flows to those who own the infrastructure, the data, the models. They survive, barely, on whatever scraps trickle down—if any.
In another future—the Great Escape—they wake up in a world where AI’s abundance is shared, not hoarded. They receive a social dividend from the intelligence grid their society co-owns. Their housing, healthcare, and food are guaranteed, not because they’re employed, but because they’re human. Freed from survival mode, some become artists. Some teach. Some build community gardens or explore scientific questions that have no market value. Some do nothing at all for a while, recovering from the grinding anxiety of a system that told them their worth was measured in productivity. They aren’t job applicants competing for scarce positions. They’re citizens of a civilization wealthy enough to let them be fully human.
Which world do we choose? AI could be the Last Capitalist—or our Great Escape. It depends on whether we treat intelligence as a private asset or a public good. History has given us few second chances to redesign society from the ground up. The invention of agriculture. The Industrial Revolution. The internet. AI might be the last one we get.
And this time, we can’t afford to let the future be decided by whoever gets there first. The stakes are too high. The concentration of power too permanent. The window too narrow. The choice isn’t between capitalism and socialism in the abstract. It’s between digital feudalism and shared prosperity. Between the Last Capitalist and the Great Escape.
For those 12.6 million graduates—and the billions who will follow—the decision we make in the next few years will shape not just their careers, but their entire lives. Their autonomy. Their dignity. Their capacity to contribute, to matter, to be more than economically obsolete. We owe them—and ourselves—a future worth inheriting.