🌾 Of Chaff and Wheat
What Agentic AI Leaves of Work and Who Ends Up on Which Side
On March 31, 2026, tens of thousands of Oracle employees woke up to an email delivered at six in the morning. Sender: Oracle Leadership. Content: »Today is your last working day.« No justification, no conversation, no name. Slack and VPN access had already been deactivated. Oracle had just reported one of the best quarters in its history.
It would be a mistake to read this as a slip-up by a single company. It describes a logic that is currently spreading across much of the knowledge economy: growth and employment are decoupling. Record profits no longer justify jobs; they finance their substitution. What Oracle has done, others will do. More quietly, perhaps. But in the same direction.
The question behind this is more uncomfortable than any wave of layoffs: Will I still be needed in five years? Not as a person. As a workforce. Six theses. No reassuring conclusion. But a clear one.
Thesis 1: Agentic AI Is Not Another Wave of Automation. It Is a Different Category.
Economic history knows a tried-and-tested pattern. New technology destroys jobs in one sector and creates them in another. Farm workers became factory workers. Factory workers became clerical workers. Clerical workers became knowledge workers. Each step assumed that the new category of work lay beyond the reach of the machine. And each time the assumption held: the machine could do one thing, the human the other.
Agentic AI breaks this pattern because it does not take over a specific task but operates as a cognitive generalist. It reads, analyzes, writes, decides, communicates, and coordinates. Dario Amodei, CEO of Anthropic, describes it as a general labor substitute for cognitive tasks. This means: the category of work into which knowledge workers have moved since industrialization is for the first time itself within reach of automation.
Add to this the speed. Industrial automation rolled out over decades because it required hardware. Agentic AI is software. It scales not in years but in months. The time societies have to adapt is shrinking to a degree for which there is no historical analogy. Geoffrey Hinton, Turing Award and Nobel laureate, put it bluntly in September 2025: The rich will use AI to replace workers. That creates massive unemployment and an enormous increase in profits.
»The category into which work has retreated since industrialization is for the first time itself within reach of automation.«
Thesis 2: Institutional Forecasts Rest on an Assumption That Is Wobbling for the First Time.
The World Economic Forum expects net new jobs. The IMF sees exposure but also complementarity. McKinsey points to productivity potentials that create new demand. These forecasts share a common underlying assumption: that human labor remains a permanent bottleneck because there is always something humans can do that machines cannot.
This assumption has never been wrong historically. It is the reason why the so-called Lump of Labour fallacy, the idea that there is only a fixed amount of work, has been considered an economic fallacy for two hundred years. Anton Korinek, economist at the University of Virginia and member of Anthropic's economic advisory board, identifies the core problem: Economists have argued against this fallacy for two hundred years. But their argument assumes that human labor always remains the bottleneck. Exactly that, writes Korinek, could now change.
This does not mean the institutional forecasts are wrong. It means they hold under a condition whose validity is being questioned for the first time. Anyone who does not factor this in is modeling the past, not the future. Daron Acemoglu, Nobel laureate in economics at MIT, gives the optimists an additional empirical problem: most AI investments produce so-called so-so automation, technology that destroys jobs without significantly increasing productivity, and therefore no broad prosperity gain that creates new demand and thus new work.
Thesis 3: Reskilling Does Not Solve the Problem. The Data Is Clear.
Anyone who raises the issue of displacement by agentic AI almost always gets the same answer: reskilling. People need to learn to work with AI. Society needs to invest in further education. This sounds reasonable. It is not empirically supported.
Randomized controlled trials of government reskilling programs, the gold standard of evidence, have consistently shown for decades: no statistically significant improvement in employment or income. The US Trade Adjustment Assistance program, the best-studied displacement program in the world, produced higher wage losses for participants than for non-participants. The Office of Management and Budget rated it ineffective. Even in the Swedish model, considered the European gold standard, only about a third of reskilled workers earn as much or more afterward as before.
The problem is not the will to reskill. It is time. The half-life of technical competencies has fallen below five years, for AI-specific skills to about two and a half years. AI capabilities double every seven months. A person starting reskilling today is structurally chasing a finish line that moves faster than they do. Yoshua Bengio, Turing Award laureate, put it this way: The people who lose their jobs are not necessarily the same ones who can transition into AI-related roles. Reskilling assumes a compatibility that does not exist for a growing share of those affected.
»Reskilling sounds reasonable. But the finish line moves faster than the person running toward it.«
Thesis 4: Entry-Level Positions Disappear First. This Destroys More Than Jobs.
The earliest and most consequential effect of agentic AI on the labor market is not the elimination of senior positions. It is the disappearance of entry-level positions. Salesforce reduced its customer service staff by about half within a short time. Klarna cut forty percent of its workforce. McKinsey now operates twenty thousand AI agents alongside forty thousand humans. Stanford researchers documented a significant decline in young workers' employment in AI-exposed occupations between 2022 and 2025.
This is not just an employment problem. It is a training problem. Entry-level positions are not primarily productive roles. They are learning environments. The consulting firm that no longer hires junior consultants saves costs and simultaneously loses the infrastructure from which its senior consultants should emerge in ten years. PwC has internally identified this risk: The reduction of entry-level hires could deprive the organization of its future leadership. The IMF confirms in its January 2026 research that hiring of entry-level workers is already declining.
The societal impact is even more severe. Career biographies begin with entry-level positions. Those who cannot find one develop no professional identity, no network, no competence through practice. This is not an abstract sociological observation. It is the beginning of a structural rupture between a generation and the labor market that lasts longer than any recession.
Thesis 5: The Labor Market Is Splitting. Not Into Winners and Losers, but Into Controllers and Controlled.
The common narrative about AI and work goes: There will be winners who work with AI, and losers who are replaced by it. This distinction is too crude. The more relevant dividing line runs not between those who use AI and those who do not. It runs between those who configure and control AI systems, and those who are configured and controlled by AI systems.
The model that is already establishing itself is: one specialist with fifty AI agents. What used to require a team of ten people is now done by three with the appropriate infrastructure. Solo-founded startups already account for over a third of all new businesses. Demand is shifting to senior architects, product owners, and designers who set standards and orchestrate agents. At the same time, the positions that once made up the majority of knowledge workers, namely junior roles, clerical work, standardized analysis, are not being supplemented. They are being replaced.
For companies, this means a decision that must be made now: Which roles complement AI systems, and which do they replace? Those who do not ask this question answer it anyway, through inaction. The people working in these roles today usually notice only when the decision has long since been made.
Thesis 6: The Societal Response Is Missing. And Time Is Running Out.
There is no societal response to agentic AI that even comes close to the speed of change. Social systems were designed for a world where work is scarce and capital must be distributed. They are not built to manage a situation where work becomes surplus. Pension systems require contributions. Health insurance requires employment. Even universal basic income, regularly cited as a solution in these debates, addresses the material problem, not the more fundamental one: the question of what work means for people beyond income.
Hinton rejects basic income not because of its cost but because of its logic. It does not address human dignity. Demis Hassabis, CEO of DeepMind, poses the systemic question: If governments and companies no longer need people to generate prosperity, what bargaining power do citizens still have to demand the foundations of democracy and a good life? He considers a new political philosophy necessary. This is a factual description of a legitimacy problem that arises when the social contract between work and participation tears apart.
The reaction times of democratic institutions are structurally too slow for the speed of this change. This is not an argument against democracy. It is an argument for acting now, before inaction becomes a decision. Sam Altman, whose company is significantly contributing to this dynamic, himself called for a New Deal for the AI age in April 2026, including a public wealth fund and taxation of automated production. That the CEO of the leading AI company considers such instruments necessary says more about the internal assessment of the disruption than any external forecast.
»If governments and companies no longer need people to generate prosperity, what bargaining power do citizens still have?«
Who is wheat and who is chaff is decided not by talent or diligence alone. It is decided by position in the system: whether someone configures systems or is configured by them. Whether they make decisions that AI executes, or fulfill tasks that AI can already handle. The dividing factor is the direction of dependency.
This is not a moral statement. It is a functional description of what is happening right now. The question that follows is not technological. It is political: Who decides how this system is built? And whose interests are represented in the process?
This question is still open. But the window is closing.