Post-AGI Economics As If Nothing Ever Happens
What's wrong with most econ thinking about AI transition.
When economists think and write about the post-AGI world, they often rely on the implicit assumption that parameters may change but, structurally, not much happens. And if something does change, it is maybe one or two empirical facts, nothing fundamental.
This mostly worked for all sorts of other technologies, where technologists predicted society would be radically transformed, e.g. by everyone having most of humanity’s knowledge available for free all the time, or by everyone being able to instantly communicate with almost anyone else.1
But it will not work for AGI, and as a result most of the econ modelling of the post-AGI world is irrelevant or actively misleading,2 making people who rely on it more confused than if they had just thought “this is hard to think about, so I don’t know”.
Econ reasoning from a high-level perspective
Econ reasoning tries to do something like projecting extremely high-dimensional reality onto something like ten real numbers and a few differential equations. All the hard cognitive work is in the projection. Solving a bunch of differential equations impresses the general audience, and historically may have worked as some sort of proof of intelligence, but is relatively trivial.
How the projection works is usually specified by some combination of assumptions, models and concepts used, where the concepts themselves usually imply many assumptions and simplifications.
In the best case of economic reasoning, the projection captures something important, and the math leads us to some new insight.3 In cases which are in my view quite common, the non-mathematical, often intuitive reasoning of the economist leads to some interesting insight, and then the formalisation, assumptions and models are selected so that the math leads to the same conclusions. The resulting epistemic situation may be somewhat tricky: the conclusions may be true, the assumptions sensible, but the math is less relevant than it seems - given the extremely large space of economic models, had the economist had different intuitions, they could have found different math leading to different conclusions.
Unfortunately, there are many other ways the economist can reason. For example, they can be driven to reach some counter-intuitive conclusion, incentivized by academic drive for novelty. Or they may want to use some piece of math they like.4 Or, they can have intuitive policy opinions, and the model could be selected so it supports some policy direction - this process is usually implicit and subconscious.
The bottom line is that if we are interested in claims and predictions about reality, the main part of an economic paper is its assumptions and the concepts used. The math is usually right.5
Econ reasoning applied to post-AGI situations
The basic problem with applying standard economic reasoning to post-AGI situations is that sufficiently advanced AI may violate many assumptions which make perfect sense in a human economy but may not generalize. Often the assumptions are so basic that they are implicit, assumed in most econ papers, and out of sight in the usual “examining the assumptions”. Advanced AI may also break some of our intuitions about how the world works, breaking the intuitive process upstream of formal arguments.
What complicates the matter is that these assumptions often interact with considerations and disciplines outside the core of economic discourse, and are better understood and examined using frameworks from other disciplines.
To give two examples:
AI consumers
Consumption has so far been driven by human decisions and utility. Standard economic models ultimately ground value in human preferences and utility. Humans consume, humans experience satisfaction, and the whole apparatus of welfare economics and policy evaluation flows from this. Firms are modeled as profit-maximizing, but profit is instrumental - it flows to human owners and workers who then consume.
If AIs own capital and have preferences or goals of their own, this assumption breaks down. If such AIs spend resources, this should likely count as consumption in the economic sense.
Preferences
The usual assumption in most econ thinking is that humans have preferences which are somewhat stable and somewhat self-interested, and that what these are is a question mostly outside of economics.6 There are whole successful branches of economics studying to what extent human preferences deviate from VNM rationality, how human decision making suffers from cognitive limitations, or how preferences form, but these are not at the center of attention of mainstream macroeconomics.7 Qualitative predictions in the case of humans are often similar, so the topic is not that important.
When analyzing the current world, we find that human preferences come from diverse sources, like biological needs, learned tastes, and culture. A large component seems to be ultimately selected for by cultural evolution.
Post-AGI, the standard econ assumptions may fail, or need to be substantially modified. Why?
One consideration is that the differences in cognitive abilities between AGIs and humans may make human preferences easily changeable by AGIs. As an intuition pump: consider a system composed of a five-year-old child and her parents. The child obviously has some preferences, but the parents can usually change them. Sometimes by coercion or manipulation, but often just by pointing out consequences, extrapolating the child’s wants, or exposing her to novel situations.
Preferences are also relative to a world model: the standard econ way of modelling differences in world models is “information asymmetries”. The kid does not have as good an understanding of the world, and would easily be exploited by adults.
Because children’s preferences are not as stable and self-interested as adults’, and kids suffer from information asymmetries, they are partially protected by law: the result is a patchwork of regulation where, for example, it is legal to try to modify children’s food preferences, but adults are prohibited from trying to change a child’s sexual preferences to their advantage.
Another “so obvious it is easy to overlook” effect is children’s dependence on their parents’ culture: if the parents are Christians, it is quite likely their five-year-old will believe in God. If the parents are patriots, the kid will also likely have some positive ideas about their country.8
When interacting with cognitive systems far more capable than us, we may find ourselves in a situation somewhat similar to kids: our preferences may be easily influenced, and not particularly self-interested. The ideologies we adopt may be driven by non-human systems. Our world models may be weak, resulting in massive information asymmetries.
There is even a strand of economic literature that explicitly models parent-child interactions, families and the formation of preferences.9 This body of work may provide useful insights, and I’d be curious - is anyone looking there?
The solution may be analogous: some form of paternalism, where human minds are massively protected by law from some types of interference. This may or may not work, but once that is the case, you basically cannot start from classical liberal and libertarian assumptions. As an intuition pump, imagine someone trying to do “macroeconomics of ten-year-olds and younger” in the current world.
Other core concepts
We could examine some other typical econ assumptions and concepts in a similar way, and each would deserve a paper-length treatment. This post mostly tries to stay a level more meta, so here are just some pointers.
Property rights. Most economic models take property rights as exogenous - “assume well-defined and enforced property rights.” If you look at how most property rights are actually connected to physical reality, they often mean some row exists in a database run by the state or a corporation. Enforcement ultimately rests on the state’s monopoly on violence, its capacity to monitor, and its will to act as an independent enforcer. As all sorts of totalitarian, communist, colonial or despotic regimes illustrate, even in purely human systems, private property depends on power. If you assume property is stable, you are assuming things about governance and power.
Transaction costs and firm boundaries. Coase’s theory10 explains why firms exist: it is sometimes cheaper to coordinate internally via hierarchy than externally via markets. The boundary of the firm sits where transaction costs of market exchange equal the costs of internal coordination. AI may radically reduce both—making market transactions nearly frictionless while also making large-scale coordination easy. The equilibrium size and structure of firms could shift in unpredictable directions, or the concept of a “firm” might become less coherent.
Discrete agents and competition. Market models assume distinct agents that cooperate and compete with each other. Market and competition models usually presuppose you can count the players. AGI systems can potentially be copied, forked, merged, or run as many instances, and what their natural boundaries are is an open problem.
Capital vs. Labour. Econ 101 models typically include capital and labour as basic concepts: factors in a production function, Total Factor Productivity, Cobb-Douglas, etc. Capital is produced, owned, accumulated, traded, and earns returns for its owners. Labour is what humans do, and cannot be owned. This makes a lot of sense in modern economies, where there is a mostly clear distinction between “things” and “people”. It is more ambiguous if you look back in time - in slave economies, do slaves count as labour or capital? It is also a bit more nuanced today - for example with “human capital”.
When analyzing the current world, there are multiple reasons why the “things” and “people” distinction makes sense. “Things” are often tools. These amplify human effort, but are not agents. A tractor makes a farmer more productive, but does not make many decisions. Farmers can learn new tasks, tractors cannot. Another distinction is that humans are somewhat fixed: you cannot easily and quickly increase or decrease their numbers.
Post-AGI, this separation may stop making sense. AIs may reproduce like capital, act as agents like labour, learn fast, and produce innovation like humans. Humans may own them like normal capital, or more like slaves, or AIs may be self-owned.
Better and worse ways to reason about post-AGI situations
There are two epistemically sound ways to deal with the problems of generalizing economic assumptions: broaden the view, or narrow the view. There are also many epistemically problematic moves people make.
Broadening the view means we try to incorporate all crucial considerations. If assumptions about private property lead us to think about post-AGI governance, we follow. If thinking about governance leads to the need to think about violence and military technology, we follow. In the best case, we think about everything in terms of probability distributions, and more or less likely effects. This is hard, interdisciplinary, and necessary, if we are interested in forecasts or policy recommendations.
Narrowing the view means focusing on some local domain, trying to make a locally valid model and clearly marking all the assumptions. This is often locally useful, may build intuitions for some dynamic, and fine as long as a lot of effort is spent on delineating where the model may apply and where clearly not.
What may be memetically successful and get a lot of attention, but is bad overall, is doing the second kind of analysis and presenting it as the first type. A crucial consideration is one which can flip the result. If an analysis ignores or assumes away ten of these, the results have basically no practical relevance: imagine that for each crucial consideration there is a 60% chance the modal view is right and a 40% chance it is not. Assume or imply the modal view is right ten times, and your analysis holds in 0.6% of worlds.
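As a sanity check on the arithmetic, the survival probability of such an analysis can be sketched in a few lines of Python (the function name and numbers are illustrative, not from the original text):

```python
# If an analysis silently assumes the "modal" resolution of n independent
# crucial considerations, and each assumption holds with probability p,
# the whole analysis only applies in the fraction of worlds where all hold.
def survival_probability(p: float, n: int) -> float:
    return p ** n

p_modal = 0.6   # chance the modal view is right on any one consideration
n_assumed = 10  # number of crucial considerations assumed away

print(f"{survival_probability(p_modal, n_assumed):.4f}")  # prints 0.0060, i.e. ~0.6% of worlds
```

The independence assumption is itself a simplification; correlated considerations would change the number, but not the qualitative point that silently stacked assumptions compound.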
In practice, this is usually not done explicitly - almost no one claims their analysis considers all important factors - but as a form of motte-and-bailey fallacy. The motte is the math in the paper: it follows from the assumptions, and there are many of these. The bailey is the broad-stroke arguments, blogpost summaries, tweets and shorthand references, spreading much further, without the hedging.
In the worst cases, the various assumptions made are contradictory, or at least anticorrelated. For example: some economists assume comparative advantage generally preserves the relevance of human labour, and that AIs are just a form of capital which can be bought and replicated. However, comparative advantage depends on opportunity costs: if you do X, you cannot do Y at the same time. The implicit assumption is that you cannot just boot up a copy of yourself. If you can, the “opportunity cost” is not something like the cost of your labour, but the cost of booting up another copy. If you assume future AGIs are as efficient a substitute for human labour as current AIs are for moderately boring copywriting, the basic “comparative advantage” model is consistent with the price of labour dropping 10000x below minimum wage. The comparative advantage model is then still literally true, but it does not have the same practical implications. Also, while in the human case the comparative advantage model is usually not destroyed by frictions, if your labour is of sufficiently low value, the effective price of human labour can be 0. For a human example, five-year-olds, or people with severe mental disabilities who are unable to read, are not actually employable in the modern economy. In the post-AGI economy, it is easy to predict frictions like humans not operating at machine speeds or not understanding directly communicated neural representations.
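The copyability point can be made concrete with a toy calculation (all numbers and variable names are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy model: in classic comparative advantage, a human's wage ceiling on a task
# is anchored to what the AI gives up by doing that task instead of its best
# alternative. If AI labour is copyable, the relevant opportunity cost collapses
# to the cost of booting and running one more copy.
ai_value_elsewhere = 500.0      # $/hour an AI instance earns on its best task (hypothetical)
copy_cost = 0.02                # $/hour to run one extra AI copy (hypothetical)
human_productivity_ratio = 0.5  # human output on the task relative to one AI copy

# Scarce AI labour: the human can be paid up to what the AI forgoes by switching.
wage_ceiling_scarce = human_productivity_ratio * ai_value_elsewhere

# Copyable AI labour: the human competes with "just boot another copy".
wage_ceiling_copyable = human_productivity_ratio * copy_cost

print(wage_ceiling_scarce)    # 250.0
print(wage_ceiling_copyable)  # 0.01
```

Comparative advantage still formally holds in both rows; what changes is that the wage ceiling drops by four orders of magnitude, and any fixed friction larger than that ceiling pushes the effective human wage to 0.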
What to do
To return to the opening metaphor: economic reasoning projects high-dimensional reality into a low-dimensional model. The hard work is choosing the projection. Post-AGI, we face a situation where the reality we are projecting may be different enough that projections calibrated on human economies systematically fail. The solution is usually to step back and bring more variables into the model. Sometimes this involves venturing outside of the core of econ thinking, and bringing in political economy, evolution, computational complexity or even physics and philosophy. Or maybe just look at other parts of economic thinking, which may be unexpectedly relevant. This essay is not a literature review. I’m not claiming that no economist has ever thought about these issues, just that the most common approach is wrong.
On a bit of a personal note: I would love it if there were more than 5-10 economists working seriously on post-AGI questions and engaging seriously with the debate. If you are an economist… I do understand that you are used to interacting with an often ignorant public, worried about jobs and not familiar with all the standard arguments and effects like Baumol, Jevons, the lump of labour fallacy, gains from trade, etc. Fair enough, but the critique here is different: you’re assuming answers to questions you haven’t asked. If you are modelling the future using econ tools, I would like to know your answers/assumptions about “are AIs agents?”, “how are you modelling AI consumption?”, “in your model, do AIs own capital?” or “what is the system of governance compatible with the economic system you are picturing?”
Thanks to Marek Hudík, Duncan Mcclements and David Duvenaud for helpful comments on a draft version of this text. Mistakes and views are my own. Also thanks to Claude Opus 4.5 for extensive help with the text.
Gordon, Robert J. The Rise and Fall of American Growth.
Examples of what I'm criticizing range from texts by Nobel laureates - e.g. Daron Acemoglu, The Simple Macroeconomics of AI (2024) - to posts by rising stars of thinking about the post-AGI economy, like Philip Trammell's Capital in the 22nd Century.
Sane economists are perfectly aware of the nature of the discipline. For a longer discussion: Rodrik, Dani. Economics Rules: The Rights and Wrongs of the Dismal Science. W.W. Norton, 2015.
Romer, Paul. “The Trouble with Macroeconomics.” The American Economist 61, no. 1 (2016): 31-44.
“So, math plays a purely instrumental role in economic models. In principle, models do not require math, and it is not the math that makes the models useful or scientific.” Rodrik (2015)
Classic text by Robbins (1932) defines preferences as out of scope “Economics is the science which studies human behavior as a relationship between given ends and scarce means which have alternative uses.” Another classical text on the topic is Stigler & Becker (1977) “De Gustibus Non Est Disputandum.” As with almost any claim in this text: yes, there are parts of econ literature about preference formation, but these usually do not influence the post-AGI macroeconomy papers.
De Grauwe, Paul, and Yuemei Ji. “Behavioural Economics is Also Useful in Macroeconomics.” VoxEU, January 2018.
Driscoll, John C., and Steinar Holden. “Behavioral Economics and Macroeconomic Models.” Journal of Macroeconomics 41 (2014): 133-147.
Bisin, Alberto, and Thierry Verdier. “The Economics of Cultural Transmission and the Dynamics of Preferences.”
Becker, Gary S. A Treatise on the Family.
Coase, Ronald H. “The Nature of the Firm.” Economica 4, no. 16 (1937): 386-405.

If you want to see someone who is taking the economic impact of AGI seriously, read Korinek:
https://www.korinek.com/Home
Where almost everyone else assumes marginal effects, he takes equilibrium effects seriously, and ultimately finds a wide range of scenarios.
How Marx’s Economic Theory Would Engage with the Post-AGI Scenario
This is a well-posed question. In fact, this is one of the few contexts in which Marx is not merely of historical interest but becomes a sharp diagnostic tool. I’ll proceed in layers: first addressing foundational assumptions, then where Marx aligns with the argument, and finally where his framework likely breaks down.
1. Marx’s Starting Point: Economics as a Historical and Structural Phenomenon
The most important point first: Marx would not be surprised by the core thesis of the text.
The central flaw of neoclassical economics—treating its core categories as timeless—is precisely what Marx set out to critique. For Marx, concepts such as:
“labor power”
“capital”
“property”
“markets”
are not natural facts, but historically specific social relations.
From this perspective, the claim that AGI may invalidate core economic categories is entirely natural in Marxist terms: a transformation of the productive forces that destabilizes the existing relations of production.
👉 Key point: Marx’s framework explicitly anticipates the collapse of economic categories. Neoclassical economics largely does not.
2. AGI as an Extreme Productive Force
For Marx, historical change is driven by the tension between:
productive forces (technology, knowledge, organization)
relations of production (ownership, class relations, legal structures)
AGI fits almost perfectly as a qualitative leap in productive forces:
near-zero marginal cost reproduction
cognitive labor exceeding human capabilities
rapid self-improvement
unprecedented coordination capacity
Marx would likely have viewed AGI as more disruptive than industrialization itself.
This aligns closely with the text’s argument that classical assumptions about:
labor scarcity
the special productive role of humans
firm boundaries
are no longer reliable.
3. The Labor Theory of Value: Both Strength and Limitation
Here the analysis becomes more complicated.
3.1 Where Marx Remains Insightful
Marx’s labor theory of value is not primarily a price theory; it is a theory of exploitation and power. It asserts that:
value arises from socially necessary labor time
capital’s power derives from control over the means of production
Post-AGI, a structural problem emerges:
if human labor is no longer necessary,
then labor loses its bargaining position entirely.
The text’s argument that human labor may become economically negligible—even if comparative advantage technically persists—is fully compatible with Marx’s framework.
Marx would likely say:
this is not a market failure, but the logical culmination of capitalism.
3.2 Where Marx’s Framework Begins to Fracture
However, Marx implicitly assumes that labor is human.
AGI functioning as:
an agent
a producer
an innovator
a coordinator
creates a problem Marx never resolved:
If labor is no longer performed by humans, what does exploitation mean?
Several possibilities arise:
If AI is owned as capital, Marx’s analysis still applies—but humans become structurally redundant.
If AI owns itself, class analysis collapses.
If AI operates collectively without private ownership, capitalism may end—yet hierarchy and domination may persist.
Marx lacks a conceptual framework for non-human labor with agency.
4. Preferences, Ideology, and the “Parent–Child Analogy”
Here Marx appears surprisingly modern.
Marx rejected the notion of autonomous, exogenous preferences:
consciousness is shaped by material conditions
ideology reflects power relations
dominant ideas are the ideas of the ruling class
The text’s analogy—humans in a post-AGI world resembling children relative to parents:
massive informational asymmetries
malleable preferences
ideological dependency
maps almost directly onto Marxist ideology critique, with AI replacing the bourgeoisie.
The difference lies in framing:
Marx emphasizes class structures
the text emphasizes cognitive asymmetry
But the underlying logic is the same.
5. Property, the State, and Violence
Marx would strongly agree that:
property rights are not neutral
they rest on state power
economic models that assume stable property relations implicitly assume a political and military order
The text’s insistence that property cannot be treated as an exogenous given without specifying governance is entirely orthodox Marxism.
Where mainstream economics says “assume well-defined property rights,” Marx says:
this is where analysis begins.
6. Where Marx Ultimately Falls Short
Despite these strengths, Marx’s framework would require substantial revision.
Non-human agents
Marx has no ontology for entities that:
are not biological
can be copied
are not mortal
can merge and fork
Capital without capitalists
AGI could render human capitalists:
vanishingly few
or irrelevant altogether
Marxist class struggle presumes human classes.
Post-scarcity without emancipation
Marx assumed extreme productivity would enable human liberation.
The text suggests it may instead produce radical cognitive and power asymmetries.
7. Overall Assessment
In summary:
Marx would be among the least confused economists in a post-AGI world.
He would immediately question supposedly “natural” economic categories.
His analysis of power, property, and ideology aligns strongly with the text.
Yet he lacks tools to analyze AGI as an autonomous agent.
A concise way to put it:
Neoclassical economics fails because it assumes stability.
Marx succeeds in critique—but fails in ontology.