Why Economists and AI Experts Disagree About the Future
On 27 October 2025, the Stanford Digital Economy Lab hosted a hybrid seminar featuring Tamay Besiroglu, co-founder and CEO of Mechanize Inc., in conversation with Erik Brynjolfsson, the Lab’s Director. The session examined why economists and AI researchers hold sharply divergent views on the trajectory of future economic growth.
Surveying the Divide
Tamay Besiroglu opened by contrasting two strikingly different forecasts. A recent survey of leading growth economists—including Angus Deaton, Chad Jones, and Robert Gordon—found that they overwhelmingly expect sustained global economic growth of around 2 percent per year over the coming century, with most placing their 90 percent confidence interval between 1 percent and 4 percent. By contrast, many AI experts foresee the possibility of orders-of-magnitude accelerations: Paul Christiano assigns a 40 percent chance of a Dyson sphere by 2040, and others, such as Leopold Aschenbrenner and Carl Shulman, predict triple-digit annual growth rates once AI begins to substitute comprehensively for human labor.
The Case for Explosive Growth
Drawing on semi-endogenous growth theory, Tamay reminded attendees of the famous “hockey-stick” of GDP per capita on a logarithmic scale: millennia of slow technological advance give way to post-Industrial Revolution hyper-exponential growth. In that framework, ideas are non-rivalrous—once developed, they can be used by an unlimited number of workers without loss—and so technological progress fuels a self-reinforcing loop of population and output growth.
In today’s world, the demographic transition has capped labor growth, turning humanity’s workforce into a non-accumulable input and yielding the steady 2–3 percent growth observed since World War II. But if AI can substitute flexibly for human labor, output could once again be reinvested to “feed” ever-larger populations of AI workers—effectively turning labor into a capital-like input and re-igniting hyperbolic expansion.
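The mechanism described above can be illustrated with a toy, Jones-style semi-endogenous growth simulation. This is a minimal sketch, not the speakers’ model: the parameter values and the `reinvest` rule (AI workforce scaling with the accumulated idea stock) are illustrative assumptions chosen only to show the qualitative contrast between a fixed human workforce and an accumulable AI workforce.

```python
# Toy semi-endogenous growth model (Jones-style), for illustration only.
# Idea production: dA = lam * A^phi * L, with phi < 1 (diminishing returns
# to the existing idea stock). All parameter values are hypothetical.

def simulate(steps, reinvest, phi=0.5, lam=0.05):
    A, L = 1.0, 1.0          # idea stock, workforce
    growth_rates = []
    for _ in range(steps):
        dA = lam * (A ** phi) * L
        growth_rates.append(dA / A)  # proportional growth rate this step
        A += dA
        if reinvest:
            # AI scenario: output is reinvested in more AI workers,
            # so the workforce scales with the accumulated stock.
            L = A
        # else: human workforce stays fixed (demographic transition).
    return growth_rates

fixed = simulate(40, reinvest=False)
ai = simulate(40, reinvest=True)
print(f"fixed labor:      growth falls from {fixed[0]:.3f} to {fixed[-1]:.3f}")
print(f"reinvested labor: growth rises from {ai[0]:.3f} to {ai[-1]:.3f}")
```

Under these assumptions the fixed-labor run shows growth fading (with phi below one, a constant workforce cannot sustain it), while the reinvested-labor run accelerates without bound—the hyperbolic dynamic the talk associates with accumulable AI labor.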
Why Economists Remain Skeptical
Despite this theory’s historical fit, economists voice a host of reservations:
- Anchoring on recent history. Many believe that the remarkably consistent postwar growth trend (well-approximated by a constant 2–3 percent exponential, a straight line on a semi-log plot) offers the safest basis for extrapolation, even while acknowledging that discarding earlier centuries raises the variance of any forecast.
- Conservatism as a discipline. By nature, economics prizes parsimonious models grounded in observable data. Abstract or speculative capabilities—such as full automation of cognitive tasks not yet achieved—are often regarded as “zero-order” forecasts, too unconstrained by present evidence.
- Focus on intensive over extensive margin. Economists tend to analyze what happens when existing technologies become cheaper and more widespread, rather than what might occur if entirely new classes of capabilities emerge—an approach that risks underestimating how qualitatively different AI agents could be from today’s tools.
Yet Tamay noted that some quantitative arguments for rapid change are equally concrete: modern data-center GPUs deliver roughly as many floating-point operations per second as a human brain and cost about $50,000. If such a GPU could be programmed to match the brain’s efficiency, it would reproduce a human’s annual economic value within a year—paying for itself and implying a yearly doubling of productive capacity.
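The arithmetic behind that claim can be sketched in a few lines. The figures below are illustrative assumptions, not measurements: the $50,000 GPU price follows the talk, and the $50,000 annual output per GPU-worker is the hypothetical value implied by a one-year payback.

```python
# Back-of-envelope version of the hardware argument. All figures are
# illustrative assumptions, not measurements.

gpu_cost = 50_000                   # dollars per data-center GPU (per the talk)
annual_output_per_worker = 50_000   # assumed dollars of value per GPU-worker per year

# A GPU that earns back its own cost in a year can fund a replacement.
payback_years = gpu_cost / annual_output_per_worker
print(f"payback period: {payback_years:.1f} years")

# With full reinvestment of output into new GPUs, the fleet doubles
# every payback period.
fleet = 1.0
for year in range(5):
    fleet += fleet * (annual_output_per_worker / gpu_cost)
print(f"fleet after 5 years of full reinvestment: {fleet:.0f}x")
```

A one-year payback is exactly the “yearly doubling” claim: each unit of capacity funds one more unit per year, so capacity grows as 2, 4, 8, 16, 32 times the original.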
Bridging the Gap: What Would Convince Economists?
In closing, Tamay Besiroglu posed the key question: what evidence would shift economists’ priors? Would a sustained run of sub-2 percent growth despite major AI milestones persuade them? Or would breakthroughs in fully simulated white-collar work environments, driving down the cost of complex tasks, tip the balance? He suggested that carefully designed, transparent experiments—akin to large-scale field trials of behavioral interventions—might offer the clarity both camps seek.
As the conversation moved into audience questions, it became clear that resolving this debate will require not only sharper data but also a shared vocabulary across disciplines—combining economists’ rigor in measuring aggregate trends with AI researchers’ insights into the capabilities and limitations of emerging algorithms.
The Stanford Digital Economy Lab is a pioneering research center dedicated to exploring how digital technologies reshape markets, organizations, and policy. By integrating economics, data science, and technology studies, it empowers scholars and practitioners to design data-driven strategies that foster inclusive growth and innovation in the digital age.
The Conf is a platform that reports on scholarly conferences, symposia, roundtables, book talks, and other academic events. It is managed by a group of students from leading American and European universities and is published by Alma Mater Europaea University, Vienna.