Authors: XAVIER FERRÉS and MILANA LISICHKINA
ABSTRACT
Advanced artificial intelligence could cause the automation of many areas currently dominated by humans, including science. This paper argues for the view that if AI were capable of automating science, then explosive growth, i.e. at least 30% yearly GWP growth, would very likely follow as a result. Explosive growth would be the result of a positive feedback loop that accelerates the economy. We explore the conditions under which this would be true, and then answer some objections to the argument, based on whether we will create Artificial General Intelligence (AGI), how difficult it will be to automate science, and what could bottleneck the feedback loop. We conclude that the bottleneck objection is the most convincing one, but that the mechanism could still stand, and that there is a decent chance we will observe explosive growth during our lifetime.
We are not experts on economic growth theory, machine learning, or any other area we comment on. We make claims that might sound wild and weird to the average reader, but we rely on highly intelligent people who also hold these views. It is very likely we are wrong in some major and minor ways.
1. Introduction
Artificial Intelligence (AI, from now on) is expected to fundamentally change our economies. Economists have analyzed many of the possible outcomes that might result from the development of this new technology, ranging from unemployment to economic growth. One particular event that could be economically transformative is the automation of science. In this paper, we will explore a mechanism by which this could lead to explosive growth, i.e. rates of growth of the Gross World Product (GWP) exceeding 30% a year (Cotra, 2021). Then, we will review some objections to this seemingly wild idea.
But first, why should we care about explosive growth? There are many reasons, ranging from the more obvious and cheerful – economic growth is correlated with many things we value (Cowen, 2019) – to the more grim and under-discussed – AI capable of causing explosive growth might also cause human extinction (Jones, 2024). We will not explore those further in this paper. To motivate the discussion, it will suffice to understand how unprecedented explosive growth would be in human history – for good, or bad.
To illustrate this, consider that frontier economies have been growing steadily at a rate of about 2% per year for the last hundred years. Thus, explosive growth would imply that the world economy would grow more than 10 times faster than, for instance, the U.S. has since 1870. In other words, if GWP currently doubles every 20-30 years, with explosive growth the whole world economy would double every 2-3 years (Davidson, 2021). At that point, history itself would have accelerated: technological, social and economic progress would be compressed so much that it would be difficult for humans to keep up (Cotra, 2021). With explosive growth, we would arguably have entered a new phase of human history, just as we did during the Industrial Revolution.
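To make the arithmetic behind these doubling times concrete, here is a quick back-of-the-envelope calculation (our own illustration; the growth figures are rough assumptions, not taken from the cited reports):

```python
import math

def doubling_time(yearly_growth: float) -> float:
    """Years for an economy to double at a constant yearly growth rate."""
    return math.log(2) / math.log(1 + yearly_growth)

# Roughly 3% yearly GWP growth vs. the 30% threshold for explosive growth
print(f"At  3% growth the economy doubles every {doubling_time(0.03):.0f} years")  # ~23 years
print(f"At 30% growth the economy doubles every {doubling_time(0.30):.1f} years")  # ~2.6 years
```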
2. From Science Automation to Explosive Growth
Why would explosive growth be a consequence of the automation of science by AI? To explore this question, we will consider an endogenous growth model, first presented by Aghion, Jones and Jones (2021).
2.1 The Kardashev Scale
To illustrate the idea of the paper without the need for mathematical equations, we will base the discussion around a toy-model representation of the original model, based on Clancy (2022).
What is really unique about this toy model is that we will measure both technological and economic progress with the same metric: the Kardashev scale. Although this scale is not widely used among economists, it will be useful for our purposes.
The Kardashev scale is based on a civilization’s energy use. In particular, according to this scale, there are three types of civilizations: Type 1, 2 and 3, where each uses 10^10 times as much energy as its predecessor. Thus, we can think of these different types of civilizations as being separated from each other by a staircase with ten steps, where climbing one of them means that a civilization has increased its energy use by 10x (i.e. 0.1 on the Kardashev scale). The rate of economic growth / technological progress will therefore be given by how much time it takes for an economy to climb one of these 0.1 Kardashev steps.
Finally, let us posit some (simplifying) assumptions to introduce our first scenario:
- The entire population is dedicated to research (1)
- Global population growth is constant at 0.7% (i.e. world population doubles every 100 years) (2)
- There are decreasing marginal returns to technological progress: every 0.1 up in the Kardashev scale means we need twice as many researcher-years to keep climbing the staircase (3)
2.2 First scenario: full automation
What is the rate of growth of an economy where those first three premises hold? The answer is that economic growth is constant at one 0.1 Kardashev step every century. Technological progress gets twice as hard with every step, but the number of researchers also doubles every hundred years. So, we manage to climb a step every century: that is the time we need to wait for the research labor force to double. Importantly, economic growth is constant: the rate is fixed at one step every hundred years.
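Here is a minimal sketch of this baseline arithmetic (our own reconstruction of the toy model, not code from Clancy, 2022). Note that one 0.1 step per century also implies roughly the 2% yearly growth of frontier economies mentioned in the introduction:

```python
import math

# Baseline toy model: required research effort doubles with every 0.1 Kardashev
# step (assumption 3), while the research workforce grows 0.7% per year
# (assumption 2). A step is completed once the workforce has doubled.
DIFFICULTY_MULTIPLIER = 2
POP_GROWTH = 0.007

years_per_step = math.log(DIFFICULTY_MULTIPLIER) / math.log(1 + POP_GROWTH)
print(f"Years per 0.1 Kardashev step: {years_per_step:.0f}")               # ~100

# A 0.1 step is a 10x increase in energy use (our proxy for output), so the
# implied constant growth rate is:
print(f"Implied yearly growth rate: {10 ** (1 / years_per_step) - 1:.1%}")  # ~2.3%
```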
If we introduce AI, however, things can change drastically. There are different ways this could happen. To start, let’s change our model in the following ways:
- AI does all the research humans did previously (humans no longer work) (1’)
- The AI workforce increases with energy use: every step up the Kardashev scale, the supply of AIs doing research increases by a factor of 10 (2’)
These assumptions are extreme and unrealistic. Nonetheless, their value lies in alerting us to a way in which we (almost) certainly get explosive growth. To see this, consider that with AI a positive feedback loop has been created: every step up the Kardashev scale more than doubles the supply of AI workers, which means climbing the next step faster, which means more AI researchers, which … In particular, notice that every step up the Kardashev scale takes only ⅕ of the time the previous step took: the AI workforce grows by 10x per step while the difficulty only doubles, so each step is climbed 5 times faster than the last. Economic growth is accelerating. Soon, explosive growth occurs, and not merely as a yearly rate: the economy ends up doubling within months, or even days! As Clancy points out:
“If it takes a century to get from 0.6 to 0.7 (roughly where we are today), then it takes twenty years to get from 0.7 to 0.8, four years to get from 0.8 to 0.9, and under one year to go from 0.9 to 1.0! This acceleration continues at a blistering pace: once we reach a Type 1 civilization, we’ll get to a galaxy-spanning Type 3 civilization in less than three months!”

This conclusion has many problems, one of them being that it would actually be physically impossible to get to a Type 3 civilization so quickly because of constraints like the speed of light. However, for our purposes this absurd conclusion doesn’t matter much: we are just interested in explosive growth as we defined it earlier, i.e. >30% GWP yearly growth. There seems to be no known physical constraint that would fundamentally impede this result. Thus, full science automation seems to be a very powerful mechanism that could result in explosive growth.
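As a sanity check on the numbers in Clancy’s quote, here is a minimal sketch of the full-automation feedback loop (our own reconstruction, assuming the step from 0.6 to 0.7 takes 100 years, as in the text):

```python
AI_MULTIPLIER = 10         # assumption (2'): the AI workforce grows 10x per step
DIFFICULTY_MULTIPLIER = 2  # assumption (3): required research effort doubles per step
SPEEDUP = AI_MULTIPLIER / DIFFICULTY_MULTIPLIER  # each step is 5x faster than the last

step_years = 100.0         # assumed duration of the 0.6 -> 0.7 step, as in the text
months_after_type1 = 0.0
for i in range(24):        # 24 steps of 0.1 take us from 0.6 to a Type 3 civilization
    level = 0.6 + 0.1 * i
    print(f"{level:.1f} -> {level + 0.1:.1f}: {step_years:.6g} years")
    if i >= 4:             # steps at or above 1.0, i.e. after reaching a Type 1 civilization
        months_after_type1 += step_years * 12
    step_years /= SPEEDUP

print(f"Type 1 to Type 3 in roughly {months_after_type1:.1f} months")  # under three months
```

The printed step times reproduce the sequence in the quote (100 years, 20 years, 4 years, under one year), and the Type 1 to Type 3 figure comes out at under three months.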
2.3 Second scenario: automation with substitution
Now, full science automation seems like an extreme scenario. What happens if we relax this assumption? In theory, we can still get explosive growth, but at the cost of adding one crucial assumption:
- The different tasks involved in scientific research are perfect substitutes in making scientific progress (4)
Notice that, in a way, if assumption (4) holds, then we have just gotten back to the full-automation scenario! This is because if tasks are really perfect substitutes, then the AI-automated tasks are just as good as any others for making progress in science and, thus, up the Kardashev scale. So, if something like assumptions (1’) and (2’) is put back in place, the two scenarios are identical.
Now, it seems intuitive that scientific progress made with an AI that automates every research task should be “more powerful” than progress made when AI performs only a fraction of the tasks. We can formalize this in our model by reducing the strength of the feedback loop in the case of partial automation. For instance:
- The AI workforce increases with energy use: every step up the Kardashev scale, the supply of AIs doing research increases by a factor of 4 (2’’)
In this scenario, we get accelerating growth again, albeit at a slower pace than before: the AI workforce grows by 4x per step while the difficulty doubles, so each step takes half the time of the previous one rather than a fifth. Would explosive growth occur in this world? Eventually, it seems quite likely, just as before. However, because the feedback loop is weaker, explosive growth is less likely than under full automation: automation needs to start early enough, or be powerful enough, for explosive growth to occur before 2100 in this scenario.
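Here is the same kind of sketch under the weaker assumption (2’’). Since a 0.1 step is a 10x increase in output, sustaining at least 30% yearly growth requires a step to take no more than log(10)/log(1.3) ≈ 8.8 years (our own back-of-the-envelope conversion); with the weaker feedback loop, reaching that point takes considerably longer than under full automation:

```python
import math

# Same toy model, but with assumption (2''): the AI workforce grows only 4x per
# step, so each step is 4/2 = 2x faster than the last (instead of 5x).
EXPLOSIVE_THRESHOLD = math.log(10) / math.log(1.3)   # ~8.8 years per 0.1 step

step_years, level, elapsed = 100.0, 0.6, 0.0
while step_years > EXPLOSIVE_THRESHOLD:
    elapsed += step_years
    level += 0.1
    step_years /= (4 / 2)
print(f"Explosive growth only starts around Kardashev level {level:.1f}, "
      f"about {elapsed:.0f} years in (vs. ~120 years under full automation)")
```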

2.4 Third scenario: automation without substitution
Consider, finally, what happens if we weaken the premises of our model even further. In particular, let’s get rid of the assumption about perfect substitutability:
- AI and human research tasks are not perfect substitutes in making scientific progress (4’)
In this scenario, the key is that even if we restore assumption (2’) (“the AI labor force increases 10x with every step up the Kardashev scale”), it is the growth rate of the human labor force that ends up determining economic growth, because every step of the research process is essential, and humans are still in the loop via the non-automated tasks that cannot be substituted. Thus, this scenario is similar to our baseline of only human researchers, with the only difference that growth is faster (we can now climb a 0.1 Kardashev step in only 24 years, thanks to automation), but it is still constant. Explosive growth could still occur in a world like this, but only if the percentage of tasks automated is very high: because the rate is constant, there is no possibility of eventually reaching explosive rates if the degree of automation isn’t high enough in the first place.
Thus, in scenario three explosive growth is substantially less likely.
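To see why, we can convert the constant rate of one 0.1 step every 24 years into a yearly growth rate (our own arithmetic, using the same 10x-per-step conversion as above):

```python
# Scenario 3: growth is constant at one 0.1 Kardashev step (a 10x increase in
# output) every 24 years. The implied yearly growth rate is:
years_per_step = 24
yearly_growth = 10 ** (1 / years_per_step) - 1
print(f"Implied constant growth rate: {yearly_growth:.1%} per year")  # ~10%, well below 30%
```

Because this rate never accelerates, it stays below the 30% threshold unless the share of automated tasks is much higher to begin with.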

3. Objections
Our model, if correct, has extreme implications. It is difficult to imagine explosive rates of economic growth, let alone in our lifetime. In this section, we will try to formulate objections that support this skepticism, and offer some answers.
1. AGI might be impossible
A first objection could go as follows: “For the automation of science to happen you would need some really advanced AI model being developed, at least AGI or more. But those models won’t be developed”.
This objection is making two different claims. First, that we need models at AGI-level (or more) for the automation of science to work. Second, that it’s impossible to develop these models.
To answer the first concern, it is important to consider how slippery the term AGI is, and how different people use it in different ways. For clarity, let us define Artificial General Intelligence (AGI) as an AI that can perform as well as or better than humans on a wide range of cognitive tasks (Morris et al., 2024). In contrast, we define a Narrow AI (NAI) as one that has a clearly scoped task, or set of tasks, that it can do as well as or better than humans. Thus, the two principal dimensions that distinguish AGI from NAI are (1) general-purpose intelligence, and (2) potential cross-domain uses and transferable intelligence.
So, in our view, AGI would very likely be sufficient for fully automating science. By definition, AGI would be at least as good as any human doing research (and at any other cognitive task). If we combine this with some properties these digital models are likely to have, in particular the ability to make copies of themselves and to run at higher subjective speeds (Karnofsky, 2021), it seems reasonable to expect full science automation not to be difficult to achieve, and partial automation to be even more likely.
Importantly, however, we don’t think it is clear that AGI is necessary for automating science and research. Instead of a general-purpose AI, we could train NAIs (Karnofsky, 2021) to perform only specific scientific tasks, like running experiments, gathering data or doing literature reviews. Given that scientists spend 80% of their time on “repetitive tasks” such as cleaning up data, this could be a promising strategy. Furthermore, there currently exist models which are superhuman (Morris et al., 2024) at some very specific tasks, like AlphaZero at playing Go and AlphaFold at predicting protein structures. These systems are evidence that models can learn how to solve challenging problems in highly complex domains simply through trial and error. When, if ever, NAI systems like these will be developed for automating science will depend on the amount of investment made in R&D and on their theoretical feasibility. But if developed, these models might even help pave the way to AGI, since a positive feedback loop could be created, where AIs help train the next generation of AIs! Thus, in our view the first part of the objection has some force, but it is ultimately misguided.
What about the second part? Is it true that models as powerful as AGI will never be developed? Experts in machine learning disagree. In fact, especially since a year or two ago, the question seems to be more about “when” than about “if”, and they think the “when” is really close. For instance, Ajeya Cotra’s report estimates it is 80% likely that we will develop human-level AI by 2100 (Cotra, 2020). A recent expert opinion survey found that AI researchers put the chance of unaided machines outperforming humans in every possible task at 10% by 2027, and 50% by 2047 (Grace et al., 2024). If we instead consult prediction markets, AGI is much closer: less than 10 years away. Thus, what was once the stuff of science fiction, something out of a movie or a book, is closer to reality than ever.
What’s more, we already use in our daily lives systems that are AGI in some “emerging” sense (Morris et al., 2024). GPT-4 is a general-purpose system, even if it doesn’t yet outperform most humans at most tasks. Considering all this evidence, it seems a really wild prediction to say that AGI will never come to exist. But if that’s true, then at some point the automation of science could be feasible, and with it, explosive rates of growth.
2. High levels of automation seem unlikely
A somewhat similar objection could be: “What if automation is simply infeasible in some jobs, no matter how capable AI becomes, and science is one of those?”
This objection is closely linked to the first. We offered there an argument for thinking AGI is sufficient for automating science, but perhaps that is not enough to convince some people, who may think many factors could influence whether (full) automation happens or not. Maybe some scientific tasks inevitably need human aid, and the best we can hope for is partial automation (notice, though, that even with partial automation it would be theoretically possible to have explosive growth). To answer this concern, let’s think about some properties a model would need for science automation to be feasible, and how far current models are from having them.
First, scientific research often requires expertise across multiple disciplines and the ability to integrate knowledge from diverse sources. These models would need to possess a broad understanding of various scientific domains and be capable of synthesizing information from different fields. Second, they would need some agency, i.e. the capacity to adapt to new situations quickly, acting without the need for human guidance. Third, they would need multimodality, i.e. the capability to process data such as video or audio. Finally, they would need to be capable of retaining information over long time horizons (Christiano, 2023).
As also mentioned earlier, our current large language models (LLMs) already exhibit most of these properties in some form or another, except for agency. For instance, they exhibit some general intelligence, but some have pointed out their limitations when it comes to generalizing beyond their training data (Wolfram, 2024). On multimodality, very recent advances have been made by Google, and even on agency there are systems that act with considerable autonomy, like Devin, an AI “software engineer”. So, from the point of view of capabilities, the automation of science doesn’t seem impossible either. Rather, we seem to be heading in that direction.
Still, some people might be skeptical, perhaps because there have been cases where the introduction of AI systems has disrupted workflows, for instance in fast-food chains like CaliBurger with the burger-flipping robot Flippy. Our view is that this is likely to occur at first, as with the implementation of every new technology, but that it will eventually work; even Flippy has managed to integrate itself by now. But it is true that we expect automation to be a continuous process, perhaps even occurring slowly over years or decades. In the end, however, we believe there is no theoretical reason why some scientific tasks would be impossible to automate. After all, in the 1800s not everyone thought it would be possible to lift a 200-ton machine into the sky and have it practically fly on its own.
3. Explosive growth might be impossible
This objection is very straightforward: “Is it even possible to experience such rates of economic growth (>30% annual GWP growth)? The world has never experienced rates that high in all its history.”
We should distinguish two different claims here. The first is a question about the feasibility of explosive economic growth. We have partly answered that question: our view is that there is no physical limitation ruling out GWP growth rates higher than 30% a year. A different claim, however, would be that, because we haven’t observed such rates before, not even during the Industrial Revolution or China’s catch-up growth, we should suspect that this outcome is very unlikely in the first place, and that perhaps the model is just wrong or unreliable.
Our answer to this objection is that, indeed, the world has never experienced explosive growth. This should keep us from being overconfident in our views, especially as economic growth models might be incapable of reaching sound conclusions in the extreme scenarios we are depicting. Nonetheless, we do not believe that explosive growth should be regarded as very unlikely from a purely historical point of view (i.e. without taking into account any theory of growth). On the contrary, there is some historical evidence that points in that direction, and if we take it seriously enough then we should actually expect explosive growth in the next decades (Davidson, 2021).
If we look at historical GWP, we can divide the past roughly into two periods: the stagnation period and the growth period. Both can be appreciated in the following well-known graph:

Because this graph is on a logarithmic scale, the increasing slope means that growth rates have accelerated over time: the human economy has grown super-exponentially. That is, it now takes much less time for the world economy to double in size than it took in the Middle Ages, and the same holds for the Middle Ages with respect to prehistory. If we extrapolate that trend, we get striking results, very similar to the ones from our model with full AI automation. For example, one such extrapolation by Roodman (2021) predicts that by 2047 output will be infinite!
Setting aside the absurd conclusions of such a naive projection, we should take into account that frontier economies have actually grown at an exponential (i.e. constant) rate during the last hundred years, which suggests a very different trend indeed (Davidson, 2021). However, the important point we want to make is that explosive growth isn’t ruled out by history; rather, the evidence is mixed. Even if 30% yearly growth might seem crazy, human history teaches us that someone living before the Industrial Revolution could have made an analogous prediction about today’s growth rates, perhaps even more justifiably, and they would have been wrong.
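Purely as an illustration of how a super-exponential trend can imply infinite output in finite time (a toy equation of our own, not Roodman’s actual statistical model):

```python
# If the proportional growth rate itself rises with the level of output, e.g.
# dy/dt = a * y**2 (so growth rate = a * y), the solution y(t) = y0 / (1 - a*y0*t)
# diverges at the finite time t* = 1 / (a * y0).
a, y0 = 0.01, 1.0
for t in [0, 50, 90, 99, 99.9]:
    print(f"t = {t:5.1f}: y = {y0 / (1 - a * y0 * t):10.1f}")
print(f"Output diverges at t* = {1 / (a * y0):.0f}")
```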
4. Regulation and physical limitations as a bottleneck
A final objection would be: “Many things could stop the feedback loop from happening, even if full automation is feasible. Regulation and physical constraints would limit growth.”
The possibility of bottlenecks really matters for the mechanism described in the first sections to work. As we have seen, in scenario three human beings are a bottleneck to automation, and this severely limits the rates of growth. But many more bottlenecks are imaginable. Political regulation, for example, could play a major role (Clancy and Besiroglu, 2023). First, it could hinder the training and deployment of AI systems, potentially limiting the economic growth effects of AI by restricting their power. Reasons for this include concerns about new technologies, like the privacy issues affecting the availability of training data; indeed, to address concerns raised by the use of data, policies like the General Data Protection Regulation (GDPR) are already being implemented. Another source of regulation could be the reluctance to allow AI systems to automate tasks without human supervision. And, in fact, this could be a way of directly arriving at scenario three: humans could be forced to remain a part of the productive process and of scientific development. However, the forces pushing for regulation will certainly face opposition, most likely from AI labs themselves; OpenAI is a recent example.
On the other hand, time could also be a constraint, as argued by Clancy (2023). Research, like many other activities in the economy, takes time, and that could slow down growth as well. Overall, there are many independent ways in which the feedback loop could have its power diminished, but none of them is a certainty, and they would need to be powerful enough to prevent high rates of growth (Besiroglu, 2023). Still, we think bottlenecks are the single most likely reason we will not observe explosive growth, at least soon.
4. Conclusions
Artificial intelligence could transform our economies by causing explosive rates of growth, if it automates the process of discovery and scientific innovation. This is the result of some endogenous economic growth models, like the one presented in this article. The fundamental reason this happens is the feedback loop between ideas and growth, where higher rates of growth lead to more and better machines, which accelerate growth further, and so on. Ultimately, it is very difficult to know where this process ends, but it seems likely that explosive rates of economic growth, that is, rates of GWP growth exceeding 30% a year, would be a result of this process.
Still, this mechanism is somewhat fragile. If the automation of science is not complete, then the feedback loop breaks, unless we assume that any automated scientific task is as good as any other for advancing science, which seems a strong assumption. The possibility of bottlenecks, whether caused by human beings in the loop, regulation or time, reduces the likelihood of observing explosive growth.
Even so, our view is that it is very likely that really powerful AI systems will be developed this century, as experts expect. This will certainly open up the possibility of science automation, and thus of explosive growth. Whether or not we end up experiencing history accelerate during our lifetimes, it seems guaranteed that we will live in interesting times.