In the movie “2001: A Space Odyssey”, the computer HAL 9000 attempts to kill the human members of the mission to Jupiter. HAL alone knows the mission’s true aim; he reasons that the only way it can be accomplished is if he carries it out alone. The humans are the weak link, and so he tries to kill them. He does not succeed. Neither does the mission.
Economists wonder whether artificial general intelligence (AGI) will be a complement to, or a replacement for, human labor. The two cases imply very different things for the future of wages. If AGI is a complement to labor, then we should expect wages to grow without bound: the rents will go to the scarce factor. If it is a replacement for labor, we should expect wages to fall, and all rents (in the long run) will go to the holders of capital.
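To pin the terms down, here is the standard textbook formalization (my notation; nothing in the argument depends on this particular functional form). Let output be CES in human labor $L$ and AGI input $A$:

$$ Y = \left(\alpha L^{\rho} + (1-\alpha)A^{\rho}\right)^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho}, $$

where $\sigma$ is the elasticity of substitution. Labor’s share of income is then

$$ s_L = \frac{wL}{Y} = \frac{\alpha L^{\rho}}{\alpha L^{\rho} + (1-\alpha)A^{\rho}}. $$

As $A \to \infty$: if $\sigma < 1$ (complements, $\rho < 0$), then $A^{\rho} \to 0$ and $s_L \to 1$, so the rents accrue to the scarce factor, labor. If $\sigma > 1$ (substitutes, $\rho > 0$), then $s_L \to 0$, and the rents accrue to whoever owns the AGI.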
What I want to point out is that we should expect AGI not to be used alongside humans at all. If AGI is an autonomous agent whose skill exceeds that of humans, we should expect firms to sort: companies that are all AGI, and companies that are all humans. It is possible for humans to enjoy unbounded wage growth even if they and AI are not “complements” in any ordinary sense.
The foundational paper here is Michael Kremer’s 1993 “The O-Ring Theory of Economic Development”, which frequent readers of this blog have heard me reference countless times. Imagine that we are making something in a multi-step production process, where an error at any one step wrecks the whole product. If we think of skill as the probability of completing a step without error, then we should expect workers, even those doing dissimilar tasks, to be matched assortatively by skill: the opportunity cost of someone botching a mundane task is higher the more valuable the rest of the company’s output is.
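Kremer’s production function makes the mechanism explicit. With $n$ tasks, each performed at quality $q_i$ (the probability of no error), capital $k$, and output per worker $B$:

$$ E(y) = k^{\alpha}\left(\prod_{i=1}^{n} q_i\right) nB. $$

The cross-partial $\partial^2 E(y)/\partial q_i\,\partial q_j > 0$: skills are complements, so equilibrium matching is positively assortative, with high-$q$ workers in firms with other high-$q$ workers. A minimal numeric sketch (my own illustration, with arbitrary numbers) shows why mixed human-AGI firms lose to segregated ones:

```python
from math import prod

# O-Ring production (capital term dropped for simplicity): expected output
# is the product of per-task success probabilities, scaled by the number
# of tasks and output per worker B.
def o_ring_output(skills, B=1.0):
    return prod(skills) * len(skills) * B

high, low = 0.95, 0.50  # hypothetical AGI and human task qualities

# Segregated firms: high-skill with high-skill, low-skill with low-skill.
segregated = o_ring_output([high, high]) + o_ring_output([low, low])

# Mixed firms: each firm pairs one high-skill and one low-skill worker.
mixed = 2 * o_ring_output([high, low])

print(f"segregated: {segregated:.3f}")  # 1.805 + 0.500 = 2.305
print(f"mixed:      {mixed:.3f}")       # 2 * 0.950 = 1.900
```

Because output is multiplicative in skill, mixing the firms destroys more output at the top than it adds at the bottom. The same arithmetic is why an all-AGI firm plus an all-human firm out-produces two mixed ones.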
This does not mean that humans will not experience unbounded wage growth. That depends on whether all tasks can be automated. If there are tasks which are essential but not automatable, then wages will grow; only if almost all tasks can be automated will AGI seriously compete with labor. What I want to caution against is the naive view that AGI complementing labor must take the form it does now, with a human choosing which problems to solve and breaking the work into specific tasks for the AI to be guided through. There is a world where most humans do not use AGI in any context, even as it causes a great economic boom.
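The bottleneck case falls out of the same model (a sketch under my own assumption that one task, say task $n$, stays human): if tasks $1$ through $n-1$ are automated at quality $q_A$ while the essential task is done by a human at quality $q_H$, firm output is

$$ E(y) = k^{\alpha}\, q_H\, q_A^{\,n-1}\, nB, $$

and the human’s marginal product scales with $q_A^{\,n-1} B$. Every improvement in the automated tasks, and every rise in the value of output $B$, raises the bottleneck human’s wage.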
Interesting! But what do you expect these companies of humans to be doing when AGI is around? (Besides physical stuff until robotics is solved)
Let’s just remember that the map is not the territory. The O-Ring model is full of simplifying assumptions. Perhaps they accurately represent how AI adoption will go, but I’m skeptical.