In the mid-1990s William Nordhaus, an economist at Yale University, looked at two ways of measuring the price of light over the past two centuries. You could do it the way someone calculating GDP would: by adding up the change over time in the prices of the things people bought to make light. On this basis, he reckoned, the price of light rose by a factor of between three and five between 1800 and 1992. But each innovation in lighting, from candles to tungsten light bulbs, was far more efficient than the last. If you measured the price of light the way a cost-conscious physicist might, in cents per lumen-hour, it plummeted more than a hundredfold.
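The gap between the two measures comes down to a single formula: cost per lumen-hour is the price of a light source divided by the total light it delivers over its life. A minimal Python sketch of that arithmetic, using made-up prices and specs purely for illustration (these are not Nordhaus's figures):

```python
# Illustrates the physicist's measure of the price of light.
# All numbers are hypothetical, chosen only to show the arithmetic.

def cost_per_lumen_hour(price, lumens, lifetime_hours):
    """Price of a light source divided by the total light
    (lumens x hours of operation) it produces over its life."""
    return price / (lumens * lifetime_hours)

# Hypothetical candle: cheap sticker price, but dim and short-lived.
candle = cost_per_lumen_hour(price=0.02, lumens=13, lifetime_hours=5)

# Hypothetical tungsten bulb: pricier up front, far brighter,
# and vastly longer-lived.
bulb = cost_per_lumen_hour(price=0.50, lumens=800, lifetime_hours=1000)

# A naive price index only sees the sticker price rising (0.02 -> 0.50);
# the per-lumen-hour measure sees the cost of light collapsing.
print(f"candle: {candle:.2e} per lumen-hour")
print(f"bulb:   {bulb:.2e} per lumen-hour")
print(f"light got {candle / bulb:.0f}x cheaper per lumen-hour")
```

Even with the bulb's sticker price twenty-five times higher, the cost of the light itself falls by a couple of orders of magnitude, which is exactly the divergence Nordhaus was pointing at.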
So a cost-conscious physicist would point out, correctly, that the normal approach to calculating GDP doesn’t make sense in this situation. Instead we have to take a step back from the formula and ask why it doesn’t make sense, and what would work instead. The bigger problem in economics, however, is that these situations crop up all the time. Most academic economists assume that the price paid for goods and services is an accurate reflection of their underlying value, but it’s precisely where price and inherent value diverge that non-academic economists (otherwise known as everyone else) make money.
Debating whether the core tenets of academic economics are viable would be fun, but we’ll save that for another day. Instead I want to focus on these situations where the underlying value of a good or service is out of whack with its market price. Loosely speaking, this is an arbitrage situation. What happens? A human steps back from the default formula of “it’s worth what the market will bear” and starts doing some original thinking.
Thus far this is a uniquely human ability - but maybe not for long. We already have machines that make financial trades based on the straightforward “what the market will bear” logic. And in other areas, the line between original human creativity and something produced by an algorithm is blurring, with AlphaGo’s Move 37 the most recent example. Was it simply chance that it went down that exceedingly unlikely path in its neural net to choose that move, or was there something else going on? And could we describe our own, “uniquely human” creativity as simply going down exceedingly unlikely paths in our own neural nets?
Back to pragmatics. One can imagine the next step for economic machines - instructing them to look at examples where people have not followed the formula for how they should act, characterize those people’s reasoning, and judge whether or not it panned out. From there the machine could start making these decisions on its own.
Of course, this may just move humans one level higher up the abstraction ladder. Original thinking about markets, the economy, and how to allocate resources efficiently will move down into the realm of things that machines know how to do, and it will soon become unremarkable when they do them for us. The goalposts for what counts as “AI” have been moving ever since Alan Turing’s day, but I (and, it seems, a lot of other people) can’t shake the feeling that there’s something different about these newfound talents of machines.
For one thing, people are (somewhat ironically) taking a step back from the situation and realizing that the ratio of people on the planet to the number of jobs that will soon be left for humans to do is getting seriously skewed. Automation changed jobs in the past, sure, but previously, most people whose job was eliminated could reasonably be expected to move up the abstraction ladder within the same field. Not so much this time around, which is one reason many people are now advocating a universal basic income.
But more poignantly, creating machines that are able to conjure these original-seeming, human-esque thoughts moves us a step closer to creating machines that understand why humans do things, and thinking about how you would explain things like this to a machine gets you to some pretty existential questions pretty quickly. In other words, there’s a meta component to this next set of tasks that we want to pass off, and it feels like we might be giving away the keys to the kingdom by doing so.