Editor’s Note: We do not support Artificial Intelligence, its development, or the vast majority of its potential uses. This article is intended to be informative to prepare Americans for what may be coming if we’re not able to stop it. Here’s Per Bylund from Mises…
In a recent article, we briefly summarized what it is that we today call artificial intelligence (AI). Whereas these technologies are certainly impressive and may even pass the Turing test, they are not beings and have no consciousness. Thus, this is neither the time nor the place to discuss philosophical issues of how to define a true or full AI—an artificial general intelligence—and whether we should recognize AI software legally as a person (after all, corporations are).
Economically speaking, AI as technology, whether it is used for entertainment or in production, is a good. As Carl Menger taught, what makes something a good is that it (whatever it may be) has the ability to satisfy a human need, that it must be recognized as such, and that a person—the consumer—has or can gain command over it to satisfy those actual needs. In other words, it must be scarce (there is less of it than we can use to satisfy wants) and understood as valuable (because we believe it can satisfy wants). AI certainly fits the criteria.
AI as a Consumption Good
When people entertain themselves by “conversing” with an AI (try, for example, Windows Copilot) or generating quirky images with DALL-E, it is a good of the lowest order: a consumption good. As such, the economic consequences are limited to the effect this has on consumer behavior. But this may in turn have a significant impact on production.
Some consumption goods revolutionize the economy and society. Examples of such goods include the automobile (from the introduction of Ford’s Model T) and the smartphone (starting with Apple’s iPhone). The former disrupted transportation and infrastructure and facilitated just-in-time manufacturing and urban sprawl, to mention just a few effects. The latter changed everything from how we bank to how we travel.
The point here is that as consumer behavior changes, the production structure follows. For example, with the broad adoption of the smartphone, paper map production has all but disappeared, whereas digital location services and intelligent logistics have seen enormous growth and development. And change leads to more change, because entrepreneurs build on, add to, and challenge the new discoveries.
AI has the potential to change consumer behavior well beyond its designed functionality. Exactly how and in what ways remains to be seen, but it is safe to say that the potential is there. (On the other hand, many goods have had the potential to disrupt but never left a mark.) For example, we may see people produce their own stories, songs, images, and even movies. So perhaps, instead of relying on television or Netflix and Hollywood producers, we’ll turn movie night into a make-a-movie night where we watch content we have generated and that fits us perfectly.
AI as a Higher-Order Good
As a tool and thus a good of a higher order, AI has already had an effect and promises to disrupt several trades. Because it is very effective at producing and presenting content, including translating and editing texts, content-related professions are threatened by AI. This includes journalists and copyeditors, as AI programs can write and edit faster than humans. After all, anyone can ask AI to produce or edit a text. Students already use AI to spice up or improve their papers—or let AI write them from scratch.
AI is similarly affecting photographers and illustrators. It takes only a minute to have DALL-E produce a new image exactly as directed, or to have an AI algorithm remove or add things in a picture you snapped. Having an illustrator create something, by contrast, takes much longer (not to mention the cost).
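To make this concrete, here is a minimal sketch of generating an image from a text prompt. It assumes the OpenAI Python client and access to a DALL-E model; the model name, parameters, and response format are assumptions that may differ across library versions and providers.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Request a single image at a fixed size, exactly as directed by the prompt.
response = client.images.generate(
    model="dall-e-3",  # assumed model identifier; offerings change over time
    prompt="A watercolor illustration of a lighthouse at dusk",
    n=1,
    size="1024x1024",
)

# The response carries a URL (or base64 data) pointing to the generated image.
print(response.data[0].url)
```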
Programmers and system developers are also seeing the effects of AI, which has no problem generating new code (without bugs!) or checking code that has already been written. Legacy software written in dated and inefficient programming languages can be run through an AI to make the code more efficient and to convert it into a modern language.
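As a rough illustration of that workflow, the sketch below asks a model to review a legacy snippet and rewrite it in a modern language. It assumes the OpenAI Python client’s chat completions interface; the model name and prompt wording are assumptions, and any other provider’s API would work along the same lines.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

legacy_code = """
10 LET TOTAL = 0
20 FOR I = 1 TO 10
30 LET TOTAL = TOTAL + I
40 NEXT I
50 PRINT TOTAL
"""

# Ask the model to flag defects and rewrite the program in a modern language.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[
        {"role": "system",
         "content": "You review legacy code, flag defects, and rewrite it in idiomatic Python."},
        {"role": "user",
         "content": f"Review and modernize this BASIC program:\n{legacy_code}"},
    ],
)

print(response.choices[0].message.content)
```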
AI is also affecting academia. Why have an instructor tell students about some subject matter when AI can do it instead? After all, AI can easily present content in a way that the student prefers. It can, for example, make a movie that explains biology or chemistry in an entertaining way. And it can answer all kinds of questions without ever getting bothered or cranky; it has nowhere else to be. In research, AI can analyze data more effectively and run thousands of different regressions on the data to find something that is significant and important (so-called HARKing, or hypothesizing after the results are known, which is very poor research practice, but who will know?). It can write up the paper too, with citations and everything, in just seconds.
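A toy simulation shows why this kind of regression fishing is such poor practice: even with purely random data that contains no real relationships, roughly 5 percent of tests will come out “significant” at the conventional p < 0.05 threshold by chance alone. The numbers below are illustrative, not drawn from any real study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_tests, alpha = 100, 1000, 0.05

false_positives = 0
for _ in range(n_tests):
    x = rng.normal(size=n_obs)
    y = rng.normal(size=n_obs)        # y is unrelated to x by construction
    result = stats.linregress(x, y)   # simple regression of y on x
    if result.pvalue < alpha:
        false_positives += 1

print(f"'Significant' findings out of {n_tests} random tests: {false_positives}")
# Expect roughly 50, i.e. about 5 percent, despite there being nothing to find.
```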
AI as Production Capital
All of this means AI can and will be used in production. In fact, it already is and we have only started to see the effects.
AI is best categorized as capital, which is used to make labor more productive (more value output per hour of labor invested) by facilitating more roundabout (but more effective) production structures. Capital goods in general serve one (or both) of two functions: they make existing production processes more effective by increasing productivity, or they make possible types of production that were previously out of reach. AI checks both boxes.
We have already seen how people working in several types of content-based professions can easily be made more productive or replaced entirely by AI. It can also do things that people may have been unable to do—or never thought of doing. This of course can cause so-called technological unemployment as people lose their jobs because AI can do them better (and cheaper). But this is a dystopian way of describing something quite normal and highly useful: that we relieve people, with all their ingenuity, from comparatively simple tasks so that they can create much more value elsewhere.
It is of course problematic for any person losing their source of income, but it is highly beneficial to consumers (and therefore to society at large) that these (and other) professions are “creatively destroyed.” The economic point of employment is not to provide people with an income so they can pay taxes (although politicians seem to think so) but to produce goods that can satisfy consumer wants, that is, to make our lives better. Just as there have been very few stable boys or buggy-whip producers since the automobile revolution, the future will see fewer people doing news reporting, copyediting, or coding.
Note also that this revolution is not nearly as sudden and disruptive as it may at first seem: the news media, for example, have for many years reduced the number of journalists doing reporting (most outlets nowadays merely republish standard articles from AP or Reuters). And software development already uses increasingly effective development environments that correct and predict commands, allow for WYSIWYG and drag-and-drop development, and can debug code and suggest solutions to bugs.
AI is only another step in this process. But the threat is greatly exaggerated. We tend to overestimate the impact of technology in the short term but underestimate it in the long term.
Limitations to Overcome
There is a problem, however, and it has to do with how large language models work and what responses they generate. When used in a setting that is strictly rules-based, such as computer programming, the AI’s “understanding” of code can greatly improve the productivity of coders (or replace them). Within such rules, it is far less likely to introduce bugs into software, and the errors it does introduce typically trace back to incomplete or contradictory specifications.
The same is true of AI’s language generation: it draws from large troves of text data and has a good “understanding” of how humans use language. But there is no rules-based way for it to distinguish fact from fiction. Instead, it generates whatever is statistically most likely to be a human-sounding response. For this reason, it produces content that can be entirely wrong.
For example, I asked AI to summarize the content of my 2022 economics primer, How to Think about the Economy. Since it has access to the text, it did a pretty good job summarizing what is in the book. But it also added comments on content that is typically in economics books but that is not in the primer (such as equilibrium theory, perfect competition, and mathematical equations). The AI is correct that economics books typically discuss such things and thus it is statistically probable that my primer would do the same. But it doesn’t.
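A toy sketch captures the mechanism behind that mistake: the model samples from what is statistically common in its training data, so genre-typical topics crowd out what a particular book actually contains. The probabilities below are made up purely for illustration.

```python
import random

# Hypothetical probabilities for completing "This economics primer covers ..."
continuations = {
    "equilibrium theory":     0.40,  # typical of economics books, not of this one
    "perfect competition":    0.30,  # typical of economics books, not of this one
    "entrepreneurship":       0.20,  # what the book actually emphasizes
    "mathematical equations": 0.10,  # typical elsewhere, absent here
}

topics, weights = zip(*continuations.items())
picked = random.choices(topics, weights=weights, k=1)[0]
print(f"Model completion: {picked}")
# Most draws name topics that are typical of the genre rather than of the book.
```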
There is a difference between statistical probability and truth. We will look at this problem and the potential threat that AI poses to human society in a future article.