By Edward Aguilar ’26 and ChatGPT

Staff Writer 

aguilare@lakeforest.edu 

Fears of technological unemployment are thousands of years old. Writing in his Politics (c. 350 BCE), Aristotle imagined a world where harp strings plucked themselves on command and where “a shuttle could weave itself.” More than 2,000 years later, with the rise of textile mills in 19th-century Britain, that world was realized and, in its wake, sparked the most infamous revolt against machines: the Luddite movement. While the Luddites were largely branded as self-interested and economically naive, their view of the future was shared by some of the greatest intellectuals of the time. Writing in his 1930 essay Economic Possibilities for our Grandchildren, economist John Maynard Keynes envisioned a future where automation would bring about “a new disease of…technological unemployment.”

However, a quick glance around the economy today shows that Keynes—and countless others who predicted a future of labor without humans—were wrong; at the very least, they were only half right. That’s because while estimates suggest that nearly 75 million jobs in America have been lost to automation, the economy has, miraculously, created even more. When the robots came for our jobs, we went to find new ones; the result was a monumental wave of economic migration, with millions of workers getting up and reskilling at once. While those new jobs have often been of lower quality than the ones they replaced, we have—at least for now—avoided what economist David Ricardo called human redundancy.

However, clear to anyone plugged into the cultural zeitgeist is the growing speed of machine innovation and influence: DALL-E 2 lets anyone create expert-level art from their phone in seconds, Google’s new MusicLM is doing the same for music, and, of course, ChatGPT—less than a month after launch—has become ubiquitous in universities and cubicles across the country. These innovations have brought the perilousness of humanity’s place in the economy—our dance on the edge of obsolescence—to the forefront of public debate, reigniting fears of automation. More than ever, it’s critical we understand exactly why some of our greatest minds turned out to be wrong, and whether, this time, they’ll be right.

1) The Automation-Specialization Cycle

To understand the implications of automation for the workforce, it’s important to take a closer look at the cycle that drives it. At its core, automation is driven by an employer’s profit incentive to reduce costs. With human labor nearly always the largest cost for businesses across the economy, it’s not surprising that firms are always looking to automate. However, the ability of robots to perform a job, the cost of creating and maintaining them, and the human moat (the unique factors that make a human worker irreplaceable) all affect a company’s ability to automate a given job. A formalization would look something along the lines of:

∆ Ability (between human and robot) + ∆ Cost + Human Moat = likelihood of the human keeping their job.
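To make the idea concrete, here is a minimal sketch in Python of how such a score might be computed. The function name, the 0-to-1 scales, the equal weighting of the three terms, and the example numbers are all illustrative assumptions of mine, not an established model.

```python
def job_retention_score(ability_gap, cost_gap, human_moat):
    """Toy estimate of how likely a human is to keep a job.

    ability_gap: human skill minus robot skill on the task (-1 to 1)
    cost_gap:    robot cost minus human cost, normalized (-1 to 1);
                 positive means the robot is the more expensive option
    human_moat:  value of simply being human for this job (0 to 1)

    The scales and the equal weighting are illustrative assumptions.
    """
    raw = ability_gap + cost_gap + human_moat
    # Squash the sum into a 0-to-1 "likelihood" for readability.
    return max(0.0, min(1.0, (raw + 2) / 4))

# A cashier: robots are nearly as able, much cheaper, and shoppers
# rarely insist on a human at checkout.
print(job_retention_score(ability_gap=0.1, cost_gap=-0.8, human_moat=0.2))

# A therapist: machines still lag on ability and the human moat is wide.
print(job_retention_score(ability_gap=0.7, cost_gap=-0.3, human_moat=0.9))
```

Under these toy assumptions the cashier scores low and the therapist scores high, which matches the intuition that ability gaps and the moat, not just wages, decide who stays employed.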

When a job does get automated, two types of new jobs typically emerge. The first are jobs created to manage or maintain the machines, such as programmers, engineers, and self-checkout attendants. The second are specialized jobs that are created as a result of demand stemming from newly lowered prices (for example, sufficient demand for cooks made bakers possible, and sufficient demand for bakers made pastry chefs possible). However, as these specializations become more valuable, so too grows the profit incentive to automate those jobs as well. What follows is an endless cycle of human reskilling and robotic automation, chasing each other down the tree of possible specializations; it’s humanity’s never-ending race against the machine.

Thankfully for workers, both the speed and scale of the Automation-Specialization Cycle have been relatively sluggish so far, bottlenecked by the way we train computers. That is, until now.

2) A New Computational Paradigm

Modern machine automation traces its roots to the 1970s emergence of so-called “Expert Systems.” The idea was simple: take an expert, have them explain in detail how they do their job, and then have a programmer build a machine to automate said job. While the idea was heralded as a major breakthrough at the time, its limitations were immediately clear. Since every possible response needed to be hard-coded, even simple tasks would quickly balloon into unwieldy and expensive codebases. The result was that expert systems were only cost-effective if the job being automated was what economists commonly refer to as “narrow” or “repetitive”; it was simply too expensive and time-consuming to try to replace more complex tasks. This is why we’ve seen automation almost exclusively target manufacturing and retail jobs.
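To see why that approach scaled so poorly, consider a toy “expert system” for a single task: routing customer messages. The rules and phrasings below are invented purely for illustration; the point is that every case the expert anticipates must be written out by hand.

```python
# A toy rule-based "expert system" for routing customer messages.
# Every case the expert anticipated must be spelled out by hand;
# anything unanticipated falls through to the catch-all branch.

def route_message(message: str) -> str:
    text = message.lower()
    if "refund" in text or "money back" in text:
        return "billing"
    if "password" in text or "can't log in" in text:
        return "account_recovery"
    if "broken" in text or "doesn't work" in text:
        return "technical_support"
    # ...and so on, rule after rule, for every phrasing the expert
    # can think of. Each new edge case means another hand-written branch.
    return "human_review"

print(route_message("I want my money back"))         # billing
print(route_message("the app doesn't work at all"))  # technical_support
print(route_message("where is my package??"))        # human_review
```

Each new edge case means another hand-written branch, which is why the approach only paid off for narrow, repetitive work.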

However, in recent years, a new computational paradigm—Machine Learning—has fundamentally redefined what’s possible. Put simply, the fundamental innovation of machine learning algorithms is that if we want to automate a job, we no longer need to tell the robot exactly how that job is done. Instead, we provide it with examples of a job well done, and like a human, it learns from experience.
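By contrast, a machine-learning version of that same toy router is trained on labeled examples rather than hand-written rules. The sketch below uses scikit-learn purely for illustration, with a handful of made-up training messages; a real system would need far more data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Examples of "a job well done": past messages and how a human routed them.
messages = [
    "I want my money back",
    "please refund my last order",
    "I can't log in to my account",
    "I forgot my password again",
    "the app is broken on my phone",
    "the checkout button doesn't work",
]
labels = [
    "billing", "billing",
    "account_recovery", "account_recovery",
    "technical_support", "technical_support",
]

# No rules are written; the model infers the pattern from the examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["refund this purchase please"]))  # likely ['billing']
print(model.predict(["my password stopped working"]))  # likely ['account_recovery']
```

Nothing about the routing logic is spelled out in advance; the model infers it from the examples, which is exactly the shift that lets automation reach beyond narrow, repetitive tasks.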

Trained on a vast corpus of data (which was, perhaps ironically, made possible by the complete digitization of human life), machine learning programs are now capable of performing tasks once thought to be at the very core of humanity and safely out of reach of machines: they’ve won poetry contests, placed first in art competitions, and even passed the bar exam—all of this in the last six months alone. It would not be an overstatement to say that machine learning has effectively destroyed the barrier between narrow and complex tasks.

The implications, not just for our economy but for our society, are sobering, to say the least. One of the most immediate impacts we’ll face as this technology matures is a speeding up of the Automation-Specialization Cycle. As mentioned earlier, the primary safe haven for workers has been the ability to specialize into a new job faster than the old one could be automated. However, for reasons both biological and political, machine learning now far outstrips human learning in speed, cost, and accuracy.

If humans are to remain ahead in our race against the machine, it will require a concerted, economy-wide push of human labor toward the third factor of automation: the Human Moat.

3) The Human Moat

The Human Moat is a concept that aims to describe the rents paid to humans simply for their being human, or for having some sort of factor that can only be acquired by a human. It’s a fairly amorphous concept until it gets put into practice. For example, imagine that you’re opening a small coffee shop and have to choose between two new baristas, Michael and Marie. Michael can make 50 drinks a day, at a cost of $20 an hour, while Marie—a superhuman barista—can make 100 drinks a day at $2 an hour. All else equal, your choice as an employer is clear: hire Marie and then marvel at your incredible business acumen. But now imagine Marie as a robotic arm attached to your coffee bar, and you might become hesitant. A large part of the reason people come to coffee shops in the first place is because of the experience of human interaction, you figure. After further reflection, you decide to eat the cost and hire Michael. That’s the Human Moat.
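For the arithmetic behind that choice, here is a back-of-the-envelope comparison. The eight-hour shift and the dollar value placed on the human experience are illustrative assumptions of mine, not figures from the example.

```python
# Back-of-the-envelope barista comparison (8-hour shift assumed).
HOURS_PER_DAY = 8

michael_drinks, michael_wage = 50, 20.0   # human barista
marie_drinks, marie_wage     = 100, 2.0   # robotic arm

michael_cost_per_drink = (michael_wage * HOURS_PER_DAY) / michael_drinks  # $3.20
marie_cost_per_drink   = (marie_wage * HOURS_PER_DAY) / marie_drinks      # $0.16

# On raw cost, the robot wins by a factor of twenty...
print(f"Michael: ${michael_cost_per_drink:.2f}/drink")
print(f"Marie:   ${marie_cost_per_drink:.2f}/drink")

# ...so hiring Michael only makes sense if the human experience is worth
# at least the difference to your customers (an assumed moat value).
moat_value_per_drink = 3.50
print("Hire Michael" if moat_value_per_drink >= michael_cost_per_drink - marie_cost_per_drink
      else "Hire Marie")
```

On raw cost the robot wins handily, so hiring Michael only pays off if customers value the human experience at more than the difference; that premium is the moat.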

Other examples of the Human Moat include credentials, status, human touch, and life-or-death decision-making. For example, we’d most likely place a human in charge of criminal sentencing—even if they’re less effective and more expensive—simply because we lack the ethical frameworks for allowing a machine to make those decisions; that’s why judges are unlikely to be automated. Similarly, although GPT-3 has now publicly passed the multistate bar exam, it cannot practice on its own simply because robots cannot legally be awarded a license to practice law. In sum: the economy is made up of humans, and as such, we tend to play favorites.

Leaning into that fact—into our own humanity—might be how we survive what many have referred to as the Fourth Industrial Revolution; however, it won’t be without its drawbacks. Recall that the first three industrial revolutions didn’t destroy demand for hand-made goods, but they did turn them into a luxury. A similar fate for human-made goods, given the new computational paradigm, is no longer unthinkable. As such, new proposals and solutions for our economic, political, and social systems become more critical with each day that passes. My hope is that this piece serves as a conversation starter for a much more vivid and diverse debate of ideas. As it stands, most of our current funding and brainpower is being directed toward building autonomous machines, with little going toward preparing us for the world that awaits in their wake. It was that exact mistake—made by those who brought about prior industrial revolutions—that marked their age as one of child exploitation, environmental degradation, and social unrest. Living on the precipice of the next industrial revolution gives us a chance to rewrite the rules, and maybe, if we’re lucky, to finally win our race against the machine.
