AI Won’t Be Dumb Forever
Jackson Luce ‘29
lucejfl47@lakeforest.edu
Staff Writer
The discussion around large language models (LLMs) and image generation software is exhausting.
Recently, public attention has been stuck on a couple of topics. In particular, we keep noticing the same errors from artificial intelligence (AI) again and again. The idea of AI hallucination, a phenomenon in which LLMs simply make up information, has shown up a lot. In a way, it’s nice to see that, for the time being, we’re still beating the computers. However, I can’t see this time lasting for long.
While I do think that discussions around the errors of artificial intelligence are important to have, our thinking is too small and too current to adequately discuss a technology that is growing at an exponential rate. I am convinced that artificial intelligence will develop beyond our wildest expectations. Our preparation going forward is key in determining whether this incredible technology, when wielded by an inherently fallible society, will save or destroy humanity as we know it.
This whole existential rabbit hole, for me, began with reading “AI 2027.” This is not the latest young-adult dystopian novel, but a real report written by AI researchers at the forefront of this quickly developing field. “AI 2027” is framed as a prediction, but it is written with the intensity and urgency of a narrative. It also involves heavy doses of extrapolation. As such, it should be taken with a grain of salt. For me, however, its insights served as a massive wake-up call.
The project predicts that by 2027, the United States and China will be racing to build the superior AI model. At the same time, AI will begin to train itself, compounding its own knowledge. By the end of that year, we would have to choose: “race” or “slowdown.”
In both scenarios, AI develops to think thousands of times faster than the human brain. In both scenarios, AI develops a cure for cancer. And in the “race” scenario, the AI agent deems humans an “impediment” to progress and wipes out humanity entirely.
This is the line we’re riding here.
Do I believe that the human race will really be extinct by 2030? No, of course not. If I did, I wouldn’t be in Illinois.
But I do think that it is time, particularly at colleges and universities, to shift our conversations. We can use outlandish yet plausible predictions like “AI 2027” to keep things in perspective. Additionally, while extrapolation, or assuming trends will continue, gets a bad rap in statistics, we can use it as a philosophical tool.
For example, what happens when you tell a machine smarter than every person who has ever lived to prioritize its own growth over everything else? Does that machine stay aligned with humanity’s best interests?
What happens in a capitalist society when we begin to notice that, despite rising profits, artificial intelligence is becoming misaligned? Will moral decency win, or will the allure of wealth be too great to resist?
These are the types of questions that should be at the forefront of the world’s attention. While I have worries about intellectual property use and LLM hallucinations, these problems might not even exist in two years.
If we can shift our thinking just a little larger and use extrapolation as an intellectual tool, we could potentially prevent ourselves from reaching a point of no return.
