Whether it is a flash in the pan or whether we're truly living in a time of great technological acceleration due to the advancement of LLMs, I do not know. What I do know is there are concerns and predictions across the board about the disruption LLMs are bringing -- to the economy, to education, to our very sense of what it means to be human.
This week I just happened to listen to and read quite a lot about "AI", that catch-all term we're currently using for the large language models built by Anthropic, OpenAI, Google, and so on (Another big 3! Go figure.).
A few links to what I came across this week:
Ezra Klein (maybe my favorite podcast interview show) interviewed Rebecca Winthrop, the director of the Center for Universal Education at the Brookings Institution. I don't know anything about her, or didn't before this interview. What I found important in this conversation was the grappling with education and its purpose. Of course, even prior to the rise of LLMs as a "cheating tool," the value of education -- especially higher education -- was in question across the U.S.
And rightfully so! In my opinion, anyway. We got the purpose of higher ed all mixed up with the economy and the job market. When that happened, most people started needing some form of higher ed credential to even "be successful" (a term I'll leave there for now, knowing it is fraught). So then college becomes the game we play to make money and live our lives as adults. That was already a problem prior to the rise of LLMs. And now, we have this magic tool that will largely do the work for us. If the purpose is the credential, and the credential is for material success, then OF COURSE. Of course our students will cheat. It makes no sense not to cheat.
But this conversation brings to light the real, foundational questions: What is education? What is it for?
(For those of you following for a long time, you'll know these are my baseline questions for understanding anything: What is it? What is it for?)
And of course, to really answer those questions, we need to ask even MORE fundamental questions: What does it mean to be human? What does it mean to live a good life? Yet again, we were already encountering this problem in society, having lost the moorings of common ethical and religious foundations. I'm off on a tangent now, but LLMs, in this way, are apocalyptic in the literal sense. They reveal the truth of the matter, which is that we largely lack a cohesive narrative for orienting our lives.
The episode doesn't quite get into those last questions, but it is a thought-provoking conversation, especially as we are very involved in our kids' education and have thoughts about what it should do for them.
This morning I also listened to Ross Douthat interview Daniel Kokotajlo, former OpenAI employee and current executive director of the AI Futures Project. Kokotajlo believes that by 2027, AGI will not only exist but will also fundamentally remake the economy and the world as we know it. There will largely be no jobs because AI superintelligence will far outstrip humans' abilities in basically all areas. This will be followed by a time of superabundance and will obviously be highly disruptive worldwide. While this was an interesting conversation, and I do not doubt that LLMs and whatever other versions of AI come next will continue to rapidly expand in their capabilities, I ultimately find these predictions unconvincing. Human systems are slow. Governments are slow. Corporations are slow. Even if the technology expands the way the interviewee says it will, I think there are pretty hard limits on how quickly these tools will be adopted. My primary thought for this entire interview was "Maybe he should have just become a sci-fi writer." His imagination is certainly capable of producing fascinating (if outlandish) scenarios.
H/T Alan Jacobs for this quote from Neal Stephenson:
Speaking of the effects of technology on individuals and society as a whole, Marshall McLuhan wrote that every augmentation is also an amputation... Today, quite suddenly, billions of people have access to AI systems that provide augmentations, and inflict amputations, far more substantial than anything McLuhan could have imagined. This is the main thing I worry about currently as far as AI is concerned. I follow conversations among professional educators who all report the same phenomenon, which is that their students use ChatGPT for everything, and in consequence learn nothing.
Ultimately, my thoughts on LLMs are varied. They are technical miracles, and can produce "better than the best available human" results in so many cases. I've used them multiple times for code questions, for gardening questions, and to ask about illnesses. They are only getting "better." So perhaps the question we ought to ask ourselves is, what is it that this tool can never do that humans can? What do humans possess that this tool does not?