
“Are You a Real Person?” Wisdom after A.I.
To date, no computer has unambiguously passed the famous “Turing test,” which holds that a computer can be said to “think” in an important sense if, over the course of a lengthy text conversation, it can fool an intelligent human observer into believing that it is itself human. But recent advances in machine learning are making it ever harder for us human beings to pass the test in reverse. They are also forcing us to rethink what is specifically human (and thus inimitable) about our use of language.
As part of my job at a retail stock brokerage, I periodically work on the text chat team supporting users of our advanced trading platform. Clients chat in asking, for instance, how to use the platform’s charting tools or why they didn’t get a good fill on their stock order, and I type out instructions, explanations, or apologies as the situation warrants. A few weeks ago, I handled a routine chat in which the client was asking how to deposit funds into her account while overseas. We cleared the whole thing up in about ten minutes. She thanked me enthusiastically, and then followed up with: “I’m just curious, are you a real person?”
“Well,” I responded, “I’d like to think I’m a real person, but you’d probably want to check with my wife about that.” My client’s question was a first for me, but I doubt it will be the last. Our interaction, after all, took place right around the time that the OpenAI project had released the latest iteration of its chatbot program, dubbed ChatGPT.
Chatbots have been around for a long time, but this is clearly next-level technology. In essence, ChatGPT is an extremely high-powered version of the predictive text tool on your phone — except that it’s “trained” not merely on one person’s texts, but rather pretty much the entire corpus of human writing.
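The “predictive text” analogy can be made concrete in miniature. ChatGPT’s actual machinery is a vast neural network, but the underlying idea of predicting the next word from what came before can be sketched with a toy bigram model built on a few words of text (an illustration of the analogy only, not of how GPT works internally; the tiny corpus here is invented for the example):

```python
from collections import Counter, defaultdict

# A toy stand-in for "pretty much the entire corpus of human writing."
corpus = "the quick brown fox jumps over the lazy dog the quick brown fox sleeps".split()

# For each word, count which words follow it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("quick"))  # → brown
print(predict_next("the"))    # → quick ("quick" follows "the" twice, "lazy" once)
```

Your phone’s keyboard does something like this with your own texting history; ChatGPT does something vastly more sophisticated with far more context than a single preceding word.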
As a result, when the bot is instructed to, let’s say, compose a short story in the style of Flannery O’Connor, it “knows” to set the tale in the First Baptist Church of a “small Southern town” called Hokepe, and also to include a mentally unhinged homeless man who interrupts the church’s Christmas Eve service while ranting about the coming wrath of God. Of course, the overall quality of the writing isn’t in the same galaxy as O’Connor’s, and the ending in particular takes on a gooey sentimentality that would have nauseated her.
ChatGPT is more successful at non-fiction. Our own Todd Brewer recently demonstrated that the bot can turn out a creditable Christmas sermon, complete with contextually appropriate quotations from figures as varied as Irenaeus of Lyon and Barack Obama. More impressive still is the technology’s ability to synthesize a wide range of sources into tight, well-organized essays. For instance, I just now asked the bot to write an essay on one of my field’s perennial questions, namely, “Are stocks and bonds correlated?” Within half a minute it had produced five paragraphs of passably undergraduate-level prose, explaining the general consensus that stocks and bonds are weakly correlated (citing professional research to support this consensus), but also listing examples of periods in which they moved in tandem (e.g., the 2008 global financial meltdown). It even tacked on a list of the sources it used.
It takes approximately no time at all to realize that this technology has serious implications for the age-old task of educating young people to organize and express their thoughts clearly in writing. In a piece for The Atlantic, veteran high school English teacher Daniel Herman expressed astonishment over what ChatGPT can accomplish. “Let me be candid,” writes Herman:
What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor. Over the past few days, I’ve given it a number of different prompts. And even if the bot’s results don’t exactly give you goosebumps, they do a more-than-adequate job of fulfilling a task.
Later, Herman explains that he typically spends about two months helping his students craft an independent research paper teasing out connections between two literary masterpieces; he found that ChatGPT could pour a serviceable foundation for this kind of project in about a minute. And there’s really no obvious reason why future iterations of the technology could not write the entire paper at an “A” level. While it’s usually a mug’s game to predict the future, I think that those observers declaring the death of the venerable essay assignment are actually making a fair bet.
Well, if the essay assignment is going to die soon, it may as well prompt a re-examination of why we as a society forced our kids to write so many of them in the first place. The most basic answer, of course, is to make students demonstrate that they have bothered to read or skim the material (or at least its CliffsNotes). Another reason is to train students in the basic mechanics of writing: the avoidance of comma splices, the desirability of consistent subject-verb agreement, the elimination of superfluous words. Next is to hone the skill of argumentation, that faculty of saying something non-obvious (and hopefully just a bit controversial) about a topic of general interest, in a manner both persuasive and pleasing.
The necessity of these assignments is now much less apparent. We have a free-to-use bot that can “read” vast corpora of texts, that can write sensible and clear English, and that is apparently well on its way to making detailed and interesting arguments. There is, however, a fourth goal of the essay assignment, one that is usually implicit and often completely lost in the shuffle. That goal, I submit, was and is the pursuit of wisdom.
To cite David Foster Wallace’s celebrated maxim, “Fiction is about what it is to be a [redacted] human being.” And this is true of great literature generally: Boswell’s Life of Samuel Johnson, for instance, is a work of (mostly) nonfiction that has a great deal to say about what it’s like to be a human being. The main reason we’ve decided to make young people read and write about great literature is the hope that they will incidentally absorb something time-tested about what is truly valuable in life and what is not; how we ought to order our personal lives and our societies; how to be a friend; how to hope in the midst of disaster. I had to write a paper in college about Erasmus’s In Praise of Folly. The paper itself is very much not worth reading, but the prolonged engagement with Erasmus helped to conquer my (really annoying) cynicism by showing me how love and compassion can coexist with a healthy sense of ironic detachment from the world and its excesses.
Is reflective writing on literature the only approach to gaining wisdom? Of course not. It’s probably not even the best. But the academic essay assignment at least had the potential to expose students to some eternal verities. If the robots truly have killed the essay, we’ll need to invent some other ways for young people to not only read (or skim) but also to grapple with the best thinking our species has to offer.
In the meantime, this next-generation chatbot can startle us into remembering that words aren’t just objects to be cleverly re-arranged into new combinations. A machine can presumably beat us at that game. No matter: however clumsy our engagement with words, they always have the potential to put us in touch with transcendent realities. To be related to those realities is an irreducible, and irreplicable, part of what it is to be a human being.
The post “Are You a Real Person?” Wisdom after A.I. appeared first on Mockingbird.