Perfectly Ordinary: The Resistible Rise of the ChatBot
“Never forget that your computer, wherever it is in the world, is speaking Victorian English; its structure is that of Boolean algebra … with deep roots in the nineteenth-century English confidence in logic. Increasingly, orality, writing, and reading, as we have known them, will take on highly specialized functions, as did reading and writing throughout the ancient and medieval worlds.” – George Steiner, Grammars of Creation.
As readers, what do we expect from a story? From a poem? From a biography or an editorial or a historical analysis? We expect, at the very least, an individual point of view. This point of view is often arrived at through what we might call trial and error: A squidgy process that involves a human mind arguing with itself and with all the assumptive beliefs with which it’s soaked. An organic, fallible, laborious method. We recognize the method instantly because we, too, have these angles, these personal viewpoints—however unformed (or misinformed) they might be.
What, then, are we expected to give a story, a poem, a treatise? Our time, of course. Our attention.
We apply our inner systems of judgement and apprehension to them, shaped by years of experience. We open the text, and we are now in private conversation with something dormant. We resurrect it by reading, and if the text is any good, if it challenges, provokes, soothes, plucks the strings of memory, it continues to echo within us.
“What is reading, in the last analysis, but an interchange of thought between writer and reader?” asked Edith Wharton more than a century ago.
It's difficult to converse with a machine. We can feed it data. We can anthropomorphize its responses till it resembles something familiar, an uncanny valley of words, sentences, ideas. Here, however, an important distinction: Generative artificial intelligence programs of the sort that powers ChatGPT are not sentient. They are exercises in machine learning that, according to Noam Chomsky and others, are actually distracting us from the true goal of creating a technology that thinks.
I’m in no way qualified to predict when that day will dawn. I’m just here to draw some lines between what a human is capable of inventing and what a computer algorithm is likely to imitate, and to suggest that those lines matter.
ChatGPT was trained on the internet, scoured like a great, tireless trawl and filtered of material its makers deemed inappropriate or irrelevant. The resulting large language model assembles responses to our prompts, on demand, in seemingly limitless variation.
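The mechanism, for all its scale, is not mystical: a language model generates text one token at a time, choosing each continuation according to probabilities distilled from its training corpus. The Python sketch below is a minimal, hand-rolled analogy, a bigram counter fed an invented three-line corpus, not anything OpenAI actually uses, but the principle of statistical recombination is the same.

```python
import random
from collections import defaultdict

# An invented toy "training corpus" -- real models ingest a filtered
# trawl of the internet; this stands in for that at microscopic scale.
corpus = (
    "stories are an essential part of human communication . "
    "stories are told to entertain . "
    "humans are storytellers at heart ."
).split()

# Count which word follows which: a bigram model, the crudest
# possible ancestor of a large language model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Generate text one token at a time, sampling each continuation
    in proportion to how often it appeared in training."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("stories"))
# e.g. "stories are told to entertain ." -- fluent-sounding
# recombination, with no understanding anywhere in sight.
```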
It collects facts, but it doesn’t know how to think about them or how to hypothesize in any imaginative sense. Humans, on the other hand, are wonderful at forgetting facts. We don’t hold them in our mental nets the way we hold, say, intuitions or self-related memories. This may appear to be a deficit, but it’s part of what characterizes our intelligence. It’s a feature, not a bug.
In his recent book Forgetting: The Benefits of Not Remembering, the neurologist Scott Small claims that memory is “flexible, form-shifting, and fragmented” (i.e. imperfect), and that the ability to relax our minds is essential to making new connections among disparate strands of information.
When I fed a sample of my own writing into GPTZero—an AI detector that predicts whether a document was written by a large language model—I was somewhat relieved to find that the work had sufficient “perplexity” and “burstiness” to pass as human.
In other words, it was weird.
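Those two terms sound like mystification, but they name measurable things. Perplexity is how surprised a language model is by a text: the exponential of the average negative log-probability of its tokens. Burstiness is how much that surprise swings from sentence to sentence. GPTZero's actual scoring is proprietary; the sketch below, with a hand-made probability table standing in for a real model, is only a gesture at the arithmetic.

```python
import math
from statistics import pstdev

def perplexity(tokens, prob):
    """exp of the average negative log-probability per token:
    low = predictable (machine-like), high = surprising (human-like)."""
    nll = -sum(math.log(prob(t)) for t in tokens) / len(tokens)
    return math.exp(nll)

# A stand-in probability function -- a real detector would query an
# actual language model here, not a hand-made table.
table = {"the": 0.07, "of": 0.04, "squidgy": 0.00001}
prob = lambda t: table.get(t, 0.001)

sentences = [
    "the story of the mind".split(),
    "squidgy squidgy squidgy".split(),
]
per_sentence = [perplexity(s, prob) for s in sentences]

# "Burstiness," roughly: how much perplexity swings between sentences.
print(per_sentence, pstdev(per_sentence))
```

Uniformly low, steady perplexity is the tell of machine prose; mine, mercifully, swings.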
This is why the stories, poems, and essays that ChatGPT vomits out with such alarming speed read as if they were penned by a well-meaning but deeply naïve high school student: Diligent to a fault, obsessed with false balance, and terrified of flunking. The program is specifically designed to produce average results, presumably because a model trained to predict the most probable next word will, by construction, gravitate toward the statistical middle of its training data.
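That averaging is less a conspiracy than a consequence of the decoding step. At each word the model samples from a probability distribution, and a parameter conventionally called temperature controls how conservative the sample is; production chatbots are typically tuned toward the safe end. A sketch of the standard softmax-with-temperature trick, with invented next-word scores:

```python
import math
import random

def sample(scores, temperature=1.0):
    """Softmax sampling with temperature: low T -> always the safe,
    probable word; high T -> riskier, stranger choices."""
    scaled = {w: s / temperature for w, s in scores.items()}
    m = max(scaled.values())
    exps = {w: math.exp(s - m) for w, s in scaled.items()}
    total = sum(exps.values())
    r, acc = random.random() * total, 0.0
    for w, e in exps.items():
        acc += e
        if acc >= r:
            return w

# Invented scores for the word after "the night was ..."
scores = {"dark": 3.0, "quiet": 2.5, "luminous": 1.0, "combinatorial": 0.2}

print(sample(scores, temperature=0.2))  # almost always "dark"
print(sample(scores, temperature=2.0))  # occasionally "combinatorial"
```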
Such pedestrian concerns, and such outcomes, were probably not what Italo Calvino was envisioning when, in his 1967 essay Cybernetics and Ghosts, he waxed lyrical that “Mankind is beginning to understand how to dismantle and reassemble the most complex and unpredictable of all its machines: Language.” For Calvino, the writing of stories was as much a combinatorial, mathematical game as it was an opportunity to plumb the depths of the unconscious.
He went on to say, “The true literature machine will be one that itself feels the need to produce disorder, as a reaction against its preceding production of order: A machine that will produce avant-garde work to free its circuits when they are choked by too long a production of classicism.”
Alas, I don’t imagine that ChatGPT and its ilk will be the cause of any aesthetic revolutions, at least not until we change its parameters. But Calvino recognized the importance of playful anarchy and technical error in the production of great literature—literature that goes well beyond the evidence of its scaffolding. And when George Steiner referenced the limitations of computer brains that use Boolean algebra, he was circling the idea that no system of logic based on binary, true-false values could ever hope to create the intricate, unruly surprise that is The Mahabharata, or Don Quixote, or Ulysses, or Beloved.
So, what kind of readers, and writers, do we want to be?
Let’s limit ourselves to higher education for a moment. Many college professors are now tasked with identifying which of the assignments their students turn in were, in fact, written by a chatbot. Darren Hick, who teaches philosophy at Furman University in Greenville, South Carolina, sounded the alarm last December when he caught a student using AI to complete an assignment on David Hume and the paradox of horror.
“The first indicator that I was dealing with AI,” said Hick, “is that, despite the syntactic coherence of the essay, it made no sense.” That is, if someone does more than the bare minimum of skimming the text—and if they have a background in the discipline—they’ll spot the bullshit immediately.
We don’t need PhDs to spot shallow, lifeless writing—writing that seems “correct” yet is wholly divorced from reality and from an authorial voice. We do need to be attentive to what we read, however; to be willing to meet the work halfway and to open ourselves to the possibility of transcendence and painful maneuvers. In short, we need to be in some sort of dialogue with it, assuming we still care about the operation of our minds and about how we, as perennial learners, define literacy.
Given a few more years and some significant alterations, it's indeed possible that ChatGPT will acquire the cognitive power to discuss the future, to originate concepts, to rise up off the slab and walk among us as a fully conscious equal. That would present its own set of challenges (to put it mildly).
But for now, the problem is reversed: AI-generated compositions are as wildly mediocre as 90% of what we read online, and if we aren’t vocal about demanding more from our fiction, our narrative critiques, our Wikipedia entries, it will be difficult to regain what we’ve lost in the bargain.
Still, I’m willing to give my mechanical friend the benefit of the doubt. When I asked the program to write the opening paragraph of this essay, it leapt at the chance with all the enthusiasm of an over-caffeinated rhesus monkey.
Here, then, is how things might have gone:
“Stories are an essential part of human communication and have been used for thousands of years to convey information, entertain, and pass down cultural values. With the rise of artificial intelligence (AI), machines are now able to tell stories too. However, the way AI tells stories is fundamentally different from how humans do it. While humans rely on their creativity, emotions, and experiences to craft a compelling narrative, AI uses algorithms and data to generate stories. In this essay, we'll explore the similarities and differences between human and AI storytelling, and discuss the potential implications of AI-generated stories for the future of storytelling.”
Do tell.
*Feature image by Psychoshadow (Adobe)