
Could an A.I. Chatbot Rewrite My Novel? | The New Yorker

During one of my more desperate phases as a young novelist, I began to question whether I should actually be writing my own stories. I was deeply uninterested at the time in anything that resembled a plot, but I acknowledged that if I wanted to attain any sort of literary success I would need to tell a story that had a distinct beginning, middle, and end.

This was about twenty years ago. My graduate-school friends and I were obsessed with a Web site called the Postmodernism Generator that spat out nonsensical but hilarious critical-theory papers. The site, which was created by a coder named Andrew C. Bulhak, who was building off Jamie Zawinski’s Dada Engine, is still up today, and generates fake scholarly writing that reads like, “In the works of Tarantino, a predominant concept is the distinction between creation and destruction. Marx’s essay on capitalist socialism holds that society has objective value. But an abundance of appropriations concerning not theory, but subtheory exist.”

I figured that, if a bit of code could spit out an academic paper, it could probably just tell me what to write about. Most plots, I knew, followed very simple rules, and, because I couldn’t quite figure out how to string one of these out, I began talking to some computer-science graduate students about the possibilities of creating a bot that could just tell me who should go where, and what should happen to them. What I imagined was a simple text box in which I could type in a beginning—something like “A man and his dog arrive in a small town in Indiana”—and then the bot would just tell me that, on page 3, after six paragraphs of my beautiful descriptions and taut prose, the dog would find a mysterious set of bones in the back yard of their boarding house.

After a couple of months of digging around, it became clear to me that I wasn’t going to find much backing for my plan. One of the computer-science students, as I recall, accused me of trying to strip everything good, original, and beautiful from the creative process. Bots, he argued, could imitate basic writing and would improve at that task, but A.I. could never tell you the way Karenin smiled, nor would it ever fixate on all the place names that filled Proust’s childhood. I understood why he felt that way, and agreed to a certain extent. But I didn’t see why a bot couldn’t just fill in all the parts where someone walks from point A to point B.

ChatGPT is the latest project released by OpenAI, a somewhat mysterious San Francisco company that is also responsible for DALL-E, a program that generates art. Both have been viral sensations on social media, prompting people to share their creations and then immediately catastrophize about what A.I. technology means for the future. The chat version runs on GPT-3—the abbreviation stands for “Generative Pre-Trained Transformer”—a pattern-recognition artificial intelligence that “learns” from huge caches of Internet text to generate believable responses to queries. The interface is refreshingly simple: you write questions and statements to ChatGPT, and it spits back remarkably coherent, if occasionally hilariously wrong, answers.

The concepts behind GPT-3 have been around for more than half a century now. They derive from language models that assign probabilities to sequences of words. If, for example, the word “parsimonious” appears within a sentence, a language model will assess that word, and all the words before it, and try to guess what should come next. Patterns require input: if your corpus of words only extends to, say, Jane Austen, then everything your model produces will sound like a nineteenth-century British novel.
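To make that idea concrete, here is a minimal sketch, in Python, of the kind of next-word model described above. It counts which words follow which in a toy corpus (the opening line of “Pride and Prejudice,” standing in for the Jane Austen example) and turns those counts into probabilities. The corpus and the function name are invented for illustration; GPT-3 does this with a neural network trained on vastly more text, not a lookup table.

from collections import Counter, defaultdict

# A toy corpus: the opening line of "Pride and Prejudice."
corpus = (
    "it is a truth universally acknowledged that a single man "
    "in possession of a good fortune must be in want of a wife"
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def next_word_probabilities(prev_word):
    # Turn raw counts into a probability for each candidate next word.
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("a"))
# {'truth': 0.25, 'single': 0.25, 'good': 0.25, 'wife': 0.25}

Feed such a model nothing but Austen and it will only ever sound like Austen; scale the same probabilistic idea up to the Internet and you get something closer to GPT-3.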

What OpenAI did was feed the Internet through a language model; this then opened up the possibilities for imitation. “If you scale a language model to the Internet, you can regurgitate really interesting patterns,” Ben Recht, a friend of mine who is a professor of computer science at the University of California, Berkeley, said. “The Internet itself is just patterns—so much of what we do online is just knee-jerk, meme reactions to everything, which means that most of the responses to things on the Internet are fairly predictable. So this is just showing that.”

GPT-3 itself has been around since 2020, and a variety of people have already put it through its paces. (The recent hype around it comes from the new chat version.) Back in 2020, the Guardian had the program write an article about itself, with a moderate, but not entirely disqualifying, series of prompts from a human and some reasonable, light editing. Gwern Branwen, a writer and researcher, asked GPT-3 to write everything from poems to dad jokes. In one particularly illustrative example, Branwen fed the machine the opening of Shel Silverstein’s “Where the Sidewalk Ends” and asked it to fill in the rest.

This is the prompt—the actual first six lines of “Where the Sidewalk Ends.”

There is a place where the sidewalk ends
And before the street begins,
And there the grass grows soft and white,
And there the sun burns crimson bright,
And there the moon-bird rests from his flight
To cool in the peppermint wind.

Here are Silverstein’s next six lines.

Let us leave this place where the smoke blows black
And the dark street winds and bends.
Past the pits where the asphalt flowers grow
We shall walk with a walk that is measured and slow,
And watch where the chalk-white arrows go
To the place where the sidewalk ends.

And here’s what GPT-3 came up with for what I’ve approximated as the next full stanza.

And quiet flows up from the ground
And steals into your mind,
And time is never found there,
Nor worry, nor “strain.”
And people come and people go,
And products push their way;
But there the milkman leaves his cart
And talks to the old lady next door
About her grandson away at school.
And there the linen girls walk home at noon,
With footsteps soft and light,
And the policeman on their beat
Comes by and knows them by their faces.

So GPT-3 struggles to recognize rhyme structure, and is perhaps a bit too indebted to “The Love Song of J. Alfred Prufrock” and its lines “the women come and go / Talking of Michelangelo.” But it’s still remarkable that a computer could recognize the basic structure of a poem, seemingly understand the tone of Silverstein’s verse, and then create what actually feels like a decent match to the original. (Though I would say that it reminds me a bit more of the opening pages of James Agee’s “A Death in the Family.”) The bot’s little word contraptions like “linen girls” are evocative, albeit somewhat superficially. The phrase “knows them by their faces” is actually quite beautiful.

The mind-bending part was trying to recognize and parse patterns in the bot’s responses. Was the line “people come and people go” really pulled from T. S. Eliot, or is it just a random series of words that triggers that correlation in my head? My response to the bot, then, isn’t really a reflection of my relationship with technology but rather of my sense of my own knowledge. This prompts a different question: why is my relationship with any other bit of text any different? To put it a bit more pointedly, why does it matter whether a human or a bot typed out a wall of text?
