A few words about the future of the writer’s craft in the era of generative AI
At the end of 2024, Scientific Reports, a journal in the Nature portfolio, published a study by two University of Pittsburgh professors, Brian Porter and Edouard Machery. They found that ordinary, non-professional poetry readers cannot distinguish poems written by large language models from poems written by humans. Moreover, readers who don’t know who wrote a poem tend to rate AI-generated texts higher than human ones on several parameters; readers who do know rate them lower.
Canadian novelist and essayist Stephen Marche authored, under the pseudonym Aidan Marchine, the novella “Death of an Author” (2023), with roughly 95% of the text generated using AI tools such as ChatGPT and Cohere. Sean Michaels incorporated AI into his novel-writing process by developing “Moorebot,” a generative model trained on the works of poet Marianne Moore; it produced the verse for his novel “Do You Remember Being Born?”, blending human and machine-generated poetry. There are other examples as well, including David Jhave Johnston’s project “ReRites,” Pavel Pepperstein’s “Trying to Wake Up,” and more.
Until recently, AI was poor at writing prose. Everything changed at the end of February 2025, however, when OpenAI released GPT-4.5, the latest version of its large language model, advertised as specifically adapted for creating literary texts.
The company’s CEO, Sam Altman, published a story written by GPT-4.5 on the social network X in response to the prompt “Please write a literary story in the metafiction genre about artificial intelligence and grief.” The story was widely discussed in literary circles. For instance, renowned British writer and playwright Jeanette Winterson described it as “beautiful and moving.” She suggested calling AI not “artificial” but “alternative” intelligence, adding that “in all the fear and anger foaming around AI just now, its capacity to be ‘other’ is what the human race needs.” There were, of course, other opinions: Tracy Chevalier, author of the novel “Girl with a Pearl Earring,” which was made into a film of the same name, wrote that the story was “inevitably going to engender self-referential navel gazing that’s even more ridiculous than the worst we can imagine of AI ‘creative writing.’”
Specifically for this article, I conducted my own experiment: I asked GPT-4.5 to write a story about a Russian girl with a wombat in Berlin. This was, of course, only part of the prompt. I also outlined stylistic preferences by naming three specific authors. I won’t reveal who they were, and I’d like you to try to figure out this part of my prompt after reading the resulting text. Yes, I edited it before publication, but my changes did not exceed 15% of the text. The story turned out like this:
Nadya arrived in Berlin with a wombat named Boris, tucked into a bright blue carrier emblazoned with “Property of Novosibirsk Zoo.” Berlin sprawled before her, gray — relentlessly gray and predictable: streets, clouds, people. Only the wombat stood apart — brown, warm, and unpredictably expressive.
“I’m here about an extension,” Nadya stated at the Ausländerbehörde. The bureaucrat, a man professionally committed to universal skepticism, regarded her with suspicion.
“For you or the wombat?”
“Both, ideally.”
“Citizenship?”
“Russian,” Nadya exhaled, the word heavy with implications.
“And the animal’s?”
“Australian by birth, but he left when he was still a child.”
The official scrutinized Boris. The wombat idly scratched his belly with one elongated claw, supremely indifferent to bureaucratic minutiae. For him, Australia existed as merely a hazy memory.
“Grounds for extension?”
“The circumstances, you understand,” Nadya sighed. “And this… melancholy. Terrible one. And anxiety.”
The official, visibly deflated under the weight of such existential justifications, invisibly stamped something on her documents and extended them toward her. She departed, the wombat carrier swaying gently with her steps. For the first time in months, Nadya felt the faintest whisper of hope — though she couldn’t quite articulate what she was hoping for.
Berlin, it transpired, was woefully unprepared for a wombat. Everywhere they went, people stopped them for photographs. The Kurfürstendamm, Brandenburg Gate, Tiergarten and other tourist-saturated zones became virtual no-go areas. Children giggled and pointed; mothers speculated aloud: “Some kind of hog?” “A bear cub?” “No, no, it’s this dog breed, I told you about recently…”
Boris, magnificently indifferent to his position in Berlin’s zoo-social taxonomy, mostly slept, waking only occasionally to nibble contemplatively on Nadya’s shoelaces. They settled in Neukölln, in a sprawling apartment shared with a philosophy doctoral candidate who strove, whenever possible, to think about nothing at all, and a programmer who maintained that everything was broken, nothing could be fixed, and what could be fixed wasn’t worth the effort.
Days lengthened, bills multiplied, and Nadya struggled to explain to potential German employers why emotional equilibrium required her to maintain a nocturnal, burrowing marsupial. Boris, too, grew melancholic.
One gray morning — indistinguishable from all the others — Nadya discovered Boris barely breathing. Panic rose in her chest like a dense fog. She bundled him into her coat and rushed to a veterinarian who specialized in lagomorphs; not ideal, but the fastest option available.
The veterinarian was somber:
“It’s not an illness,” he pronounced, his accent so thick it would never be allowed in carry-on luggage. “It’s depression.”
“But what should I do? Can it even be treated?”
“In Russians? That’s a complex question, there are multiple schools of thought.”
“No, in wombats.”
“Ah! In wombats… Frankly, I haven’t the faintest notion. With rabbits, I can tell you immediately, the survival rate is dismal.”
Nadya perched on a bench and watched the city slowly surrender to twilight. She cradled Boris against her chest, softly humming fragments of half-remembered lullabies. Passersby stared, but none ventured to approach.
When darkness had enveloped the city completely, Boris stirred, repositioned himself, yawned expansively, and bit Nadya’s finger — gently yet decisively, just shy of breaking skin.
“Are you alright?” she asked through tears.
“He simply needed to bite — what is it called… Wesentlicher Anderer,” declared an elderly Turkish man who had been observing them from the adjacent bench all this time.
“Why do you think so?”
“Well-known remedy,” the old man replied, “infallible.” And he vanished into the night.
Nadya embraced Boris more tightly, the ghost of a smile playing at her lips. Nothing had become easier, nothing had grown clearer — but it seemed they had both somehow reconsidered their commitment to despair.
*
Is this a good story? I don’t know. It’s clear that in terms of quality, it’s far from great literature, and probably far from the longlists of international prizes — though I’m not entirely sure about national ones. But within the framework I’m proposing, the quality of the text a language model generates in response to a prompt alone, before editing, is important but not decisive.
It seems reasonable to me to first clarify which of literature’s prospects we are discussing here.
Let’s start with fiction. Its fate will likely be similar to that of popular music (see, for example, SUNO and UDIO services). The thing is, mass literature, like pop music, is easily algorithmized — fiction uses relatively predictable and frequently recurring plot schemes and character types. A romance novel is almost always a story of meeting, overcoming obstacles, and reuniting the main characters. A detective story is a mysterious crime, false clues, and the gradual revelation of the mystery by the detective. Genre conventions shape readers’ expectations and largely determine the structure and content of works.
Large language models are trained on a colossal volume of texts, which means they easily identify and reproduce such patterns/conventions. The model identifies typical plot moves, sequences of events, character types, and stylistic features of a particular genre. This is the basis on which the model can generate new texts that either conform to accepted formulas or deviate from them, precisely to the extent we want.
Language models can generate entire books or create drafts that publishers will pass on for refinement — to humans or other algorithms. Several major publishing houses, including Penguin Random House and HarperCollins, are engaging quite actively with AI tools, though mainly in non-authorial capacities such as marketing, translation, and audiobook production. Experiments with screening new manuscripts against a large volume of statistical data about which texts sell well and which don’t are already a huge step toward the future of mass commercial literature. The fiction writer thus transforms into a literary producer.
What follows from this? On one hand, using language models to create literary texts can be considered a form of democratization of creativity: anyone who wishes can “write” a novel by cleverly composing a prompt for the model and editing (or not) the result. On the other hand, the same process can be understood as a devaluation of the writer’s labor and of literary creativity in general. There is no place for the uniqueness of the author’s vision or style in this picture — the worst predictions about the general degradation of culture and the decline of humanistic values in a dystopian technologized world are fulfilled. On the third hand, if we consider truly commercial literature — like the sensation novels of the 1860s or the railroad literature that dominated newsstands in the late 19th century — it’s clear that mass production of formulaic content has been around not just since before AI, but since before even the most basic mechanical writing technologies became widespread.
It’s much more interesting to think about how the widespread use of generative models in literature might blur the traditional roles of author and reader. If a literary text is created through close interaction between human and algorithm, the question of authorship arises, followed by the question of how much sense it makes to use this term at all. The reader, in turn, becomes a co-author here, guiding plot development through interaction with the model when creating, for example, a personalized story. The reading experience can become much more active and participatory. The reader simultaneously becomes a writer.
But what, then, is the language model? It’s clear that it’s not a co-author: an author is endowed with agency and intentionality. An author wants to say something. A language model responding to our prompts doesn’t want to tell us anything, as it cannot want. It also doesn’t have the human experience that it could, like a human author, translate into literary text. But if AI is not a co-author, then what is it?
This is the perfect time to talk not about fiction, but about “serious,” i.e., non-algorithmizable literature. What is AI’s place in this case?
I believe the most productive way to think about AI is as a technology of “extended thinking.” What do I mean? Language models can be used to generate unexpected ideas or associations. Working with AI, a writer gains access to a vast space of potential meanings (see world literature) — and can draw inspiration from it. AI acts as a kind of external cultural memory/archive, complementing and expanding the author’s own imaginative capabilities.
Another aspect is the “dialogue” between humans and language phenomena, or perhaps even language itself. The author addresses the language model with a prompt, it generates certain texts, which the human can develop, interpret, and subject to critical analysis. This is undoubtedly also a form of expanding and enriching the possibilities of writing.
The language model can also function as a kind of optical instrument for writerly self-reflection: such systems can be used for deep analysis of texts already written by the author and identification of characteristic stylistic devices, recurring images/motifs, features of plot construction, and so on. Finally, generative algorithms can be used to create alternative versions of one’s texts, embodying, say, lives not lived by the characters. In all these cases, AI creates a completely new experimental platform that allows pushing the boundaries of both creative and personal experience.
Viewing AI as an extended thinking device has another advantage: it reminds us that something similar has already happened in culture. Writing once radically changed the ways of storing and transmitting information, creating a kind of “external memory” of culture. Similarly, AI today acts as an externalized cognitive resource that expands our creative possibilities.
New technologies often influence the development of art forms that existed before their emergence. Thus, the appearance of photography not only gave birth to a new art form but also radically transformed painting, pushing it to search for new means of expression. One can imagine that the development and spread of AI will stimulate literature to experiment with new methods and styles of writing. There are less obvious parallels: for instance, the influence of psychoanalysis on 20th-century literature gave writers new tools for exploring the human psyche and the unconscious. Dialogue with AI can become a similar experience of introspection for the author. Not to replace humans but, on the contrary, to help them know themselves better.
P.S. The prompt that resulted in the story you read mentioned three writers: Michael Cunningham, Mikhail Zhvanetsky, and Etgar Keret. The author thanks Zlata Ponirovskaya for the prompt idea.