How and what we value from authors will determine their future in a world of AI.
Do we care about new thought, ideas, voices and the unpredictable lives of humans? Can the "AI revolution" trigger a creative revolution in which authors and forms of writing that large language models (LLMs) cannot produce are valued?
The publishing industry relies on the unpaid labour of authors. Looking around the FutureBook conference last week, I saw people who get paid to work. The title was FutureBook, not FutureAuthor, but it was remarkable how often "book" was used as a metonym for "author" – with the exception of the keynote by Nicola Solomon of the Society of Authors.
From one panel, featuring sales leaders from major publishers: "It’s just a book, don’t worry if it fails, there is always another book, it does not define us." This approach helps those in PR and marketing to manage anxiety and avoid burnout, but for authors, our books do define us. We will not publish another book next week. It’s not just a book, it’s a life, a dream, a career – and our mental health is also on the line.
Gambling metaphors were used throughout the day. Investing in a book was likened to betting on a horse. Authors = books and books = horses. In another panel on publishing with emotion in the age of AI, bookseller Euan Tate rightly commented that it’s not just artificial intelligence that is removing the human from publishing.
The horses that score the biggest advances and returns will be easily generated by LLMs. Commercial books are predictable and generic and follow replicable patterns. LLMs can function as free ghost writers. AI can predict sales and produce books it predicts will sell. What does this say about what is valued?
As the Society of Authors’ 2022 survey revealed, median author earnings have fallen by 30.2% (60.2% adjusted for inflation) since 2006, to just £7k a year – about what I earned from writing last year. That is a massive drop from my earnings as a freelancer pre-pandemic and pre-child, when I wrote BBC radio plays and was paid for my time.
Nancy Adimora’s consultancy OtherStories, which she talked about in the final keynote, finds new markets and predicts sales, giving publishers the confidence to take risks on underrepresented writers. My publisher, Footnote Press (winner of the FutureBook Start-up of the Year Award), seeks to centre marginalised stories such as mine, as a queer single parent. But regardless of the size and ideology of the press, taking a risk on a book does not mean paying the writer to write it. Writing books is for the privileged: those with partners who earn, childcare and housing they can afford. In the UK, childcare and affordable housing are a "privilege". When an author burns out, no one notices, because "there is always another book".
I could not have written my recent memoir My Child, the Algorithm without my university fellowship, subsidised by universal credit (London rent is 100% of my income). Still, I had to work until 1am each night. I wrote it in conversation with an earlier, queerer, glitchier LLM than the ones used and discussed today. Using AI didn’t save time, but it did infiltrate my thinking and writing, resulting in a book I couldn’t have written without it – and one it certainly couldn’t have generated without me. Authors could use LLMs to write future books, but not the generic, sanitised LLMs available today.
Of course, authors want to fight the companies that are training LLMs on our books without consent or payment, to build models that will apparently take our jobs. But – what jobs? As Solomon pointed out, most writers don’t earn a living by writing books but by bookselling, copyediting and translating – labour that LLMs can (poorly) replicate. The tiny royalties that might eventually trickle to authors will not pay bills, or compensate for lost work.
Professor John Domingue of the Open University asks, "What’s the point in educating people to do something that AI can do?" If LLMs can do the uncreative jobs, could the industry respond by employing writers to think, to dream, to fail productively, and be creative and unpredictable?
AI companies are reliant on low-paid workers; publishing is reliant on low-paid authors. AI won’t make authors’ lives easier. It won’t give us wages or holidays or parental leave or sick pay. It won’t stop wars. Humans can do all these things, and always could. Our governments just choose not to. AI is not the answer. Humans can be.
Instead of using low-paid labour to reduce the biases of LLMs, censoring them to make them more palatable – as if there could be such a thing as unbiased AI – we should let them reflect us in all our toxicity and hallucinating creativity. Let’s look in the mirror of artificial intelligence, and then change.
How can we use the AI revolution to start a creative revolution in which the humans who are thinking, caring, writing, imagining, dreaming and doing all the activities that are so undervalued in our society, are given recognition, wages and somewhere secure to live? Without human intervention, AI will make the status quo even worse, which really is saying something.