The industry needs to develop a clear ethical framework, now.
AI is changing everything, fast. I was recently speaking on a panel about AI in the creative industries, and to prepare I spent five minutes, and less than $5, using a program to write and illustrate a children's book about my dog. The book, Rosie Makes a Mess, wasn't perfect – it drew on an earlier version of the AI image generator Midjourney, so the fingers were… messy – but it was indicative of how quick and simple (and tempting) it is to use AI to write a book.
But this isn’t new.
In 2019, Springer Nature published Lithium-ion Batteries: A Machine-generated Summary of Current Research, and today Amazon is awash with AI-written books on everything from adult colouring books to science fiction.
Authors such as Leanne Leeds have been using Sudowrite since 2021; Leeds uses it to help write her "whimsical contemporary paranormal mysteries", which have sold remarkably well on Amazon. And NaNoGenMo has been running on GitHub since 2013 as a month-long challenge to write code that generates a novel (an echo of NaNoWriMo).
On the publisher side, generative AI tools such as Midjourney and DALL-E have been used to create cover art and internal graphics and to generate blurbs, and AI is increasingly used in marketing via tools such as Jasper.ai. Generative AI can also be used to quickly generate contracts, summarise a synopsis or chapter, scrape and analyse data, build a website, or write the code to do all of the above.
Want to have readers chat to your bot and get personalised book recommendations based on your backlist?
Sure.
How about then selling them that book directly in the bot ecosystem?
Even better.
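To make that concrete, the sketch below shows in rough terms how a backlist-aware recommendation bot could be wired up by pointing a hosted language model at a list of titles. It is a minimal illustration rather than anyone's production system: the backlist entries are dummy data, the recommend() helper is hypothetical, and the OpenAI client and gpt-4o-mini model are assumed simply as one of many possible hosted options.

```python
# Minimal sketch of a backlist recommendation bot.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and the backlist below is illustrative dummy data.
from openai import OpenAI

client = OpenAI()

BACKLIST = [
    {"title": "Rosie Makes a Mess", "genre": "children's picture book"},
    {"title": "Midnight on the Moors", "genre": "gothic mystery"},
]

SYSTEM_PROMPT = (
    "You are a publisher's recommendation assistant. Recommend only titles "
    "from this backlist, and say so if nothing fits:\n"
    + "\n".join(f"- {book['title']} ({book['genre']})" for book in BACKLIST)
)

def recommend(reader_query: str) -> str:
    """Return a personalised recommendation drawn only from the backlist."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any hosted chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": reader_query},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(recommend("I loved Rosie Makes a Mess - what should I read next?"))
```

Selling the recommended book directly inside that chat would then largely be a matter of attaching a checkout link to the reply – which is exactly where the questions of transparency and accountability discussed below begin to matter.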
AI is moving at lightning pace and publishers are rightly concerned about the speed at which AI, specifically generative AI, is driving change. Across the industry, publishers are working to get to grips with how it will alter copyright, translation, narration, development and more. Though most publishers believe that AI won't replace authors or their own staff, there are calls for wider consultation, clearer definitions and regulation. This is where questions of ethical and responsible AI come in.
So what’s the difference between ethical and responsible AI?
In short, the two terms are often used interchangeably, and both aim at the same goal of developing and deploying AI in a safe, fair, accountable and ethical manner, but they differ: responsible AI is about putting the ethical considerations into practice. Ethical AI relates to the moral underpinning of why, for example, it is not OK to have biased algorithms trained on copyrighted materials; responsible AI is about the structures in place within a company, industry, sector or government that enable those ethical AI practices to be implemented.
Ethics is a big deal, especially for an industry like publishing, which revolves around the creation and sharing of content. The Society of Authors (SoA), for instance, highlights contract clauses writers should look out for that would allow publishers to use their work to train machine learning or translation tools, and it warns authors about making voice recordings that could be used to train AI speech tools.
Valid questions are raised over "pirated training content" and the actual value AI adds to a publisher's workflow. The SoA doesn't go so far as to mention the more ominous questions: the environmental impact of training natural language processing models (training a single large model can produce more than five times the lifetime carbon emissions of an average car), their links to surveillance capitalism, the underpaid workforce helping to train these tools, or the bias that threatens to widen the gender gap.
None of this is meant to scare anyone, or to make publishers abandon AI, but it should make us think about how we can safeguard everyone involved in the publishing industry. One of the key ways to do this is to develop a common framework for defining and implementing responsible AI across the industry.
This doesn't require re-inventing the wheel. The EU AI Act, proposed in 2021, suggests that all organisations that use or develop AI assess the level of risk involved and then focus on key areas: transparency, fairness, robustness, data governance, human oversight and accountability.
Other organisations, such as the International Technology Law Association, provide their own global framework and assessment tool, which brings in respect for human rights and consideration of the social, political and environmental impact an organisation's use of AI might have and, importantly, asks it to offer mitigation strategies. This echoes the key techniques of the Partnership on AI's Responsible Practices for Synthetic Media: collaborating to prevent harm, identifying harmful uses of AI, and developing strategies to mitigate any harm – a framework backed by global organisations from the BBC to Microsoft, Meta and Adobe.
The work already being done on ethical and responsible AI puts publishing in a strong position to develop its own responsible AI standards: a framework that ensures the industry's use of AI is transparent, accountable, human-centred, ethical, environmentally sound, safe and fair. And the sooner the industry works together to understand the tools and develop that framework, the sooner it can start to put AI to work in a responsible manner.