Cambridge University Press (CUP) has launched its first ever AI research ethics policy to help researchers use generative AI tools such as ChatGPT while "upholding academic standards around transparency, plagiarism, accuracy and originality".
The full set of rules is set out here and applies to research papers, books and other scholarly works. It includes a ban on AI being treated as an "author" of academic papers and books published by CUP.
The role of ChatGPT and other AI technologies has recently been a topic of debate among publishing professionals, particularly around copyright, the impact on authors and illustrators, and the lack of governance. CUP’s new policy aims to provide clarity to academics amid concerns about "flawed or misleading use" of such technologies in research.
Mandy Hill, managing director for academic at Cambridge University Press & Assessment, said: “Generative AI can enable new avenues of research and experimentation. Researchers have asked us for guidance to navigate its use. We believe academic authors, peer reviewers and editors should be free to use emerging technologies as they see fit within appropriate guidelines, just as they do with other research tools.
“Like our academic community, we are approaching this new technology with a spirit of critical engagement. In prioritising transparency, accountability, accuracy and originality, we see as much continuity as change in the use of generative AI for research. It’s obvious that tools such as ChatGPT cannot and should not be treated as authors. We want our new policy to help the thousands of researchers we publish each year, and their many readers. We will continue to work with them as we navigate the potential biases, flaws and compelling opportunities of AI."
Each year CUP publishes tens of thousands of research papers in more than 400 peer-reviewed journals and 1,500 research monographs, reference works and higher education textbooks. R Michael Alvarez, professor of political and computational social science at the California Institute of Technology, said: “Generative AI introduces many issues for academic researchers and educators.
“As a series editor for Cambridge University Press, I appreciate the leadership the press is taking to outline guidelines and policies for how we can use these new tools in our research and writing. I anticipate that we will be having this conversation about the opportunities and pitfalls presented by generative AI for academic publishing for many years to come.”