Handled right, AI could make the process of getting research published faster and more equitable.
In the past few months, due largely to the comet that is ChatGPT, artificial intelligence (AI) has gone from an esoteric topic discussed passionately by the technology community to global headline news to, in recent weeks, a geopolitical strategic concern on a par with climate change. Five months ago, the average person would have said that the incursion of AI into our daily lives was 20 or 30 years in the future; in recent weeks, Sam Altman (CEO of OpenAI) and Chuck Schumer, US Senate Majority Leader, had a meaningful conversation about the possibility of creating a new government entity to try to limit the ways in which AI is transforming our lives today.
It’s breathtaking. But it’s also not unprecedented.
In fact, society today eagerly embraces such moments of "transformational awakening", moments when it suddenly seems that everything might change, for better or for worse. A look back at the business landscape over the past 20 years reveals dozens of such moments. Some did in fact materialise as envisioned. Within a few years of the creation of Wikipedia, for example, which thrust the concept of "crowd-sourcing" into the centre of public conversation, the traditional encyclopaedia was essentially dead, after a successful 200-year run. But other would-be transformational moments were much less impactful than envisioned. At various points over the past 20 years, for instance, file-sharing sites, iTunes, and massive open online courses (MOOCs) were each going to sweep away the educational publishing industry, to a degree that nearly everyone both inside and outside of the industry agreed on; and yet in 2023 the traditional companies in that space still dominate, and the challengers have slipped into "adjacency" status.
Which will artificial intelligence be? Of course, I don't have a crystal ball, but I think there are some important reasons why AI will not be a passing story. The first is that the path to this moment has been a slow and gradual build; computer scientists have been doggedly improving the abilities of AI for the past 40 years, moving from success to success, and pivoting around setbacks. I'll date myself by mentioning that as a high schooler in the late ’80s I spent several years on an ETA supercomputer writing an AI-based Latin-to-English translator in the then-hot language LISP. At best it might have satisfied a Roman child. Fortunately, AI has moved on since then. The AI that is poised to disrupt industry and society today is mature and solid, not a sudden invention that generates hype but is ultimately unproven.
Second, it’s evident that the uptake of AI, particularly in the wake of the ChatGPT launch, is genuinely widespread. Anyone can access and use this technology, and people of all kinds – students, patients, newsreaders, and on and on – are finding ways to incorporate generative AI into their lives. The situation was very different for some of the fizzy ideas I mentioned earlier; in those cases, it was often "insiders" who generated a sense of enthusiasm that leaped far ahead of general public interest.
So, I do think AI’s moment has come, and we will all need to grapple with it. As business leaders in the information space, how should we fundamentally feel? Is AI a massive threat to the industry (and there are hundreds of headlines that suggest this is the case)? Or is AI a significant opportunity (there are fewer headlines in this vein, but they are there)? Obviously, the answer is both. For instance, AI could render obsolete those products whose main purpose is to synthesise published information. Equally, companies that create AI-based products to synthesise published information could thrive. This illustrates how I think companies in our space should approach AI today: the risks are there, and it’s critical to understand and size them, but the bulk of energy should go into capitalising on the opportunities. After all, the risks will come whether you study them or not. But the opportunities will only happen if you seize them.
I’ll close by focusing on academic research publication as an example. The major companies in the journals space publish millions of articles in scientific, technical and medical disciplines, generating billions of dollars of revenue. Will generative AI be a threat or an opportunity for this space? Now, a small part of what the industry does – brief summaries of key research papers, for example – might be displaced by ChatGPT, which can competently summarise a fixed set of reputable papers. But the larger part of what the industry does is specifically based on the value that researchers assign to human discretion, namely, the ability of peer reviewers and editors to read and select papers that they believe will be of maximum impact and interest to other researchers. AI could perform the task of selection, but can it do so in a way that researchers will find credible and will consider worth staking their professional reputations on? I have my doubts. In medical research, for instance, human lives can be at stake. Will medical researchers be comfortable with papers that are selected without any human intervention? I think there’s much greater resilience in the academic research publishing space than the doomsaying headlines would suggest.
The opportunities, on the other hand, are enormous. Today the majority of papers published in top-tier journals still come from traditional research powerhouses such as the US and Europe. This is partly because those regions still disproportionately dominate research itself, but it’s also because a very high percentage of papers submitted by researchers outside these traditional countries are rejected out of hand, due to the poor quality of the English writing. I would expect generative AI to make a quick and meaningful difference to solving this problem, which is a problem of equity, on the one hand, but also of human welfare. After all, there are clinical insights in rejected papers written by doctors in low- and middle-income countries that could, if published, represent a significant step forward in various medical disciplines.
AI also has tremendous potential to help shorten the timeframe for translating research into practice. Again using medical research as an illustration, studies show that it often takes 17 years or more for new findings to become reflected in the state of patient treatment. There are many factors driving that timeframe, including training and policy. But the sheer magnitude of the task is also a critical reason: it takes many years for clinicians to find systematic ways to sort through the millions of articles published in their field and synthesise actionable new treatments. It’s easy to envision AI playing a critical role in accelerating these activities.
On balance, I think the breakout of AI into public consciousness should be a moment of excitement for our industry. It’s in our hands to determine whether we benefit from AI or are knocked off course by it. If we embrace, explore, and invest, we will find that we ourselves become leaders in the story of how AI became a productive, lasting presence in the information landscape.