Trade bodies the Publishers Association (PA) and Publishers’ Licensing Services (PLS) have welcomed the House of Lords committee’s report on AI, which calls on the government to take urgent action to stop copyrighted material being used without permission to train Large Language Models (LLMs).
The report says the government “cannot sit on its hands” while LLM developers exploit the works of rights-holders. It criticises tech firms for using data without permission or compensation, and says the government should end the copyright dispute “definitively” including through legislation if necessary.
It calls for greater transparency for rights-holders to see if their work has been used without consent and for investment in new datasets to encourage tech firms to pay for licensed content, noting there is “compelling evidence” that the UK benefits economically, politically and societally from its “globally respected” copyright framework.
Dan Conway, c.e.o. of the PA, said: “This report rightly recognises that the benefits of AI do not warrant the violation of copyright law and its underlying principles. As the committee states, it is not fair for tech firms to use rights-holders’ content for huge financial gain without permission or compensation.
“The Publishers Association welcomes the prominent call for the government to take action to support rights-holders. We gave evidence to the committee’s inquiry last year and it’s great to see their report backing many of our key arguments – that LLMs shouldn’t use copyright-protected works without permission or compensation, that there should be support for licensing, that there should be transparency, and that the government should legislate if necessary.
“Publishers have long embraced the benefits of AI in their work and share the committee’s ambition for a positive vision on AI, where the myriad opportunities are embraced but rightsholders and human creativity are respected, permissions are sought, and licensing is supported. This report is a call to action for government at a pivotal moment for the UK’s approach to AI.”
Publishers’ Licensing Services’ (PLS) chief executive, Tom West, also welcomed the report, saying: “PLS and many in the creative industries share the government’s ambition of making the UK an AI superpower and there are many innovative UK publishers already playing a vital role in that endeavour. Whilst PLS recognises the immense benefits that AI can bring, this cannot be at the expense of the UK’s world-leading creative industries, nor the copyright framework that underpins the sector’s success. PLS firmly believes that a mix of direct and voluntary collective licensing offers the best solution not only for rightsholders but also for content users. The licensing market for AI should be allowed to evolve to meet market demand, continue to facilitate future technological progress, and help act as an incentive to human creativity.”
The report goes on to argue that the government’s approach to artificial intelligence and LLMs has become “too focused on a narrow view of AI safety” and says “the UK must rebalance towards boosting opportunities while tackling near-term security and societal risks”.
“It will otherwise fail to keep pace with competitors, lose international influence and become strategically dependent on overseas tech firms for a critical technology,” it said.
The report also issues a warning about the “real and growing” risk of regulatory capture as the race to dominate the market deepens. “Without action to prioritise open competition and transparency, a small number of tech firms may rapidly consolidate control of a critical market and stifle new players, mirroring the challenges seen elsewhere in internet services,” it said, arguing “a more positive vision for LLMs is needed to reap the social and economic benefits, and enable the UK to compete globally”. Recommended measures include more support for AI start-ups, boosting computing infrastructure, improving skills, and exploring options for an “in-house” sovereign UK large language model.
When considering the risks around LLMs, the committee said “the apocalyptic concerns about threats to human existence are exaggerated and must not distract policy makers from responding to more immediate issues”.
The report found more limited but immediate near-term security risks, including cyber attacks, child sexual exploitation material, terrorist content and disinformation. The committee said catastrophic risks are less likely but cannot be ruled out, noting “the possibility of a rapid and uncontrollable proliferation of dangerous capabilities and the lack of early warning indicators”. It called for mandatory safety tests for high-risk models and a greater focus on safety by design.