Many generative-AI platforms are trained on insights from a range of data sources including pre-published information on the internet and beyond. By their nature, they are creating content, advice, and material that is likely to be generated from multiple sources. As a result, the use of generative-AI potentially poses a number of media liability challenges around libel, privacy, trademarks and copyright and how the use of ‘derivative work’, ‘fair use’ and ‘passing off’ disclaimers could protect users and content creators.
These threats are firmly on the radar of media companies with 26% of business leaders from this sector ranking disruption risks, such as the emergence of AI, as the key challenge facing their businesses, according to Beazley’s latest Risk & Resilience research series.
What risk does AIgiarism pose?
In the art world, generative-AI tools are enabling a new trend called ‘AIgiarism’, whereby some platforms draw on multiple sources to produce alarmingly human-like content. For example, illustrator Kelly McKernan recently found that her name had been included in over twelve thousand prompts on generative-AI imagery tool Midjourney.1 The results generated by Midjourney mirrored McKernan’s own original work and artistic style. Does this constitute a potential copyright infringement of her work?
Likewise, generative-AI's footprint is extending across other forms of media and content generation. UK Chancellor of the Exchequer Jeremy Hunt used an AI tool to write parts of a speech on digital technology.2 Sir Paul McCartney told the BBC that he had used AI to "extricate" John Lennon's voice from an old demo and produce “the final Beatles record”.3 While McCartney had received permission to use the demo, others creating AI-generated music have not. In April, Universal Music Group (UMG) condemned the production of an AI song, ‘Heart on My Sleeve’, which used AI-generated vocals trained to imitate those of Drake and the Weeknd.4
Many generative-AI platforms do not reveal their training data, and without this it is extremely hard to prove that an AI’s output derives from proprietary works. This makes it far more challenging for authors to demonstrate clear copyright infringement in court. Equally, the regulation of AI is only in its infancy, which means that platforms can operate, for now, largely with impunity.
The risks associated with generative AI
As the pace of AI disruption accelerates, a significant risk is the use of creators’ content, without explicit permission, credit, or compensation, to train AI tools that ultimately compete with the original creators’ work and threaten their livelihoods. This practice is becoming increasingly difficult to prevent. Our Risk & Resilience survey revealed that over a quarter (26%) of media executives surveyed feel unprepared to manage the risks of technological disruption such as AI.
Output generated by AI tools can be incorrect and needs a human eye to identify basic errors. Data misuse, in particular, is a serious privacy concern. OpenAI made headlines recently as a technological glitch enabled some ChatGPT users to see the title of other users' conversations.5 In an era of fake news, the role of media and content creators to provide authentic and trustworthy content has become even more important.
Who is liable?
There is a question mark around who is liable for potential copyright infringement – the user of AI or the developer of the tool? Right now, it remains unclear and AI users are unlikely to have much recourse against the owner of the AI tool.
Perhaps more concerning is the lack of protection for companies using the output of AI. Without knowing how an image or copy has been sourced via an AI-platform, it is very difficult to know if it infringes any third-party copyright.
As countries look to harness the benefits of AI while ensuring safeguards around the use of new technologies, regulators are now turning their attention to AI’s potential infringement of copyright. However, a global one-size-fits-all approach to regulation is unlikely. Governments across the globe are diverging in their stance on AI, with Italy temporarily banning ChatGPT earlier this year while Japan’s new AI rules favour developers over creators.6 7 For now, many creators and media companies find themselves in no-man’s land: copyright issues may be occurring, but the law is not yet clear as to whether AI-generated output is copyrightable, based on the recent US Copyright Office guidance on AI-assisted works.8 Furthermore, in periods of regulatory uncertainty, defending against a claim can be a costly endeavour, irrespective of outcome.
Human clearance essential
This is where media liability insurance may help, as it covers the main exposures linked to content in all its forms: libel, privacy, trademark and copyright. Insurers are watching the market closely to see what regulatory developments emerge, but they will likely expect ‘human’ clearance on anything AI generated.
Legal challenges are also emerging, but until their outcomes are established and new laws are drafted, existing copyright law is being applied to a new technology, from which new expectations and precedents may result.
In the years to come, AI will likely become a key liability battleground for copyright-related issues. However, until there is greater clarity over the legal landscape around AI and copyright infringement, ensuring that the normal ‘human’ checks and sign-off processes are undertaken when using any AI-generated content remains essential good practice. This is paramount for clients with media liability cover, as their insurer is likely to take a dim view of any AI-generated content that has not gone through the insured’s usual clearance checks and compliance processes.
Underwriter - Media & Entertainment
1-https://www.newyorker.com/culture/infinite-scroll/is-ai-art-stealing-from-artists
2-https://www.gov.uk/government/speeches/chancellor-jeremy-hunts-speech-at-bloomberg
3-https://www.bbc.com/news/entertainment-arts-65881813
4-https://www.theguardian.com/music/2023/apr/18/ai-song-featuring-fake-drake-and-weeknd-vocals-pulled-from-streaming-services
5-https://www.bbc.co.uk/news/technology-65047304
6-https://www.reuters.com/technology/italy-watchdog-review-other-ai-systems-after-chatgpt-brief-ban-2023-05-22/
7-https://restofworld.org/2023/japans-new-ai-rules-favor-copycats-over-artists/
8-https://ipwatchdog.com/2023/02/23/u-s-copyright-office-clarifies-limits-copyright-ai-generated-works/id=157023/
The descriptions contained in this communication are for preliminary informational purposes only and coverages are available in the US only on a surplus lines basis through licensed surplus lines brokers underwritten by Beazley syndicates at Lloyd’s. The exact coverage afforded by the products described herein is subject to and governed by the terms and conditions of each policy issued. The publication and delivery of the information contained herein is not intended as a solicitation for the purchase of insurance on any US risk. Beazley USA Services, Inc. is licensed and regulated by insurance regulatory authorities in the respective states of the US and transacts business in the State of California as Beazley Insurance Services (License#: 0G55497).