How the protection of intellectual property rights will evolve in the age of artificial intelligence, particularly in the creative industries, is a question full of unknowns that will remain the subject of serious debate, litigation, and public outcry in the years to come.
Photo Source: Fabebk – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=131753041
American actors and screenwriters went on strike in 2023, with one of their main demands being regulation of the use of works generated by artificial intelligence. The largest American actors’ union, SAG-AFTRA, demanded stronger protections against the replication of actors’ faces and voices with the help of artificial intelligence, while US screenwriters, united in the Writers Guild of America (WGA), demanded protections against the use of screenwriters’ material as training data for algorithms using artificial intelligence.
The months-long strikes by American actors and screenwriters brought to the forefront the question of how content (photos, videos, audio, scripts, books, and other creative material) created by algorithms that use artificial intelligence, such as ChatGPT and Midjourney, will affect the protection of intellectual property rights.
What is intellectual property?
Before we move on to current trends in regulating the use of artificial intelligence in the creative industry, it would be useful to define what constitutes intellectual property.
According to the World Intellectual Property Organization, intellectual property refers to a wide range of mental creations, such as inventions, literary and artistic works, symbols, names, images used in commerce, designs, and the like.
Intellectual property is protected by law through patents, copyrights, and trademarks, which allow people to gain recognition or benefit financially from what they create or innovate. By striking the right balance between the interests of innovators and the broader public interest, the intellectual property system aims to foster an environment in which creativity and innovation can flourish.
At the same time, the aspect of artificial intelligence’s impact currently receiving the greatest attention is copyright. A work of authorship is an individual and intellectual creation in the fields of literature, science, art, and other areas of creativity, regardless of its type, manner, and form of expression. According to the US Copyright Office, copyright is a type of intellectual property that protects original works of authorship as soon as the author fixes the work in a tangible form of expression.
Copyrighted works span a wide variety of types: paintings, photographs, illustrations, musical compositions, sound recordings, computer programs, books, songs, blog posts, films, architectural works, plays, and many others.
Artificial intelligence and the works of actors and screenwriters
It has been almost a year since January 2023, when product design manager Ammaar Reshi announced that he had written and designed a children’s book in just 72 hours using ChatGPT and Midjourney. While Reshi was excited about being able to create a book in such a short time, his use of artificial intelligence sparked a backlash from artists.
It took only a few months for the artists’ negative reactions to turn into open protests and strikes that paralyzed the world’s largest creative industry, that of the United States, for months.
AI-generated creative content, and the question of how to regulate it in the United States, was the most contentious topic in negotiations between the Screen Actors Guild and the Alliance of Motion Picture and Television Producers. The agreement reached last fall between the two parties provides protections for actors against reproductions of their faces and voices using artificial intelligence: film and television companies may use such reproductions only with the actors’ consent.
However, some prominent American actors, such as Justine Bateman and Matthew Modine, opposed the agreement, arguing that it would threaten actors’ independence and financial future.
The question of generating creative content using artificial intelligence was also central to the agreement that ended the American screenwriters’ strike against the Alliance of Motion Picture and Television Producers.
The deal with the screenwriters went much more smoothly, with greater protection for screenwriters’ intellectual property agreed through the adoption of the Rules for the Use of Artificial Intelligence. The agreement does not prohibit the use of AI tools in the screenwriting process, but it sets out safeguards to ensure that the new technology remains under the control of workers, rather than being used by their bosses to replace screenwriters.
Under the agreement, studios cannot use artificial intelligence to rewrite scripts already written by an author, and film and television studios are prevented from treating AI-generated content, such as novels or other literary works, as original source material.
For example, the agreement prohibits studios from using ChatGPT to generate a 100,000-word novel and then asking screenwriters to adapt the novel to create film or television content. The British newspaper The Guardian writes that this agreement could provide a fair basis for other creative industries to negotiate on the use of artificial intelligence.
American screenwriters reached an agreement with their employers under which writers may use ChatGPT if they wish and if the companies agree, but cannot be forced by film and television studios to use artificial intelligence technology. Companies must also disclose whenever they hand writers material that was generated by artificial intelligence tools. In this way, workers, rather than employers, control the terms under which artificial intelligence is used in creative work.
Regulation of artificial intelligence in the EU
Unlike the current practice in the US, the EU is set to adopt the Artificial Intelligence Act, which has been the subject of intense debate in recent months. At the beginning of December 2023, the European Union reached an agreement on the legal regulation of artificial intelligence. On 9 December 2023, the President of the European Commission, Ursula von der Leyen, welcomed the political agreement of the European Parliament and the Council on the Artificial Intelligence Act.
“By guaranteeing the security and fundamental rights of people and businesses, the Act will support the human-centric, transparent and responsible development, deployment and take-up of artificial intelligence in the EU,” Ursula von der Leyen said in a statement.
Although the law has not yet entered into force, around 100 companies have already expressed interest in joining and voluntarily implementing its key obligations ahead of the legal deadline, the President of the European Commission emphasized.
According to information from the English-language Deutsche Welle service, EU lawmakers have determined that systems that generate content by analyzing huge amounts of data, such as ChatGPT, pose a medium risk. The upcoming EU legal rules mean that companies must be transparent about how the generative artificial intelligence models were trained, as well as disclose which copyrighted data was used for this purpose. Therefore, the content generated by ChatGPT must be appropriately labeled.
However, the EU institutions will need to finalize the technical details of the Artificial Intelligence Act in the coming period; once agreed, the final text will need to be approved by the European Parliament and the Council in the first half of 2024. The law is expected to enter into force in the second or third quarter of 2024.
Immediately after the agreement was reached at the EU level on the future Artificial Intelligence Act, civil society organizations expressed reservations, noting that they were awaiting the technical details to be ironed out at the beginning of 2024. They believe the agreement does not go far enough in protecting people from harm caused by artificial intelligence systems.
The current version of the EU Artificial Intelligence Act was provisionally approved by members of the European Parliament (EP) in a vote in June 2023, and it contains several measures that will help determine what tech companies can and cannot do with copyrighted musical works. Among them is a legal requirement for companies using generative AI models, such as ChatGPT or Midjourney, to provide summaries of all copyrighted works, including music, that they use.
This means that future EU legislation will require AI technology developers to disclose all works by scientists, musicians, illustrators, photographers, and journalists used to train their programs. In addition, developers will have to prove that everything they did to train the machines complies with the law.
These transparency provisions in the EU Artificial Intelligence Act were openly welcomed by authors, but there are fears that tech companies are actively lobbying EU policymakers behind the scenes in Brussels to weaken the regulations, arguing that such obligations could put European AI developers at a competitive disadvantage.
Although the EU is a global pioneer in the legal regulation of artificial intelligence, the final technical details and formulations of the terms in the Artificial Intelligence Act will only be known once the European Parliament and the Council adopt the text itself in the coming months.
Artificial intelligence and intellectual property rights in journalism
Reporters Without Borders, in collaboration with 16 partner organizations, published the Paris Charter on AI and Journalism on November 10, 2023. The document was drafted in July 2023, and was a collaborative effort between civil society organizations, experts in the field of artificial intelligence, media representatives, and journalists.
The Charter sets out a set of fundamental ethical principles for protecting the integrity of information in the age of artificial intelligence. Among the ten principles included in the document are the following:
- Journalistic ethics should lead the way in the use of technology by media outlets and journalists;
- Media outlets should prioritize the use of human labor;
- Artificial intelligence systems used in journalism should undergo prior independent evaluation;
- Media outlets are always the responsible party for the content they publish;
- Media outlets maintain transparency in the use of artificial intelligence systems;
- Media outlets ensure originality and the ability to verify it;
- Journalism draws a clear line of distinction between authentic and synthetic content;
- Personalized and recommended content derived from artificial intelligence supports diversity and integrity of information;
- Journalists, media outlets, and journalism support groups engage in global AI governance;
- Journalism upholds its ethical and economic foundations in engagements with organizations that create artificial intelligence.
The Paris Charter has been supported by prominent journalistic organizations, such as the European Federation of Journalists, the International Press Institute, the Thomson Foundation, the Global Investigative Journalism Network, and others. The basic principles imply that ethics must govern technological choices in the media, but humans must remain central to editorial decisions. Media outlets must distinguish between authentic and synthetic content, but they must also participate in the global governance of artificial intelligence and defend the sustainability of journalism when negotiating with technology companies.
It remains to be seen whether media outlets will adhere to the Paris Charter on AI and Journalism when creating everyday content, regulating the labor rights of media workers, and changing the way editorial decisions are made in newsrooms.
German media giant Axel Springer warned its staff in early 2023 that artificial intelligence would lead to job cuts in its media outlets, including the newspapers Bild and Welt. A few months later, in June 2023, the company nevertheless walked back its statement, saying that artificial intelligence would allow journalists to devote more time and energy to their core tasks, with tedious duties taken over by artificial intelligence. However, job cuts remain part of the company’s plans.
Returning to the question of why artificial intelligence could have such a huge impact on intellectual property rights in the creative industries, the American science magazine Scientific American offers an answer in numbers: the amount of data needed to train artificial intelligence language models is striking.
The Stable Diffusion artificial intelligence model, which generates images from text, had to be trained on more than 2.3 billion images before it could independently generate a final product: an AI-generated image.
“Clearing copyrights for a work can be difficult even for simple projects, and for very large projects or platforms, the challenges of even knowing who owns the rights are nearly impossible, given the practical requirements of locating metadata and evaluating agreements between authors or performers and publishers,” Scientific American states.
Ultimately, where the protection of intellectual property rights from artificial intelligence is headed, especially in the creative industries, remains an open question, one that will continue to fuel serious debates, litigation, and public outcry in the years ahead.
Link to the original post: Artificial Intelligence and intellectual property rights–tough battles ahead
