Copyright Risks of Using Generative AI – Are You Prepared?

Generative AI, or GenAI for short, has transformed industries like advertising and copywriting. Tools like ChatGPT, Midjourney, and Stable Diffusion allow companies to write engaging content and create striking images without hiring an external agency or consultant. At the same time, using AI to generate such content has opened a Pandora’s box of intellectual property issues that did not previously exist. The central question: who owns this AI-generated content? This article covers the legal and ethical implications of AI-generated content and how companies can protect themselves from copyright claims.

GenAI and Copyright

A GenAI model learns from the content it is trained on, and its capabilities improve with each new piece of content it assimilates. We have already seen stunning images created by Midjourney go viral and fool millions of people.

Artists are rightfully critical of GenAI, as they fear it could end human creativity. If GenAI starts matching human creativity, does that mean art as a career is coming to an end?

Another key question has emerged: who owns the content? Is it the person who issued the prompt to the GenAI system, or the AI model itself? And what if the AI was trained on copyrighted material and generated images or code derived from content it had no right to use?

As GenAI adoption skyrockets, this will become a significant issue for companies planning to use these tools in their content creation work, especially for images or written material. Such companies could expose themselves to legal consequences if it turns out that the GenAI model was trained on copyrighted material without the consent of the person who created the original content. This is the first time technology has confronted us with an issue where the boundaries between creativity and tech are so thoroughly blurred.

Cases are already emerging, with the famous comedian Sarah Silverman and other authors initiating legal action against OpenAI and Meta. They claim that the AI models used by these tech giants were trained on their copyrighted material without their consent. Another case emerged earlier this year when Getty Images filed a lawsuit against Stability AI, creator of the open-source AI art generator Stable Diffusion, stating that the company committed “brazen infringement of Getty Images’ intellectual property.” Stability AI allegedly copied more than 12 million images from Getty Images’ database without explicit permission or compensation, infringing the company’s copyright and trademark protections.

We can expect this to be the start of many similar cases as authors and artists wake up to the possibility of their original works being absorbed into GenAI without their consent.

Can existing laws help?

Companies can find themselves struggling to answer these new questions as AI steps into a domain previously exclusive to humans. While monotonous and repetitive tasks were always vulnerable to automation, creativity was a skill thought to belong only to humans!

Current laws are designed for human beings, not AI, and might not provide the guidance companies seek. A recent example: an artwork generated by AI won the Colorado State Fair’s art competition in 2022. This understandably led to outrage from the artistic community, who felt that AI-generated images should not qualify for such prizes.

The European Union is finalizing its AI Act, which, much as GDPR did for data privacy, will set the tone for how AI copyright issues are treated. Members of the European Parliament have advocated for regulations requiring companies to disclose any copyrighted material used to train AI systems. However, these efforts are still at an early stage and will take time to be formalized and put into action.

A New World

We are entering uncharted territory, and laws governing the use of AI-generated content are sorely needed to clarify what is and is not allowed. Until this issue is resolved, AI-generated content will remain a topic of heated debate.

Companies investing heavily in Generative AI should check with their legal and compliance teams to ensure they do not stray into a legal minefield with such tools. Without clarity, they can expose themselves to claims of infringement (intentional or unintentional) if the GenAI model they use was trained on data containing copyrighted content. We can also expect content creators to start protecting their work with new techniques that alert them when it is being used in GenAI without their consent.
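What those techniques will look like is still taking shape. As a purely illustrative sketch (not a method described in any of the cases above), a creator could use perceptual hashing to flag AI-generated images that closely resemble their own work. The example below assumes the open-source Python libraries Pillow and imagehash; the file names and the distance threshold are hypothetical.

```python
# Illustrative sketch only: one way a creator might check whether an
# AI-generated image is a close visual match to their original work,
# using perceptual hashing via the open-source "imagehash" library.
# File names and the distance threshold below are hypothetical examples.
from PIL import Image
import imagehash

SIMILARITY_THRESHOLD = 8  # maximum Hamming distance to flag as a near-duplicate (tunable)

def looks_like_a_copy(original_path: str, candidate_path: str) -> bool:
    """Return True if the candidate image is perceptually close to the original."""
    original_hash = imagehash.phash(Image.open(original_path))
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    distance = original_hash - candidate_hash  # Hamming distance between the two hashes
    return distance <= SIMILARITY_THRESHOLD

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    if looks_like_a_copy("my_artwork.png", "suspect_ai_output.png"):
        print("Possible near-duplicate detected; worth a closer (human and legal) look.")
    else:
        print("No close perceptual match found.")
```

Because perceptual hashes tolerate resizing and small edits, a check like this can catch near-duplicates that an exact file comparison would miss, though it is of course no substitute for legal review.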

GenAI is here to stay, and the world of content creation will never be the same again. Artists, AI developers, and companies must adapt to this new reality and put measures in place to ensure that creators’ rights are respected and that no form of infringement occurs when using AI-generated content.

Frequently Asked Questions

Who owns the content generated by Generative AI?

The ownership of AI-generated content is a topic of much debate and legal ambiguity. It could belong to the individual or organization that issued the prompt, the developers who trained the AI, or no one at all. As AI progresses, legal systems worldwide are working to address these questions, but clear answers have yet to emerge.

Can Generative AI infringe on copyright laws?

Yes, there’s a risk that Generative AI can infringe on copyright laws, especially if the AI was trained on copyrighted material without permission. This is an emerging issue, with cases already going through the courts. Users of AI-generated content should be aware of this risk and consult with legal counsel when necessary.

What is being done to protect the rights of original content creators?

Initiatives are underway globally to protect the rights of original content creators. For instance, the European Union is finalizing an AI Act that could include regulations requiring companies to disclose any copyrighted material used to train AI systems. Content creators are also seeking ways to protect and monitor their work.

How can companies safeguard themselves when using Generative AI?

Companies can protect themselves by working closely with their legal and compliance teams to understand and mitigate potential risks. This could include being transparent about their use of AI and the data it is trained on, and seeking appropriate permissions where necessary. Staying abreast of changes and following best practices is vital as legal frameworks evolve.