
The EU AI Act defines Generative AI as “foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video”.
Potential concerns for enterprise stakeholders, particularly legal and compliance professionals, arise as businesses explore how to use these new tools. Key areas of focus include intellectual property, data protection and contracts. The role of a legal executive is to advise stakeholders on the risks associated with business applications of Generative AI.
The materials used to train AI systems may be protected by copyright, and reproductions of these materials are likely made during the training process. Unless copyright exceptions apply, such reproductions may constitute infringement. These exceptions vary by jurisdiction: for example, fair use in the United States, and exceptions for transient or incidental copying and for text and data mining in the EU. It is therefore difficult to identify which materials could be used to train an AI system without infringing any intellectual property rights, including copyright.
Current copyright law generally grants rights to the author of a protected work, focusing on the human author's intellectual and personal relationship with their work. When it comes to outputs from Generative AI, a question arises as to whether these outputs can have an author, as the composition is done by an AI system, not a human mind. The European Parliament has stated that works created independently by an AI system are not currently eligible for copyright protection because intellectual property rights generally require human involvement in the creation process.
Lawmakers may move towards a position where modifying AI output to create a new work allows the human author to obtain copyright, whereas outputs created more by the AI system itself are less likely to be granted such rights.
Generative AI systems ingest and generate large amounts of data, requiring various levels of protection. Where data qualifies as personal data (or personally identifiable information), data protection laws such as the GDPR in the EU or the CCPA in California may apply. Business data, such as financial and technical information, may be classified as confidential information. Organisations must carefully consider the categorisation of data inputted into these systems and ensure it is processed lawfully, securely and confidentially.
From an EU perspective, a starting point is to consider the roles of the parties involved (for example, data controller, data processor) to define responsibility for compliance. A Generative AI system provider may operate as a data controller for initial training data and an independent data controller for data-embedded products. When licensing the AI engine, the provider may also act as a data processor for a customer organisation's input and output data. Depending on the business model, the customer organisation will likely operate as a data controller for additional training and input/output data. Mixed roles or joint controllership are also possible.
Organisations should pay specific attention to transparency, describing the use and purpose of AI systems in privacy policies. Increased diligence is required for sensitive data, such as data concerning minors or health information. Data minimisation should be considered, limiting or excluding personal data from training sets. Organisations need to establish the lawfulness of processing personal data, potentially invoking legitimate interests or contractual necessity. Implementing processes to comply with individual rights, such as access, rectification and deletion, may be challenging.
Generative AI models can inadvertently learn and reproduce sensitive information from training data, potentially leading to the generation of outputs containing confidential information. Businesses must also be aware of their own confidentiality obligations regarding data shared by third parties. Ensuring the ongoing confidentiality of data across the entire AI lifecycle is essential. Organisations should consider measures such as limiting data access, adopting specific policies, adapting procedures for individual rights and providing employee training. Technical and organisational measures like AI governance, privacy-by-design, pseudonymisation, anonymisation and encryption are important.
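As a purely illustrative example of the pseudonymisation measure mentioned above, the sketch below replaces e-mail addresses in a prompt with stable, non-reversible tokens before the text leaves the organisation. This is a minimal sketch, not a complete solution: the secret key, the `PII_` token format and the focus on e-mail addresses are all assumptions for illustration, and a real deployment would cover further identifier types and manage the key securely.

```python
import hmac
import hashlib
import re

# Assumption: this key is stored securely, outside the prompt pipeline.
SECRET_KEY = b"replace-with-a-securely-stored-key"

# Illustrative pattern covering common e-mail address forms only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(value: str) -> str:
    """Map a direct identifier to a stable token via a keyed hash.

    The same input always yields the same token, so references stay
    consistent across prompts, but the token cannot be reversed
    without the secret key.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"),
                      hashlib.sha256).hexdigest()[:12]
    return f"PII_{digest}"

def scrub_prompt(text: str) -> str:
    """Pseudonymise e-mail addresses before text is sent to a provider."""
    return EMAIL_RE.sub(lambda m: pseudonymise(m.group(0)), text)

example = "Contact jane.doe@example.com about the claim."
print(scrub_prompt(example))
```

Because the tokens are deterministic for a given key, an organisation can keep an internal mapping to re-identify outputs if needed, while the AI provider only ever sees pseudonymised text.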
Careful consideration of contract terms is crucial when licensing or entering into agreements for Generative AI solutions. Key points include liability, where organisations may seek indemnities from providers for intellectual property infringement or data breaches. The insurance coverage of providers, especially smaller ones, should be considered. Business continuity in the event the solution becomes unavailable is important. Privacy and confidentiality provisions are likely to be a key focus. The impact of emerging AI regulations on contract terms should also be addressed.
Legal executives can take a leading role in strategic decision-making related to the use of Generative AI. Their responsibilities are likely to include developing ethical and legal frameworks, defining the organisation’s risk appetite and ensuring compliance. They should stay closely engaged with the evolution of the technology and with changing laws and regulations. Training people on the ethical and legal implications may also fall under their domain. Legal executives are increasingly likely to be undertaking legal assessments to determine their approach to these issues.