Questions related to GenAI outputs

Topics: Questions; Outputs; GenAI
  • OUTPUTS
    • ❓When I use GenAI tools, do I own any of the outputs?

      or ❓If an AI-generated work is copyright-protected, who owns the outputs?

      Probably not.

      Traditionally, copyright law assigns ownership of a work to the author/artist. If the author/artist is an employee, the employer typically owns it; when they are an independent contractor, ownership is typically spelled out in the contract.

      With a prompt-based GenAI tool, it is unclear whether the user inputting text prompts or the developer of the AI tool itself would be the ‘author’ and thus, provided that there is some copyright protection, the owner of the output. However, the idea that outputs could be “authored” and owned by the machines that create them contradicts the principle that only human authors can own copyright.

      Right now, we’re in a moment of flux about who, if anyone, owns GenAI outputs.

      Let us take a closer look at the current EU and UK positions on intellectual property and copyright protection for AI-generated works as the law develops:

      EU: Ownership possible?

      EU copyright law is a patchwork of 13 directives and two regulations. However, none of this legislation, nor the upcoming EU AI Act, directly addresses the ownership of AI-generated works, and, outside the legislation, there is little in terms of relevant EU-level case law. The Court of Justice of the European Union (CJEU) does provide some limited directional guidance in Infopaq International A/S v Danske Dagblades Forening (Case C-5/08), where it held that copyright will only subsist if there is originality flowing from the “author’s own intellectual creation.” This has been widely interpreted to mean that a significant form of human input is required. Nevertheless, it will be for individual EU member states to determine whether the output of a generative AI model can meet this requirement. By way of example of the state of play, the German Copyright Act requires an author’s “own intellectual creation” for the existence of a copyrightable work; it is thought that neither a machine nor a computer program can be the author, so it is presupposed that an “intellectual creation” must be created by a human. Likewise in France, the current presumption is that only natural persons can be considered authors, and originality requires “the personal touch or intellectual effort” of the author, whereas “implementation of automatic and constraining logic” without “genuine personal effort” will not qualify.

      UK: Ownership available

      The UK’s position is similar to the EU’s, requiring a copyright work to be “the author’s own intellectual creation” and to exhibit the author’s “personal touch.”

      Significantly, the UK, under the Copyright, Designs and Patents Act 1988 (‘CDPA’), grants copyright protection to computer-generated works even when no human creator is involved. Although the idea that a non-human “computer” can generate a copyrightable work embodying creative skill has been widely challenged, UK law (Section 9(3) of the CDPA) clearly provides that when a work is “generated by computer in circumstances where there is no human author”, the author of the resulting copyrightable work is “the person by whom the arrangements necessary for the creation of the work are undertaken”.

      With a prompt-based AI tool, it is unclear whether the user inputting text prompts or the owner of the AI tool itself would be the person making the “necessary arrangements”, and thus the author and copyright owner of the outputs. Any remaining doubt about ownership as between the user and the creator of the tool can be resolved by contract, e.g. under an AI tool’s end-user licence agreement, but in practice not all currently available tools address this in their T&Cs.

    • ❓What is the copyright status of AI-generated content? (the ‘originality’ test)

      Complicated for sure.

      To be protected under UK law, such works still need to be “original”. There is uncertainty in UK law both about the correct test for “originality” and about whether that test requires a human author. In EU law, the originality test was introduced in the Directives on software and databases as the “author’s own intellectual creation” and has since been applied more broadly to copyright works beyond software and databases. However, there is uncertainty over how broadly the EU originality test applies in the UK, with the UK’s current provisions offering only partial guidance.

      The “author’s own intellectual creation” is generally regarded as requiring a higher standard of originality than the English case law standard. Many commentators consider that AI-created works that do not have a human author cannot meet this higher standard. However, there is uncertainty over how broadly the EU test applies in the UK, and whether it contradicts the CDPA, which seems to provide protection for non-human authored works.

      Under the CDPA, in the case of computer-generated works where no human author exists, the person overseeing the creation process is recognised as the ‘author’, because they are deemed to be closest to the creation of the work. Such works encompass not only computer programs but also computer-generated industrial or architectural drawings (although in practice it is rare for there to be no human involvement at all). The question is whether literary, dramatic, musical or artistic works created by AI can meet the “author’s own intellectual creation” originality test, and thereby whether AI can be classed as an ‘author’ in the first instance. Without legislative intervention, it is likely to be difficult to argue that a work created by AI could be ‘original’ under this test, and thus eligible for copyright protection.

      This position was recently affirmed by the UK government, as well as by the UK Intellectual Property Office (IPO), which in 2022 held an open consultation specifically on the application of Section 9(3) to generative AI.

      In October 2021, the IPO began consulting on potential changes to patent and copyright law relating to AI. Options included:

      • introducing a new, narrower right of shorter duration for such works, which would coexist with other rights;
      • adjusting copyright protection for computer-generated works; and
      • enhancing text and data mining (TDM) rules.

      Two conflicting views emerged: the technology sector believes that copyright in AI-generated content should belong to users, whereas the creative sector wants such content to be excluded from copyright ownership entirely.

      On 28 June 2022, the government decided not to change the copyright protection for such works, citing a lack of evidence of harm and the early stage of AI adoption, and committed to keeping the position under review. In March 2023, following Sir Patrick Vallance’s Pro-Innovation Regulation of Technologies Review, the government tasked the IPO with creating a voluntary code of practice to guide the use of copyrighted material in AI models. On 29 June 2023, the IPO announced that work had started to develop a voluntary code of practice for copyright and AI.

      In February 2024, the UK government confirmed in its response to its AI regulation consultation that no effective voluntary code could be agreed. It has promised that it will soon set out further proposals on the way forward, and flagged that it intends to explore mechanisms for providing greater transparency so that rightsholders can better understand whether content they produce is used as an input into AI models.

    • ❓What am I allowed to do with the GenAI outputs?

      This is not as simple a question as you might think.

      The reason is that authorship and ownership are distinct concepts and may in practice diverge. Legal frameworks and contractual agreements play a significant role in defining these concepts and rights. That is to say, being the author of a work does not always mean you are its owner. This can lead to false attributions when it comes to AI-generated content.

      Proving or enforcing authorship or copyright ownership of a work may sometimes be difficult in practice. For this reason, in the EU, many Member States provide for rules that establish a (rebuttable) presumption of authorship or copyright ownership: the person indicated on or with the published work as the author is deemed to be the author, unless proven otherwise. Naturally, in the case of AI-generated output, this may lead to false attributions of authorship and ownership to a natural person, e.g. the prompter who publishes the output as their own.

      However, when it comes to GenAI tools, if the output is protected by copyright and the terms of use provided by the platform are silent on the matter, the person who initiated the AI process (’the user’) is typically considered both the author and owner of the output.

      Therefore, it is crucial to consider the terms of use provided by these platforms, as they will often dictate the ownership of the generated content. Assuming these terms are valid and enforceable under the applicable national laws, they will govern the question of ownership of the outputs.

    • ❓Is the output a derivative work of the “ingested” copyrighted works, or is it an original work?

      It depends.

      Putting aside the permissibility of data ingestion, another copyright issue is whether the resulting output constitutes a derivative work. Only copyright holders have the exclusive right to create works that are “based on” their preexisting work.

      A work that is generated by a GenAI system will be found to be an infringing derivative work only if it contains expressive elements substantially similar to the materials that were used for training. What “substantially similar” means will depend on the works at issue (e.g., images, software programs, text, music, etc.).

      This type of output poses a risk for GenAI systems, so developers will need to use training data in ways that avoid the generation of infringing derivatives (e.g., removing duplicative or repetitive content from the training data to reduce the probability that the output will be the same as, or substantially similar to, the input; a purely illustrative sketch of such deduplication follows below).
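
      As a purely illustrative, hypothetical sketch (not legal guidance and not any particular vendor’s actual pipeline), the following Python snippet shows one very simple form of the deduplication mentioned above: exact and whitespace-insensitive duplicate documents are removed from a text corpus before training. All function and variable names here are invented for illustration.

      ```python
      import hashlib
      import re


      def normalise(text: str) -> str:
          """Lowercase and collapse whitespace so trivially different copies hash alike."""
          return re.sub(r"\s+", " ", text.lower()).strip()


      def deduplicate(documents: list[str]) -> list[str]:
          """Keep only the first occurrence of each (normalised) document."""
          seen: set[str] = set()
          unique_docs: list[str] = []
          for doc in documents:
              digest = hashlib.sha256(normalise(doc).encode("utf-8")).hexdigest()
              if digest not in seen:
                  seen.add(digest)
                  unique_docs.append(doc)
          return unique_docs


      if __name__ == "__main__":
          # Hypothetical mini-corpus: the second entry is a whitespace variant of the first.
          corpus = [
              "The quick brown fox jumps over the lazy dog.",
              "The quick  brown fox jumps over the lazy dog.",
              "An entirely different sentence.",
          ]
          print(deduplicate(corpus))  # the whitespace variant is dropped
      ```

      Real-world curation pipelines typically go much further (e.g., fuzzy or near-duplicate detection across very large corpora), but even this minimal pass illustrates the kind of training-data step the paragraph above refers to.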

    • ❓Does an AI-generated output infringe on a copyrighted work of a third party, especially those works “ingested” during the training stage of the AI system? {under development}

      Most probably, yes.

      GenAI outputs are produced by models and systems that are usually trained by processing vast amounts of information. This training process can involve using copyright-protected, third-party content in ways that might infringe the copyright in that content.

      These concerns have led some jurisdictions to consider amendments to existing text and data mining exemptions to cover the training of generative AI systems. Such exemptions permit certain activities that would otherwise constitute copyright infringement.

      Copyright infringement under UK law occurs when there is copying of the ‘whole or substantial part’ of a particular work. If the outputs of an AI tool reproduce specific, recognisable sentences or images from an original work, this could potentially be considered copyright infringement. However, it may be difficult to identify such specific examples of direct copying, particularly when the AI model is trained on extensive datasets from various sources. Generally, well-built AI tools are designed to generate original content without performing literal copying, in part to avoid allegations of copyright infringement. Also, it is usually difficult to trace the training sources of AI models and to identify potential ‘originals’.

      Whilst rights holders may struggle to bring actions for copying by reference solely to the works generated by the AI system (the outputs), they may be able to bring actions for copying of the training data itself (the inputs). In that case, the AI system/tool may infringe copyright at both the input and the output stage.

    • ❓Do any copyright exceptions apply to outputs that might otherwise infringe copyright? {under development}
    • ❓Should novel output generated by AI be protected by copyright? {under development}

      https://creativecommons.org/2020/08/10/no-copyright-protection-for-ai-generated-output/