
Ownership of IP Outputs from ChatGPT, DALL-E and Copilot – Do You Own What You Receive?

With the increase in the availability and use of generative artificial intelligence (AI) tools, it is worth taking a step back to the basics of IP law and considering who owns the IP in the outputs that these tools produce.

This blog will focus on ownership of copyright in those outputs, but will also touch on other IP rights which may be relevant, such as design rights.

English law as it stands – current open questions

Initial ownership of copyright in a work in the UK is governed by section 11 of the Copyright, Designs and Patents Act 1988 (“CDPA”). This provides that the ‘author’ of a work is typically the first owner of copyright (or the employer, if the work is made in the course of employment). In English law, this author must be a real person and not, for example, the AI tool itself.

However, there is currently an open question as to who the author of an output generated by an AI chatbot is, and should be, if anyone at all.

Is there any copyright in the output at all?

Copyright will only subsist in an ‘original’ output, i.e. one that is not copied and is the ‘author’s own intellectual creation’. This test is based on EU case law, which has further explained that the content needs to:

  • reflect “human personality”;
  • result from “free and creative choices” and the “author’s personal touch”; and
  • not be dictated by technical considerations, rules or other constraints.

Where an output has been generated as a result of the AI tool having trained itself based on third party sources (for example the output was generated as a result of the AI tool having already analysed many existing copyright works, such as bodies of text or images already existing online), there is an open question as to whether that output can ever be sufficiently ‘original’ so as to benefit from copyright protection at all.

Even if the output is seen as original or ‘not copied’, there is the question of whether the output is the ‘author’s own intellectual creation’. This could depend on the level of input that the user has and how the AI tool works. For example, a tool which generates an initial image or text as a result of a detailed prompt, where the user then edits that output using various other tools, would be more likely to meet the copyright requirements than if the user simply inputted a basic prompt and did not edit the end output.

However, there are many different factual situations that lie between these positions, and each situation would have to be assessed based on the detailed facts.

Is the output a computer-generated work?

Section 9(3) of the CDPA also provides that, in the case of a literary, dramatic, musical or artistic work which is “computer-generated”, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken. Section 178 of the CDPA defines a computer-generated work as being where the “work is generated by computer in circumstances such that there is no human author of the work”.

The determination of who the author is where there is no human author (e.g. if the user is not found to be the author using the test set out above) is also unclear. For example, would the person who made the ‘arrangements necessary for the creation of the work’ be the designer of the learning algorithm that the AI tool uses, or the person who trained the system?

Of course, there could be many people involved with each step, resulting in there being multiple joint owners of copyright, and again there would need to be a decision made based on the detailed facts in each case. However, there are many difficulties when dealing with jointly owned copyright, relating to exploiting the work, enforcing the rights, and making decisions.

Developments in the US and the EU

What’s happening in the US?

The U.S. Copyright Office has previously determined that particular images generated by an AI image generating tool as a result of prompts inputted by the user were not protected by U.S. copyright law at all, as such outputs lacked sufficient human authorship.

However, the recent Part 2 of the US Copyright Office Report on ‘Copyright and Artificial Intelligence’ provides that the use of AI tools is not necessarily a bar to copyright protection of the output in every case, and so there is again a need for a fact-based determination as to whether there is sufficient human control over, and input into, the output.

Evidence that this subject continues to be a moving target can be seen in two diametrically opposed recent decisions in the US. In January 2025, the US Copyright Office registered an image entitled “A Single Piece of American Cheese” created entirely by AI. Two months later, the US Court of Appeals for the DC Circuit found that an image created by the DABUS AI model was not entitled to copyright protection and that creative works must have human authors.

The US Copyright Office has announced that a Part 3 of AI guidance will be forthcoming, which is intended to address the legal implications of training AI models on copyright works, including licensing considerations and potential liability.

What’s happening in the EU?

In the EU, at present, the position is that protection is only granted to works created by a human author.

As to the treatment of copyright works by AI models, the EU Artificial Intelligence Act came into force on 1 August 2024 and will be largely applicable by August 2026. The Act requires AI systems such as ChatGPT, and the models they are based on, to adhere to transparency requirements. These include the preparation of technical documentation explaining:

  • how the model has been trained;
  • how the model performs; and
  • how it should be used.

The AI models will also need to comply with EU copyright law (in particular, the need to obtain authorisation from content owners, or to enable them to opt out of the text and data mining of their content) and to disseminate “sufficiently detailed” summaries of the content used for training.

Specific chatbot terms and conditions relating to IP ownership

The above default rules on first ownership of copyright are subject to terms and conditions that provide otherwise. However, any such terms cannot determine whether there is any copyright subsisting in the output, as they can only deal with the ownership of such rights if copyright does subsist.

The OpenAI terms of use apply to the use of text generating AI chatbot ChatGPT and text-to-image model DALL-E. The version of the terms of use effective on 11 December 2024 provides:

Ownership of content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.

While this appears to be clear in giving the user any rights in the output, even if OpenAI is the ‘author’/‘first owner’, it only applies to the extent permitted by law. Therefore, if English law ultimately provides that there is no copyright in the output, for example as a result of it failing to meet the originality test, then the user will have no such rights despite this wording in the terms of use.

Interestingly, Microsoft’s FAQs relating to Copilot state that Microsoft does not claim ownership of the outputs, but they also recognise the problems discussed in this blog in relation to determining whether there is any copyright in the outputs at all:

Microsoft doesn’t claim ownership of the output of the service. That said, we don’t make a determination on whether a customer’s output is copyright protected or enforceable against other users. This is because generative AI systems may produce similar responses to similar prompts or queries from multiple customers. Consequently, multiple customers may have or claim rights in content that is the same or substantially similar.

The potential future position in the UK

The UK government ran a consultation at the beginning of the year on copyright and AI, with its key focus on the wider use of copyright materials to train AI models in the UK whilst providing copyright owners with an opt-out to prevent unauthorised use.

The consultation also looked at copyright ownership of AI outputs, with the very real possibility that the government will remove the specific protection for computer-generated works, bringing the UK position closer to that in the EU.

Outside of new legislation, we are waiting for the UK courts to consider the case of Getty Images (US) Inc v Stability AI Ltd on the issue of whether Stability AI has infringed Getty Images’ copyright by:

  1. Downloading and storing copyright works on servers in the UK during the development and training of the Stable Diffusion AI model; and
  2. Making Stable Diffusion available in the UK by which it provides the means for using text and/or image prompts to generate images that infringe Getty Images’ copyright works.

Key takeaways and next steps for UK businesses

If your business is currently, or is considering, using AI tools as part of its daily activities, then you should consider the following points arising from the above review of the current status of the ownership of AI tool outputs:

Audit current use of AI tools within your organisation. It’s important to consider how widely, and which, AI tools are currently used in your company, while also appreciating that this is likely to increase in the coming years.

Consider and communicate the implications of using AI generative tools. It’s important for those who use these tools to understand the implications in terms of ownership of IP in the outputs and what further use may be made of the outputs.

Review your internal policies. Consider your internal policies regarding which AI tools employees are permitted or recommended to use. Such use may, for example, be permitted only for specific tasks – i.e. not for tasks that would ordinarily lead to outputs in which it is important for you to retain all of the IP rights, or where you would not want those outputs to be used as a basis for generating outputs for other users. This is particularly important if the user ‘inputs’ may contain confidential information.

If you are using AI tools in your business, and are unsure about your IP position, our specialist team can help you make sense of the evolving legal framework and advise on the best way forward.

Mitigating Risks of Employees Using AI in the Workplace

Increasing productivity and improving decision-making are two common goals for organisations, and they can be improved through use of artificial intelligence (AI).

What are the benefits of using AI in the workplace?

Embracing AI tools in the workplace can align the wants and needs of employees with those of organisations, by increasing employee autonomy and empowering them to make decisions.

AI can also take over mundane, repetitive tasks, such as data entry or scheduling, and can equip employees with real-time insights and trends, offering data-driven recommendations. This enables employees to make smarter and more confident decisions. By leveraging AI for repetitive tasks, businesses can redirect labour, freeing employees to focus on more creative work, develop their skills and improve their work-life balance. AI can do this by providing companies with tools that boost efficiency, such as intelligent assistants, data analysis software, and collaborative platforms, potentially reducing stress and overtime. It can also reduce operational costs, streamline supply chains, and improve resource allocation. AI can help facilitate personalised customer experiences, paradoxically fostering deeper human connections with stakeholders and boosting engagement and loyalty.

Similarly, cost control remains a constant challenge for companies and tends to go hand in hand with retaining talent. Saving costs in one area of a business can lead to better salaries, higher morale, and reduced turnover.

AI is the modern-day industrial revolution, except that it is not just about machines replacing tasks; it is about enhancing human potential and improving the collective experience of work within society.

What are the risks of using AI?

Well-managed risk provides the safest opportunity for success. Companies should consider the following risks of using AI:

  • Bias and discrimination: AI systems can inadvertently preserve biases present in their training data, leading to unfair outcomes in hiring, promotions, or decision-making.
  • Privacy concerns: Employee and customer personal data, as well as commercially sensitive information might be processed by AI systems, raising questions about data security and proper handling of sensitive information.
  • Inaccuracy and errors: AI systems are not infallible and may produce inaccurate results, especially in complex or dynamic environments.
  • Security risks: AI systems can be susceptible to cyberattacks or manipulation, potentially compromising sensitive workplace operations.
  • IP ownership: Legal frameworks around intellectual property rights (“IPR”) ownership for AI-generated outputs are still evolving, creating uncertainty about who holds the rights. Additionally, generative AI may create content that closely resembles material from its training data, and that is protected by IPR, potentially leading to IPR infringement.

The importance of policies and training

AI is still learning (pun intended) – as are we all. Without effective policies and training in place, employees’ morale may be knocked, and company reputation may be damaged, if, for example, an employee blindly uses the results of an AI tool and those results turn out to be inaccurate or incorrect.

With appropriate guidelines in place, the benefits of AI in the workplace can be realised while its risks are mitigated by the collective efforts of employees and employers. Guidance and training are essential, and need to be evolving and collaborative, to complement similar traits in the AI systems and in the individuals using them.

Implement a workplace policy

In order to mitigate the risk around use of AI, it is best practice for companies to put in place a workplace policy to set out the rules around the use of AI in the workplace.

A workplace AI policy could address issues, such as:

  • The use of AI in recruitment, appraisal and promotion processes,
  • How AI is being used in the company’s own services or products, and
  • Liability arising from the use of AI included in contractual arrangements the company has with third parties.

The workplace policy should go hand in hand with other policies the company already has in place, such as an IT and communications policy, a work devices policy, a data protection policy, and a diversity, equity and inclusion policy.

The policy should try to include a list of permitted and/or prohibited AI applications and guidelines for the use of authorised AI applications. These guidelines should cover the prohibition of the use of business, customer and personal data, and of discriminatory or inappropriate language, in prompts, as well as guidance on the use of third parties’ IP rights, cyber security, and the principles of ethical and responsible use in general.

Ensure employees understand the risks

Similarly, training should be provided on the policies, as well as on the use of AI in the workplace generally. For example, employees should be trained to check the accuracy of any data produced by AI before relying on it, and to understand any restrictions or limitations on AI’s use.

Companies should only collect the minimum information needed to achieve the purpose of the relevant AI tool, and ensure this information is only processed for that limited purpose and is not stored, shared, or reprocessed for any alternative purpose.

Non-compliance

The UK is taking a principles-based approach to the regulation of AI, and there is currently no UK legislation that directly regulates AI. However, this may change following the reintroduction of the Artificial Intelligence (Regulation) Bill into the House of Lords in March 2025.

Although there is no current UK legislation, companies and employees should be aware of the effects of the EU AI Act, which came into force on 1 August 2024. The main goal of the EU AI Act is to prevent the risks potentially posed by AI systems and products from arising in the first place. The EU AI Act will largely come into effect by 2 August 2026; however, certain provisions apply earlier, such as those on prohibited AI practices (2 February 2025) and on general-purpose AI models (2 August 2025). The EU AI Act extends beyond the EU and will affect businesses in the UK that develop or deploy AI systems or products used in the EU, so these businesses must comply with it.

Non-compliance could lead to significant fines under the EU AI Act, which are capped at a percentage of global annual turnover in the previous financial year or a fixed amount (whichever is higher).

Any fines, or the dismissal of employees, may severely impact a company’s brand and reputation, including tarnishing its image in the market.

How can businesses utilise AI safely?

AI offers transformative potential for workplaces, but its benefits come with responsibilities. By prioritising compliance with workplace policies and the EU AI Act, companies can harness AI’s power while safeguarding their reputation, legal standing, and operational integrity. In an era where trust and accountability are paramount, responsible AI use is not just a regulatory requirement—it is a business imperative.

Our experienced team of commercial solicitors can support you if you would like to learn more about AI in your workplace, or if you require further advice on your own AI policies.

UPDATED GUIDANCE ON IMPORTING COMPOSITE PRODUCTS TO GREAT BRITAIN

DEFRA have recently published updated guidance on importing composite products into Great Britain (GB).

By definition, composite products are food products for human consumption that contain both:  

  • processed products of animal origin (POAO) 
  • plant products 

When you import any food that contains products of animal origin (POAO) such as meat, dairy or eggs, you must follow the guidance for importing animal products for human consumption. 

This additional guidance applies to composite products from EU and non-EU countries and explains which composite products are exempt from import controls and what documents you need for your specific product. 

The guidance can be accessed here 

STUDY ANALYSES LOW-DOSE ALLERGY REACTIONS TO AID DEVELOPMENT OF PAL

Researchers from the Netherlands Organisation for Applied Scientific Research (TNO) and the Food Allergy Research and Resource Program (FARRP) of the University of Nebraska-Lincoln have analysed thousands of individual records of food allergy symptoms held in a threshold database, to shed light on the severity of symptoms caused by low doses of allergenic foods.

The study investigated all symptoms recorded in the TNO-FARRP threshold database occurring at doses ≤ the eliciting dose 10 (≤ED10) for the priority allergenic foods for which population threshold dose distributions have been reported. ED10 is the threshold where up to 10 per cent of allergic individuals exhibit objective symptoms.    

Almost all of the symptoms in the dose range up to the ED10, and all symptoms in the dose range up to and including the ED05, were mild or moderate and mainly concerned subjective or objective symptoms of the skin, eyes, nose or oral cavity. To a lesser extent, gastro-intestinal or respiratory symptoms were reported.

The researchers concluded that exposure to doses ≤ ED05 generally results in mild to moderate symptoms for a small subset of allergic individuals.  

Insight into the severity of symptoms at low-dose intakes of protein may support decision-making on, and acceptance of, harmonised reference doses for Precautionary Allergen Labelling (PAL). A risk-based approach to applying PAL is widely considered a solution for improved protection of food-allergic consumers and better-informed food choices.

Read the paper in full here  

RE-EVALUATION OF ACESULFAME K (E 950) AS A FOOD ADDITIVE

The Food Additives and Flavourings Panel of the European Food Safety Authority (EFSA) recently published a scientific opinion on the re-evaluation of acesulfame K (E 950) as a food additive.

Acesulfame K (E 950) is the chemically manufactured compound 6-methyl-1,2,3-oxathiazin-4(3H)-one 2,2-dioxide potassium salt. It is authorised for use in the European Union (EU) in accordance with Regulation (EC) No 1333/2008.

The Panel established an acceptable daily intake (ADI) of 15 mg/kg body weight (bw) per day (for a 70 kg adult, equivalent to 1,050 mg per day), based on the highest dose tested without adverse effects in a chronic toxicity and carcinogenicity study in rats. This revised ADI replaces the ADI of 9 mg/kg bw per day established by the Scientific Committee on Food (SCF).

The Panel noted that the highest estimate of exposure to acesulfame K (E 950) was generally below the ADI in all population groups. The Panel recommended that the European Commission consider revising the EU specifications for acesulfame K (E 950).

You can read the scientific opinion in full in the EFSA Journal  

ONLINE COURSE: ALLERGYWISE® FOR WORKPLACES

Anaphylaxis UK have developed a new online course designed to ensure all staff in a workplace or business are allergy aware, can recognise the signs of a serious allergic reaction (anaphylaxis) and have the confidence to safely manage a reaction. 

The AllergyWise® for Workplaces course includes: 

  • 5 modules (allergic reactions, anaphylaxis, allergen avoidance & risk factors, treating anaphylaxis with adrenaline auto-injectors, allergy management in the workplace) 
  • Links to useful resources, such as a Workplace Anaphylaxis Risk Assessment and Adult Allergy Action Plan 
  • 5 quizzes 
  • Practical scenarios 
  • Optional narration of lessons 
  • Final assessment 
  • Personalised downloadable digital certificate of completion 

The course takes approximately 1 – 1.5 hours to complete, costs just £21 including VAT per person (minimum of 5 staff) and can be purchased here. 

Creating a working environment where employees can safely do their jobs is part of an employer’s duty of care under the Health and Safety at Work Act 1974. 

The most serious allergic reaction (anaphylaxis) usually begins within minutes and can be life-threatening. The more staff that are allergy aware, understand the importance of allergen avoidance, know the signs of an allergic reaction and what to do in an emergency, the safer everyone with allergies in the workplace will be. 

 

Member Benefits

Exclusive Partnership deals on key products and services:

  • BFFF energy deals and rates
  • Vypr member deals and introduction
  • Defib Plus deals
  • Company Shop – membership
  • Mentor – MHE training health check

Exclusive access to networking opportunities and events:

  • Meet the Buyer events (retail & foodservice)
  • Annual Business Conference with networking dinner
  • Specialist H&S and Technical Conferences
  • Special interest groups (packaging, frozen food temperatures)
  • Annual Lunch
  • Awards Night
Sponsorship Packages

We offer a range of sponsorship opportunities to BFFF members across our events throughout the year, with flexible packages that can be tailored to suit your business objectives.
