Update from the Government of Canada on Issues of Copyright in the Age of Generative Artificial Intelligence

05/08/2025

The Government of Canada recently released its What We Heard Report, which summarizes the comments it received as part of its Consultation on Copyright in the Age of Generative Artificial Intelligence (the Consultation). The Consultation called for comments and technical evidence from stakeholders, including those in the creative and technology industries, on copyright policy issues relating to generative AI.

The United States Copyright Office also recently released its second report, Copyright and Artificial Intelligence Part 2: Copyrightability, as part of its ongoing study on copyright and AI. Our comments on this report can be found here.

Background

The What We Heard Report follows the Canadian Government’s third consultation relating to AI and copyright, which was held from October 12, 2023 to January 15, 2024, and sought to better understand the effects of generative AI on copyright and the marketplace. (Our detailed comments on the Consultation can be found here: Canada Launches Consultation on Copyright in the Age of Generative Artificial Intelligence.)

The Consultation sought feedback on three key topics:

  1. the use of copyright-protected works in the training of AI systems, notably for text and data mining (TDM) activities;
  2. authorship and ownership rights related to AI-generated content; and
  3. questions of liability, notably when AI-generated content infringes copyright.

The Consultation also sought engagement with the Indigenous community to better understand some of the unique concerns about the effects of generative AI on their rights and cultural expression.

In total, about 1,000 interested Canadians submitted comments during the Consultation. The government received 103 responses from organizations or expert stakeholders across different industries, which can be found here: Submissions: Consultation on Copyright in the Age of Generative Artificial Intelligence. In addition, a diverse group of 62 stakeholders participated in seven roundtables held during the Consultation.

The What We Heard Report summarizes the feedback received during the Consultation into 11 observations. A summary of certain of those observations is set out below.

Highlights from the What We Heard Report

Text and Data Mining (TDM)

The Consultation paper considered whether the Copyright Act, R.S.C., 1985, c. C-42 (the Act) should clarify when the use of copyright-protected works for AI training and TDM requires authorization from rights holders or falls under an existing exception to copyright infringement, or whether a new exception should be introduced to address TDM. Views on these questions were divided. Overall, the report made three observations with respect to TDM:

Observation 1: Creators oppose the use of their content in AI without consent and compensation

Several stakeholders, including individual creators and cultural industries, argued that the unauthorized use of copyright-protected works for TDM and AI training violates their rights under current copyright law and emphasized the need for rights holders to give consent and receive compensation for such uses. They also expressed the view that there is no existing exception, nor should there be an exception, covering the use of copyright-protected works for TDM purposes.

Most of these submissions considered licensing to be a viable option, emphasizing the necessity of robust licensing frameworks to ensure fair compensation and enforcement mechanisms for creators. There were differing views on the best licensing model, with some advocating for downstream remuneration and collective management. Many stakeholders favoured a voluntary licensing model over a so-called “opt-out” regime.

Observation 2: User groups support clarifications that TDM does not infringe copyright 

Some stakeholders, especially those in the technology industry, generally supported clarifications to copyright law to facilitate TDM and AI training. They expressed support for the introduction of a standalone TDM exception or a broadening of existing exceptions, including fair dealing. Other stakeholders were open to more limited exceptions for TDM, such as for research or public interest purposes. Some stakeholders argued that TDM and AI training on copyright-protected works do not infringe copyright because TDM often does not reproduce the expressive content of works.

Observation 3: Support for greater transparency regarding TDM inputs 

Many stakeholders supported the development of transparency requirements (i.e., recordkeeping and disclosure requirements) surrounding the use of copyright-protected works in the training of AI, which they argued would help rights holders evaluate their legal options, including determining when TDM engages with their works and when they may want to seek compensation for such engagement.

Other stakeholders, mainly from the technology industries, were of the view that such requirements could force them to disclose potentially sensitive data and may not be appropriate in certain circumstances. They also argued that imposing such requirements may harm the competitiveness of the Canadian AI industry relative to other jurisdictions that do not impose such requirements.

The Government of Canada stated that in light of the feedback received, it will consider options to bring more clarity into the marketplace and examine how a balanced copyright approach to TDM activities could support the rights of creators while fostering Canadian innovation in an evolving global context.

Authorship and Ownership of Works Generated by AI

Observation 4: Support for keeping human authorship central to copyright protection

The fourth observation was that participants in the Consultation supported keeping human authorship central to copyright protection. Generally, participants agreed that only AI-generated content with sufficient human contribution should be protected, while also noting that human authors must retain the flexibility to use AI as a tool in their creative process. The Government of Canada identified this as one notable area of consensus.

Some stakeholders suggested that, in managing the copyright registration system, the Canadian Intellectual Property Office should adopt a disclosure requirement for AI-generated elements of works, similar to those of the United States Copyright Office. Some stakeholders commented that it may be appropriate to create a new legal regime specific to AI-generated content but did not specify how that regime might operate.

The Government of Canada stated that it will examine whether any legislative or policy intervention could be made to support this proposition and study its implications.

Infringement and Liability Regarding AI

There were three observations relating to the third policy area, which addressed whether the Act should be amended to provide more clarity in the marketplace on copyright liability regarding AI-generated content:

Observation 5: No consensus about whether existing legal tests and remedies are adequate

Existing legal tests and remedies for copyright infringement have not been applied by courts in the context of AI. Overall, there were no unified calls for specific changes or submissions about specific gaps in the current legislation. Some stakeholders felt that current laws are sufficient while others suggested there should be greater legal guidance or changes to the law, such as greater penalties or a presumption that users should not be liable for the infringing outputs of AI systems.

Observation 6: No consensus about who may be liable for infringing AI-generated content

While there was no agreement across stakeholder groups as to where liability should rest, many stakeholders submitted that liability should fall on the developers and deployers of AI systems. Some legal practitioners and scholars proposed liability for developers, deployers, or users, depending on the facts. Other stakeholders emphasized the importance of limited liability for users of AI systems, which would ensure they are not held responsible for damages or consequences arising from the infringing AI-generated content, provided they have followed relevant laws and regulations.

By contrast, stakeholders in the technology industries raised concerns about imposing liability on developers, arguing that in the cases where copyright infringement does occur, users of AI systems should be liable for prompting systems to infringe while developers should have no or limited liability.

Observation 7: Support for greater transparency to facilitate determining liability

Most stakeholders expressed a desire to see some type of transparency requirements regarding inputs used to train AI, submitting that greater transparency would allow rights holders to exercise existing copyright remedies in Canada, where appropriate.

Engagement with Indigenous Communities

Observation 8: Concerns relating to the use of Indigenous cultural expressions in AI

Observation 8 addressed concerns relating to the use of Indigenous cultural expressions in AI, including concerns relating to underlying policy issues, such as Indigenous data sovereignty and the potential misuse of Indigenous cultural expressions by AI technologies.

Some preliminary suggestions included making efforts to include perspectives from diverse Indigenous communities and non-biased works in AI training datasets, penalizing the inappropriate use of knowledge-related resources, and funding Indigenous artists and arts advocacy organizations that would support Indigenous rights holders in protecting their rights against infringement.

There was also interest in the potential opportunities presented by AI, including prospective economic reconciliation opportunities through ownership and participation in the AI industry and a possible role for AI in efforts to revitalize Indigenous cultures and languages. The Government of Canada noted that engagement with the Indigenous community will continue and will provide a forum to understand and explore these issues further.

Further Commentary

Finally, the What We Heard Report summarized three additional AI-related observations:

Observation 9: Some support for labelling of AI-generated content

Many stakeholders expressed interest in a legal requirement for transparency regarding AI-generated content, including the labelling of mostly or fully AI-generated content to protect rights holders and consumers. Supporters of this idea submitted that labelling AI-generated content could allow consumers to know the source of the content they are viewing or reading, thus allowing them to choose whether to engage with AI-generated content.

Observation 10: Some concern over the use of performers’ likenesses in deepfakes 

Several stakeholders, particularly concerned about deepfakes and their potential to compete in the market with original works, advocated for new “personality rights” that would protect performers’ names, images, or likenesses (e.g., voice, animated images). A handful of stakeholders also advocated for Canada to extend the protection of audiovisual performances in copyright law, notably by providing moral rights in such performances. Proponents suggested that additional protection for audiovisual performances would enable performers to exercise more control over the use of their performances in AI, which could address some of their concerns about deepfakes.

Observation 11: Concerns about negative impacts of AI on job security and unfair competition

A number of participants in the Consultation expressed general concerns about the negative impacts of AI in relation to the labour market. They advocated prioritizing urgent policy initiatives to facilitate workforce protection and modernization. Some called for stronger legal frameworks to support fair compensation for the use of content in TDM activities and greater corporate accountability to prevent potential AI misuse.

Conclusion

While the What We Heard Report offers important observations on the frontier of AI and copyright issues, the Government of Canada has yet to offer concrete plans of action in relation to Canada’s copyright framework in the age of generative AI.

The effects of generative AI on copyright and the marketplace raise many different issues across the globe, including in Canada. The What We Heard Report makes it clear that these issues are ongoing. It will be interesting to watch how the Government of Canada transforms its observations into a course of action that provides clarity to Canadians on this emerging landscape.

The Cassels copyright team is a leader in copyright policy and reform matters, including in relation to AI. We would be pleased to speak with you if you would like to discuss any of the issues raised in the What We Heard Report.

This publication is a general summary of the law. It does not replace legal advice tailored to your specific circumstances.

For more information, please contact the authors of this article or any member of our Intellectual Property Group.