The Landscape of AI Regulation in Canada


The purpose of this update is to provide a high-level overview of the areas of the economy where the use of artificial intelligence (AI) is likely to be important for Canadian businesses in the near future, and of the challenges involved in regulating AI.


Artificial Intelligence and Data Act

As noted in previous Cassels updates,1 the Canadian federal government has introduced legislation, Bill C-27, to modernize Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA)2 and to introduce new legislation that would regulate the use of AI in Canada. Bill C-27 is presently before the House of Commons and has passed second reading as of April 24, 2023. If enacted, Bill C-27 would create the Artificial Intelligence and Data Act (AIDA).3 AIDA introduces a principles-based approach that is focused on ensuring that the use of AI is properly governed and controlled. AIDA is primarily concerned with preventing harm to individuals, damage to property, and economic loss, including by preventing biased outputs of AI systems. AIDA targets “high-impact” AI systems and aims to mitigate risks involved with the use of such AI systems.4 The range of persons that are subject to AIDA compliance is broadly scoped to include developers, providers, and managers of AI systems. As a result, persons developing, utilizing, and commercializing AI systems must be aware of the requirements set out by AIDA and the forthcoming regulations under AIDA.

Canadian Guardrails for Generative AI – Code of Practice

Since tabling Bill C-27, Canada has recognized an urgent need and broad support for the regulation of generative AI systems. Canada intends to make this a priority if Bill C-27 is passed into law. To that end, Canada has recently published a proposed Code of Practice for generative AI systems and has invited stakeholders to provide comments.5 The elements of the Code of Practice include safety, fairness and equity, transparency, human oversight and monitoring, validity and robustness, and accountability. The Code of Practice is intended to ensure that developers, deployers, and operators of generative AI systems can avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada’s forthcoming regulatory regime. Stakeholders who might wish to comment on the proposed Code of Practice are encouraged to contact counsel for additional information.


While many different industries will be impacted by the use and development of AI, this update addresses key industries where we foresee that AI regulation will have some effect.


Employment

There are four main areas of concern that are relevant to both non-unionized and unionized workplaces:

The Displacement of Employees by AI

Some employers may contemplate a partial or complete displacement of their workforce because of the adoption of AI tools.

Provincially regulated employers can generally terminate employees by providing notice or pay in lieu of notice, provided the reasons for termination are not arbitrary or discriminatory. Provincially regulated employers should also be aware of group termination requirements where groups of employees will be terminated.

Different rules apply to federally regulated employers. There are restrictions on the termination of non-management employees who have completed more than 12 months of consecutive employment. Such employees may have a claim for unjust dismissal under the Canada Labour Code, unless the dismissal falls under one of the exemptions, such as the “discontinuation of a function”. An employer seeking to rely on this exemption would need to be able to justify the decision to discontinue the function and why a particular employee was selected over another.

Unionized employers need to take into consideration the language of the applicable collective agreement.

The Use of AI Tools in the Workplace

Many employers are considering the use of AI to improve business processes and outcomes, but employers should consider how this may impact the duties that employees currently perform.

For non-unionized employers, there is a risk that the adoption of AI may have a significant impact on the duties that employees perform:

  • If an employer unilaterally adopts AI that results in a significant change to a fundamental term and condition of an employee’s employment (such as their duties), the employee may claim to have been constructively dismissed and may commence a wrongful dismissal claim and seek damages. To mitigate this risk, employers should consider whether it is necessary to obtain an employee’s consent or to give advance notice of the changes.
  • Employers should also be aware that, if an employee’s job is significantly changed because of the adoption of AI, there is a risk that their employment contract may be unenforceable because the essence or “foundation” of the employment contract no longer applies. This is most likely to become an issue on termination of employment if an employer tries to rely on a termination provision in the employment contract that limits the employee’s severance to the minimums set out in employment standards legislation.

For unionized employers, collective agreements and provincial statutes may require the giving of notice and other procedural steps before implementing technological changes. Employers should review the language of their collective agreements.

Confidentiality, Privacy, and Intellectual Property

Employers should be mindful of potential risks associated with the use of AI tools in the workplace, which might be managed by an appropriate workplace policy. For example, employers should ensure that confidential information belonging to the company, or received by the company from a third party, is not entered into an AI tool in a manner that fails to protect that information. Similar considerations apply to information that might be subject to privacy laws, which are discussed below. It is therefore important to understand an AI tool’s terms of use relating to the treatment of data entered into the tool, as well as any contractual obligations owed to third parties that might affect the company’s use of an AI tool.

Certain uses of an AI tool, particularly a generative AI tool, may also raise copyright issues. Among other things, it may be necessary to consider: (i) whether a company and its employees are entitled to input certain content into an AI tool; (ii) whether the use of content generated by AI might infringe a third party’s copyright; and (iii) questions of ownership of copyright in content that an employee creates using, or with the assistance of, an AI tool.

Human Rights Considerations

Many employers are considering the use of AI tools in employment-related decision-making processes, such as recruitment, hiring, pay equity and performance management. While there certainly is the potential for AI to streamline and improve these processes, there are risks associated with doing so.

From a human rights perspective, the concern is whether AI tools may produce outcomes based on data that differentiate on grounds protected by human rights legislation, such as age, sex, gender, and race. Regardless of whether AI is used in decision-making processes, employers should always be able to explain why a decision was made and support it with objective and non-discriminatory reasons. It may be difficult or impossible to explain a decision that was based on AI-generated information.


Education

The use of AI in the education sector promises efficiency gains but also raises ethical challenges. For example, AI-enabled tools could facilitate the admissions process, grade papers and exams, design curricula, prepare course materials, correct grammar, and assist students with learning disabilities or language fluency issues. Such tools could free up time for educators to provide more individualized instruction to students. On the other hand, students using AI to assist with assignments may not develop critical thinking skills or actually learn the applicable subject matter.

A balance between the “provider and the students”6 using AI within the education system is necessary. Bias and misinformation, which may be inherent in AI systems, are also important considerations for the use of AI within the education system. Educational institutions must develop policies and procedures governing the use of AI programs by teachers and students, including policies addressing the inappropriate use of AI by both educators and students.

Commenters have noted that, with the current lack of proper regulations, the relationship between AI and the education system still needs to be “mapped out and regulated.”7

Organizations selling goods and services to the education sector will also need to be aware of any emerging AI regulations when drafting textbooks or external learning materials that schools use, including whether textbooks are permitted to be integrated or used with AI systems.


Healthcare

A key concern within the healthcare industry will centre on the application of privacy laws to AI.8 Healthcare professionals using AI need to be aware of how sensitive patient information is used within these AI programs. Regulations will need to be drafted for both public and private healthcare providers on this issue. AI is also likely to see continued and expanded use by healthcare providers as a tool for assisting with many aspects of the healthcare industry.9

The Canadian healthcare industry consists of many different sectors and areas, each of which is heavily regulated at the provincial and/or federal level. It has been widely acknowledged that “AI and AI-Assisted technologies are set to transform the pharmaceutical, biologic and medical device industries.”10 New regulatory frameworks will therefore need to be developed to address AI in each sector of the healthcare industry. Each province will need to regulate how AI is used within its healthcare system and for what purposes it will be allowed, and Health Canada will need to address and regulate how AI can be used within each healthcare product sector.

Some sectors are starting to consider this issue. For example, the International Medical Device Regulators Forum, a consortium of national regulators of which Canada is a member, is working on harmonizing international medical device regulation and has issued standard AI terms and definitions for healthcare companies to use when drafting AI regulations.11 Regulatory lawyers will need to be attentive to such definitions when drafting AI regulations within each sector of the healthcare industry.

In the United States, the FDA has published commentary on AI regulation, discussing how AI can be used to develop new products and how regulators should handle data generated by AI.12 Health Canada is currently “developing a new regulatory pathway for the approval of medical devices with adaptive [machine learning].”13 The details and timeline for the release of draft guidance or an industry consultation are currently unknown.

Intellectual Property

The use of AI systems in connection with the authorship, creation, invention, reduction to practice, or adaptation of intellectual property raises a variety of novel issues under Canadian intellectual property laws. Canadian courts have not yet considered whether copyright subsists in AI-generated content or whether an AI system can be named as an inventor of a patent application or the author of a copyrightable work. Questions regarding whether an AI system can create and/or own intellectual property and how such intellectual property may be protected and enforced are quickly becoming more significant as organizations develop and adopt AI systems in connection with content creation and development initiatives.

The Canadian federal government is currently conducting a consultation on the framework for copyright policy to address the challenges posed by the use of AI.14 The issues being considered include the following:15

  • Is the use of copyrighted works to train an AI system an infringement of copyright in those works, and should it be?
  • Is there an “author” of an AI-generated work?
  • Who owns the copyright, if any, in an AI-generated work?
  • If an AI-generated work infringes copyright in an existing work, who, if anyone, might be liable (e.g., the developers of the AI system, the end users, or others)?

In the consultation, the federal government suggested three potential approaches to clarifying the status of author and owner of a copyrightable work where an AI system is involved, which are as follows:

  • Identifying the individual that arranged for the AI system to create the work as the author and owner of the work;
  • Clarifying that copyright (and authorship of the copyright) would only apply to works generated by humans involving some form of human participation; and
  • Creating new rights to apply to AI-generated works.

Privacy and Data Protection

The collection, use, and disclosure of personal information by private organizations in Canada is governed by PIPEDA16 and substantially similar provincial privacy legislation in Alberta, British Columbia, and Quebec.17 AI systems are often trained using personal information and, in our view, organizations involved in the development, use, and commercialization of AI systems must be aware of the data privacy obligations that apply to the personal information used to train, develop, and otherwise exploit those systems.

The main legal ground for the collection, use, and disclosure of personal information in Canada is informed consent, unless an exception to obtaining such consent applies. Canadian privacy laws also require that the purpose of any collection, use, or disclosure of personal information be reasonable in the circumstances, regardless of whether the individual has provided informed consent. In the context of training or using an AI system, obtaining valid consent may not be feasible or possible, particularly when the outputs of the AI system are not clear and are subject to change based on the developing use cases of the system. Canadian privacy laws also give individuals the right to withdraw their consent to the processing of their personal information, and it is unclear how individuals may exercise this right effectively.

AI systems are capable of making decisions about individuals, including with respect to their behaviours, creditworthiness, and suitability for employment, academic, or other opportunities. As a result, an AI system could cause an individual significant harm where it makes an unfair, biased, discriminatory, or incorrect decision about the individual. The outputs of an AI system may also result in an individual losing control over the collection, use, and disclosure of their personal information. Where an AI system makes an unfair, biased, or discriminatory decision about an individual, the system’s use of that individual’s personal information will be considered unlawful.

It is strongly recommended that any organization that wishes to develop, use, or otherwise exploit an AI system that processes personal information conduct a thorough review of, and/or prepare, as applicable, the privacy policies and data practices that apply to the AI system, to ensure that the intended and actual uses of personal information comply with applicable privacy and data protection laws. In some cases, organizations should also consider undertaking privacy impact assessments prior to implementing any AI systems. This is particularly important in connection with the use and/or development of generative AI systems such as ChatGPT, which may use or disclose user inputs, including personal information, when providing outputs to other users.


Insurance

Regulations governing insurance policies may need to be updated to address the use of AI. The use of machine learning and other AI techniques has been increasing within the insurance industry; for example, insurance companies are using these technologies to study individuals’ behaviour and predict their future driving claims.18

Regulatory lawyers may need to work with insurance providers to help them navigate what is permitted under the forthcoming AIDA. Insurance companies using AI to predict individual behaviour may require additional review under forthcoming policies.

Impacts for Canadian Businesses

With the rapid rise of AI, clients will need to be aware of the issues that AI regulation raises for their businesses. The use and development of AI is advancing rapidly, and the government is drafting legislation, such as AIDA, to regulate these developments.

Canadian businesses operating in areas including employment, education, entertainment, art, healthcare, intellectual property, privacy and data protection, and insurance will need to consider emerging AI issues and potential regulations.


1 Bernice Karn, “Be (Artificially) Intelligent – Examining New Guidance for the Artificial Intelligence and Data Act” (2023), online: <cassels.com>.
2 Personal Information Protection and Electronic Documents Act, SC 2000, c 5 [PIPEDA].
3 Bill C-27, Digital Charter Implementation Act, 2022, 1st Sess, 44th Parl, 2021, (second reading 24 April 2023).
4 Karn, supra note 1.
5 Canada, Innovation, Science and Economic Development Canada, Canadian Guardrails for Generative AI – Code of Practice (16 August 2023), online: <ised-isde.canada.ca>.
6 Sandra Leaton Gray, “Artificial Intelligence in schools: Towards a democratic future” (2020) 18:2 London Review of Education 163 at 173, online: <eric.ed.gov>.
7 Ibid at 13.
8 David W Opderbeck, “Artificial Intelligence in Pharmaceuticals, Biologics, and Medical Devices: Present and Future Regulatory Models” (2019) 88:2 Fordham Law Review at 31, online: <fordham.edu>.
9 Mohammed Yousef Shaheen, “Applications of Artificial Intelligence (AI) in healthcare: A review” (2021), online: <scienceopen.com>.
10 Opderbeck, supra note 8 at 2.
11 Artificial Intelligence Medical Devices (AIMD) Working Group, “Machine Learning-enabled Medical Devices: Key Terms and Definitions” (2022), online: <imdrf.org>.
12 US Food & Drug Administration, “Artificial Intelligence in Drug Manufacturing” (2022), online: <www.fda.gov>.
13 Michael Da Silva et al, “Regulating the Safety of Health-Related Artificial Intelligence” (2022) 17:4 Healthcare Policy 63, online: <ncbi.nlm.nih.gov>.
14 Bernice Karn & Eric Mayzel, “The Legal 500 Artificial Intelligence Guide Questions” (2023), online: <legal500.com>.
15 Ibid.
16 PIPEDA, supra note 2.
17 See Personal Information Protection Act, SA 2003, c P-6.5; Freedom of Information and Protection of Privacy Act, RSBC 1996, c 165; Act respecting the protection of personal information in the private sector, CQLR c P-39.1; and, Access to documents held by public bodies and the Protection of personal information, CQLR c A-2.1.
18 Ramnath Balasubramanian, Ari Libarikian & Doug McElhaney, “Insurance 2030 – The impact of AI on the future of insurance” (2021), online: <mckinsey.com>.

This publication is a general summary of the law. It does not replace legal advice tailored to your specific circumstances.

For more information, please contact the authors of this article or any member of our Regulatory team.