As the new year begins, the use of artificial intelligence (AI) in the legal profession continues to be top of mind. On December 20, 2023, the Federal Court of Canada published two new documents providing key guidance on the use of AI in Court proceedings. The first, a Notice to the Parties and the Profession on the Use of Artificial Intelligence in Court Proceedings (the Notice), implements procedural safeguards for the use of generative AI by parties, counsel, and interveners in Court proceedings. The second, the Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence (the Policy), sets out principles and guidelines governing the Federal Court’s own use of AI. These documents follow the Federal Court’s Strategic Plan 2020-2025, which references the Court’s interest in exploring the use of AI to improve the efficiency, fairness, and accessibility of the legal system, and build on consultations with relevant stakeholders.
Notice to the Parties and the Profession on the Use of Artificial Intelligence in Court Proceedings
Although the Notice does not prohibit the use of AI in proceedings, counsel, parties, and interveners are required to provide notice and to consider certain principles if they choose to use AI to prepare documents filed with the Federal Court.
Parties are required to inform the Court and each other if they have used AI-generated content in preparing any document that is submitted to the Court and prepared for the purpose of litigation. The first paragraph of the document must include a declaration that AI was used to generate content in the document. This declaration is not required for Certified Tribunal Records submitted by tribunals or other third-party decision-makers.
The Notice advises parties to consider two guiding principles when using generative AI: caution and “human in the loop”. The Court urges caution when using AI-generated legal references or analysis and emphasizes the importance of using only well-recognized and reliable sources when referring to jurisprudence, statutes, policies, or commentaries. The Court also urges parties and counsel to ensure human involvement and verification of any AI-created content in Court documents.
The Notice applies only to generative AI systems, that is, those capable of generating new content, information, or documents based on user prompts. It does not apply to AI that lacks the ability to generate new content or that only follows pre-set instructions, such as system automation, voice recognition, or document editing programs.
Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence
The Policy provides three guidelines for the Court’s use of AI. First, the Federal Court will not use AI or automated decision-making tools in making judgments and orders, including its determination of the issues, without first engaging in public consultations. Second, the Court will consult relevant stakeholders before implementing any specific use of AI which may have an impact on the profession or the public. Finally, the Court will embrace the following principles when implementing any internal use of AI by law clerks and members of the Court:
- Accountability: The Court will be fully accountable to the public for any potential use of AI in its decision-making function;
- Respect of fundamental rights: The Court will ensure its uses of AI do not undermine judicial independence, access to justice, or fundamental rights, such as the right to a fair hearing;
- Non-discrimination: The Court will ensure that its use of AI does not reproduce or aggravate discrimination;
- Accuracy: For any processing of judicial decisions and data for purely administrative purposes, the Court will use certified or verified sources and data;
- Transparency: The Court will authorize external audits of any AI-assisted data processing methods that it embraces;
- Cybersecurity: The Court will store and manage its data in a secure technological environment that protects the confidentiality, privacy, provenance, and purpose of the data managed; and
- “Human in the loop”: The Court will ensure that members of the Court and their law clerks are aware of the need to verify the results of any AI-generated outputs that they may be inclined to use in their work.
The Policy also identifies potential uses of AI that the Federal Court will begin investigating and piloting, including a new process for translating decisions. These translations will be reviewed by a translator or jurilinguist for accuracy and to keep a “human in the loop.”
The Federal Court is the latest of several Canadian courts to implement policies or provide guidance on the use of AI in legal proceedings. In 2023, the Manitoba, Yukon, and Alberta courts issued similar guidelines, indicating a growing recognition by Canadian courts of the prevalence of generative AI in the legal context, as well as its risks, challenges, and immense potential.