In one of the first decisions of its kind in Canada — and anywhere in the world — a company has been found liable for its AI chatbot providing misinformation.
On February 14, 2024, the British Columbia Civil Resolution Tribunal (the Tribunal) found that Air Canada was liable for negligent misrepresentation after the AI chatbot on Air Canada’s website erroneously advised the plaintiff that he could apply for bereavement fares retroactively.1 Relying on the chatbot’s misinformation, the plaintiff, whose grandmother had passed away in Ontario, booked flights from Vancouver to Toronto and from Toronto to Vancouver at the regular rates and then submitted an application for a partial refund at the bereavement fare rates. Air Canada rejected the plaintiff’s application for the partial refund, citing its “Bereavement Travel” policy, which stipulated that bereavement fares could not be applied retroactively.
In considering the plaintiff’s claim for a refund, the Tribunal was asked to determine whether Air Canada had negligently misrepresented the procedure for claiming bereavement fares and, if so, what remedy was appropriate.
To establish the tort of negligent misrepresentation, a plaintiff must demonstrate that (i) the defendant owed the plaintiff a duty of care, (ii) the defendant’s representation was untrue, inaccurate, or misleading, (iii) the defendant made the representation negligently, (iv) the plaintiff reasonably relied on it, and (v) the plaintiff’s reliance resulted in damages.2
The Tribunal found that Air Canada owed the plaintiff a duty of care in its capacity as a service provider and that Air Canada did not take reasonable care to ensure that its chatbot was accurate. The Tribunal dismissed Air Canada’s argument that, in effect, “the chatbot is a separate legal entity that is responsible for its own actions” and that Air Canada therefore could not “be held liable for information provided by one of its agents, servants, or representatives – including a chatbot.”3 Although the AI chatbot was interactive, the Tribunal found that “it is still just a part of Air Canada’s website,” and thus, “it should be obvious to Air Canada that it is responsible for all the information on its website.”4
The AI chatbot did provide a link to the webpage setting out Air Canada’s “Bereavement Travel” policy, but the Tribunal found that customers should not be expected to double-check information provided on one part of a company’s website against another part of that website.5 In addition, the Tribunal held that Air Canada failed to explain why the webpage setting out the “Bereavement Travel” policy was “inherently more trustworthy than its chatbot.”6
After determining that the plaintiff had successfully made out his claim of negligent misrepresentation, the Tribunal held that the plaintiff must be “put in the position they would have been in if the misrepresentation had not been made,” which required Air Canada to honour the bereavement fare rates for the plaintiff retroactively.7 Air Canada was also responsible for pre-judgment and post-judgment interest and the reimbursement of the plaintiff’s fees in relation to the Tribunal proceeding.8
Key Takeaways
As large language models (LLMs), such as ChatGPT, have further popularized the use of chatbots for various purposes, organizations must fully consider the implications of using these technologies. It is essential that an organization monitor the way its chatbots are trained and deployed, including where the organization has engaged a third-party service provider to train, provide, support, or deploy the chatbot.
In Moffatt, Air Canada’s failure to include a disclaimer about the accuracy of its chatbot’s responses, or to make clear that its written policies took precedence, was a key factor in the successful claim for negligent misrepresentation. As a result, wherever possible, an organization should consider contractually limiting its liability in relation to the use of, or reliance upon, information provided by an AI chatbot. In seeking to limit liability, an organization should ensure that its disclaimers and limitations are lawful under applicable consumer protection legislation. If an organization is not able to fully limit its liability, it should take steps to allocate a commercially reasonable degree of risk to the third-party service providers involved in the training, provision, support, or deployment of the chatbot.
In addition, an organization should be aware of the requirements that apply to certain high-impact AI systems under Canada’s proposed Artificial Intelligence and Data Act (AIDA), which may capture certain chatbots depending on their purpose. For example, one proposed class of high-impact AI system is a system relating to determinations of whether to provide services to an individual, the type or cost of services to be provided, or the prioritization of services to be provided to individuals. Further, under the recent AIDA amendments, an LLM may be captured by the definition of “general-purpose system” if it is an “artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes, and activities not contemplated during the system’s development.”9 A similar set of principles would govern both high-impact AI systems and general-purpose AI systems under AIDA.
Moffatt is likely the first word in a much larger conversation about AI and liability. Cassels will continue to monitor the legal developments in AI, including updates on AIDA. If you have any questions about the use of AI and how it may affect you or your organization, please contact the authors or any member of the Cassels Information Technology & Data Privacy team.
_____________________________
1 Moffatt v Air Canada, 2024 BCCRT 149.
2 Queen v Cognos Inc, 1993 CanLII 146 (SCC).
3 Supra note 1 at para 27.
4 Ibid.
5 Ibid at para 28.
6 Ibid.
7 Ibid at para 33.
8 Ibid at paras 44-45.
9 Canada. Innovation, Science and Economic Development Canada, Letter to the Chair of the Standing Committee on Industry and Technology on Bill C-27 (November 28, 2023) at p 9.