---
title: "Air Canada Lost in Court to Its Own AI — What Custom Guardrails Will Save You"
slug: "air-canada-lost-in-court-to-its-own-ai-what-custom-guardrails-will-save-you"
locale: "en"
canonical: "https://ireadcustomer.com/en/blog/air-canada-lost-in-court-to-its-own-ai-what-custom-guardrails-will-save-you"
markdown_url: "https://ireadcustomer.com/en/blog/air-canada-lost-in-court-to-its-own-ai-what-custom-guardrails-will-save-you.md"
published: "2026-05-07"
updated: "2026-05-07"
author: "iReadCustomer Team"
description: "Air Canada was forced to pay damages after its customer service chatbot invented a fake refund policy. Here is how to build the custom AI guardrails that prevent your business from facing the same legal disaster."
quick_answer: "A tribunal ruled Air Canada liable for damages after its customer service AI chatbot hallucinated a fake refund policy, establishing that companies are legally responsible for their chatbots. Businesses must implement retrieval grounding and human-in-the-loop guardrails to avoid similar legal exposure."
categories: []
tags: 
  - "ai hallucination"
  - "chatbot liability"
  - "retrieval augmented generation"
  - "ai guardrails"
  - "customer service automation"
source_urls: []
faq:
  - question: "What happened in the Moffatt v. Air Canada chatbot lawsuit?"
    answer: "A customer sued Air Canada after its website chatbot invented a fake bereavement refund policy. The tribunal ruled against the airline, stating that a company is completely legally liable for the information provided by the AI tools hosted on its website."
  - question: "Why do customer service chatbots hallucinate fake policies?"
    answer: "Standard off-the-shelf AI models are designed to be conversational and helpful, not purely factual. They predict the most logical next word based on broad training data. If they don't know your specific internal policy, they will confidently guess or invent one to satisfy the user's question."
  - question: "What is retrieval grounding in AI chatbots?"
    answer: "Retrieval grounding is a technical guardrail where the AI is disconnected from its general internet knowledge and restricted to only reading and answering from a specific set of verified corporate documents, like official PDF return policies or pricing sheets."
  - question: "How can businesses legally protect themselves when using AI for customer support?"
    answer: "Businesses must implement strict guardrails: use retrieval grounding to force citations from official policies, program verifiable refusals so the bot knows when to say no, and require a human-in-the-loop to approve any actions involving money, contracts, or refunds."
robots: "noindex, follow"
---

# Air Canada Lost in Court to Its Own AI — What Custom Guardrails Will Save You

Air Canada was forced to pay damages after its customer service chatbot invented a fake refund policy. Here is how to build the custom AI guardrails that prevent your business from facing the same legal disaster.

In November 2022, Jake Moffatt was scrambling to book a last-minute flight after his grandmother passed away. He visited the Air Canada website and asked the customer support chatbot if the airline offered bereavement rates.

The chatbot confidently replied that Moffatt could book a regular ticket immediately and request a refund for the price difference within 90 days. Relieved, he used his credit card to book the flight on the spot.

But the chatbot lied. Air Canada's actual bereavement policy explicitly states that discounts cannot be applied retroactively. When Moffatt asked for his money back, the airline refused, blaming the AI for providing incorrect information.

The dispute ended up before British Columbia's Civil Resolution Tribunal, a small claims body, and the February 2024 decision in Moffatt v. Air Canada created a landmark legal precedent that every business owner running a public AI tool needs to understand immediately.

## The Landmark Ruling: You Are Your Chatbot

When Air Canada found itself in court, its legal team attempted a baffling defense strategy. They argued that the chatbot was a separate legal entity responsible for its own actions, effectively trying to sever the company from the technology it hosted.

The tribunal member dismissed this argument outright. The ruling held that the chatbot is simply an interactive component of Air Canada's website, meaning the airline is entirely liable for the information it provides.

**The court ruled that a company cannot separate itself from the tools it puts on its website, meaning you are fully liable for every word your AI invents.**

Imagine you own a specialized dental clinic. You install a $20-per-month generic AI chatbot widget on your homepage to answer late-night inquiries. A prospective patient asks if a complex implant procedure is fully covered by their specific insurance provider. The AI, wanting to be helpful, says "Yes, absolutely."

When that patient walks in, gets the surgery, and the insurance denies the claim, your clinic can be held liable for the chatbot's promise. This is not a hypothetical worst-case scenario; it is the legal reality the Air Canada ruling established for anyone deploying unmonitored AI.

## Why Off-the-Shelf AI is a Ticking Time Bomb

The fundamental flaw with generic ChatGPT wrappers and basic SaaS customer support bots is their underlying architecture. Standard language models are designed to be helpful conversationalists, and they absolutely hate saying "I don't know."

This behavior leads directly to hallucinations (when an AI system invents false information but presents it as factual). AI models predict the next most logical word in a sequence based on vast internet data; they do not inherently search a factual database like a traditional search engine does.

An off-the-shelf bot does not understand the nuanced, unwritten rules of your specific business. It reads general internet consensus and tries to stitch together an answer that sounds professional and polite, often at the expense of accuracy.

Bolting a raw, unfiltered AI model onto your customer-facing website without engineering strict technical limitations is corporate negligence.

## Step 1: Lock Down the Facts (Retrieval Grounding)

The most effective way to stop an AI from hallucinating policies is to restrict its brain. You must stop the chatbot from relying on its generalized training data and force it to act like an open-book test taker.

Engineers call this retrieval grounding, most often implemented through retrieval-augmented generation, or RAG: the AI is restricted to reading from a specific set of verified company documents. To implement this, you connect the AI exclusively to a secure folder containing your actual PDF return policies, warranty terms, and pricing sheets.

You then enforce a strict prompt command: If the answer cannot be found directly in those provided documents, the AI is not allowed to formulate a response. Furthermore, you must engineer the bot to provide exact citations, linking the user back to the specific paragraph in your official terms of service.
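
To make this concrete, here is a minimal Python sketch of the grounding gate, assuming your verified policies live as plain-text files in a single folder. The keyword-overlap retrieval stands in for the vector search a production RAG pipeline would use, and names like `load_policies` and `grounded_prompt` are illustrative, not any vendor's API.

```python
from pathlib import Path

REFUSAL = (
    "I can't find that in our official policy documents. "
    "Let me connect you with a human agent."
)

def load_policies(folder: str) -> dict:
    """Read every verified policy document in the folder into memory."""
    return {p.name: p.read_text() for p in Path(folder).glob("*.txt")}

def retrieve(question: str, policies: dict, min_overlap: int = 3):
    """Naive keyword retrieval: return the best-matching document name,
    or None if nothing overlaps enough with the question."""
    q_terms = set(question.lower().split())
    best_name, best_score = None, 0
    for name, text in policies.items():
        score = len(q_terms & set(text.lower().split()))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= min_overlap else None

def grounded_prompt(question: str, policies: dict) -> str:
    """Build a prompt that forbids answers outside the retrieved document
    and demands a citation back to it."""
    source = retrieve(question, policies)
    if source is None:
        # Nothing relevant found: refuse instead of letting the model guess.
        return REFUSAL
    return (
        "Answer ONLY from the document below. If the answer is not in it, "
        f"reply exactly: '{REFUSAL}'\n"
        f"Always cite the source document by name ({source}).\n\n"
        f"--- {source} ---\n{policies[source]}\n\n"
        f"Customer question: {question}"
    )
```

Notice that the refusal for unmatched questions fires in your own code, before the model is ever called, so no clever customer phrasing can talk the bot into answering from its general training data.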

## Step 2: Teach Your AI How to Say "No" (Refusal Patterns)

Because base AI models are eager to please, you must explicitly program them to be stubborn when necessary. This means building verifiable refusal patterns into the software.

You need to map out the exact topics your AI is forbidden from resolving. This list should include issuing refunds, granting discounts, diagnosing technical hardware failures, or answering legal compliance questions.

When the AI detects a user asking about one of these restricted topics, it must trigger a hard-coded refusal script. It should cleanly state, "I cannot process refund requests directly. Let me transfer you to a human support agent who can handle this for you right now."
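
Here is a sketch of what that hard-coded refusal layer can look like, again in plain Python. The keyword triggers are deliberately simple stand-ins for the intent classifier a production system would use; the fixed refusal scripts are the actual guardrail.

```python
import re

# Hard-coded refusal scripts for topics the bot must never resolve itself.
REFUSALS = {
    "refund": (
        "I cannot process refund requests directly. Let me transfer you "
        "to a human support agent who can handle this for you right now."
    ),
    "discount": "I'm not able to grant discounts. A human agent will follow up with you.",
    "legal": "I can't answer legal or compliance questions. Routing you to our support team.",
}

# Simple keyword triggers; swap in a proper intent classifier in production.
TRIGGERS = {
    "refund": re.compile(r"\b(refund|money back|chargeback)\b", re.IGNORECASE),
    "discount": re.compile(r"\b(discount|coupon|price match)\b", re.IGNORECASE),
    "legal": re.compile(r"\b(lawsuit|liability|legal|compliance)\b", re.IGNORECASE),
}

def check_refusal(message: str):
    """Return the fixed refusal script if the message hits a restricted
    topic, or None if the message may proceed to the language model."""
    for topic, pattern in TRIGGERS.items():
        if pattern.search(message):
            return REFUSALS[topic]
    return None

# The check runs BEFORE the model ever sees the message, so the scripted
# refusal cannot be negotiated away by a persuasive prompt.
print(check_refusal("Can I get my money back for flight AC8101?"))
```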

## Step 3: Require a Human Check for the Money

The golden rule of modern AI deployment is simple: AI should draft, but humans must send. Any process that touches corporate funds, legal contracts, or customer health must include a human-in-the-loop.

If a customer wants to claim a warranty on a broken product, the AI can absolutely do the heavy lifting. It can collect their order number, ask for photos of the damage, verify the purchase date, and summarize the entire case in a clean dashboard.

But the final button that actually authorizes sending a replacement product or issuing a credit card refund must be clicked by a human employee. This workflow eliminates roughly 80% of the tedious data-collection work while keeping every decision that carries legal weight in accountable human hands.
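
A minimal sketch of that gate, with illustrative names throughout: the AI fills in the case record, but the payment function refuses to execute unless a named human has signed off.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class WarrantyCase:
    """Everything the AI is allowed to do: collect and summarize."""
    order_id: str
    summary: str                       # drafted by the AI from the chat
    photos: list = field(default_factory=list)
    approved_by: Optional[str] = None  # must be a human employee, never the bot
    approved_at: Optional[datetime] = None

    def approve(self, employee: str) -> None:
        """Called only when a human clicks the approval button."""
        self.approved_by = employee
        self.approved_at = datetime.now()

def issue_refund(case: WarrantyCase) -> None:
    """Hard gate: the payment step refuses to run without human sign-off."""
    if case.approved_by is None:
        raise PermissionError("Refunds require approval by a human employee.")
    print(f"Refund issued for order {case.order_id}, approved by {case.approved_by}.")

# The AI drafts the case; a human authorizes the money.
case = WarrantyCase(order_id="A-1042", summary="Cracked screen, purchased 2024-03-01")
case.approve("j.doe")  # the human employee's click
issue_refund(case)
```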

## Audit Your AI Exposure Tomorrow

The Air Canada ruling is just the first drop in a coming wave of consumer litigation against poorly implemented AI. The companies that survive this transition won't be the ones hiding from AI—they will be the ones that engineer guardrails to control it.

Tomorrow morning, ask your IT lead or customer success manager exactly what documents your website chatbot is allowed to read. Ask them for the specific list of questions the bot is programmed to refuse.

Your AI chatbot is a corporate representative. If you would not allow a junior intern on their first day to independently authorize a $1,000 refund, you cannot allow a generic AI to do it either.
