
Law Society of Ontario AI Guidance: What It Means for Your Practice

LSO AI guidance requires competence, confidentiality, and data security. Learn compliance requirements for Canadian legal AI use.

By Augure
Canadian technology and compliance

The Law Society of Ontario's AI guidance creates specific professional obligations for lawyers using artificial intelligence tools. Under Rules 3.1-2 (competence) and 3.3-1 (confidentiality), you must understand your AI system's capabilities, protect client information, and maintain professional judgment over all outputs. The guidance requires "reasonable measures" to safeguard confidential information — a standard that US-hosted AI platforms may not meet due to CLOUD Act exposure.


Understanding your compliance obligations

The LSO's position on AI centers on three existing Rules of Professional Conduct that apply with new force to artificial intelligence use.

Rule 3.1-2 requires competent service, which now includes understanding AI limitations and maintaining professional oversight. You cannot delegate professional judgment to an AI system, regardless of its sophistication.

Rule 3.3-1 governs confidential information protection. The commentary specifies that confidentiality extends to all client information, whether privileged or not. This creates a higher bar than general privacy laws.

"A lawyer must take reasonable measures to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information concerning the lawyer's client" — LSO Rules of Professional Conduct, Rule 3.3-1

Rule 6.1-1 requires direct supervision of non-licensee assistants. AI systems fall under this supervision requirement, meaning you remain responsible for all AI-generated work product.

For Quebec practitioners, Law 25 section 93 adds Privacy Impact Assessment requirements for AI systems processing personal data of Quebec residents, with potential penalties reaching C$25 million or 4% of global revenue.


Cross-border data risks and the CLOUD Act

US-hosted AI platforms create specific compliance risks under LSO confidentiality requirements. The Clarifying Lawful Overseas Use of Data (CLOUD) Act allows US authorities to compel disclosure of data held by US companies, regardless of where that data is stored.

This creates a fundamental conflict with solicitor-client privilege. When you upload client information to platforms like OpenAI's ChatGPT, Microsoft Copilot, or Google's Gemini, that data may be subject to compelled disclosure to US authorities without notice to you or your client.

"The CLOUD Act's extraterritorial reach means Canadian lawyers using US-hosted AI platforms cannot guarantee solicitor-client privilege protection, as data may be accessed by foreign authorities without Canadian court oversight or client notification."

The LSO has not explicitly prohibited US-hosted AI, but the "reasonable measures" standard in Rule 3.3-1 requires evaluating cross-border risks. Many practitioners interpret this as requiring Canadian-hosted solutions for confidential client work.

Consider a practical scenario: you're reviewing a merger agreement using ChatGPT. The client names, transaction details, and strategic information are now subject to potential US government access. Even if the AI provider claims not to store your data, the CLOUD Act reaches any data within a US provider's possession, custody, or control, wherever that data is stored.


Competence requirements for AI use

Rule 3.1-2's competence standard applies directly to AI tool selection and use. The LSO expects lawyers to understand their AI system's training data, capabilities, and limitations before relying on its outputs.

This means you must be able to explain to a client how your AI system works and why you chose it for their matter. Generic claims about "advanced AI" or "machine learning" don't satisfy this standard.

"Competent service requires the lawyer to have sufficient knowledge, skill, thoroughness and preparation reasonably necessary for the representation" — LSO Rules, Rule 3.1-2

For contract review, you need to know whether your AI was trained on Canadian law, understands Quebec's civil law system, and can identify jurisdiction-specific clauses. For litigation support, you must understand how the AI handles precedent analysis and case law citation.

Augure's Ossington 3 model, for example, is trained specifically on Canadian legal frameworks and includes Quebec law in its training corpus. This targeted approach helps satisfy the competence requirement by providing tools built for Canadian legal practice.


Documentation and supervision requirements

The LSO guidance implies documentation requirements for AI use, though it does not state them explicitly. Best practices include maintaining records of:

  • Which AI tools you used for each matter
  • How you verified AI-generated outputs
  • What client consent you obtained for AI use
  • How you supervised AI-assisted work

Rule 6.1-1's supervision requirements mean you cannot treat AI as a "black box." You must maintain sufficient oversight to take responsibility for all work product, whether human or AI-generated.

This supervision requirement extends to articling students and paralegals using AI tools. You remain responsible for their AI use and must ensure they understand the same competence and confidentiality obligations.


Client consent and disclosure obligations

While the LSO doesn't explicitly require client consent for AI use, the confidentiality obligations in Rule 3.3-1 suggest disclosure is prudent, particularly for cross-border AI platforms.

A reasonable-client test applies: would a reasonable client want to know that their confidential information is being processed by US-hosted AI systems subject to foreign government access?

Some practitioners include AI use clauses in retainer agreements, specifying which tools they may use and under what circumstances. Others obtain matter-specific consent for AI assistance on sensitive files.

For government clients or regulated industries, explicit consent becomes more critical. Provincial governments, federal contractors, and financial services clients often have their own data residency requirements that may conflict with US-hosted AI use.

Under PIPEDA Principle 3 (Consent), organizations must obtain meaningful consent for collection, use, and disclosure of personal information. For Quebec residents, Law 25 section 14 requires express consent for AI processing of personal data.


Insurance and risk management implications

Professional liability insurers are beginning to adjust coverage based on AI use. Some policies now include specific exclusions for breaches involving unauthorized AI use or inadequate confidentiality protection.

The key risk is coverage denial for claims involving confidential information breaches. If you use an AI platform that doesn't meet LSO confidentiality standards, your insurer may argue that the resulting breach was foreseeable and preventable.

"Insurance coverage depends on taking reasonable precautions to protect client information. Using AI platforms that expose confidential data to foreign government access may not qualify as reasonable precautions under current policy language"

Risk management requires evaluating AI platforms against the same standards you'd apply to any third-party service provider. This includes reviewing data processing agreements, understanding data residency, and confirming compliance with Canadian privacy laws.


Practical compliance strategies

Meeting LSO AI guidance requires systematic evaluation of your AI tools and processes. Start with an inventory of current AI use in your practice, including both formal tools and ad hoc use by staff.

Evaluate each AI platform against three criteria:

  • Data residency and sovereignty (Canadian vs. cross-border)
  • Training data relevance to Canadian law
  • Transparency about capabilities and limitations
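The three-criterion screen above can be run as a simple tally across your tool inventory. The platform entries and field names below are illustrative placeholders, not assessments of real products:

```python
# Minimal sketch of scoring platforms against the three criteria above;
# the entries are hypothetical examples for illustration only.
CRITERIA = (
    "canadian_data_residency",
    "canadian_law_training",
    "transparent_limitations",
)

def evaluate(platform: dict) -> tuple[str, int]:
    """Return the platform name and how many of the criteria it meets."""
    return platform["name"], sum(bool(platform.get(c)) for c in CRITERIA)

platforms = [
    {"name": "US-hosted general chatbot", "canadian_data_residency": False,
     "canadian_law_training": False, "transparent_limitations": True},
    {"name": "Canadian-hosted legal model", "canadian_data_residency": True,
     "canadian_law_training": True, "transparent_limitations": True},
]

for name, score in sorted(map(evaluate, platforms), key=lambda r: -r[1]):
    print(f"{name}: meets {score}/3 criteria")
```

A platform that fails the data-residency criterion for confidential client work is the one to flag first, since that is where the Rule 3.3-1 exposure lies.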

For document review and contract analysis, platforms like Augure provide Canadian-hosted alternatives specifically designed for legal practice, with 100% Canadian data residency eliminating CLOUD Act exposure risks. The platform's Knowledge Base feature allows secure document analysis without cross-border data transfer.

Consider implementing AI use policies that specify approved tools, required supervision, and documentation standards. Train all staff on confidentiality requirements and the risks of using unauthorized AI platforms.


Industry-specific considerations

Different practice areas face varying AI compliance challenges under LSO guidance.

Corporate lawyers handling M&A transactions deal with highly confidential commercial information that requires strict data residency controls. Cross-border AI use creates disclosure risks that could trigger securities law violations or breach confidentiality agreements.

Family lawyers manage sensitive personal information protected by both solicitor-client privilege and privacy legislation. AI platforms must comply with both LSO requirements and applicable privacy laws, including the federally proposed Consumer Privacy Protection Act (Bill C-27).

Criminal defence lawyers face the highest confidentiality standards, as inadvertent disclosure could compromise client defence strategies or violate Charter rights. The Crown's potential access to AI-processed defence materials through US government data requests creates particular risks.

For Quebec practitioners, the Civil Code's distinct legal framework requires AI systems trained on Quebec civil law, not just common law principles. Law 25 section 93's Privacy Impact Assessment requirement applies to any AI processing Quebec residents' personal data.


Monitoring regulatory developments

The LSO's AI guidance continues evolving as technology and regulatory frameworks develop. The Law Society regularly updates its position based on member feedback and emerging risks.

Recent developments include increased scrutiny of AI hallucination risks in legal research and document drafting. The LSO emphasizes that lawyers cannot rely on AI-generated legal citations or precedent analysis without independent verification.

The Federation of Law Societies of Canada is working toward national AI guidance that could harmonize provincial approaches. Monitor both LSO updates and federal developments that could affect AI compliance requirements.

Privacy legislation such as the federally proposed Consumer Privacy Protection Act (Bill C-27) may create additional AI compliance obligations beyond LSO professional conduct rules. Quebec's Law 25 already provides for penalties of up to C$25 million for privacy violations involving AI systems.


Getting started with compliant AI

Meeting LSO AI guidance starts with choosing platforms designed for Canadian legal practice. Look for providers that offer explicit data residency guarantees, understand Canadian legal frameworks, and provide transparency about AI capabilities and limitations.

Augure provides a sovereign AI platform specifically built for Canadian legal professionals, with 100% Canadian data residency and models trained on Canadian law. The platform's chat and knowledge base features support common legal workflows while maintaining compliance with LSO confidentiality requirements.

Visit augureai.ca to explore how Canadian-hosted AI can support your practice while meeting Law Society of Ontario professional obligations.


About Augure

Augure is a sovereign AI platform for regulated Canadian organizations. Chat, knowledge base, and compliance tools — all running on Canadian infrastructure.
