AI for Legal

Law Society of Ontario guidance on AI use: A practical breakdown

LSO's AI guidance for lawyers: competence, confidentiality, and compliance requirements. What Ontario lawyers need to know about AI tools and ethics.

By Augure · Canadian technology and compliance

The Law Society of Ontario released guidance on artificial intelligence use by lawyers in 2024, clarifying professional obligations rather than imposing blanket restrictions. The guidance centers on existing Rules of Professional Conduct — particularly competence (Rule 3.1-2) and confidentiality (Rule 3.3-1) — applied to AI tools. Ontario lawyers can use AI but must understand data flows, maintain professional responsibility for outputs, and ensure client confidentiality remains protected.

Core professional obligations under LSO guidance

The LSO's approach anchors AI use in established professional conduct rules rather than creating new regulatory frameworks. This means lawyers already know the standards — the challenge is applying them to AI technology.

Competence requirements under Rule 3.1-2 extend to understanding AI tools sufficiently to use them responsibly. This includes knowing the tool's limitations, potential for errors, and appropriate use cases. A lawyer using AI for contract review must understand what the system can and cannot detect.

Confidentiality obligations under Rule 3.3-1 require lawyers to protect client information when using AI systems. This means understanding data residency, retention policies, and access controls. If an AI tool processes client documents on US servers subject to the CLOUD Act, that creates potential confidentiality risks.

The LSO guidance emphasizes that lawyers remain fully responsible for work product, regardless of AI assistance. Professional judgment cannot be delegated to algorithms, and Rule 3.1-2 competence standards apply to all client deliverables whether AI-assisted or not.


Data residency and cross-border considerations

Ontario lawyers face particular challenges with AI tools hosted outside Canada. Many popular AI platforms operate under US jurisdiction, creating potential conflicts with confidentiality obligations.

US CLOUD Act implications allow US authorities to compel disclosure of data held by US companies, even when stored abroad. Under 18 U.S.C. § 2713, this reach extends to Canadian client data processed by American AI providers. For lawyers handling sensitive client matters — particularly those involving government entities or cross-border disputes — this creates material risk.

PIPEDA's accountability principle (Schedule 1, Principle 4.1.3) keeps organizations responsible for personal information transferred to third-party processors, including providers outside Canada, and federal Privacy Commissioner guidance calls for transparency with individuals about cross-border processing. Law firms using AI tools must consider whether their retainer disclosures and client consents cover international data transfers to AI providers.

Law 25 requirements in Quebec add provincial privacy obligations. Section 17 requires organizations to conduct privacy impact assessments for cross-border transfers, while section 161 imposes penalties up to C$25 million or 4% of worldwide revenue for non-compliance. Section 93 specifically mandates Privacy Impact Assessments for AI systems processing personal information of Quebec residents.

Platforms like Augure address these concerns by maintaining 100% Canadian data residency with servers in Canadian data centers and no US corporate parent or investor relationships, eliminating CLOUD Act exposure entirely.


Practical compliance steps for law firms

The LSO guidance translates to specific operational requirements for law firms implementing AI tools.

Vendor due diligence must include data flow mapping, security certifications, and jurisdiction analysis. Firms should document where data goes, how long it's retained, and who can access it. This documentation supports both LSO compliance and client transparency.

Client consent protocols should address AI use explicitly. While general retainer agreements may cover technology use broadly, specific disclosure builds stronger client relationships and reduces professional liability exposure.

Staff training requirements ensure all users understand both the AI tool's capabilities and professional conduct obligations. A paralegal using AI for document review needs the same confidentiality training as lawyers using the system.

Under PIPEDA Principle 4.1.4 and Law 25 section 3.5, Canadian law firms are accountable for personal information protection throughout AI processing. This means conducting due diligence on AI vendors' data practices, security measures, and cross-border data transfer policies before implementation.


Specific use cases and risk assessment

Different AI applications carry different risk profiles under LSO guidance.

Document review and analysis presents moderate risk when properly supervised. AI can flag potentially relevant documents or extract key contract terms, but lawyers must verify results and make final determinations. Tools designed for legal use — like contract review features — typically offer better accuracy than general-purpose AI.

Legal research and analysis requires careful verification of AI outputs. AI systems can hallucinate case citations or misstate legal principles. The LSO guidance emphasizes that lawyers cannot rely on AI research without independent verification.

Client communication using AI must maintain confidentiality standards. AI-powered email drafting or client intake systems must meet the same data protection standards as other client communication tools.

Litigation support involving AI requires disclosure considerations. Some jurisdictions require disclosure of AI use in document production or legal brief preparation. Ontario lawyers should monitor evolving disclosure requirements.


Financial and professional liability considerations

LSO guidance doesn't eliminate professional liability for AI-assisted work — it clarifies that existing liability standards apply.

Malpractice insurance coverage may require notification of AI tool use. Some insurers offer specific coverage for AI-related claims, while others may exclude certain AI applications. Firms should confirm coverage before implementing AI tools.

Client billing transparency supports both ethical compliance and client relationships. While lawyers aren't required to discount AI-assisted work, clear communication about AI use prevents billing disputes and maintains trust.

Quality control systems become more critical with AI assistance. The LSO guidance emphasizes that AI doesn't reduce the lawyer's responsibility for work quality — it may actually increase the need for systematic review processes.


Technical safeguards and compliance practices

Implementing AI tools responsibly requires both technical and procedural safeguards aligned with LSO guidance.

Data encryption in transit and at rest protects client confidentiality as documents move to and from AI systems. True end-to-end encryption is rarely compatible with cloud AI processing — the provider must decrypt data to run models on it — so firms should examine where decryption occurs, who holds the keys, and whether the provider's staff can access plaintext.

Access controls limit AI tool use to authorized personnel with appropriate training. Role-based permissions ensure that only qualified users can access AI features for sensitive client matters.
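Role-based permissions of this kind reduce to a small allow-list check. This is a hypothetical sketch — the role and feature names are invented, and a real firm would tie this to its identity or document-management system:

```python
# Hypothetical role-to-feature mapping; names are illustrative only.
PERMISSIONS = {
    "lawyer":    {"ai_research", "ai_contract_review", "ai_drafting"},
    "paralegal": {"ai_contract_review"},
    "admin":     set(),  # administrative staff: no AI access to client files
}

def can_use(role: str, feature: str) -> bool:
    """Check whether a role is authorized for a given AI feature."""
    return feature in PERMISSIONS.get(role, set())

print(can_use("paralegal", "ai_contract_review"))  # True
print(can_use("paralegal", "ai_drafting"))         # False
```

The design choice worth noting is the default deny: an unknown role gets an empty permission set, so new staff have no AI access until someone consciously grants it.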

Audit trails document AI use for quality control and professional liability defense. Knowing when, how, and by whom AI tools were used supports both client service and risk management.
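An audit trail needs little more than who, when, which tool, and which matter. A minimal sketch using only the standard library — the JSON-lines format and field names are assumptions, not an LSO prescription:

```python
import json
from datetime import datetime, timezone

def log_ai_use(path: str, user: str, tool: str, matter_id: str, action: str) -> dict:
    """Append one AI-usage record to an append-only JSON-lines audit file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "matter_id": matter_id,  # links the record back to the client matter
        "action": action,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_ai_use("ai_audit.jsonl", "jdoe", "contract-review",
                   "M-2024-113", "clause extraction")
```

An append-only log like this answers the "when, how, and by whom" question directly, which is what makes it useful for both quality control and liability defense.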

Data retention policies must align with both professional conduct obligations and privacy law requirements. Some AI tools retain training data indefinitely, creating ongoing confidentiality risks.

Sovereign AI platforms built specifically for Canadian legal professionals, like Augure, incorporate these safeguards by design rather than as add-on features.


Monitoring regulatory developments

The LSO guidance represents current thinking but acknowledges that AI regulation continues evolving.

Professional conduct rule updates may address AI use more specifically as technology adoption increases. The LSO has indicated ongoing monitoring of AI developments and potential rule clarifications.

Privacy law changes at federal and provincial levels affect AI tool selection and use. Bill C-27's proposed Artificial Intelligence and Data Act (AIDA) could impose additional requirements on AI system use by professional service providers, including mandatory risk assessments under section 10.

Cross-jurisdictional considerations matter for law firms practicing across provinces. Different Law Societies may develop varying AI guidance, requiring firms to meet the most restrictive applicable standards.

International developments in legal AI regulation provide insight into potential Canadian changes. The EU's AI Act and various US state initiatives offer models for potential Canadian regulatory approaches.


Building sustainable AI practices

Long-term success with AI tools requires embedding compliance into firm culture rather than treating it as a technical checklist.

Regular training updates ensure staff understand both evolving AI capabilities and unchanged professional obligations. Technology changes faster than professional conduct rules, but both matter for compliant practice.

Client communication standards should address AI use proactively. Clients increasingly ask about AI use in legal services — clear policies and communication protocols support both transparency and business development.

Vendor relationship management requires ongoing monitoring of AI tool providers. Changes in ownership, data policies, or security practices can affect compliance status even after initial implementation.

The LSO guidance provides a framework for responsible AI adoption rather than a barrier to innovation. Ontario lawyers who understand their professional obligations can implement AI tools effectively while maintaining the highest standards of client service and professional conduct.

Canadian law firms must treat AI vendor selection as a fundamental compliance decision. Under LSO Rule 3.3-1 and PIPEDA Principle 4.1.3, lawyers remain accountable for client data protection regardless of third-party AI processing. This means selecting providers with verifiable Canadian data residency and robust security certifications.

For law firms seeking AI solutions designed specifically for Canadian legal practice, platforms like Augure offer comprehensive compliance features built into the architecture rather than retrofitted afterward. Learn more about sovereign AI for Canadian legal professionals at augureai.ca.


About Augure

Augure is a sovereign AI platform for regulated Canadian organizations. Chat, knowledge base, and compliance tools — all running on Canadian infrastructure.
