
How to document AI compliance for Law 25

Essential documentation requirements for AI systems under Quebec's Law 25. Privacy impact assessments, data mapping, and audit trails explained.

By Augure

Law 25 requires specific documentation when your organization deploys AI systems that process personal information of Quebec residents. The key documents include privacy impact assessments mandated under Section 93 for high-risk AI applications, comprehensive data mapping per Section 3.5 showing information flows, consent records where required under Section 14, and detailed audit trails of automated decisions per Section 12. Organizations face penalties up to C$25 million or 4% of worldwide turnover for non-compliance with Quebec's provincial privacy framework.


Privacy impact assessment requirements

Every AI system that processes personal information in high-risk scenarios triggers Law 25's privacy impact assessment requirements under Section 93. This mandatory assessment must be completed before deployment and submitted to the Commission d'accès à l'information du Québec (CAI).

Your PIA must address automated decision-making capabilities under Section 12, profiling activities, and any processing that could significantly impact individuals. Document the AI's purpose, data sources, processing logic, and safeguards against discriminatory outcomes as required by Section 3.3.

"Law 25 Section 93 mandates privacy impact assessments for AI systems processing personal information where there are high risks to privacy, including automated decision-making affecting legal rights, with penalties up to C$25 million for organizations that fail to comply."

Include technical specifications about model training, inference processes, and data retention periods. The CAI expects detailed analysis of privacy risks and mitigation measures, not superficial compliance exercises.

For AI systems affecting legal rights or similarly significant outcomes, document your human oversight procedures. Law 25 Section 12 gives individuals the right to obtain human intervention in automated decisions affecting them.


Data mapping and information flows

Comprehensive data mapping forms the foundation of Law 25 compliance for AI systems under Section 3.5. Document every personal information touchpoint from collection through processing to deletion or anonymization.

Your data map must identify information sources, processing purposes under Section 13, storage locations, and third-party sharing arrangements. For AI systems, this includes training data, real-time inputs, and outputs that might contain personal information.
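These elements can be kept in a structured register. The sketch below is a minimal illustration in Python; the field names are assumptions for this example, not terminology from the Act, and the sample values are invented. A real register would live in your records-management system, but the same fields apply.

```python
from dataclasses import dataclass

@dataclass
class DataMapEntry:
    """One personal-information touchpoint in an AI system (illustrative fields only)."""
    source: str               # where the information is collected
    purpose: str              # stated processing purpose (cf. Section 13)
    storage_location: str     # physical and jurisdictional storage location
    third_parties: list[str]  # sharing arrangements, incl. cross-border (Sections 17-19)
    retention_days: int       # retention period before deletion or anonymization
    used_in_training: bool    # whether the data feeds model training
    used_in_inference: bool   # whether the data is processed at inference time

# Hypothetical entry for a support chatbot's conversation logs
entry = DataMapEntry(
    source="customer support chat widget",
    purpose="answer product questions",
    storage_location="Montreal data centre",
    third_parties=[],
    retention_days=365,
    used_in_training=False,
    used_in_inference=True,
)
```

Distinguishing training use from inference use in each entry matters for AI systems, since the two often carry different retention and consent implications.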

Map cross-border data transfers per Sections 17-19 with particular attention to adequacy assessments. Law 25's approach to international transfers resembles the GDPR's, requiring additional safeguards for transfers to jurisdictions without adequate protection.

"Data mapping under Law 25 Section 3.5 must trace personal information flows through AI systems from initial collection under Section 6 through model training, inference processing, and eventual deletion per Section 8's retention requirements."

Document data minimization practices specific to your AI implementation per Section 5. Show how you limit collection to information necessary for the stated purpose and avoid function creep in AI capabilities.

Include retention schedules that align with Law 25's purpose limitation principle under Section 7. Personal information used for AI training may have different retention requirements than operational data.


Consent documentation and legal basis

Law 25 requires clear documentation of your legal basis for AI processing under Section 12. Consent per Section 14 remains the primary mechanism for most AI applications, but other legal bases may apply in specific contexts.

When relying on consent, document the information provided to individuals about AI processing under Section 13. This includes the automated nature of decisions, logic involved, and potential consequences. Generic privacy notices don't satisfy Law 25's transparency requirements.

For AI systems involving minors under 14, document enhanced consent procedures per Section 15. Quebec law requires particular attention to children's capacity to understand automated processing implications.

"Consent for AI processing under Law 25 Section 14 must be specific, informed, and freely given. Section 13 requires clear information about automated decision-making logic and consequences: general privacy notices mentioning 'automated processing' are insufficient."

Maintain withdrawal mechanisms and document how consent revocation under Section 16 affects AI operations. Individuals retain the right to withdraw consent even if this limits AI functionality.
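A consent record that supports withdrawal can be sketched as follows. This is an illustrative structure, not a prescribed format; the field names and the `withdraw` mechanics are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Illustrative consent record for AI processing (field names are assumptions)."""
    subject_id: str
    scope: str                # the specific AI processing consented to (Section 14)
    disclosure_text: str      # information shown at collection time (Section 13)
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Record revocation; downstream AI processing must stop for this scope.
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        return self.withdrawn_at is None

# Hypothetical record: consent granted, then revoked under Section 16
record = ConsentRecord(
    subject_id="user-123",
    scope="automated credit pre-screening",
    disclosure_text="Decisions use automated scoring; you may request human review.",
    granted_at=datetime.now(timezone.utc),
)
record.withdraw()
```

Keeping the disclosure text alongside the consent timestamp lets you later demonstrate exactly what the individual was told when consent was obtained.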

Document assessments for other legal bases where applicable under Section 12. B2B AI applications or fraud prevention systems might qualify under legitimate interest provisions, but these require a documented balancing against individual rights.


Audit trails and decision logging

Law 25 Section 12 establishes individual rights regarding automated decision-making, creating documentation requirements for AI systems that affect people. Maintain audit trails that enable meaningful explanations of AI outputs per Section 13's transparency obligations.

Your audit logs should capture input data, processing parameters, and decision rationale in human-understandable format. Technical logs alone don't satisfy Section 13's requirement to provide meaningful information about processing.

Document the human oversight procedures for contested decisions under Section 12. When someone exercises their right to human intervention, you need records showing qualified human review, not just technical validation.

Store audit information securely with access controls per Section 23's security requirements. These records often contain personal information requiring protection under Law 25's security obligations.

For high-stakes AI decisions affecting employment, credit, or similar significant outcomes, maintain detailed decision trails. Include the factors considered, weightings applied, and alternative outcomes analyzed to support Section 12 rights.


Vendor and processor agreements

AI systems often involve third-party processors requiring specific documentation under Sections 18-20 of Law 25. Your processor agreements must address AI-specific risks and compliance obligations.

Document data processing instructions that account for AI model training, inference processing, and output handling per Section 19. Generic cloud service agreements typically don't cover AI processing adequately.

For AI platforms like Augure that maintain Canadian data residency, document the jurisdiction-specific protections in your processor relationships. Augure's Canadian infrastructure eliminates cross-border transfer complexities under Law 25 Sections 17-19.

Include security requirements appropriate for AI workloads per Section 23. This covers model protection, training data security, and output confidentiality beyond standard data processing safeguards.

Document sub-processor relationships in AI supply chains per Section 20. Many AI services involve multiple vendors for different processing stages, each requiring adequate processor agreements.

Maintain records of processor compliance monitoring, including security assessments and audit rights exercised. Section 18 holds controllers responsible for processor compliance.


Individual rights response procedures

Document your procedures for handling individual rights requests under Sections 27-39 related to AI systems. Law 25 provides specific rights regarding automated decision-making that require operational procedures.

Maintain processes for providing meaningful information about AI logic when individuals request explanation of automated decisions under Section 13. This goes beyond technical documentation to human-understandable explanations.

Document your human review procedures for contested AI decisions per Section 12. Law 25 requires human intervention capabilities, not just automated appeals processes.

"Individual rights responses for AI systems under Law 25 Sections 12-13 must provide meaningful explanations of decision logic and genuine human review options. Section 27 establishes the right to access, while Section 12 specifically addresses automated decision-making rights."

Create templates and procedures for handling AI-related access requests under Section 27. Individuals can request information about profiling and automated decision-making affecting them.
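If your audit trail stores decisions as structured records (as in the logging sketch earlier), an access-request response template can be generated from them. This is a simplified illustration under that assumption; the wording and record fields are invented for the example.

```python
def build_access_response(decisions: list[dict]) -> str:
    """Assemble a plain-language summary of automated decisions for an access request."""
    lines = ["Automated decisions involving your personal information:"]
    for d in decisions:
        lines.append(
            f"- {d['timestamp']}: outcome '{d['outcome']}' because {d['rationale']}. "
            "You may request human review of this decision."
        )
    return "\n".join(lines)

# Hypothetical logged decision for one requester
sample = [{
    "timestamp": "2025-01-15T10:00:00Z",
    "outcome": "application declined",
    "rationale": "reported income below the program threshold",
}]
response = build_access_response(sample)
```

Including the human-review option in every response line bakes the Section 12 right into the template rather than leaving it to ad hoc wording.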

Document your processes for AI decision correction under Section 31 or deletion under Section 29 when individuals successfully challenge automated outcomes. This requires coordination between legal rights and technical operations teams.


Ongoing compliance monitoring

Under Section 3.1's accountability principle, Law 25 compliance for AI isn't a one-time documentation exercise. Establish ongoing monitoring procedures that track regulatory compliance as AI systems evolve.

Document your change management processes for AI systems. Significant functionality changes may require updated privacy impact assessments under Section 93 and revised individual notifications per Section 25.
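One way to operationalize this is a change-review gate that flags modifications likely to need a refreshed assessment. The triggers below are illustrative assumptions drawn from the documentation themes in this article, not an official checklist.

```python
def pia_update_required(change: dict) -> bool:
    """Flag AI system changes that likely warrant a refreshed PIA (illustrative triggers)."""
    triggers = (
        change.get("new_data_categories", False),       # new personal-information inputs
        change.get("new_automated_decisions", False),   # decisions newly automated
        change.get("new_cross_border_transfer", False), # transfer to a new jurisdiction
        change.get("purpose_changed", False),           # stated processing purpose shifted
    )
    return any(triggers)

# A purpose change trips the gate; a purely cosmetic change does not
needs_review = pia_update_required({"purpose_changed": True})
no_review = pia_update_required({"ui_copy_updated": True})
```

Wiring a check like this into your release process creates the audit evidence that changes were actually assessed, not just that a policy existed.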

Maintain records of compliance training for staff involved in AI operations. Section 3.1's accountability principle requires demonstrable compliance programs, not just documentation.

Schedule regular reviews of AI system documentation to ensure accuracy as processing evolves. Machine learning systems that adapt over time need corresponding documentation updates to maintain Section 8 compliance.

For organizations using platforms like Augure for AI operations, document how the platform's built-in Law 25 compliance features support your overall compliance program. This includes data residency confirmations and processing logs that demonstrate Section 3.1 accountability.


Proper documentation transforms Law 25 compliance from reactive scrambling to systematic risk management. Organizations that invest in comprehensive AI documentation find CAI inquiries manageable rather than existential threats to operations.

The documentation requirements align with sound AI governance practices that reduce operational risks beyond regulatory compliance. Start with privacy impact assessments under Section 93 and data mapping per Section 3.5 — these foundational documents inform all other compliance efforts.


About Augure

Augure is a sovereign AI platform for regulated Canadian organizations. Chat, knowledge base, and compliance tools — all running on Canadian infrastructure.
