On June 15, 2022, the Minister of Innovation, Science and Industry, François-Philippe Champagne, introduced Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts (or the Digital Charter Implementation Act, 2022).

The greater part of Bill C-27 is the successor to the former Bill C-11, tabled in 2020, and reintroduces in modified form the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA).

C-27 also introduces new proposed legislation that is the subject of this bulletin: the Artificial Intelligence and Data Act (AIDA). AIDA aims to regulate the development and use of artificial intelligence systems (AI or AI systems) in the private sector.

AIDA, if enacted, would be entirely new legislation in the Canadian context: no provincial or territorial governments have yet tabled bills that seek to regulate private sector development and use of AI. Moreover, among major market jurisdictions, Canada is second only to the European Union in formally introducing such draft legislation for consideration.

What you need to know

This article provides an overview of the key aspects of AIDA and its impact on Canadian businesses. As more fully detailed herein, this new regime governing AI systems would include the following:
  • A principles-based, as opposed to a rights-based framework:
    • AIDA is focused on ensuring proper governance and control of AI systems and does not create any new individual rights.
    • AIDA is concerned with preventing (i) physical or psychological harm to an individual, damage to an individual’s property, and economic loss to an individual, and (ii) biased output (output of AI systems that adversely differentiates without justification on one or more of the prohibited grounds of discrimination set out in the Canadian Human Rights Act).
  • Broad scope:
    • The definition of AI system provided in AIDA is broad, presumably to address the wide range of risks to individual rights that the use of AI systems presents.
    • The range of persons obliged to abide by AIDA’s requirements is broadly scoped to include designers, developers, providers and managers of AI systems.
    • Although AIDA does not expressly apply to intra-provincial development and use of AI systems, the typically cross-jurisdictional nature of AI development and use suggests that the federal government intends AIDA to govern substantially all development and use of AI in Canada.
  • Assessment, mitigation and monitoring obligations:
    • Designers, developers, providers and managers of AI systems will need to undertake assessments to determine whether those systems are “high-impact”.
    • High-impact systems will require mitigation measures and ongoing monitoring for compliance, to be undertaken by designers, developers, providers and managers of AI systems.
  • Transparency:
    • AIDA creates a nuanced transparency regime for high-impact systems.
    • Persons making AI systems available for use will be required to publish a plain-language explanation of the intended use of the AI system, and the decisions, recommendations or predictions that it is intended to make.
    • Persons managing the operations of an AI system (e.g., organizations putting it to use) will be required to publish a plain-language explanation of the actual use of the AI system, and the decisions, recommendations or predictions that it makes.
  • Obligations in relation to anonymized data:
    • Designers, developers, providers and managers of AI systems that use anonymized data and persons that make anonymized data available for the purpose of designing, developing or using an AI system must establish measures with respect to (a) the manner in which the data is anonymized and (b) the use or management of anonymized data.
    • Note that these obligations are general and not limited to high-impact systems.
  • Obligations to report to the Minister:
    • A person who is responsible for a high-impact system must notify the Minister if the use of the system results or is likely to result in material (a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual.
  • Protections for confidential business information:
    • Despite stringent transparency requirements for designers and operators and the Minister’s powers to publish information about AI systems, AIDA contains many provisions designed to protect commercial interests in trade secrets.
  • Ministerial powers and enforcement tools:
    • The Minister may delegate its powers apart from the power to make regulations, and may designate a senior official of the department to be the Artificial Intelligence and Data Commissioner.
    • The Minister or its delegate will have order-making powers that may be enforced as orders of the Federal Court.
    • AIDA proposes to introduce by regulation an administrative monetary penalty scheme, with the potential that the power to apply such penalties will be granted directly to the newly created Artificial Intelligence and Data Commissioner.
    • Fines for most offences under AIDA can go up to a maximum of C$10,000,000 or, if greater, the amount corresponding to 3 per cent of the organization’s global gross revenues in its previous fiscal year. Fines for certain offences can climb to a maximum of C$25,000,000 or, if greater, the amount corresponding to 5 per cent of the organization’s global gross revenues in its previous fiscal year (a “greater of” structure illustrated in the sketch following this list).
    • AIDA also contains provisions enabling the Minister to publish information about AI systems that the Minister believes could give rise to a serious risk of imminent harm.
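
To put the penalty structure in concrete terms, the following short Python sketch (our own illustration, not part of the bill) computes the maximum potential fine as the greater of the fixed cap and the applicable percentage of an organization’s global gross revenues. Only the dollar thresholds and percentages come from the figures cited above; the function name, parameters and example revenue figure are assumptions made purely for illustration.

    # Illustrative sketch only: maximum fine contemplated by AIDA, using the
    # "greater of a fixed cap or a percentage of global gross revenues"
    # structure described above. Not legal advice.
    def max_aida_fine(global_gross_revenue_cad: float, serious_offence: bool = False) -> float:
        """Return the maximum potential fine in Canadian dollars."""
        if serious_offence:
            fixed_cap, revenue_share = 25_000_000, 0.05  # certain offences
        else:
            fixed_cap, revenue_share = 10_000_000, 0.03  # most offences
        return max(fixed_cap, revenue_share * global_gross_revenue_cad)

    # Example: a (hypothetical) organization with C$1 billion in global gross revenues
    print(max_aida_fine(1_000_000_000))                        # 30000000.0
    print(max_aida_fine(1_000_000_000, serious_offence=True))  # 50000000.0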

Introduction

The federal government has indicated that AIDA aims to foster trust in the development and deployment of AI systems by focusing on governance of “high-impact” systems, establishing a new AI and Data Commissioner to monitor compliance, and providing for criminal penalties where data is obtained unlawfully for AI development or where the reckless deployment of AI poses a risk of serious harm.

Although AIDA may be inspired to some extent by the EU’s 2021 proposal for an Artificial Intelligence Act (the EU Proposed AI Regulation),1 particularly insofar as both take a risk-based approach, it also differs from the EU Proposed AI Regulation in various respects. For example, the EU Proposed AI Regulation applies to both the public and private sectors, and creates certain exceptions for public sector uses of AI (in particular, law enforcement). AIDA, on the other hand, excludes all Canadian federal government institutions, and this exclusion may be extended by regulation to other federal or provincial departments or agencies (AIDA s. 3). In addition, the EU Proposed AI Regulation sets out several specific prohibited AI practices and establishes criteria for determining whether an AI system presents high, limited or minimal risk. AIDA, by contrast, sets out no specific prohibited practices and appears to contemplate a distinction only between high-impact systems and all other systems.

AIDA is also considerably less elaborate than the EU Proposed AI Regulation, although this relative simplicity may be more apparent than real: AIDA proposes to leave many salient matters to regulation, which, given the subject matter of the proposed law, may turn out to be quite complex.

Our general impression of AIDA is that it provides a flexible framework, given that many details will be set out in regulations, but also creates significant responsibilities both for developers of AI algorithms and models and for providers of data, which may have an unintended chilling effect on development and innovation. We look forward to the debates, deliberations in committee, and consultations that we expect to commence when Parliament resumes its activities this Fall.

Definition of AI system…
