The regulation of Artificial Intelligence (“AI”) has been a hot topic for years. This discussion has evolved from whether to regulate AI, to when regulation should be introduced, to how and what aspects of AI should be regulated. This evolution reflects the complex relationship between humans, society and technology. Of particular interest is the growing trend over the last two years to regulate the use of automated decision-making systems in Canada.
In 2019, the federal government adopted the Directive on Automated Decision-Making (“DADM”) and an accompanying algorithmic impact assessment (“AIA”) tool to guide the use of automated decision-making at the federal level. More recently, the federal government introduced a major bill to reform Canada’s private sector privacy law, Bill C-11, which would enact the Consumer Privacy Protection Act (“CPPA”). If passed, the CPPA would specifically regulate automated decision-making systems. The tabling of the CPPA came on the heels of a report by the Privacy Commissioner of Canada with recommendations on regulating AI (you can read our commentary on the Privacy Commissioner’s report here).
Scope: What constitutes an automated decision system in Canada?
The DADM and the CPPA share the same definition of an automated decision system: “any technology that assists or replaces the judgement of a human decision-maker using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning, and neural nets.” This definition is broad, capturing a wide range of computer systems.
While the DADM and the CPPA share a definition, the two instruments differ in important ways in terms of scope. Specifically, the DADM includes a number of exemptions that limit the Directive’s applicability:
1) the DADM only applies to systems that “provide external services.” Internal-use applications such as talent analytics, fraud detection, and predictive auditing would therefore not trigger any requirements for government departments;
2) existing systems are grandfathered: the DADM excludes systems already adopted by the federal government, including automated decision systems “operating in test environments;” and
3) the DADM does not apply to national security systems, such as algorithms that may be used by federal law enforcement agencies.
There are no such exemptions in the proposed CPPA. As drafted, the CPPA does not provide any limiting parameters on applicability. Rather, under the proposed CPPA, the use of an automated decision system “to assist or replace human judgment” would trigger a right to an explanation whenever the system is used “to make a prediction, recommendation or decision about the individual” [s. 63(3) of the CPPA]. In addition, organizations would be required to keep a general account of any automated decision system that “could have a significant impact” on an individual [s. 62(2)(c) of the CPPA].
Compare, for example, Article 22 of Europe’s General Data Protection Regulation (“GDPR”), which is widely cited as the leading regulation of automated decision systems. Article 22 applies only to decisions made “solely” by an automated system, and only where the decision “produces legal effects concerning him or her or similarly significantly affects him or her.” The CPPA thus goes beyond the GDPR, capturing a much broader range of automated decision-making systems.
Obligations: What requirements are associated with the use of automated decision systems?
When regulating the use of automated decision-making systems by government departments, Canada’s DADM adopts a risk-based approach, consistent with proposed regulatory schemes being developed in the US and the EU. The DADM specifically requires that departments complete an AIA, which it defines as “a framework to help institutions better understand and reduce the risks associated with Automated Decision-Making Systems and to provide the appropriate governance, oversight and reporting/auditing requirements that best match the type of application being designed” (see Appendix A).
The CPPA, on the other hand, does not adopt a risk-based approach; the applicability of its provisions is not scaled to risk level. It does not matter whether an organization is deploying an automated call-routing chatbot or a biometric targeted-advertising platform; the CPPA proposes one-size-fits-all requirements for automated decision-making systems:
• upon request, organizations must provide an explanation of the prediction, recommendation or decision [s. 63(3) of the CPPA];
• upon request, organizations must provide an explanation of how the personal information used to make the prediction, recommendation or decision was obtained [s. 63(3) of the CPPA]; and
• organizations must make available a general account of their use of any automated decision-making systems [s. 62(2)(c) of the CPPA].
The CPPA’s focus is clearly on algorithmic transparency and explainability, which aligns with the recommendations of the Privacy Commissioner of Canada. However, the CPPA does not provide any specific guidance as to what constitutes an “explanation” or how an organization should go about discharging this obligation. This is somewhat surprising, since the DADM does give federal government departments some direction on explainability.
According to the AIA, systems at the lowest impact level require only that “a meaningful explanation be provided for common decision results”; this “can include providing the explanation via a frequently asked question section on a website.” Systems at the second impact level must provide a “meaningful explanation on request,” but only for “decisions that resulted in the denial of a benefit, a service, or other regulatory action.” Systems at the final and highest impact level require that a “meaningful explanation be provided with any decision that resulted in the denial of a benefit, a service, or other regulatory action.”
While “meaningful explanation” is not defined in the DADM, the government’s approach to explainability is clearly grounded in scaled requirements and focused on assessing adverse algorithmic impact. We will be watching whether the CPPA is amended to adopt a similar risk-based approach to automated decision-making systems.
Getting ready: What does this mean for your business?…