About the Authors
Policy Lead on Technology, Cybersecurity and Democracy, Ryerson Leadership Lab and Cybersecure Policy Exchange at Ryerson University
Yuan (“You-anne”) Stevens is a legal and policy expert focused on cybersecurity, privacy, and human rights. She brings years of international experience to her role at Ryerson University as Policy Lead on Technology, Cybersecurity and Democracy, having examined the impacts of technology on vulnerable populations in Canada, the US, and Germany. She has been conducting research on artificial intelligence since 2017 and has worked at Harvard University’s Berkman Klein Center for Internet & Society. She is a research affiliate at the Centre for Media, Technology and Democracy and at the Data & Society Research Institute. When she’s not examining the role of technology in creating dystopian futures in Canada and abroad, you can find her gardening on her balcony, taking apart hardware around her house, or keeping up with family members in Newfoundland.
2019-21 McConnell Professor of Practice, Centre for Interdisciplinary Research on Montreal, McGill University
Ana Brandusescu is a researcher, advisor, and facilitator working to make the use of data and technology more publicly accountable. Currently, she is examining public investments in artificial intelligence (AI) to better understand their socio-economic implications. Ana is co-leading “AI for the Rest of Us”, a research project to develop a new model of public (civic) engagement in government decision-making processes that are being automated with AI. She also serves on Canada’s Multi-Stakeholder Forum on Open Government. Previously, at the World Wide Web Foundation, Ana led research on the implementation and impact of open government data in 115 countries, co-chaired the Measurement and Accountability Group of the international Open Data Charter, and co-led gender equality research and advocacy.
About the Series
In this essay series, Facial Recognition Governance, McGill’s Centre for Media, Technology and Democracy explores the policy, legal, and ethical issues raised by facial recognition technologies around the world.
Facial recognition technology has expanded into various domains of public life, including surveillance, policing, education, and employment, despite known risks ranging from identity-based discrimination and data privacy infringements to opaque decision-making. While policymakers around the world have proposed laws and regulations for biometric technology, the governance landscape remains deeply divided across jurisdictions. This project evaluates the challenges posed by the adoption of facial recognition in high-stakes public contexts in order to inform the coordination and reform of global policies and to safeguard the publics on whom this technology is used.
I. Introduction
1.1 The Clearview AI scandal
It was only in January 2020 that the public learned that law enforcement was using facial recognition technology provided by Clearview AI[1]. Canadians learned that over 30 police departments in Canada had used free trials of Clearview AI’s software without public knowledge[2]. By that time, Canadian police had already run 3,400 searches across 150 accounts[3]. Privacy commissioners across the country took notice in February, beginning an investigation into Clearview AI for using Canadians’ personal information without consent[4]. It remains unknown exactly which Canadian police departments used Clearview AI; no government agency or department has published this list. By February 2021, four privacy commissioners across Canada had ruled that Clearview AI engaged in “mass surveillance” by scraping photos of people residing in Canada and refusing to delete the images it had collected[5].
Founded in 2017, Clearview AI claims to be a “web search for faces”[6]. The company has surreptitiously amassed a database of more than three billion public images from the open web. Clearview AI’s software is used by the company’s clients around the world, the majority of which are governments and private companies[7]. The images are scraped from public accounts on websites such as social media platforms[8]. The company culls these images with little regard for applicable laws, without the permission of the websites from which it scrapes, and without the consent of the people in the images[9].
Facial recognition technology (FRT) is used to identify faces in digital images or videos[10]. The technology grew out of “computer vision”, a field of study seeking to replicate the human process of observing patterns in images and videos[11]. Similar “recognition” technology can be applied to fingerprints, genetic material, heartbeats, and many other types of biometric or bodily data[12]. FRT can be deployed in real time, allowing for instantaneous identification. Police forces have had access to such technology for several years with inadequate transparency and accountability, and multiple, overlapping weaknesses in public procurement practices allow companies like Clearview AI to operate without their deployments being publicly tracked.
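At its core, most FRT follows the same pipeline: detect a face, convert it into a numerical encoding, and compare that encoding against a database of known faces. The sketch below illustrates this generic pipeline using the open-source face_recognition Python library; the file names are hypothetical, and this is an illustration of the general technique, not Clearview AI’s (or any vendor’s) actual system.

```python
# A minimal sketch of the generic FRT pipeline (detect -> encode -> match),
# using the open-source face_recognition library. File names are hypothetical.
import face_recognition

# Encode the face in a known, labelled reference photo as a 128-dimensional
# vector (assumes the photo contains at least one detectable face).
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Detect and encode every face in an unlabelled probe image, e.g. a CCTV frame.
probe_image = face_recognition.load_image_file("probe_frame.jpg")
probe_locations = face_recognition.face_locations(probe_image)
probe_encodings = face_recognition.face_encodings(probe_image, probe_locations)

# "Identification" is a nearest-neighbour comparison between encodings;
# the tolerance threshold trades false matches against false non-matches.
for location, encoding in zip(probe_locations, probe_encodings):
    is_match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"Face at {location}: match={is_match}, distance={distance:.2f}")
```

Because each comparison reduces to a distance computation between two small vectors, searches like this can be run against billions of stored images in moments, which is what makes databases at Clearview AI’s scale operationally feasible.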
The Clearview AI scandal highlights how tech companies take advantage of weak privacy and procurement requirements in Canada to increase capital and power under the pretext of law enforcement protection and safety. This is just one example of how public investments are used by government to support the innovation economy with AI technologies[13]. In this essay, we demonstrate that the Clearview AI scandal reveals major existing ‘vulnerabilities’ in Canada’s privacy law and public sector tech procurement practices. We draw on the analytic framework of sociotechnical security, which identifies flaws in social systems that are entangled with technological systems, with the goal of protecting specific communities from the harm enabled by these flaws[14]. We identify two primary vulnerabilities when it comes to facial recognition software: (i) the omission of biometric information from Canada’s privacy law, a weakness that prizes organizational efficiency over the protection of dignity, and (ii) the weak or absent transparency requirements that apply when public bodies in Canada enter into contracts with private companies, which tech companies such as Clearview AI can easily exploit.
1.2 Why Facial Recognition?
FRT is significant because it automates and speeds up a human process that would otherwise take an immense amount of time.
What’s the benefit of software like FRT that replicates human decision-making? A stock phrase that has emerged in the data analytics industry involves the “three Vs” of big data[16]. Volume, velocity, and variety are routinely touted as the value added by data analysis software like FRT[17]. FRT like Clearview AI’s software can analyze a significant amount of data (volume), categorize it as needed (variety), and do so at immense speeds (velocity).
FRT also has significant costs. Experts in Canada have identified the following key harms associated with the use of facial recognition software[18]:
Lack of human autonomy over decisions;
Lack of transparency for reasons behind certain results;
Inaccuracy (e.g., false negatives);
Discrimination; and
Risk of unauthorized sensitive data access and manipulation.
1.3 The Harms of Facial Recognition
FRT perpetuates the subjective biases and values of the people who design and develop these systems, as well as the biases embedded in the data used to train them[19]. As the computer science adage goes: garbage in, garbage out[20].
There is mounting evidence that FRT is discriminatory towards Black, Indigenous, and people of colour (BIPOC) and trans people. For example, Joy Buolamwini and Timnit Gebru’s seminal research found that IBM’s, Microsoft’s, and Face++’s software exhibited racial and gender bias, where “darker-skinned females are the most misclassified group (with error rates of up to 34.7%). In contrast, the maximum error rate for lighter-skinned males is 0.8%”[21]. A separate evaluation of 189 facial recognition algorithms found they were 10 to 100 times more likely to misidentify Black and East Asian people[22]. Buolamwini’s follow-up research on another AI system, Amazon Rekognition, shows similar patterns of race and gender bias: 100% accuracy on lighter-skinned males and 68.6% accuracy on darker-skinned females[23].
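Disaggregated audits like this surface disparities by computing error rates separately for each demographic subgroup rather than reporting a single pooled accuracy figure. The arithmetic is simple, as the sketch below shows; the records here are hypothetical stand-ins, not data from any actual audit.

```python
# A minimal sketch of a disaggregated error-rate audit in the spirit of
# Gender Shades: misclassification rates are computed per subgroup rather
# than pooled. The records below are hypothetical.
from collections import defaultdict

# Each record: (subgroup, true label, label predicted by the system under audit)
results = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    # ... one record per audited image ...
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

for group, total in totals.items():
    print(f"{group}: error rate {errors[group] / total:.1%} ({errors[group]}/{total})")
```

A system can report high overall accuracy while failing badly on particular subgroups, which is exactly the pattern the figures above describe.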
It is important to note that “there are other marginalized groups or cases to consider who may be being ignored. For instance, dysfunction in facial analysis systems locked out transgender Uber drivers from being able to access their accounts [in order] to begin work”[24]. In light of these harms, Luke Stark calls facial recognition the ‘plutonium of AI’: “it’s dangerous, racializing, and has few legitimate uses; facial recognition needs regulation and control on par with nuclear waste”[25].
When FRT is used in real time, such harms can occur even faster. A disturbing example of real-time or “live” FRT comes from 2020, when administrators at the University of California, Los Angeles (UCLA) proposed a program that would run live FRT on the campus’s CCTV cameras[26]. The digital rights non-profit Fight for the Future stepped in to support students’ efforts against the program[27], running a test that used Amazon Rekognition to compare more than 400 photos of the school’s faculty members and athletes to a mugshot database. The software falsely matched 58 of the photos with images from the database with “100% confidence”, and the vast majority of the incorrectly matched photos depicted people of colour[28]. UCLA ultimately abandoned its plan to use FRT on campus[29]. This example highlights that when organizations implement FRT, they systematize deeply held biases to the detriment of marginalized groups, and can do so at alarming speed[30].
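For context, a test of this kind can be run against a commercial API in a few lines of code. Below is a rough sketch of how a comparison like Fight for the Future’s might look using Amazon Rekognition’s boto3 interface; the collection name and file path are hypothetical, and this is not the campaign’s actual script.

```python
# Rough sketch of querying Amazon Rekognition for face matches via boto3.
# The collection name and file path are hypothetical.
import boto3

rekognition = boto3.client("rekognition")

# Assumes a face collection (the "mugshot database") was built beforehand
# with create_collection() and index_faces().
with open("faculty_photo.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="mugshot-database",  # hypothetical collection name
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,            # minimum similarity score to report
        MaxFaces=5,
    )

# Rekognition returns a similarity score per candidate; a score of 100
# corresponds to the "100% confidence" matches described above.
for match in response["FaceMatches"]:
    print(f"FaceId {match['Face']['FaceId']}: similarity {match['Similarity']:.1f}%")
```

The ease of running such searches at scale, with no human review of whether a “100% confidence” match is actually correct, is part of what makes real-time deployments of FRT so risky.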
II. Facial Recognition and Law Enforcement
The underlying question to ask when it comes to FRT is this: “Do we want to live in a society where, when we walk on the street, we are essentially part of a perpetual police line-up?”[31]
Facial recognition is not new in Canada. Government bodies like the Royal Canadian Mounted Police (RCMP) have been using it for the last 18 years[32]. The RCMP, Canada’s national police force, is unique in the world as a combined international, federal, provincial and municipal policing body[33]. FRT has also been used in the province of Ontario and in cities like Calgary, Edmonton, Ottawa and Toronto[34]. The Toronto Police Service relied on FRT to identify three men accused of murder, all of whom were sentenced to life in prison in March 2020[35].
Half of the law enforcement agencies that trialled Clearview AI’s software in Canada were in Ontario. By July 2020, an investigation by Canadian privacy commissioners had led Clearview AI to stop providing its services to the RCMP[36]. The Office of the Privacy Commissioner (OPC) publicly announced that Clearview AI would “cease offering its facial recognition services in Canada”[37]. The OPC’s statement declared the “indefinite suspension” of Clearview AI’s contract with the RCMP, which was apparently Clearview AI’s last remaining Canadian client[38].
Despite the fact that the RCMP and all other law enforcement agencies in the country have ostensibly stopped using Clearview AI’s product, the first and only testimonial on Clearview AI’s page comes from a Canadian law enforcement agent (see Figure 2). Moreover, FRT, biometrics, and AI are nebulous terms, which makes searching public records for their use difficult, especially given inconsistent classification and terminology for the technology itself.
Clearview AI provides a window into much larger conversations happening in Canada and the US. In July 2020, Amnesty International published a letter signed by 77 privacy, human rights, and civil liberties advocates calling for an immediate ban on FRT for Canada’s law enforcement and intelligence agencies[39]. Beyond law enforcement, “the problem of large-scale surveillance-data collection and the deployment of AI-based facial-recognition technology will only get more challenging as more of our public spaces become governed by private companies”[40]. To this end, OpenMedia, a Canadian digital rights non-profit, launched the “Stop Clearview AI’s Facial Recognition” campaign to help individuals retrieve the data Clearview AI holds about them[41].
The harms associated with the use of FRT by law enforcement are so significant that cities across the US have banned their police forces from using the technology, including Portland (Maine), Portland (Oregon), San Francisco, and Oakland[42]. In June 2020, US lawmakers proposed a bill to ban the use of FRT by federal law enforcement[43]. Even Big Tech seems to agree that there is an urgent need for better regulation of this technology. After George Floyd was murdered in May 2020[44], IBM, Amazon, and Microsoft released public statements in June 2020 saying they would temporarily cease providing their FRT software to police departments until the US federal government enacted laws to protect people’s fundamental freedoms and rights to privacy[45]. And in August 2020, the UK Court of Appeal rendered a landmark decision holding that a police force’s use of FRT breached data protection, equality, and privacy laws[46].
In contrast, Canada’s current approach to regulating the harms of FRT is laissez-faire, with few enforceable requirements for privacy rights in biometric information or for transparent procurement of technology by law enforcement. This approach renders Canada’s privacy law and procurement systems vulnerable in ways we describe below.
III. Canada’s Vulnerable Legal Regimes
3.1 Gaps in Canadian Privacy Law
“Privacy is about power. It is about how law allocates power over information”[47]. Privacy law in Canada, as elsewhere around the world, “determines who ought to be able to access, use, and alter information”[48]. Laws that protect privacy are one of the main tools in North America for protecting human dignity, personal integrity, and control and autonomy over one’s information and body[49]. Yet Canadian privacy law fails to protect biometric information, including facial patterns and images. This legal gap puts us at greater risk of surveillance by law enforcement as well as violations of our fundamental rights protected under the Charter of Rights and Freedoms[50].
Canada’s privacy law is regulated at the federal and provincial levels, and is generally split between the public and private sectors. At the federal level, the Privacy Act dictates how federal government institutions, such as the RCMP, collect, use, and disclose personal information[51]. The Privacy Act, enacted in 1983, provides a right to access this information as well as a right to correct it. The law also established the Office of the Privacy Commissioner of Canada[52], which acts as an oversight body for the Privacy Act.
The OPC also provides oversight for Canada’s federal private sector privacy law enacted in 2000, the Personal Information Protection and Electronic Documents Act (PIPEDA)[53]. PIPEDA governs the collection, use, and disclosure of personal information by private organizations in the course of for-profit, commercial activities across Canada[54]. PIPEDA applies to all personal information that crosses provincial or national borders, but does not apply to organizations that operate entirely within Alberta, British Columbia, or Quebec; these three provinces have private-sector privacy laws deemed substantially similar to PIPEDA[55].
The Canadian federal government began long-awaited concrete efforts to overhaul PIPEDA with the proposal of Bill C-11 in November 2020[56]. The Department of Justice also began a public consultation on the Privacy Act in the same month, with submissions due in early 2021[57]. A discussion paper released at the consultation’s launch stated that the “Government is not currently considering specifying categories of personal information to which special rules would apply (such as “sensitive” personal information or information relating to minors), though some other jurisdictions do so”[58].
Yet data protection laws such as the EU’s General Data Protection Regulation (GDPR)[59], the UK’s Data Protection Act[60], and California’s Consumer Privacy Act[61] all acknowledge the existence of biometric information. The GDPR requires EU member states to presumptively prohibit the processing of special category data, such as biometric information, for the purpose of uniquely identifying a person[62]. In our scan of privacy law at all levels of Canadian government, in both the public and private sectors, we identified only two pieces of privacy legislation that account for the existence of biometric information[63]: only the provincial governments of Alberta and Prince Edward Island acknowledge biometric information in privacy law, and that law applies to government but not industry. Quebec is the only province that requires both government and industry to disclose when they “create” a database of biometric information[64]. Canadian privacy law therefore fails to protect biometric information in the way laws elsewhere around the world do.
All levels of government in Canada are putting the privacy and security of people’s information at risk. Canadian government bodies are failing to adequately account for, and therefore adequately protect, information such as our facial patterns. Fundamental freedoms like privacy are never wholly absolute; governments must strike a balance between the dignitary interest in privacy and the necessity of public safety. It is therefore critical to explicitly regulate the object of protection, in this case biometric information, which serves as the fault line between these competing interests.