Recent caselaw demonstrates that privacy laws reach further than some organisations might expect.

Introduction: the identifiability test

Most information privacy and data protection laws around the world have as their starting point some notion of identifiability.  Legal obligations will typically only apply to data that relates to an ‘identifiable’ person.

For example, Australian privacy laws create privacy principles, which apply only to data which meets the definition of “personal information”.  The Australian Privacy Act defines this as: “information or an opinion about an identified individual, or an individual who is reasonably identifiable”.

The point of this legal definition is that if no individual is identifiable from a set of data, then the privacy principles – the backbone of an organisation’s legal obligations – simply won’t apply.  If no individual can be identified from a dataset, then the dataset can be safely released as open data; matched or shared with or sold to other organisations; or used for a new purpose such as data analytics, without breaching privacy law.

Or so the theory goes.

In reality, determining whether or not an individual might be considered in law to be ‘identifiable’ is not straightforward.  The scope of what is included within the notion of identifiability may surprise many organisations.

Recent cases have tested the limits

The Office of the Australian Information Commissioner (OAIC) has made a series of determinations which have shed light on the extent to which privacy laws cover data which – at face value – may not appear to identify any individual.

The recent cases which touch on the definition of ‘personal information’ are the 7-Eleven case, the Clearview AI case, and the Australian Federal Police (AFP) case.

All three cases involved the use of facial recognition technology, but the issues raised in relation to the scope of privacy laws are applicable to many other types of data and data use practices, including online behavioural advertising, customer profiling and targeted marketing.

The 7-Eleven case

In June 2020, the 7-Eleven chain of convenience stores began using a new customer feedback survey system in 700 stores across Australia.  Each store had a tablet device which enabled customers to complete a voluntary survey about their experience in the store.  Each tablet had a built-in camera that took images of the customer’s face as they completed the survey.

Those facial images were stored on the tablet for around 20 seconds, before being uploaded to a server in the cloud.  A third-party service provider converted each facial image to a ‘faceprint’, which is an encrypted algorithmic representation of the face.  The faceprint was used to detect whether the same person was leaving multiple survey responses on the same tablet within a 20-hour period; if multiple responses were detected, they were excluded from the survey results.

In other words, 7-Eleven was using facial recognition technology on its customers, to prevent its employees from gaming a customer satisfaction survey by leaving multiple positive survey responses about their own performance.  At least 1.6 million survey responses were completed.
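The determination does not set out how the system was implemented, but the deduplication logic it describes can be sketched roughly as follows.  This is purely illustrative: embed_face() stands in for whatever facial-recognition model generated the ‘faceprints’, and the similarity threshold is an assumed value; only the same-tablet check and the 20-hour window come from the facts as described above.

```python
# Illustrative sketch only: the actual 7-Eleven / service-provider system is not public.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

import numpy as np

DEDUP_WINDOW = timedelta(hours=20)   # window described in the determination
MATCH_THRESHOLD = 0.6                # hypothetical similarity cut-off


def embed_face(image_bytes: bytes) -> np.ndarray:
    """Placeholder for the facial-recognition model that produces a 'faceprint'."""
    raise NotImplementedError


@dataclass
class SurveyResponse:
    tablet_id: str
    submitted_at: datetime
    faceprint: np.ndarray
    answers: dict = field(default_factory=dict)


def is_duplicate(new: SurveyResponse, recent: list[SurveyResponse]) -> bool:
    """Flag a response if a sufficiently similar faceprint was already seen
    on the same tablet within the deduplication window."""
    for prior in recent:
        if prior.tablet_id != new.tablet_id:
            continue
        if new.submitted_at - prior.submitted_at > DEDUP_WINDOW:
            continue
        # Cosine distance between faceprints; lower means "more likely the same face".
        a, b = new.faceprint, prior.faceprint
        distance = 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        if distance < MATCH_THRESHOLD:
            return True
    return False
```

Note that nothing in this sketch names anyone; it only tests whether two faceprints belong to the same person, which is exactly the feature the OAIC’s reasoning turns on.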

The OAIC found that 7-Eleven had breached Australian Privacy Principle (‘APP’) 3.3 by collecting ‘sensitive information’ (namely, biometric templates) unnecessarily and without consent, and APP 5 by failing to provide proper notice about that collection.

One of the arguments raised by 7-Eleven was that the information at issue did not constitute ‘personal information’ for the purposes of the Privacy Act.

The Clearview AI case

Clearview AI provides a facial recognition search tool which allows registered users to upload a digital image of an individual’s face and then run a search against the company’s database of more than 3 billion images.  The database of images was created by Clearview collecting images of individuals’ faces from web pages including social media sites.  The search tool then displays likely matches and provides the associated source information to the user.  The user can then click on the links to the source material, to potentially enable identification of the individual.

From October 2019 to March 2020, Clearview offered free trials of its search tool to the AFP, as well as to the police services of Victoria, Queensland and South Australia.  Members from each of these police services used the search tool on a free trial basis, uploading images of people to test the effectiveness of the tool.  Uploaded images, known as ‘probe images’, included photographs of both suspects and victims in active investigations, including children.

The OAIC found that Clearview had breached APPs 1.2, 3.3, 3.5, 5 and 10.2.  One of the arguments raised by Clearview was that the information at issue did not constitute ‘personal information’ for the purposes of the Privacy Act.

The AFP case

Officers from the AFP used the Clearview search tool on a free trial basis.  Those officers did so without entering into any formal arrangements with Clearview, and the Clearview search tool was not subject to the AFP’s normal procurement or due diligence processes.  The OAIC found that the AFP had breached APP 1.2, as well as a separate requirement under a Code issued specifically for Australian government agencies, which mandates the conduct of a Privacy Impact Assessment prior to commencing any high privacy risk activities.  While it does not appear that the AFP argued otherwise, the OAIC canvassed whether the data at issue was ‘personal information’ for the purposes of the Privacy Act.

The arguments about identifiability and ‘personal information’

7-Eleven had argued that the facial images and faceprints it collected were not ‘personal information’ because they were not used to identify any individual.

However the OAIC found that even though individuals could not necessarily “be identified from the specific information being handled”, they were still ‘reasonably identifiable’ – and the information thus within the scope of ‘personal information’ – because the faceprints were used “as an ‘identifier’” which “enabled an individual depicted in a faceprint to be distinguished from other individuals whose faceprints were held on the Server”.

Similarly, Clearview argued that ‘vectors’ could not constitute ‘personal information’.  From the three billion raw images scraped from the web, Clearview retained metadata about the source of each raw image, and a vector for each raw image: a digital representation generated from the raw image, against which users could compare a new vector (i.e. a new digital file created by running the tool’s facial recognition algorithm over an uploaded probe image) in order to find a potential match.  Clearview argued that the vectors and metadata held in its database neither showed an individual’s face, nor named or otherwise directly identified any individual.  It claimed that its tool merely distinguished images, and did not ‘identify’ individuals.  (Any image ‘matches’ would simply present a link to the URL for the source of the original raw image.)
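To make Clearview’s argument more concrete, the vector-and-metadata structure described above might be sketched roughly as follows.  Again, this is purely illustrative and assumes a hypothetical index and an arbitrary similarity score; Clearview’s actual pipeline has not been published.  The sketch shows the point in dispute: the index stores no names and no faces, only vectors and source URLs, yet a probe vector can still single out one person’s entries from billions of others.

```python
# Illustrative sketch only: Clearview's actual matching pipeline is not public.
import numpy as np


class VectorIndex:
    """Toy index holding vectors plus source metadata, with no identity information."""

    def __init__(self) -> None:
        self.vectors: list[np.ndarray] = []
        self.sources: list[str] = []     # e.g. URL of the page the raw image was scraped from

    def add(self, vector: np.ndarray, source_url: str) -> None:
        # Store a normalised copy of the vector alongside its source URL.
        self.vectors.append(vector / np.linalg.norm(vector))
        self.sources.append(source_url)

    def search(self, probe_vector: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
        """Return the source URLs of the stored vectors most similar to the probe.

        Nothing here 'names' anyone, but each stored vector distinguishes one
        face from every other in the index.
        """
        probe = probe_vector / np.linalg.norm(probe_vector)
        scores = np.array([float(np.dot(probe, v)) for v in self.vectors])
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.sources[i], float(scores[i])) for i in best]
```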

However the OAIC disagreed.  First, the OAIC noted that the definition in the Privacy Act does not require an identity to be ascertained from the information alone, thanks to an amendment to the definition in 2014.

Second, the OAIC noted that because “an individual … is uniquely distinguishable from all other individuals in the respondent’s database”, it was irrelevant that the respondent did not retain the original image from which the vector was generated, nor any identity-related information about the individual.

The OAIC thus determined that both the raw image and the vector generated from it constituted ‘personal information’ for the purposes of the Privacy Act.

In the AFP case, the OAIC reiterated that being able to distinguish an individual from a group will render that individual ‘identified’ in privacy law.

Lesson 1:…

Read The Full Article at Salinger Privacy
