Privacy Impact Assessments (PIAs) have not changed dramatically over the past 20 years or so; at least, the approach to them hasn’t.

Whether the starting point is a Word or Excel template or [one hopes] actual technology that supports the process, a PIA involves a group of people in the organization sitting around a [virtual] table to assess the risks and identify mitigations, before ultimately presenting the assessment for sign-off by [one hopes] someone at the right level to accept the residual risk.

What’s wrong with this, you might ask? It is certainly preferable to doing nothing, and in fact it is a requirement for any privacy program worth its salt. It is increasingly also a legal requirement in many jurisdictions.

The problem is that this approach to PIAs creates a privacy echo chamber. Whether through meetings or through information gathering via PIA-support technology, it involves a group of internal employees assessing privacy risk and ultimately accepting mitigations for a proposed use of technology or data. Inevitably, those involved will have similar viewpoints and see projects in a similar light.

So now for a revolutionary idea: why don’t we ask the people whose data we are using what they think about what we are doing with their data?

Stay with me. Regulators have long indicated that surveys, focus groups, and other ways of gathering stakeholder input carry weight in demonstrating an organization’s commitment to and accountability for privacy. So there is value in keeping your regulator happy, and this can mitigate the fallout should something unexpected or untoward occur.

What is the value to the organization? Well, these are your customers, your employees, your patients, your citizens. If you assess privacy risk only within the echo chamber, you may never learn how an average stakeholder reacts, and you may miss the “that’s creepy” response. Increasingly, we are aware of how culture shapes how we view privacy. For example, if the people assessing the risk of a geo-location app are all white males, they may fail to recognize how some data collection reinforces discriminatory pricing tied to the area one lives in, or creates risks for other groups, such as the app being misused to stalk women.

Moreover, by explaining the good thing you are trying to accomplish, such as a new service or greater convenience, you might well get suggestions from stakeholders on how to reach that goal better, and without the potential “ick” factor. That might mean better notice or transparency, or data minimization.

And from the point of view of defending your choices later, it would seem a great insurance policy. So long as you have disclosed everything essential and have sought stakeholder involvement in good faith, documented stakeholder engagement can help demonstrate that your practices were within individuals’ reasonable expectations, should you ever be called upon to defend them.

What are the downsides? Well, for an organization, it could be scary, because it might mean hearing something you don’t want to hear: that the project is too invasive, too creepy. If the input is negative and the remediation is too difficult or expensive, it might well kill the project. However, it is better to hear this early on than to find out later through complaints, a regulator, or negative press. More likely, you will find that how you have presented or explained the project leaves people suspicious or concerned.

What kind of project would benefit from this approach?

This could vary widely…

Read The Full Article at nNovation
