Privacy, consent and democracy: How PIPEDA is failing Canadians

  • April 11, 2018
  • Philippa Lawson

The Personal Information Protection and Electronic Documents Act, Canada’s private-sector data-protection law, was developed in the 1990s, long before social media, Big Data and the Internet of Things. Built on the theory that individuals can control their own personal information, it relies on consent as its primary form of protection. Much has been written about the efficacy of consent, but only now are we seeing the links between consent and democracy.

Democracy requires citizens who are truly autonomous, free to question, to disagree, to change their minds. But the kind of individual autonomy necessary for true democracy is not just about the freedom to choose and express preferences, it’s also about the freedom to be exposed to information from various sources, to hear different perspectives and to have one’s biases and prejudices challenged.

If autonomy is only about freedom to choose and express preferences, we might as well be robots. It’s just a matter of programming. Feed in the information, out come the choices. But humans are not robots – we have a unique capacity for individual reflexive self-determination and for collective deliberative decision-making regarding the rules of social cooperation.

This capacity is easily undermined by forces that control the information and images to which we are exposed through the various media and social circles we inhabit. It is on the basis of that information – together, of course, with our upbringing, education and life experience – that we form the beliefs and opinions on which we base our choices and preferences. What we don’t know is as important as what we think we know.

And as it turns out, we are quite programmable: history is replete with examples of people developing fears, forming views and beliefs, and then making free choices based on the misinformation that was fed to them – think Nazi Germany, Maoist China, Rwanda. Whether as consumers or as citizens, we humans are remarkably susceptible to psychological manipulation, especially by entities that understand our vulnerabilities and decision-making processes better than we do.

So a concept of autonomy that focuses only on freedom of choice is incomplete – it is missing a key component: freedom from manipulation by systems that control the information we get and the images we see.

This is where consent comes in: consent is the basis on which we justify the surveillance and profiling that is now being used to decide what we see and don’t see online, and to manipulate us into purchasing certain products or into voting a certain way.

But we all know that informed consent is, for the most part, a sham. Whether because of deficient notices, cognitive limitations or structural obstacles such as simple lack of time, consumers rarely provide meaningful, informed consent to non-obvious data uses. The reality of consumer behaviour simply does not match the theory of informed consent.

There is now ample scientific research disproving our assumptions about how people make decisions regarding privacy. We now know that people do not act rationally; that they often act contrary to their own values and best interests; that they overvalue immediate benefit and undervalue distant risk; and that their decisions are strongly influenced by how the choices put to them are framed. These cognitive limitations are inherent features of the human condition and are not going to change.

Greater transparency might help, but that is unlikely if the way that choices are framed is left to the very entities whose business models require the collection, use and sharing of personal data.

Short, simple notices don’t suffice if the explanation is complex, as it often is – all they do is deprive consumers of the full explanation needed for their consent to be truly informed. And so we end up with the “consent dilemma”: an explanation detailed enough to inform is too time-consuming and gets ignored, while one that is short and simple fails to inform.

And now, on top of these pre-existing cognitive and structural problems, comes the new world of Big Data, in which it is impossible to identify all of the downstream uses to which one’s data may be put, and the Internet of Things, in which there are simply too many actors and too complex a web of data-sharing relationships for the old model of informed consent to work.

But we still cling to the notion of consent because we want to protect individual autonomy, and the alternative of outright prohibitions or limitations seems paternalistic and inconsistent with our theory of individuals as autonomous agents.

The irony is that by clinging to the fiction of informed consent where it does not exist, we have in fact undermined the very autonomy we sought to protect.

And that is the link between consent and democracy: not just the simplistic notion that citizens in a democracy must be free to make their own choices, but rather that truly autonomous citizens must be free from manipulation through surveillance and targeting made possible by data processing that is justified on the basis of notional consent.

Under the guise of consent, we have allowed a flourishing trade in personal data to develop, even though individuals in fact have no idea where their personal data ends up or how it is used.

It wouldn’t be so bad if we were controlling the uses to which our data is being put – that is, allowing socially beneficial uses while preventing potentially harmful uses. But in our blind reliance on consent and our privileging of innovation over other values, we have allowed technology and industry to develop de facto norms based largely on commercial – as opposed to societal – interests. At the same time, we have neglected the need to entrench substantive ethics-based privacy norms in our system of data protection so as to prevent socially undesirable uses before they happen.

The damage to democracy caused by this neglect is now painfully clear in the U.S., where notionally consent-based surveillance and targeting appear to have decided an election. By narrowing the information people receive online to that which reflects their own expressed biases, and by facilitating a campaign of misinformation designed to confirm those biases, feed prejudices and deepen existing social divides, data processing by social media companies has had the unintended consequence of undermining individual autonomy and seriously eroding democracy.

It’s time for those with public interest mandates to take back the controls, recognize that consent is a flawed data protection device, and develop effective mechanisms for data protection that serve to promote true autonomy and real democracy.

Philippa Lawson is a barrister and solicitor.