If you’ve spent any amount of time discussing reforms to improve privacy online, you’ve likely encountered the Big Knob Theory. Like Covid it comes in variants, but its core tenet can be summarised thus: there exists (metaphorically) a Big Knob that can either be turned towards “privacy” or towards “competition” — but it’s very much a zero-sum game and you can’t have both.
Big Knob Theory (BKT) is often a strongly-held view of its proponents, many of whom take it simply as self-evident. More surprisingly, it is also a view commonly held by those who wish things were different. You can often find them deploring that they would very much like to fix our privacy predicament, but cannot, because doing so would empower companies that already have too much power.
If it’s true, and certainly if it is as clearly true as it is taken to be, there should be solid evidence and arguments to support it. Upon closer inspection, however, the case for the Big Knob Theory turns out to be far from obvious.
Let’s try to formulate the situation a little more rigorously so that we have a basic framework with which to pick through the evidence.
The simplest understanding of privacy that lends itself to some degree of empirical verification is the Vegas Rule: what happens in a given context stays in that context. In practical terms, this means that whatever a person does on a given site or app cannot be learnt by a party other than that site or app in such a way that the third party can then reuse that information elsewhere. (In technical terms, the first party is the sole data controller.) One crucial point is that, under the Vegas Rule, contexts are defined as different products or services, irrespective of whether they are owned in common. Whatever happens at the Vegas Hilton will not be known at the front desk of any other Hilton, and data gathered about you by your email service will not be used by other services owned by the same company. This definition of privacy maps well to contextual integrity and to people’s expectations. We can understand it as measuring the fluidity of data flows.
Competition in data or data-adjacent markets can be measured with HHI or similar metrics. (Margins can also serve as a proxy measure of how contestable the market is.)
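For concreteness, HHI is just the sum of squared market shares, so it can be computed in a few lines. The sketch below follows the standard formula; the percentage input convention and the sanity check are my own choices, not from any cited source:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    `shares` are percentages summing to roughly 100; the index runs
    from near 0 (perfectly fragmented) to 10,000 (pure monopoly).
    """
    if not 99.0 <= sum(shares) <= 101.0:
        raise ValueError("market shares should sum to roughly 100%")
    return sum(s * s for s in shares)

hhi([25, 25, 25, 25])  # four equal firms -> 2500
hhi([80, 10, 5, 5])    # one dominant firm -> 6550
```

For reference, the US antitrust agencies’ 2010 merger guidelines treated markets above 2,500 as highly concentrated, so both example markets would raise eyebrows, the second far more than the first.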
With these starting points, an explicit formulation of the Big Knob Theory would be that data and data-adjacent markets will be more competitive in proportion to the fluidity of personal data flows between contexts, and less competitive when contexts are siloed. What support can we find for this theory?
Arguments for the Big Knob Theory
A very common argument mentioned in support of the BKT could be captured succinctly as the “Safari CPMs”. The idea is that ad prices (CPMs) are observed to be significantly lower in Safari, a browser that protects privacy, than in Chrome, a browser that doesn’t. But what this shows is only that when, within the same market, some parts have fluid data flows and others do not, the money will flow to the former. Buyers who are willing to pay for decreased privacy will pay more in a system like Chrome that offers weaker protection for personal data. It says nothing about the effect of data fluidity on the market as a whole.
The other primary argument for the BKT looks at the GDPR as a natural experiment. It comes in multiple flavours.
Some, like How GDPR is Helping Big Tech and Hurting the Competition, look at the impact of the GDPR on Google’s market share. The argument proceeds as follows: the GDPR happened, but Google’s market share increased anyway; therefore, privacy is bad for competition. This rests on two fundamental assumptions: 1) that the GDPR improves privacy and 2) that the GDPR is being enforced against the platforms. Both assumptions, unfortunately, are wrong. The GDPR, as implemented today, includes consent as a gaping loophole. This has enabled pretty much everyone to broadcast personal data just as much as they did before the GDPR, simply by adding the annoyance of consent banners. Europe has seen very little improvement in privacy from the GDPR. (And there is reason to believe that GDPR-style consent, in addition to being useless for privacy, also helps larger companies.) Additionally, the platforms are registered in Ireland, and Ireland is acting as the data equivalent of an uncooperative tax haven. Even if the GDPR improved privacy on paper, it would go largely unenforced against companies whose European operations are centred in Ireland.
Others, like Privacy & market concentration: Intended & unintended consequences of the GDPR, look at the market share of small vendors under the GDPR. Right after the GDPR came into effect, the number of third-party vendors used on websites subject to it dropped by 15% (with smaller vendors dropping more) before returning to the same level six months later (a point the abstract somehow fails to mention). The theory here is that sites feared enforcement and so reduced their vendors — mostly the small ones that are less adept at compliance — but that over time the fear of enforcement faded and the volume of third-party vendors recovered.
Unfortunately, the paper doesn’t factor in the realities of operating a website. Sites manage third-party vendors like this: marketers regularly want to test new vendors and have them added to the site. When a vendor doesn’t pan out, marketers are supposed to ask for its removal, but that often fails (for lack of a forcing function) and stray trackers remain on the site with no purpose. When the GDPR happened (which for many was a last-minute race), pretty much every website out there had to produce a list of all its trackers and ask marketing to explain which ones did what and whom to contact to get data processing addenda in place. That’s a great forcing function for spotting vendors you no longer need, which alone suffices to explain the short-lived drop in the number of vendors. It also explains why the drop was short (a better explanation than the suggestion that people stopped fearing GDPR enforcement after six months) and why the replacement vendors were mostly different from those that had been removed (as noted in the paper). It’s hard to overstate the spring-cleaning effect: practitioners found and shut down entire websites that should no longer have been running; a 15% drop in the number of vendors is in fact relatively small.
A great overview of this strand of thinking can be found in The Competitive Effects of the GDPR. In fact, this paper simultaneously offers a good summary of what is valuable in looking at the GDPR as a natural experiment and of why that line of inquiry does not support the Big Knob Theory. This paper (and others in this vein) tends to show that bureaucratic compliance regimes benefit large firms, as does consent-based processing. It’s quite interesting to see that this kind of “notice and choice” isn’t great from a competition perspective, given that it’s also bad from a privacy standpoint.
It’s worth a quick pause here because, to many, particularly outside of the privacy space, the GDPR has become synonymous with privacy. Sadly, that is hardly the case. The GDPR is first and foremost a thorough implementation of the fair information practices (FIPs), a privacy paradigm that is perfect if you are processing data in the 1970s. It is privacy that works for lawyers and compliance teams, heavily focused on procedural, bureaucratic solutions such as privacy policies, inventories, and consent. While the FIPs can be useful, and should be part of the privacy toolbox so long as their bureaucratic overhead is kept in check, there are simpler and more effective measures to improve privacy (for instance a ban on third-party data controllers) that aren’t in the GDPR.
It is great that there are papers analysing the impact of the GDPR; but to the extent that they equate the GDPR with privacy, they are extending themselves beyond their empirical reach. The GDPR did not significantly and durably reduce the fluidity of data flows, but it did increase the overhead of data processing in a way that favours companies with greater cover-your-ass expertise. So while these papers do not, in fact, support the theory that improved privacy harms competition, they do strengthen the case against transparency-and-choice regimes.
I read a number of other papers (notably the references from those cited above) but the above points cover the spectrum of arguments that I’ve found in favour of the Big Knob Theory. While none actually supports the BKT itself, the related points they make can be helpful, notably in showing that certain bad ways of regulating privacy are also bad for competition.
How Privacy Improves Data Markets
In Incomplete Law, Pistor & Xu recount a history of the legal status of electricity. In the late 19th century, when the electrification of houses started to become more common, some enterprising people decided that they might as well just hook their household up straight to the grid without officially signing up for anything or paying anyone. Today, it is obvious to most that this is purely and simply theft. That view, however, was not so readily apparent to the courts. The German Supreme Court found that electricity could not be considered an asset in the sense that the law understood it, and only assets could be stolen — therefore, helping yourself to electricity couldn’t be theft. American courts decided differently, but the matter remained contentious and was debated in New York courts until 1978.
We are facing a similar moment of confusion during which the status of data is challenging both our legal and economic traditions. Data is a much weirder commodity than electricity. It isn’t easily excludable — copies are cheap and can be difficult to prevent — but it is nevertheless rivalrous (if you’re the only one to know that I plan to buy expensive shoes, you can make a lot more money from shoe sellers with that information than you would if everyone knew). When traded, it becomes even weirder. The market itself is an information device. The interplay between the market as an information device and the information traded within it is not straightforward.
As we increasingly apply ourselves to it, we’re figuring it out. I have good hope that the 2020s will be the decade in which we begin to understand enough of digital society that we can start making it work for people.
Starting from the basics: as Neil Richards explains in Why Privacy Matters, “We live in a society in which…