Any discussion of Over-the-Top companies (OTT), such as Google or Facebook, and their successful data-driven business model must take technological development into account. The idea is simple and reasonable: new technologies – from big data to machine learning – have opened up technical possibilities that were neither available nor even thinkable less than twenty years ago.
As a consequence, we have been witnessing an increasing focus on onlife behaviour in relation to new technologies, especially the Internet, and on users' privacy. We can now connect the whole world – the argument goes – but this comes at a cost: someone can observe my behaviour online, since I constantly leave digital traces whatever I do; and to be seen is to be controlled. The notions of informational privacy and data protection have evolved within this framework.
The cutting-edge idea was that behaviour means money. This may still seem counterintuitive, and it struggles to become common sense, but a descriptive element can be isolated from ideological interpretations: it is possible to make money by collecting and aggregating personal data, using them to predict future behaviour, and selling those predictions to third parties. This new logic has been aptly called "surveillance capitalism". It is an entire business model and a logic of accumulation founded on the need to monitor human experience. Indeed, the new technical possibilities make it possible to observe human behaviour objectively, in its historical development and interactions, at a statistical level (e.g. on social networks). From observation comes high precision in predicting future behaviour and, therefore, the possibility of modifying that behaviour through advertising. Undoubtedly, this ongoing process also plays a relevant role in the formation of our digital identity. However, if the invasion of privacy has become an economic necessity, what is then left of the notion of privacy?
By investigating the historical relevance of the concept of privacy itself, one could also claim that it is time to change our glasses. In fact, the already vague definition of privacy has blurred drastically since its creation and has undergone a sociological mutation. The right to privacy was detected in the folds of the American Constitution to protect the private sphere against new technological means of intrusion. Historically, it was understood either as a shield against the State and its excessive power (public vs private) or as what allows an intimate, personal life inter pares (private vs private). In this sense, the emergence of OTT creates a new theoretical challenge. Are they still merely private entities, or are they acquiring a "quasi-public" relevance by providing services crucial to our lives and the world around us?
However, the picture sketched so far is even more complex, as personal data have also acquired importance for political purposes. This is blurring the traditional lines around which privacy was conceptualized (private vs public). Indeed, knowledge and technical tools for processing data and metadata are fundamental elements of power for the State as well, in its interplays at both the domestic and international levels. Therefore, data protection is to a certain extent linked to and dependent on other systems, and considering it in isolation may fall short: it will be difficult to properly regulate the asymmetrical relationship of extraction of users' privacy if the State itself has decided to adopt the same logic in its quest for social control. This approach could have significant consequences, especially for democratic regimes. For example, recent scandals such as WikiLeaks and Cambridge Analytica have shown that OTT, as well as governments, have gained unprecedented instruments for behavioural monitoring and modification that deeply and directly affect the democratic process.
In addition, little democratic debate has taken place to decide whether, and under which limitations, these instruments are lawful. Privacy and data protection laws – including the General Data Protection Regulation (the "GDPR"), the boldest and most advanced existing law on the matter – do not take into account the asymmetrical dimension of knowledge and power. In this sense, the OTT appear to be the contractors in charge of digitizing certain services of public relevance, wielding great informational power. For instance, right now, the hairdresser around the corner and Amazon have essentially the same obligations under the GDPR. Of course, the assessments and actions needed to comply with the GDPR differ greatly under an accountability approach, but there is no distinction among actors and, therefore, no effective control over the OTT's activities.
Nowadays, the surveillance logic of the extraction of human experience is widely diffused. Nevertheless, there has never been any constitutive act in which stakeholders agreed upon the rules of the game. It was just a handful of companies, some of which are now considered OTT, that set the rules. This process, called fracture of law, implies that when a new, so far unregulated field is discovered, the first actor that manages to grasp its relevance can rule it according to its preferences. The law tries to follow as best it can, regulating specific matters, with all the issues arising from a complex, multi-level and multi-agent system rapidly developing over time. As a result, our existing privacy laws struggle to control the OTT's activities on privacy matters.
Then perhaps we should not only be asking whether OTT and users' privacy are playing on the same team, but also whether they are playing the same sport.
Accountability means that every data controller has to self-assess the risk involved in data processing and take adequate measures to avoid or minimise it. Therefore, high-risk and large-scale processing lead to greater compliance work. See also Art. 24 of the GDPR.