Tech giants will gain more and more power. Their social and political impact will be difficult to predict and control.
A wealth of information, Herbert Simon warned us back in the 1950s, means a poverty of attention. Nowhere is this truer than in cyberspace, where user attention has been an increasingly scarce and thus valuable resource over the past three decades. Faced with a skyrocketing volume of data, users need intermediaries to manage, order, and rank information, helping them navigate this otherwise inextricable world. Those who conquer users’ attention attain power. With power comes the temptation to abuse it, to gain even more attention, or (as it is called today) “user engagement”, and hence even more power. Since its inception, the digital world has featured these patterns, leading to a growing distance between the power of large intermediaries (the so-called FAANGs) and smaller businesses producing applications and services, placed in a position of almost absolute dependence.
Filling an inevitable gap, these platforms have grown from satellites to planets and then stars, and from stars to supernovas, and are currently collapsing into black holes, able to exploit the Web’s centripetal forces to capture and embed the data and value generated by users and businesses and convert it into profit. Not surprisingly, the economic performance of the FAANGs has followed an entirely different pattern from the rest of the economy: these trillion-dollar companies have grown partly at the expense of the real economy; where they prosper, others often lose. “Superstar firms” have been associated in the economics literature with a falling labour share and quality, rising market power, stagnant productivity, and growing overall inequality. Their growth appears almost unstoppable: in January 2020, the five biggest tech companies (Apple, Microsoft, Alphabet, Amazon and Facebook) accounted for 17.5% of the S&P 500, and since then Covid-19 has pushed them to 23%. To get a sense of this sensational growth, it suffices to mention that Italy’s Gross Domestic Product falls short of Apple’s market capitalisation.
This, however, did not occur by itself. We let it happen. Governments abstained from regulating the Internet in its early days, invoking its infancy, its neutrality, and its reliance on open, non-proprietary standards. Legislative interventions since the 1990s, notably in the United States and the European Union, have aimed at shielding intermediaries from responsibility. However, in the absence of public regulation, cyberspace has rapidly transitioned from being a land of permissionless innovation to a space where social interactions and economic activities are mostly regulated by proprietary protocols and algorithms deployed by tech giants. Already in 2011, Tim Büthe and Walter Mattli included these giants among the “new global rulers”, referring to the power of platforms such as Microsoft Windows to regulate a whole ecosystem through powerful Application Programming Interfaces. Academics have described platforms as playing the role of “rule maker, regulator, and police”. As a matter of fact, businesses wanting to participate in the digital economy today know that the rules are found in obscure contracts for platform and cloud services, rather than in the “law in the books”.
Economic power has also gradually become political power. Algorithms that moderate content have gained enormous influence over public debate; they may convince some to wage war, others to resist vaccination and distrust science and public institutions. They have had an impact on the Brexit referendum, on several US elections, and on political discourse in many countries. They have exploited users’ vulnerabilities in the name of profit and user engagement. Platforms’ hands-off approach encouraged actions against the Rohingyas in Myanmar. And when their creators sought to control these algorithms, they proved hard to rein in: a recent Twitter internal study, for example, found evidence that the platform’s algorithm ends up amplifying right-wing political content.
At the same time, platforms dwarf public regulators when it comes to enforcement: suffice it to recall Apple’s proposal to police the Internet by engaging in on-device scanning of pictures, which may mark the advent of “client-side scanning” as a means of enforcement that largely replaces public scrutiny. Recently Ian Bremmer compared the agility of tech giants in reacting to the attack on Capitol Hill with the slow and hesitant reaction of public regulators and political powers: a comparison that shows, once more confirming Lessig’s early prophecy, that in cyberspace “code, not law, defines what’s possible”. While Twitter and Facebook facilitated the spread of extremist content (e.g. QAnon) before the Capitol Hill attack, they were also extremely rapid in policing their territory afterwards: Twitter deleted 70,000 accounts, corresponding to more than 60% of QAnon’s presence. Rather than public institutions and courts, the world saw Facebook’s private Oversight Board issuing a verdict on Donald Trump, upholding in May 2021 a decision made in the immediate aftermath of the attack. Unsurprisingly, some commentators started equating platforms with authoritarian regimes.
The rise of platform power, however, is not limitless. Politicians, regulators, businesses and civil society around the world are mobilising to invert this trend. Businesses like Epic Games and Spotify are reacting against Apple’s exploitative practices; several antitrust authorities have launched investigations into Alphabet, Amazon, Apple and Facebook; governments from the United States to China, Japan and the EU are seeking to reclaim their role as regulators. In 2022, ambitious new proposals such as the EU Digital Services Act and Digital Markets Act will try to introduce regulatory constraints on the behaviour of platforms, as well as promote contestability in a world in which competition has not produced real newcomers for at least a decade. The US Congress has embarked on a bumpy ride that may lead to amending Section 230 of the Communications Decency Act, which shields platforms from responsibility for hosting dangerous content. Rules on the responsible use of AI and a new generation of privacy and data governance laws, especially in the EU, are likely to restrain the freedom platforms have enjoyed until now.
Will this attempt at taking back control succeed? At first blush, governments won’t be able to turn back the clock. And indeed, there is no state of grace to return to: a cyberspace happily governed by public law has never existed. The only solution in sight for regulators may consist in learning how to use digital technologies when regulating digital technology: regulation will not succeed without new regulatory tools, from algorithmic inspections to real-time monitoring through so-called RegTech solutions. And it won’t succeed, indeed it may even make matters worse, if governments attempt to go it alone: no institution can match the speed, plasticity and agility of tech companies. Rather, empowering civil society and the data science community to look into platforms’ algorithms, as well as empowering workers to dig into the algorithms deployed in the workplace, can be the solution to one of the most daunting regulatory tasks of all time.