The Social Dilemma is a new documentary available on Netflix. It takes the argument a step beyond the familiar mantra – if you are not paying for the product, you are the product – to explain that there is an arms race between the tech giants for our attention.
Interestingly, the documentary does not dwell on how addictive social media are; instead, it spends considerable time on the mechanics utilised to keep us scrolling and liking content in an ever-expanding timeline. Developments in Artificial Intelligence are also prominently discussed, namely the fact that algorithms – developed by people, with the biases of people – are used for the sole purpose of keeping us glued to our screens, adapting as we go along and ensuring that the content served is engaging. Concerns about the truth of the content, as well as the addictiveness of the practice, are secondary, as they are in direct conflict with the business models of the companies maintaining these platforms. The documentary was also successful in explaining in layman’s terms how the process works: how the algorithms adapt based on our activity and how they analyse the usage patterns of people with similar profiles.
That said, the documentary approached what is a systemic problem from an individualistic perspective, focusing on how the practices adopted by these corporations affect each one of us. Entrenched in these discussions is the idea that we must regain control, which is quite reasonable. What I contest is the underlying notion that to take back control we need to either (a) revert to a previous state of affairs, i.e. personally abstain from these services – the abstentionist view – or (b) demand that these companies be regulated by some external body, notably the state, since they have proven manifestly inadequate at self-regulation – the statist regulatory view. To be fair, these two points were not explicitly discussed in the documentary, as there wasn’t much in terms of solutions or suggestions on the way forward, but they do form part of the general discussion.
The abstentionist view
Noting that the documentary fits into a wider discussion, let’s consider the two objections mentioned above, starting with why I think it is not only unrealistic but also detrimental to democracy to encourage people to abstain from social media.1
To begin, let’s clarify that social media usage and purpose vary from person to person and from time to time. Social media can be a virtual place where grandparents talk with their grandchildren, where cat videos receive millions of views, where people find partners, where business deals are closed – there is a myriad of things to do online. What is not contestable, however, is that social media is also a place where political deliberation takes place, a setting where good (or not so good) reasons are exchanged on opposing political opinions and ideas.2
The structure of the debate is certainly not ideal, especially with the rise of fake news and of targeted political advertising,3 but neither is real life. What we see with social media is the continuation of the discussion formerly held at coffee shops and pubs, now taking place online with the added amplification that the new medium allows. In some respects, social media enabled the democratisation of political voices, whereby one can reach a wide audience without any attachment to big media outlets or political parties. Of course, this comes with its downsides: politicians and media outlets are subject to increased scrutiny, whereas discussions at the pub or under a post on Facebook are held to lower standards (and rightly so).
People are worried about the level of discussion that goes on online. They are worried about fake news, about trolls and whatnot. I am only partially sympathetic to this view. I agree insofar as we have lost control over the amplification of each voice, as this is now decided by an algorithm that is neither transparent nor controlled. This is an issue that requires collective action, which is covered in the next section.
I am unsympathetic, though, to the assumption underpinning these discussions that in order to have a meaningful political exchange the parties need to agree on a (minimum) shared account of the truth, which has reportedly been lost in the era of social media. The argument is that trolls and the prevalence of fake news have so distorted the facts that it is impossible to have any meaningful political discussion. I strongly disagree. Our world is imperfect, the structures in which political discussions take place are by definition imperfect, as are the discussions themselves. The purpose of political deliberation is not, necessarily, to persuade your interlocutor, with whom you supposedly share a version of the truth. Trump and Biden cannot find common ground and surely do not agree on what went on during the three and a half years of the former’s presidency, without this somehow being detrimental to democracy. A political discussion between two people is not necessarily an attempt to persuade or dissuade one or the other; rather, it is an attempt to persuade others who are (sometimes passive) participants and with whom you share a similar version of the truth – essentially, to persuade persons with whom you have never exchanged a word but with whom you have enough similarities to become potential allies.4
Social media, primarily Twitter and, to a somewhat lesser extent, Facebook, have been successful in providing public spaces where opposing political opinions are expressed and witnessed by millions of people. I think these online public spaces should be protected. I made this argument before, back in 2013, when Google pulled the plug on Google Reader. As I argued then:
Online public spaces should receive no less scrutiny or be subject to no less regulation than physical public spaces. In fact, they should be even more protected. Let me explain why: In real life, the concept of public space and public discussion is a largely artificial construct that is only realised in the idealised theories of political philosophers. Physical spaces reinforce already existing barriers — education, socioeconomic status, class, culture — making ‘public dialogue’ either an ideological construct or plain old wishful thinking. After all, a banker working in the city will never find herself in the same table with an industrial worker from Coventry.
Online social tools have come as close as possible to overcoming these structural limitations. Although the situation is far from ideal, online public spaces maximise the chances of a chat between the Coventry industrial worker and the banker working in London’s financial centre. If public spaces and public discussion (or ‘exchange of good reasons’ per the academic jargon) is necessary for a just democratic state, then there is tremendous value in maintaining these online infrastructures of communication.
In the era of social media the idea of the demos has, for better or worse, been redefined. Our allegiances now transcend national boundaries, often going beyond party lines, class divisions, and ethnic identities. We are bearers of multiple identities that sometimes criss-cross or contradict each other. We go through life re-negotiating our identities in light of new information and other external stimuli; an everlasting process that depends upon access to platforms, both physical and virtual, that enable us to develop politically and intellectually, interrogating and revising our beliefs. This is, to my understanding, the essence of democracy: the ability to exchange good reasons, the potential to persuade others, the option to form new associations, and the choice to abandon them (reasons and associations alike) when you see fit. Democracy is a process, not a destination – certainly not solely the act of voting every four or five years.
In this context, social media become the virtual platforms where the process of democracy takes place. Once we conceive of them as public platforms essential to democracy, we have good reasons to take the argument a step further. We have the intellectual basis to reject point (a) raised above, namely that we should abstain from social media in light of the “revelations” in the documentary. If social media are democratic platforms, advising people to abstain is detrimental to democracy. The question then becomes: what action, if any, is necessary to safeguard these public spaces, both in terms of the longevity of the data and equal access to the platforms? That is the topic of the next section.
The statist regulatory view
The view that these platforms need to be regulated is, at first glance, easy to justify but difficult to sustain. I am a firm believer in the need for regulation, but I do not think this should be a task undertaken by the state. First things first, though: let’s discuss why regulatory oversight is necessary before considering how it should be implemented.
The documentary was rather successful in explaining why these companies should not be left to their own devices; why, in other words, self-regulation would never work. The main thesis, which I subscribe to, is that the commercial interests of these companies depend upon the current practice whereby algorithms are utilised to keep us engaged, scrolling and liking, for as long as possible, shaping not only our opinions but also how we interact with the world. And no matter how many ethicists these big corporations hire, no matter how many screen-usage statistics they publish, at the end of the day their loyalties and allegiances lie with their shareholders, not the users. A higher authority is needed to keep them in check.5
The tricky bit is deciding who shall regulate these platforms. As a starting point, we can acknowledge that this is not a national problem but a global one. Global problems sometimes have local and sometimes global (intergovernmental) solutions. Take the environment, for instance. What happens in the Amazon rainforest affects everyone. The consumption habits of the Indian and Chinese middle classes affect everyone. An agreement at the global level is necessary to ensure that there are shared commitments and actions, especially as there is substantial cost attached to promoting environmentally sound policies. But there are also local solutions to this global problem. The proponents of local solutions maintain that sustainability can be achieved by organising and consuming locally in an environmentally cautious way – smaller communities, more in touch with the land, with more conscious consumption choices. Long story short, the fact that a problem is shared, i.e. that it is “global”, does not suffice to determine whether to pursue global, national or local solutions.
Another factor to consider when contemplating the regulation of big tech companies is what kind of trust, if any, we have in existing institutions with regulatory functions. Do we really want to entrust the regulation of big tech companies – the framework governing the platforms upon and through which democracy is exercised – to nation states that are invested in, and have a first-hand interest in, the outcome of these deliberations? In the same way that it is not in the interest of big companies to self-regulate, it is not in the interest of states to safeguard neutral public spaces as structures for political discussion. The political parties supporting and underpinning elected governments depend upon influencing this debate. What better way to affect the outcome than skewing the platform in your favour? Who wouldn’t do it?
I am also concerned about the level of influence that states are able to exert. Seeing how authoritarian regimes and quasi-dictators are gaining in popularity, I am ever more sceptical about the appropriateness of the state as the regulatory authority, fearing that, in the absence of a powerful opposition and functional checks and balances, governments will be able to use their heavy hand not only to influence but also to structurally change how these platforms work.
Given the above, I think a solution should be reached at the international level, primarily for three reasons. First, even if we accept that this global problem can be addressed with a national-level solution, the burden imposed upon companies would be enormous, as they would have to align their products with the frameworks of many different states. Big tech companies would be able to cope and tailor their products accordingly, but smaller companies would not, which means that regulatory efforts at the national level would effectively create barriers to entry, potentially limiting the availability of platforms. Secondly, the international route is attractive because any collaboration at that level is inevitably going to be an agreement on the lowest common denominator, essentially guarding against states using heavy-handed policies to change the structure of these platforms. Thirdly, an intergovernmental agreement is preferable since it would create a level playing field, avoiding disparities and fragmentation.
But what would such an international oversight body look like? What actions are necessary to ensure that online public spaces are neither manipulative nor manipulated? Given the complexity of AI, I do not think the traditional rule-book model suffices. It is not enough to agree on an international legal instrument that specifies what is not allowed and then binds the tech companies. Necessary as such an instrument is, effectively ensuring its implementation also requires access to the source code of these platforms. As AI becomes more elaborate and learns how to learn, adapting to realities as they evolve, it is not beyond imagination that these tools will learn how to manipulate online behaviours while adhering to the rule-book. To ensure adherence to the rules, therefore, enhanced access is necessary to guarantee transparency. I realise that this proposal is rather radical, essentially asking big companies to open their source code to an internationally-mandated oversight body, which would force them to allow access to proprietary algorithms. Radical as it may be, I cannot think of a better way to regulate these companies.6
In conclusion: shift of approach
Whilst I was watching The Social Dilemma, I had an eye on the clock. Time was passing, the analysis of the pathologies of social media was quite accurate, but I was not sure what solution the filmmakers would propose to ameliorate the vices they identified. To my dismay, I was left wondering. Hence this attempt to put some thought on paper. Given the length of the text, I will summarise the main takeaway points below.
Firstly, in order to come up with the right solutions we need to diagnose the problem correctly. My suggestion is to stop approaching the issue as a problematic relationship between the social network and the individual, and instead approach the social network as a platform where individuals communicate and exchange good reasons; a platform that is of fundamental importance to democracy.
Secondly, if, given their relevance to democracy, (the right to equal) access to and participation in these platforms is important, then the suggestion that people should regain control by abstaining from social media is problematic.
Therefore, thirdly, we need to stop looking for perfect deliberative systems and boycotting whatever falls short. People are messy, platforms are messy, life is messy; we should not be looking for sterilised conditions for political debate but rather focus on how to regulate big tech companies' abilities to influence and manipulate social and individual behaviours.
Fourthly, we need to accept that technology companies cannot self-regulate as it leads to a clash between the interests of the shareholders and the interests of the users. The former will trump the latter every single time.
Fifthly, given that self-regulation will not work, there are good reasons to promote an external oversight body. This body should not be the state. Notwithstanding the barriers to entry created by having to deal with hundreds of different national regulatory frameworks, states are also invested in manipulating these platforms for their survival. Plus, a state without a credible opposition or functional checks and balances, especially one with authoritarian tendencies, can take a heavy-handed approach that will erode the very platforms it is supposed to regulate.
The proposal is to have an international regulatory body and an agreement on an international legal framework, which will inevitably mean an agreement on the lowest common denominator, thus ensuring a level playing field and avoiding a heavy-handed approach. The main challenge is how to implement and enforce such a legal tool given the complexity and constant evolution of AI. The only solution I can think of is access to the companies' source code.
I realise that not many people may agree with the aforementioned arguments. I remain satisfied, however, if even a few of you agree with the initial premise that social media platforms should be considered online public spaces. Agreeing on this opens all sorts of avenues.
I acknowledge that there are levels to one’s engagement and that cutting back on one’s (sometimes obsessive) usage is a different thing from advocating total abstention from Facebook, Twitter, YouTube, Instagram, TikTok and the rest. I take on the abstentionist view here to interrogate its logical implications, realising of course that spending one rather than five hours each day on Facebook is better for you and probably for others. ↩︎
I have defended the argument that social networks are online public spaces before, initially in a blog post on the LSE’s Politics and Policy blog, where I argued that online public spaces such as Google+, Facebook and Twitter should be subject to no less regulation than physical public spaces like pubs. In a follow-up article, I argued that in order to guarantee online social interaction we need to protect our data by ensuring that it can survive the discontinuation of a social network. I concluded that discussion with an argument for an import/export mechanism that would underpin a decentralised system of communications. ↩︎
Targeted political advertising happens when the same candidate or party promotes different content to different persons – with messages that are not necessarily consistent – based on an analysis of their profiles and online activity, essentially telling each person what he/she wants to hear. ↩︎
I think the idea of the convert, who gives up a strong, long-held political opinion further to political persuasion, is fairly unrealistic, basically an exception to the rule. ↩︎
I don’t think this view is particularly controversial, though I’m sure some hardcore right-wing libertarians will object, I suppose on the mistaken notion that we are in control of what goes on behind the scenes by virtue of our decision to access (or not) these platforms; i.e. we decide whether to join and spend time on Facebook, Twitter, and so on, which signifies a tacit agreement on our behalf, rendering us, somehow, in control. ↩︎
There are, of course, ways to mitigate the risks, for instance by using a secondment modality, whereby teams can be seconded to work with(in) big companies, ensuring that each team only has access to one of the big companies. This needs to be considered and spelled out in great detail – this is not the place and I’m certainly not the one equipped to do so. ↩︎