Member of the European Parliament from Germany (Greens-European Free Alliance), Member of the Special Committee on Artificial Intelligence in a Digital Age.

There is no privacy on the internet, and nothing is as free as it looks. And yet we confide everything to it, even giving it authority over democratic discourse. That is dangerous, because communication on online platforms is controlled by algorithms programmed to serve ruthless business interests: keeping people on sites as long as possible in order to show them adverts. That works best with extreme content, for our strongest emotions are fear and anger.

Hate speech, misinformation, polarisation and prejudice fall on fertile ground, currently exacerbated by the Covid crisis. We are all spending much more time online and thereby automatically come across more false information, including disinformation deliberately spread by domestic or foreign actors and a whole host of other factually incorrect, decontextualised or obsolete content which misleads people, reinforces dubious narratives and devalues true expert knowledge.

That can have fatal consequences. In spring 2020, Donald Trump suggested it might be “interesting” to inject people with disinfectant to fight coronavirus. A US government expert had previously explained, correctly, that bleach and disinfectant could quickly kill the coronavirus on metal surfaces. The utterly nonsensical idea which Trump then released into the world cost human lives: poison control centres reported significant increases in calls, and people were gargling with bleach. A family in Florida even founded a church in order to sell bleach as a “miracle mineral solution”.

The Covid crisis in particular has led to an epidemic of false information, with an increased yearning for simple answers and quick fixes. At the same time, the corrective effect of real-life contact is lacking. Who is protecting us?

Not the platform operators, because misinformation serves their business interests: dubious but highly emotional content boosts their profits. Fact-checking is a good way of preventing the worst excesses, but it is no panacea. Twitter and Facebook demonstrated during the 2020 US presidential election how differently misleading content can be handled. Whilst Twitter placed fact-checking labels on numerous tweets by President Trump, Facebook and its algorithms allowed the “Stop the Steal” group to amass 200,000 members overnight, despite it disseminating false election information and inciting violence, thus breaching the network’s own terms and conditions. However, democratic governments must not make arbitrary decisions about online content either. Freedom of expression is precious, and no government should be able to intervene in content moderation on a political whim.

Rather, policy-makers ought to draft and implement clear rules: first of all, uniform rules for dealing with illegal content. But the majority of misinformation is not illegal. Should we prohibit people from sharing private content whose validity they cannot judge? Of course not.

What we can demand and desperately need, however, is transparency, first of all for recommendation systems – the artificial intelligence employed to govern content. A Wall Street Journal report found extremist content in a third of all Facebook groups examined; the groups had been recommended to 66% of their members by a Facebook algorithm. We have a societal right to know about and publicly debate these decision mechanisms. Meaningful transparency will become even more important in the future – just think of deep fakes – but it is already urgent. AlgorithmWatch, a Berlin-based NGO, has made concrete proposals for giving researchers and investigative journalists access to raw data.

The mechanisms which currently control our communication, and significant portions of the basic information underpinning public debate, are a veritable black box. Independent researchers have only restricted access to data, and public application programming interfaces (APIs) have been increasingly curtailed in recent years, leaving Facebook, YouTube and the like free to exploit their knowledge monopoly with every psychological trick. As a society, we must put an end to this. We ought to be able to co-determine the rules for the digital marketplaces where opinions are formed, based on hard facts currently withheld from us by the conglomerates.

As a model for public debate, I propose “Social Media Councils”, similar to the Citizens’ Assemblies in Ireland, comprising civil society, experts in freedom of expression, democracy and technology, and representatives of groups particularly affected by hatred and hate speech. They could trigger public debates based on evidence gained through transparency obligations, identify good and bad practice, and issue recommendations for action to politicians. Facebook and its internal ethics committee seek to privatise precisely this public debate. Facebook’s own appraisal of its practices is commendable, but in the long run a committee answerable to the CEO will never decide in society’s favour over its own boss’s business interests. That is why we need space for public reflection. Prominent Facebook critics such as Carole Cadwalladr, who exposed the Cambridge Analytica scandal, and Roger McNamee, a venture capitalist and early investor in Facebook, have joined forces in the “Real Facebook Oversight Board” to publicise particularly dubious practices.

Micro-targeting – targeted advertising disseminated to very small groups on the basis of previously collected and collated data – must also urgently be banned. The gigantic databases of millions of highly detailed user profiles that it requires also enable misleading messages to be spread to especially susceptible users, and are thus highly problematic. Moreover, Google and Facebook control large sections of the ad-tech market and are constantly expanding their market shares to the detriment of European media outlets; press publishers’ proceeds are now also highly dependent on behaviour-based advertising. The Dutch public broadcaster NPO, however, has demonstrated that context-based advertising can be equally successful without personal data and without spying on people and pursuing them across websites and into their offline lives. A viable press financing model is needed to reinforce substantiated reporting as a foundation of liberty and democracy, without inadvertently weakening that very democracy through the unwanted dissemination of misinformation.