
Before the Internet came into our daily lives, the French philosopher Michel Foucault put forward the theory that we do not live, die, or love in the square format of a sheet of paper. To him, we do not live in a neutral, white cube; that is a model for exhibition spaces. We live, die, and love in many shades of bright and grey colours – within secret and open, inner and outer, public and private spaces that are linked in different ways by a multitude of different connections.

Complexity is one word that might begin to describe it. This multitude of shades should challenge us, throughout our lives, to discover the others with whom we live in society, each in their own distinctiveness as a person – in the same way it was described in the previous century by the Personalism movement and thinkers such as Denis de Rougemont, Gabriel Marcel, and Hendrik Brugmans.

Is this “personalism” still a reality today, as algorithms are being developed that make choices for us, including ethical decisions about the kind of information we want or need? In the not-so-distant future this will also concern questions of life and death, when all kinds of intelligent devices are programmed to work for us and autonomous cars are designed to react to imminent accidents.

With the abundance of data made available by the digital revolution comes the tendency to make decisions with the help of algorithms – and thereby to take away from individuals final decisions that are relevant to our interaction as human beings and to our personality.

Algorithms are now becoming the ‘à la carte’ curators and editors of our personalised information – sometimes for our own wellbeing, but also to the frustration of many, since a human being cannot be reduced to data about their preferences, habits, and psychological profile. In such a system, the unexpected disappears as a right, and with it the freedom to experience something unforeseen, which ultimately denies individuals the right to determine certain things for themselves.

Algorithms make accidental discoveries virtually impossible, yet the world is only at the beginning of the era of automatisation, with machine learning, or Artificial Intelligence, still being put to work. This development demands a serious societal debate that touches upon the fundamentals of our democratic societies and the worldviews that go with them, as we see a growing tendency towards what could be called “Artificial Intelligence Nationalism”, the likes of which can be seen in China, the US, and Russia.

The question now arises as to how the EU can respond to this development, in which Europeans’ available brain time has become the prize in a battle for attention. A new kind of censorship – censorship by abundance instead of by restriction – is the new tendency, and it manifests itself most clearly in the disinformation used by populists and authoritarian regimes.

As Madeleine De Cock Buning, the Commission’s appointee to oversee its fight against fake news, says, more work needs to be done to strengthen society’s resilience to disinformation – especially with the coming of deep fake news, in which audiovisual content is manipulated to the point where it becomes impossible to recognise what is true and what is false.

What does this mean for European citizens, Europe, and the EU as a community?

Several experts point out the importance – especially if we want to grasp the positive opportunities that the development of AI (machine learning) has to offer us – of combining the teaching of the essentials of programming, media theory, ethics, and psychology with more classical teaching faithful to the old-fashioned “Bildung Ideal”: the concept of self-cultivation through the learning of foreign languages, the study of Latin and Greek, major literary works, music, sport, and mathematics.

Together, these offer the ability to recognise differences and distinctness, to develop a sensibility for the inevitable ambiguity that comes with them, and to weigh both general and particular factors. In the end, this learning process is the guarantee of positive scientific development and innovation. In this sense, we may understand the plea of UK journalist Jamie Bartlett for an expanded media literacy: to develop a theory of epistemology – “why should I trust one thing over another thing?”

Transparency is not necessarily the key to understanding; trust and respect for the other and for otherness are. For Europeans, who do not have the Puritan tradition, as in the US, of making one’s most intimate feelings public, this powerful new tendency – brought with the American quasi-monopoly on social media – raises existential questions for a European societal model that developed different checks and balances, ones more sensitive to questions of privacy.

On this cultural distinction, the German academics Nathalie Weidenfeld and Julian Nida-Rümelin have recently co-written the book Digitaler Humanismus (Digital Humanism). Weidenfeld argues that we should advocate not for the right of access to 4G or 5G broadband networks, but for the right to opacity – the right to live out one’s individuality, away from the stress of missing out.

Europe should be proud of its culture of privacy – a private life protected from public comment – and should not be seduced by a Puritanism that might offer us more losses than gains. Machine learning AI does not have to develop the way it is developing now. This is also the thrust of the appeal made by the American researcher and publicist Steven Hill in his discussions about Europe: a call for an AI version of the European Organization for Nuclear Research (CERN), dedicated to the research and development of AI in the interest of the common good. Such a research body could provide a public network that opens the doors for university labs and SMEs to develop new uses of AI without having to depend on Google, Facebook, or Amazon.

It would also help Europe to fill the gap in useful data that now exists and to develop a way of protecting private information that permits European citizens to guard themselves: treating this social data as information to be protected as part of the common good, with responsible citizens empowered to decide whether or not to give companies access to profit from it.

We should thus think of responsibility as response-ability – the ability to respond to life, to people, to events, and to technological and societal innovations – both as individual European citizens and as a European Community.


Lieven Taillie is the President of the Association of European Journalists in Belgium.