“Brother! You doubting Thomases get in the way of more scientific advances with your stupid ethical questions! This is a brilliant idea! Hit the button, will ya?”
Calvin addressing Hobbes regarding the ‘Duplicator’ (Watterson, 1990)
While talk about a post-COVID-19 world is rife, reflecting the desire for an economic relaunch more than the medical reality of the moment, we are still struggling to understand the effects that the pandemic is having on our societies. Those ripple effects are likely to outlive the pandemic, and some may only become visible after it has, hopefully, been eradicated.
One of the conversations that has emerged most clearly is linked to the use of Artificial Intelligence (AI) in healthcare, and concerns both its effectiveness and its ethics. This article will follow two major ethical questions that have dominated the public sphere up to now: the use of data tracking systems for forecasting viral spread, and the possible use of AI as decision support for the allocation of medical resources in emergency situations. The main question underpinning this article is: how will our approach to these challenges impact our future?
Exploring the possible answers to this question will lead us to analyse the impact of a dynamic socio-cultural environment on the predictive capacities of algorithm-based AI models. The article will emphasise the importance of integrating culturally specific dimensions into the development and deployment of AI systems, and discuss how to approach AI ethics in a culturally aware manner.
COVID-19
The pandemic we are living through has generated a series of unforeseen effects on local, regional and global scales. From rising instances of racism and of domestic violence linked to lockdown measures, to major disruptions in human activities that may generate the biggest economic contraction since World War II, we are experiencing a combination of phenomena that reminds us of the interconnectedness of our world.
The SARS-CoV-2 virus appeared in a context of decreased trust in public institutions and in scientific expertise at a global level, against a background of the increasing dominance of social media in spreading fake news and pseudo-scientific theories. This was a perfect storm, which has allowed not only the weakening of democratic institutions and the rise of authoritarian leadership, but also the rapid spread of the virus itself.
At the global level all efforts are geared towards controlling the spread of the virus, creating a vaccine, and treating those affected. Naturally, eyes turned to Artificial Intelligence and to the possibility of using it as a tool to help in these efforts. This process has revealed, and continues to reveal, complex and rather problematic interactions between AI models and the reality in which they are deployed, as well as the conflict between competing AI ethical principles.
Ethics: principles versus practices
In a lecture at Tübingen University, the former UN Secretary-General Kofi Annan said: “One thing that should be clear is that the validity of universal values does not depend on their being universally obeyed or applied. Ethical codes are always the expression of an ideal and an aspiration, a standard by which moral failings can be judged, rather than a prescription for ensuring that they never occur.”
This is a powerful statement, striking at the core of the ethical challenges of leadership. However, it may also contain a major flaw: while ethical codes can be framed as an expression of universal aspirations, the standards by which we judge moral failings cannot be equally universal. Whether we like it or not, morality is culturally dependent – and moral failings may well fall into a cultural blind spot for many of us. Yet this does not mean that we can advocate abandoning universal ethical codes in the name of ‘cultural particularities’ (a current practice among authoritarian figures, particularly regarding respect for human rights). It merely means that we need to be aware of how these aspirational universal codes are expressed in daily practices, and of how the transformation of those practices can (and does) generate new moral norms that in turn shed light on those very cultural blind spots.
Let’s take an example: Valuing human life is a universal ethical code. But what type of human life is more ‘valued’ than others in different societies? And how do these societies make decisions on that basis? Is a young life more valuable than an old one? How is this valuing expressed in daily practices? Is life at any cost more valuable than an individual choice to ‘not resuscitate’, or to retain dignity in dying? Is it possible to have an ‘equally valuable’ approach to human life even in moments of scarcity? Is collective survival more important than individual well-being – and can these even be separated?
These types of questions have emerged forcefully during the current COVID-19 pandemic crisis, and scientists, ethicists, and politicians are tackling the answers – or acting as if they knew them already.
To continue, let’s follow two major conversations that have dominated the public sphere lately: the use of data tracking systems for forecasting viral spread, and the possible use of AI as decision support for the allocation of medical resources in emergency situations. By analysing the conversations and practices around these topics, this text will advocate a bottom-up approach to the use of AI. The main arguments are that ethical codes may sometimes compete with one another, and that trying to codify them in universally applicable AI algorithms would probably generate new types of biases rather than eliminate existing ones. Thus, both deciding where to use AI and designing and relying on AI as a decision-making mechanism need to take as their starting points the practices that embody moral norms, not universal ethical codes and their presumed codification in AI algorithms. The immateriality of AI models has received a reality check, and the same is about to happen to AI ethics.
A material world
At a higher level of analysis, the pandemic is a reminder that our world is material, despite a discourse claiming that everything has now been virtualised, from markets to life itself. All of a sudden COVID-19 has forced us to experience at least three major types of materialisation:
Materialisation of borders. While borders have not always been easy to cross, and some frontiers have been more material than others, in the past three months transboundary movement has come to an almost complete halt. Most countries in the world have become inaccessible to those who are not their citizens or residents, and repatriation flights have more often than not been the only form of international travel. As I write this text the lockdown is easing in the European Union, but many other countries around the world remain closed to foreigners.
In parallel, extraordinary forms of collaboration at regional and global levels have shown that only continued openness can offer long-term solutions – for example, German hospitals in border regions taking in French patients to relieve over-stretched French hospitals. At the same time, displays of solidarity have also been met with suspicion, raising questions about the use of solidarity as a mechanism of soft power, particularly in the case of China.
(De-)Materialisation of movement. Movement has become at once materialised and virtual. During lockdown it entered a controlled phase at all levels, with much of the workforce taking part in a mass experiment in working from home. Many who perceived the ability to move as ‘natural’ are now experiencing it for the first time as a privilege. And movement has been displaced onto online platforms, dematerialising itself into bits and pieces of data (more on this later).
Materialisation of our bodies. Most importantly, we have been called upon to acknowledge the full extent of the importance of our bodies. We have, individually and collectively, dramatically come to realise that our lives are very real and unequivocally linked to our material bodies. The swings in the abstract indicators of the economy show that the entire global system is not separate from, but in fact heavily dependent on, our human bodies, their health and their movement (see above). This will contribute to the gradual dismantling of the illusion that we live in a virtual world in which the body is only one instrument among others, a tool to be refined in gyms and yoga sessions, or a resource to overstretch during long, caffeine-fuelled working hours. Somehow our bodies have become ourselves again.
Tracking
Data tracking systems (DTS) are not a novelty, and their use by the police is quite widespread in the US. So is their use by marketing companies that rely on data from individual users to push products through targeted advertising. As early as 2012 the question of data tracking while surfing the internet was brought to the public’s attention. The generalisation of smartphone use has made it possible to extend tracking from virtual movement to material movement in space and time. Apps that use the phone’s GPS, together with a thinly disguised but default option for the user (‘allow the app to access your location’), track, store, and sell movement data to third parties for marketing and targeted advertising. In some instances police forces can use the same data to track movements and ‘prevent crime’ – a contested practice that is not yet fully understood, let alone regulated.
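To give a sense of how low the technical barrier to this kind of tracking is, the sketch below uses the standard browser Geolocation API (a web analogue of the mobile permission prompt described above) to collect position fixes once the user has tapped ‘allow’ and forward them to a collection endpoint. The endpoint URL and payload shape are hypothetical, included for illustration only.

```typescript
// Minimal sketch: once the user grants the location prompt, the standard
// Geolocation API streams position fixes that can be forwarded anywhere.
// The endpoint and payload shape are hypothetical, for illustration only.

const TRACKING_ENDPOINT = "https://analytics.example/ingest"; // hypothetical third-party collector

function startLocationTracking(): number {
  // watchPosition() keeps firing as the device moves; it returns an id
  // that can later be passed to clearWatch() to stop tracking.
  return navigator.geolocation.watchPosition(
    (position: GeolocationPosition) => {
      const point = {
        lat: position.coords.latitude,
        lon: position.coords.longitude,
        accuracyMetres: position.coords.accuracy,
        timestamp: position.timestamp,
      };
      // Fire-and-forget upload of each movement point.
      void fetch(TRACKING_ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(point),
      });
    },
    (error: GeolocationPositionError) => {
      console.warn("Location unavailable or permission denied:", error.message);
    },
    { enableHighAccuracy: true, maximumAge: 60_000 }
  );
}
```

The point is not the code itself but how few lines separate a single consent tap from a continuous, sellable stream of movement data.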
The European Union (EU) enacted…