Am I Responsible for the (Digital) Future?


Written by Bernd Carsten STAHL, Professor of Critical Research in Technology, School of Computer Science, University of Nottingham

One reason why I am interested in the concept of responsible digital futures is that I have long been looking for an answer to the question of what I am responsible for and, by implication, what I am not responsible for. I suspect that this is a very typical question that most of us ask in one form or another at many points in our lives. My awareness of the difficulties of personal responsibility grew rapidly when I joined the German army in the late 1980s. I learned that my actions could have significant immediate impacts on others, for example when I gave orders that they had to obey. In the army I was part of artillery reconnaissance, i.e. the branch of the military that identifies targets for artillery to fire on. While the German army never had nuclear weapons, it did have delivery systems that were designed to deliver nuclear payloads on behalf of our NATO allies, and these systems were part of the German artillery. To put it differently, I understood that, in the case of a war, I would have been part of the socio-technical system responsible for targeting nuclear bombs. This was a fact that I struggled with immensely and that led me to ask the question of what I am responsible for.

As part of my military duties, I studied industrial engineering at the University of the German Armed Forces in Hamburg (now Helmut Schmidt University). During the course of my studies I attended modules on the philosophy of technology, a topic that I continued to pursue after I finished my degree and went back to regular duties. I registered to study philosophy at the German equivalent of the Open University and gained an MA in philosophy with a dissertation on the responsibility of the engineer in the context of technology assessment. My PhD built on this and looked at a broader concept of responsibility in information systems.

The concept of responsibility has thus been part of my academic journey for more than 30 years. I was therefore pleasantly surprised by the emergence of the concept of responsible (research and) innovation (RI), feeling that I had a good conceptual starting point from which to contribute to this debate. Since early 2010 I have been involved in a number of projects on RI, at both the UK and the EU level.

Pretty much all flavours of RI agree that it includes an active engagement with possible and likely futures. The AREA framework starts with “anticipation”. It builds on Stilgoe et al.’s (2013) work, which similarly emphasises the anticipatory component of RI. And it is highly plausible that, to be responsible for research and innovation, we need to think about their future consequences. However, thinking about responsibility for the future raises at least two fundamental questions:

  1. How can we know what the future consequences of current actions will be?
  2. What would a desirable future, one that we can aim to achieve through present actions, look like?

The first question is fundamentally an epistemological one that has no clear and unambiguous answer. There are numerous futures and foresight methods, but none of them is infallible. Employing such methodologies has the advantage of rendering assumptions about the future clear and transparent, but that does not mean that the anticipated futures will be realised.

The second question is even more difficult to answer, as it combines normative and epistemological uncertainties. In modern pluralistic societies there is typically no consensus on what a desirable present would look like. We have different values and preferences, which often conflict, thereby rendering agreement on a substantive vision of society unlikely. Democracies deal with this problem through structures and processes that are meant to provide practical ways of collectively deciding what our world should look like. Such a procedural approach is generally accepted, but it makes it difficult for the RI scholar to determine whether research or innovation leads to acceptable or desirable outcomes. This question is rendered even more difficult by the fact that our normative preferences change. What we find acceptable now may well no longer be acceptable in 10 years’ time. If this is true, then how can we shape research and innovation now to achieve a future that we cannot even describe?

The challenge arising from this tension between the aspiration to accept stewardship for the future and the impossibility of knowing what this future holds is at the core of dealing with responsible digital futures. The reference to “digital”, that is, to computing technologies, which are famously logically malleable (Moor, 1985), exacerbates the problem even further. The epistemological problem of knowing the future is reflected in the nature of computing technologies, which by definition are open to a wide range of uses that even the designers of the technology have often not foreseen.

What all of this tells us is that we do not know what the digital future holds, and that we can at best make educated guesses with regard to the consequences of our actions. At the same time, we are inexorably moving towards the future and making decisions that shape it. We cannot simply sit back and enjoy the show. And this implies that we have some sort of responsibility. Engaging with responsible digital futures is thus part of the answer to the question of what I am responsible for. I may never know whether I am doing the right thing or whether another course of action would have had better consequences. But at the same time, my action or inaction has consequences, and for those I need to answer; I need to respond, to take responsibility. Despite the limitations of engaging with responsibility for future states, the alternative is to abdicate responsibility, which to me seems the weaker option.

References

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580.