Written by Bernd Stahl, Professor of Critical Research in Technology, School of Computer Science, University of Nottingham
The ethics of Artificial Intelligence (AI) continues to be a hot topic of debate. Recent developments such as the release of ChatGPT have raised new questions about the technical capabilities of AI, its social and organisational consequences, and the ethical concerns it raises (Dwivedi et al., 2023; Eke, 2023). While new technologies continue to fuel this discussion, many of the topics it raises are well-established. Questions of bias, transparency or privacy are not new, and neither are concerns about the justice of distribution, employment or the influence of technology on warfare. What sets the ethics of AI debate apart from earlier discussions of the ethics of digital technologies (Stahl, 2021) seems to be the speed of development, the significance of possible impacts, and the multitude and complexity of the issues involved.
In the SHERPA project, which I coordinated between 2018 and 2021, we therefore wanted to bring some order to this debate by understanding what subject experts think about it. The idea was to see which issues are deemed most important and how they could or should be addressed. To collect this expert view, we undertook a Delphi study whose results have just been published (Stahl et al., 2023).
The logic of the Delphi study, which I believe mirrors much of the broader ethics of AI debate, can be described as follows. AI is made up of a range of techniques, artefacts and approaches. These have characteristics (e.g. the need for large datasets, or opacity) which, in turn, can cause ethical concerns (e.g. bias, discrimination, exclusion). A sound understanding of these characteristics and ethical concerns should make it possible to identify ways to address the issues. Once one has a good understanding of the technical characteristics, ethical concerns and mitigation options, one can prioritise practical steps to address the ethics of AI. Our Delphi study included three rounds of interaction with experts through online surveys. The first round asked them to name the key issues and ways of addressing them. The second round was used to prioritise the issues as well as the mitigation measures. The final round was designed to determine consensus on the prioritisation of potential governance measures.
The responses allowed us to draw a number of interesting conclusions. One important insight was that the experts did not converge in their evaluation of the issues or of the ways to address them. There were no strong favourites in either category. Somewhat surprisingly, the experts did not seem to think highly of legislative interventions, despite much high-level activity around specific legislation, such as the EU AI Act (European Commission, 2021). Contrary to our expectations, the experts ranked most highly those interventions that were of a general nature, including measures geared towards the creation of knowledge and the raising of awareness.
The findings from our Delphi study suggest that the framing of the ethics of AI debate outlined earlier may not be the best way of thinking about the topic. In some cases it is no doubt true that specific characteristics lead to identifiable ethical problems, which may be addressed through targeted interventions. Overall, however, the AI ethics discourse does not seem to be captured well by this logic. This may be because many of the ethical issues are not immediate consequences of the technology but are caused by, and located in, the broader set of socio-technical systems that constitute and make use of AI. If this is true, then it will be important to think about how we can move beyond the current state of the debate, which focuses very much on specific interventions such as ethics guidelines (Jobin et al., 2019), standardisation (IEEE Computer Society, 2021) or auditing (Kazim et al., 2021), to name just some examples. Instead, we need a better understanding of how these different interventions can be combined and aligned, with a view to addressing the overall socio-technical AI ecosystem.
If you want to see some more of the detail of the Delphi study that informs this post, you can directly download the paper.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060.
European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final). European Commission.
IEEE Computer Society. (2021). IEEE Standard Model Process for Addressing Ethical Concerns during System Design (IEEE Std 7000-2021) [Standard].
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Kazim, E., Denny, D. M. T., & Koshiyama, A. (2021). AI auditing and impact assessment: According to the UK Information Commissioner's Office. AI and Ethics.
Stahl, B. C. (2021). From computer ethics and the ethics of AI towards an ethics of digital ecosystems. AI and Ethics, 2.
Stahl, B. C., Brooks, L., Hatzakis, T., Santiago, N., & Wright, D. (2023). Exploring ethics and human rights in artificial intelligence – A Delphi study. Technological Forecasting and Social Change.