Thinking about ethical issues of AI before they happen – AI impact assessments

Written by Bernd Stahl, Professor of Critical Research in Technology, School of Computer Science, University of Nottingham.

In light of the rapidly growing discussion of the ethical and social concerns raised by artificial intelligence (AI), it seems fairly uncontroversial to say that it would be desirable to understand these concerns before they are realised in practice. This can never be achieved completely and comprehensively, because the future remains unknown. However, one can argue that this should not stop us from trying: some idea of likely consequences, however limited and flawed it may be, must be preferable to having no idea at all. The principle that it is desirable to have a reasonable sense of what is likely to result from particular activities is of course not limited to AI. It is arguably the driving force behind many future-oriented activities, including a family of processes that fall under the heading of “impact assessment”.

Impact assessments are nothing new and have been applied across a broad range of fields and topics. There are environmental impact assessments, social impact assessments and many others. In the field of digital technologies, the EU’s General Data Protection Regulation (GDPR, 2016) has introduced a requirement to undertake data protection impact assessments under certain circumstances. There are, furthermore, well-established processes such as risk assessment that fall into a similar category. It is therefore not surprising that the application of impact assessment principles to AI and related technologies forms part of the array of approaches that have been proposed to deal with the possible ethical and social issues arising from this family of technologies.

While there have been numerous suggestions to develop such AI-related impact assessments (AI-IAs) and several examples of their implementation, what has been missing so far is an overview of what these different proposals have in common, where they diverge, how they work and how they are positioned in broader AI ecosystems. In our paper published in the journal Artificial Intelligence Review (Stahl et al., 2023), we therefore present what we believe is the first systematic review of such AI-IAs. Our focus is on specific proposals that provide clear guidance on how such impact assessments may be undertaken.

Starting with a total of 138 candidate documents, we identified 38 actual AI-IAs and subjected them to a rigorous qualitative analysis with regard to their purpose, scope, organisational context, expected issues, timeframe, process and methods, transparency and challenges. Our review demonstrates some convergence between AI-IAs, but it also shows that the field has not yet reached full agreement on content, structure and implementation.

The article suggests that AI-IAs are best understood as a means to stimulate reflection and discussion concerning the social and ethical consequences of AI ecosystems. Based on the analysis of existing AI-IAs, we describe a baseline process for undertaking AI-IAs that AI developers and vendors can implement, and that regulators and external observers can use as a critical yardstick to evaluate organisations’ approaches to AI.

If you would like to learn more about our findings, the open access paper can be downloaded freely using this link.

References

GDPR. (2016), “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)”, Official Journal of the European Union, p. L119/1.

Stahl, B.C., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., et al. (2023), “A systematic review of artificial intelligence impact assessments”, Artificial Intelligence Review, doi: 10.1007/s10462-023-10420-8.