Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability (with Lyria Bennett Moses)
Janet is a multidisciplinary scholar with research interests in criminal justice policy and practice, the sociology of organisation and occupation, and the social organisation of creativity. She is internationally recognised for her contributions to policing research, especially her work on police culture and socialisation, police reform, and the use of information technology in policing. Her major publications in this field include Changing Police Culture (Cambridge University Press 1997) and Fair Cop: Learning the Art of Policing (University of Toronto Press 2003). Janet has been awarded a number of major grants for criminological and sociolegal research, on topics ranging from policing, juvenile justice, restorative justice, and the work stress and wellbeing of lawyers, to Big Data analytics for national security and law enforcement. Since 2004 she has built a major research program on creativity and innovation, studying the creative practices of visual artists, research scientists and art-technology collaborations. She is co-editor of Creativity and Innovation in Business and Beyond (Routledge 2011) and Handbook of Research on Creativity (Edward Elgar 2013). Janet was elected Fellow of the Academy of Social Sciences in Australia in 2002 for distinction in research achievements.
The goal of predictive policing is to forecast where and when crimes will take place in the future. In less than a decade since its inception, the idea has captured the imagination of police agencies around the world. An increasing number of agencies are purchasing software tools that claim to help reduce crime by mapping the likely locations of future crime to guide the deployment of police resources. Yet the claims and promises of predictive policing have not been subjected to critical examination. This paper provides a long-overdue review of the available literature on the theories, techniques and assumptions embedded in various predictive tools. Specifically, it highlights three key issues concerning the use of algorithmic prediction in policing that researchers and practitioners should be aware of:
Assumptions: The historical data mined by crime-prediction algorithms do not reveal the future by themselves. The algorithms used to gain predictive insights rest on assumptions about accuracy, continuity, the irrelevance of omitted variables, and the primary importance of particular kinds of information (such as location) over others. In making decisions based on these algorithms, police are also directed towards particular kinds of decisions and responses to the exclusion of others. Understanding the assumptions inherent in predictive policing is crucial in critiquing the notion of data-based decision making as “scientific” in the sense of “value-free”.
Evaluation: Figures quoted by vendors of these technologies in the media imply that they are successful in reducing crime. However, these figures are not based on published evaluations, their methodologies are unclear, and their relevance to notions of success is assumed rather than analysed. While some evaluations have been conducted, and a high-quality evaluation is underway, there is currently insufficient evidence of the effectiveness of predictive policing programs.
Accountability: Finally, the paper explores the extent to which current practices align with traditional standards of accountability in policing. It argues that, in the case of algorithmic tools, accountability can only be maintained where there is transparency about the data deployed, the tools used, the assumptions implicit in the process, and the effectiveness of predictions. Such transparency would need to be both internal to the organisation (so that those making deployment decisions based on predictive algorithms understand the limitations of the tools they are using) and external (either to the general public or to an independent oversight body). Given the current state of play in multiple jurisdictions, the lack of transparency in both respects undermines accountability. The paper also explores the extent to which greater transparency would affect effectiveness.