Content provided by the Algorithmic Governance Research Network. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and made available directly by the Algorithmic Governance Research Network or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the procedure described here: https://pl.player.fm/legal.

Episode 2: Conversation with Simon Egbert and Matthias Leese on Criminal Futures: Predictive Policing and Everyday Police Work

1:24:16
 

Joining me today are Simon Egbert, Postdoctoral Fellow at Bielefeld University, on an ERC research project on The Future of Prediction, and Matthias Leese, Senior Researcher at the Center for Security Studies (CSS) in Zürich, to discuss their recent book Criminal Futures: Predictive Policing and Everyday Police Work, published in 2021 with Routledge. The book is available to download open-access here.

Today we discuss predictive policing and the ways in which it is transforming police work. Police departments across the globe are embracing algorithmic techniques to support decision-making through risk assessments and predictions based on big data and real-time analytics, utilizing tools such as facial recognition. Silicon Valley’s ‘technological solutionism’, to use Evgeny Morozov’s concept, has been making its way into law enforcement agencies across the globe, promising to smoothly, efficiently and effortlessly anticipate, predict, and control (future) criminal behaviour and deviance.

But predictive policing has met with resistance from civil society and academics alike. Even though data-driven predictions and algorithmic risk assessments are sold by tech developers as ‘neutral’ and ‘objective’ forms of ‘evidence’ and ‘intelligence’ – because technological – as something ‘solid’ and ‘hard’ in ‘liquid times’, critical social scientists tend to know better. What counts as data, how it is collected, and what is included and excluded all reflect historical, representational, cultural, gender, and other inequalities and biases. Prejudices about the criminality of certain groups can be built into crime data, reinforcing those prejudices rather than dispelling them. We increasingly read about systems trained on biased and ‘dirty’ data, about ‘rogue algorithms’, ‘algorithmic injustice’, and violations of human rights and civil liberties. As Cathy O’Neil put it, algorithms can create ‘a pernicious feedback loop’, where ‘policing itself spawns new data, which justifies more policing’ (O’Neil 2016: 87). In 2020, acting on these insights, the city of Santa Cruz in California, one of the earliest adopters of predictive policing, became the first US city to ban the use of predictive technologies in policing.
Calls for ethical, transparent and explainable AI are emerging both from within computer science, law and the social sciences, and from policymakers and civil society. It is clear that neither the development nor the adoption of these technologies happens in a cultural, political or economic vacuum. In many countries, for instance, police forces are experiencing financial cuts and increasing pressure to outsource certain tasks to private actors, often accompanied by organizational reform. Demands on response time, results, performance, and efficiency are growing while resources may be shrinking, structurally creating a market for a wide range of optimization tools for police work. Simon Egbert and Matthias Leese have studied predictive policing, the datafication of security, and the transformation of police work ethnographically in Germany and Switzerland. In this podcast, we discuss in detail the reality behind the sleek commercials for predictive policing software tools that promise to forecast crime and control futures. Are we headed towards a dystopian society of total surveillance, social sorting, and control, or a utopia of a perfectly optimized police force? What futures lie ahead for predictive policing, and what will the police force of the future look like?

Text © Tereza Østbø Kuldova, 2021

Produced with the financial support of The Research Council of Norway under project no. 313626 – Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL).


