Universität Bielefeld

Can we use the open future?

Preparedness and innovation in times of self-generated uncertainty

© Andrei Damian on Unsplash

Is the future simply open, or can it be made more or less open? Awareness of the uncontrollable impact of present action on the future has recently sparked a debate about the risks of innovation and rational planning. Relying on Luhmann's concept of defuturization, the article confronts the two approaches of future-making and preparedness and proposes to combine them with reference to the management of innovation. This adds a purposeful dimension to the discourse on preparedness, which has so far been aimed only at confronting damaging events: one can also be prepared to seize and exploit novel opportunities.

ESPOSITO, E. (2024). Can we use the open future? Preparedness and innovation in times of self-generated uncertainty. European Journal of Social Theory.


Can a predicted future still be an open future?

Algorithmic forecasts and actionability in precision medicine

© National Cancer Institute on Unsplash

The openness of the future is rightly considered one of the qualifying aspects of the temporality of modern society. The open future, which does not yet exist in the present, implies radical unpredictability. This article discusses how, in the last few centuries, the resulting uncertainty has been managed with probabilistic tools that compute present information about the future in a controlled way. The probabilistic approach has always been plagued by three fundamental problems: performativity, the need for individualization, and the opacity of predictions. We contrast this approach with recent forms of algorithmic forecasting, which seem to turn these problems into resources and produce an innovative form of prediction. But can a predicted future still be an open future? We explore this specific contemporary modality of historical futures by examining the recent debate about the notion of actionability in precision medicine, which focuses on a form of individualized prediction that enables direct intervention in the future it predicts.

ESPOSITO, E., HOFMANN, D. & COLONI, C. (2023). Can a predicted future still be an open future? Algorithmic forecasts and actionability in precision medicine. History & Theory 0(0) (September 2023)


Predictive Policing

Das Polizieren der Zukunft und die Zukunft der Polizei

© Deepmind on Unsplash

With the increasing digitalisation of society, work is changing in many areas of society, the police included. This applies not only to changing crime structures and newly emerging fields of offences (cybercrime) (cf. e.g. Rüdiger/Bayerl 2018), but also to the way the police generate knowledge and translate it into police practices, a development also referred to as the datafication of police work. Software programs, and thus algorithms, have been used in police work for several decades (for instance in electronic case-processing systems or word-processing software). With the current wave of digitalisation, however, the police are now also implementing algorithmic analysis procedures that operate with self-learning systems (machine learning), which has substantial epistemic and practical implications. In what follows, we examine this thesis more closely with regard to police forecasting work (predictive policing). We proceed as follows: first, we briefly outline the basics of machine-learning algorithms and the related phenomenon of big data. We then describe the main features of predictive policing and discuss the key consequences of this new way of generating (crime) forecasts. We relate this to the increasing entanglement of prevention and repression associated with the use of such machine-learning methods.

EGBERT, S., ESPOSITO, E. & HEIMSTÄDT, M. (2023). Das Polizieren der Zukunft und die Zukunft der Polizei. In: Polizei.Wissen 6 (2): 20-23


Predictive Policing und die algorithmische Rekonfiguration polizeilicher Entscheidungen

© Bernard Hermant on Unsplash

In many organisations, algorithms that generate predictions are used in the course of digitalisation. This article analyses the impact of this algorithmisation on decisions in the police using the example of predictive policing. Predictive policing is the increasing use of forecasting software to predict and prevent criminal behaviour in police organisations. Based on the differentiation of two variants of police predictive software, which are characterised by different degrees of comprehensibility for their users, the article examines the effects of these algorithms on central decision-making premises of police organisations: programmes, communication channels and people. The analysis provides an outlook on the consequences of the future development of predictive software for the police as an organisation.

EGBERT, S., ESPOSITO, E. & HEIMSTÄDT, M. (2022). Predictive Policing und die algorithmische Rekonfiguration polizeilicher Entscheidungen. In: Soziale Systeme 26(1-2): 189-216.


Predictive Policing und die Neukonfiguration des Verhältnisses von Prävention und Repression

© Alexandre Debieve on Unsplash

Predictive policing, i.e. the police use of algorithmic data-analysis procedures to generate and implement operational forecasts of spatio-temporal and/or person-related risks of future crime, implies an increasing interweaving of preventive and repressive police measures. This is because the operational forecasts central to predictive policing, which can be translated more or less directly into police action, represent a (further) temporal advance of police intervention: crime risks that can be narrowed down in space-time or to a group or person can be processed with preventive intent. This temporal reconfiguration of police action has a momentous effect on the relationship between prevention and repression, since in the context of targeted and very concrete prevention efforts, police measures are carried out that, by definition, take place before crimes are committed and are not based on the existence of concrete dangers, because the quality of prediction is not sufficient for this. Nevertheless, these measures can have a repressive quality: in a legal sense, in that offences are dealt with proactively; and in a broader sense, oriented towards the semantic content of the concept of repression, in that despite their preventive intent they contain repressive elements associated with coercion, reducing the possibilities of action of the affected citizens, not only of (inclined) offenders (chilling effects). Overall, it is evident that the strict legal-terminological separation between prevention and repression is not suitable for adequately capturing future-oriented policing via operational forecasts in predictive policing. Rather, predictive policing is to be described as a prepressive practice that operates on the border between police and criminal law and has both preventive and repressive components.

EGBERT, S. (2022). Predictive Policing und die Neukonfiguration des Verhältnisses von Prävention und Repression. In: Feltes, Thomas; Klaas, Katrin; Thüne, Martin (eds.): Digitale Polizei. Frankfurt am Main: Verlag für Polizeiwissenschaft, pp. 113-129.


From Actuarial to Behavioural Valuation

The impact of telematics on motor insurance

Photo by Rayson Tan on Unsplash

Algorithmic predictions are used in insurance to assess the risk exposure of potential customers. This article examines the impact of digital tools on the field of motor insurance, where telematics devices produce data about policyholders’ driving styles. The individual’s resulting behavioural score is combined with their actuarial score to determine the price of the policy or additional incentives. Current experimentation is moving in the direction of proactivity: instead of waiting for a claim to arise, insurance companies engage in coaching and other interventions to mitigate risk. The article explores the potential consequences of these practices on the social function of insurance, which makes risks bearable by socialising them over a pool of insured individuals. The introduction of behavioural variables and the corresponding idea of fairness could instead isolate individuals in their exposure to risk and affect their attitude towards future initiatives.
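To make the pricing logic concrete, here is a minimal sketch of how an actuarial score and a telematics-based behavioural score might be blended into a premium. All names, the weighting scheme and the price multiplier are illustrative assumptions, not the pricing model of any actual insurer.

```python
# Illustrative sketch only: the function, its parameters and the
# weighting formula are hypothetical, not an insurer's real model.

def telematics_premium(base_premium: float,
                       actuarial_score: float,
                       behavioural_score: float,
                       weight: float = 0.3) -> float:
    """Blend a classic actuarial risk score (age, vehicle class,
    claims history, ...) with a behavioural score derived from
    telematics data (braking, speeding, night driving, ...).

    Both scores are assumed to be normalised to [0, 1], with higher
    meaning riskier; `weight` sets how much behaviour counts.
    """
    blended_risk = (1 - weight) * actuarial_score + weight * behavioural_score
    # Map the blended risk onto a price multiplier for the base premium.
    return base_premium * (0.5 + blended_risk)

# The same actuarial profile yields different prices once observed
# driving behaviour enters the calculation.
cautious = telematics_premium(1000.0, actuarial_score=0.6, behavioural_score=0.1)
risky = telematics_premium(1000.0, actuarial_score=0.6, behavioural_score=0.9)
```

In such a scheme the behavioural component can also be paid out as an incentive (a discount for careful driving) rather than folded into the premium itself.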

CEVOLINI, A. & ESPOSITO, E. (2022). From Actuarial to Behavioural Valuation. The impact of telematics on motor insurance. Valuation Studies 9(1): 109-139.


Artificial Communication

How Algorithms Produce Social Intelligence

book cover © 2022 Massachusetts Institute of Technology, MIT Press

A proposal that we think about digital technologies such as machine learning not in terms of artificial intelligence but as artificial communication.

Algorithms that work with deep learning and big data are getting so much better at doing so many things that it makes us uncomfortable. How can a device know what our favorite songs are, or what we should write in an email? Have machines become too smart? In Artificial Communication, Elena Esposito argues that drawing this sort of analogy between algorithms and human intelligence is misleading. If machines contribute to social intelligence, it will not be because they have learned how to think like us but because we have learned how to communicate with them. Esposito proposes that we think of “smart” machines not in terms of artificial intelligence but in terms of artificial communication.

To do this, we need a concept of communication that can take into account the possibility that a communication partner may be not a human being but an algorithm—which is not random and is completely controlled, although not by the processes of the human mind. Esposito investigates this by examining the use of algorithms in different areas of social life. She explores the proliferation of lists (and lists of lists) online, explaining that the web works on the basis of lists to produce further lists; the use of visualization; digital profiling and algorithmic individualization, which personalize a mass medium with playlists and recommendations; and the implications of the “right to be forgotten.” Finally, she considers how photographs today seem to be used to escape the present rather than to preserve a memory.

ESPOSITO, E. (2022). Artificial Communication: How Algorithms produce social intelligence. Cambridge (MA), London: MIT Press


A review of the book can be found at E&T.

Interviews with Elena Esposito about her book can be listened to on The Neutral Ground Podcast and New Books Network.



Coaching strategies in telematics motor insurance: control or motivation?

How insurers try to be proactive in risk mitigation

Photo by Sanjeevan SatheesKumar on Unsplash

One of the main issues underlying insurance contracts is moral hazard: if people are insured, their exposure to dangers could increase because they have fewer incentives to try to prevent accidents from happening. Digital technologies promise to transform the way insurance companies deal with moral hazard. On the one hand, these technologies monitor individual behaviour; on the other, they produce data which, in turn, are used to involve policyholders in coaching programs. A case in point is telematics motor insurance. A closer look at coaching programs, however, reveals a different picture.

CEVOLINI, A. (2022). Coaching strategies in telematics motor insurance: control or motivation? How insurers try to be proactive in risk mitigation. Movingdots, 21.03.2022.

https://www.movingdots.com/news/coaching-strategies-in-telematics-motor-insurance-control-or-motivation


Transparency versus explanation

The role of ambiguity in legal AI

© anhtuanto

Dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, so algorithmic explanations do not have to disclose the operations of the machine but have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a limitation of the autonomy of legal communication that underpins the modern rule of law.

ESPOSITO, E. (2021). Transparency versus explanation: The role of ambiguity in legal AI. Journal of Cross-disciplinary Research in Computational Law, Vol. 1 No. 1 (Nov. 2021).

esposito-online-first-final-pre-doi.pdf


Using Risk Analytics to Prevent Accidents Before They Occur

The Future of Insurance

Photo by Nick Fewings on Unsplash

While insurance was originally devised as a safety net that steps in to compensate for financial losses after an accident has occurred, the information generated by sensors and digital devices now offers insurance companies the opportunity to transform their business by considering prevention. We discuss a new form of risk analytics based on big data and algorithmic prediction in the insurance sector to determine whether accidents could indeed be prevented before they occur, as some now claim is possible. We will use the example of motor insurance where risk analytics is more advanced. Finally, we draw conclusions about insurance’s new preventive role and the effect it may have on the policyholders’ behavior.

GUILLEN, M. & CEVOLINI, A. (2021). Using risk analytics to prevent accidents before they occur – the future of insurance. The Capco Institute Journal of Financial Transformation, Vol. 54 (Nov. 2021): 76-83.

Guillen_Cevolini_Using-Risk-Analytics-to-Prevent_CAPCO_Journal-of-Financial-Transformation_54.pdf


Systems Theory and Algorithmic Futures

Interview with Elena Esposito

By introducing us to core concepts of Niklas Luhmann's theory of social systems, Elena Esposito shows their relevance for contemporary social sciences and the study of unsettled times. Contending that society is made not by people but by what connects them, as Luhmann does with his concept of communication, creates fertile ground for addressing societal challenges as diverse as the Corona pandemic or the algorithmic revolution. Esposito more broadly sees in systems theory a relevant contribution to critical theory and a genuine alternative to its Frankfurt School version, while extending its reach to further conceptual refinement and new empirical issues. Fueling such refinement is her analysis of time and the complex intertwinement between past, present and future, a core issue that runs throughout her work. Her current study on the future as a prediction caught between science and divination offers a fascinating empirical case, drawing a thought-provoking parallel between the way algorithmic predictions are constructed today and how divinatory predictions were constructed in ancient times.

ESPOSITO, E., SOLD, K. & ZIMMERMANN, B. (2021). Systems Theory and Algorithmic Futures: Interview with Elena Esposito. Constructivist Foundations 16(3): 356-361.

https://constructivist.info/16/3/356


A Pandemic of Prediction

On the Circulation of Contagion Models between Public Health and Public Safety

Photo by geralt on Pixabay

Digital prediction tools increasingly complement or replace other practices of coping with an uncertain future. The current COVID-19 pandemic, it seems, is further accelerating the spread of prediction. The prediction of the pandemic yields a pandemic of prediction. In this paper, we explore this dynamic, focusing on contagion models and their transmission back and forth between two domains of society: public health and public safety. We connect this movement with a fundamental duality in the prevention of contagion risk concerning the two sides of being-at-risk and being-a-risk. Both in the spread of a disease and in the spread of criminal behavior, a person at risk can be a risk to others and vice versa. Based on key examples, from this perspective we observe and interpret a circular movement in three phases. In the past, contagion models have moved from public health to public safety, as in the case of the Strategic Subject List used in the policing activity of the Chicago Police Department. In the present COVID-19 pandemic, the analytic tools of policing wander to the domain of public health – exemplary of this movement is the cooperation between the data infrastructure firm Palantir and the UK government’s public health system NHS. The expectation that in the future the predictive capacities of digital contact tracing apps might spill over from public health to policing is currently shaping the development and use of tools such as the Corona-Warn-App in Germany. In all these cases, the challenge of pandemic governance lies in managing the connections and the exchanges between the two areas of public health and public safety while at the same time keeping the autonomy of each.

HEIMSTÄDT, M., EGBERT, S. & ESPOSITO, E. (2021). A Pandemic of Prediction: On the Circulation of Contagion Models between Public Health and Public Safety. Sociologica 14(3): 1-24.

https://doi.org/10.6092/issn.1971-8853/11470


Insurtech tra rischio e mutualità

Photo by Markus Spiske on Unsplash

The new insurance business model driven by digital technologies is promising because, among many other things, it allows companies to profile customers in great detail and to offer them increasingly personalized solutions. From a sociological standpoint, however, a number of social issues arise that are worth investigating further.

CEVOLINI, A. (2020): Insurtech tra rischio e mutualità. Insurance Review, 79, November: 54-57.

https://www.insurancereview.it/insurance/contenuti/speciale/1913/insurtech-tra-rischio-e-mutualita


Il futuro dell'assicurazione

Opportunità e minacce delle tecnologie digitali nell' assicurazione del futuro

Photo by drmakete-lab on Unsplash

Big Data seems to reverse the information asymmetry between insurance companies and policyholders. Through the growing development of InsurTech, insurance companies might know more about the policyholder than the policyholder knows about herself. This reversal leads to complex issues of privacy, transparency and circularity of information. What is now called Insurance-of-Things, moreover, could have a disruptive impact on the insurance business, marking a turning point from a reactive approach to a proactive approach. The information available through algorithms could allow the insurer to know future damages in advance and to move from a compensatory approach to a preventive approach. In this paper, we briefly show how these changes could redefine business model, social performances and technical skills in the insurance sector.

CEVOLINI A. & ESPOSITO E. (2020). Il futuro dell'assicurazione. Opportunità e minacce delle tecnologie digitali nell'assicurazione del futuro. Futuri. Rivista Italiana di Future Studies, 13(7): 51-56.

il-futuro-dellassicurazione.pdf


From pool to profile

Social consequences of algorithmic prediction in insurance

Photo by Thomas Park on Unsplash

The use of algorithmic prediction in insurance is regarded as the beginning of a new era, because it promises to personalise insurance policies and premiums on the basis of individual behaviour and level of risk. The core idea is that the price of the policy would no longer refer to the calculated uncertainty of a pool of policyholders, with the consequence that everyone would have to pay only for her real exposure to risk. For insurance, however, uncertainty is not only a problem: shared uncertainty is a resource. The availability of individual risk information could undermine the principle of risk-pooling and risk-spreading on which insurance is based. The article examines this disruptive change first by exploring the possible consequences of the use of predictive algorithms to set insurance premiums. Will it endanger the principle of mutualisation of risks, producing new forms of discrimination and exclusion from coverage? In a second step, we analyse how the relationship between the insurer and the policyholder changes when the customer knows that the company has voluminous, and continuously updated, data about her real behaviour.
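The contrast between pooled and profiled premiums can be shown with a toy calculation (all figures invented): under mutualisation every policyholder pays the pool's average expected loss, while algorithmic profiling lets each premium track the individual prediction, leaving high-risk individuals alone with their exposure.

```python
# Toy illustration of the pooling argument; the loss figures are invented.

expected_losses = [100.0, 150.0, 200.0, 800.0, 250.0]  # per policyholder, per year

# Classic mutualisation: everyone pays the pool's average expected loss,
# so the high-risk individual's exposure is spread across the pool.
pooled_premium = sum(expected_losses) / len(expected_losses)

# Algorithmic profiling: each premium tracks the individual prediction,
# so the high-risk policyholder alone carries the full 800.0 exposure.
profiled_premiums = list(expected_losses)
```

Total revenue is identical in both cases; what changes is the distribution, i.e. whether uncertainty is shared or individualised.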

CEVOLINI, A. & ESPOSITO, E. (2020). From Pool to Profile: Social Consequences of Algorithmic Prediction in Insurance. Big Data & Society.

https://doi.org/10.1177/2053951720939228


Systemic Integration and the Need for De-Integration in Pandemic Times

Photo by 5187396 on Pixabay

The common response to a global emergency is a call for coordination. The paper argues, referring to systems theory, that the problem of our functionally differentiated society is not lack of integration, but rather an excess of integration. In dealing with threats that come from the environment, the opportunities for rationality in society lie in the maintenance and exploitation of differences, not in their elimination.

ESPOSITO, E. (2020). Systemic Integration and the Need for De-Integration in Pandemic Times. Sociologica 14(1): 3-20.

https://doi.org/10.6092/issn.1971-8853/10853



This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 833749).

