A proposal that we think about digital technologies such as machine learning not in terms of artificial intelligence but as artificial communication.
Algorithms that work with deep learning and big data are getting so much better at doing so many things that it makes us uncomfortable. How can a device know what our favorite songs are, or what we should write in an email? Have machines become too smart? In Artificial Communication, Elena Esposito argues that drawing this sort of analogy between algorithms and human intelligence is misleading. If machines contribute to social intelligence, it will not be because they have learned how to think like us but because we have learned how to communicate with them. Esposito proposes that we think of “smart” machines not in terms of artificial intelligence but in terms of artificial communication.
To do this, we need a concept of communication that can take into account the possibility that a communication partner may not be a human being but an algorithm—which is not random and is completely controlled, although not by the processes of the human mind. Esposito investigates this by examining the use of algorithms in different areas of social life. She explores the proliferation of lists (and lists of lists) online, explaining that the web works on the basis of lists to produce further lists; the use of visualization; digital profiling and algorithmic individualization, which personalize a mass medium with playlists and recommendations; and the implications of the “right to be forgotten.” Finally, she considers how photographs today seem to be used to escape the present rather than to preserve a memory.
ESPOSITO, E. (2022). Artificial Communication: How Algorithms Produce Social Intelligence. Cambridge (MA), London: MIT Press.
A review of the book can be found at E&T.
One of the main issues underlying insurance contracts is moral hazard: if people are insured, their exposure to dangers could increase because they have fewer incentives to try to prevent accidents from happening. Digital technologies promise to transform the way insurance companies deal with moral hazard. On one side, these technologies monitor individual behaviour; on the other, they produce data which, in turn, are used to involve policyholders in coaching programs. A case in point is telematics motor insurance. If one looks more closely at coaching programs, however, things look different.
CEVOLINI, A. (2022). Coaching strategies in telematics motor insurance: control or motivation? How insurers try to be proactive in risk mitigation. Movingdots, 21.03.2022.
Dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. As human explanations do not require transparency of neural connections or thought processes, so algorithmic explanations do not have to disclose the operations of the machine but have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a possible limitation of the autonomy of legal communication that underpins the modern rule of law.
ESPOSITO, E. (2021). Transparency versus explanation: The role of ambiguity in legal AI. Journal of Cross-disciplinary Research in Computational Law, Vol. 1 No. 1 (Nov. 2021).
While insurance was originally devised as a safety net that steps in to compensate for financial losses after an accident has occurred, the information generated by sensors and digital devices now offers insurance companies the opportunity to transform their business by considering prevention. We discuss a new form of risk analytics based on big data and algorithmic prediction in the insurance sector to determine whether accidents could indeed be prevented before they occur, as some now claim is possible. We will use the example of motor insurance where risk analytics is more advanced. Finally, we draw conclusions about insurance’s new preventive role and the effect it may have on the policyholders’ behavior.
GUILLEN, M. & CEVOLINI, A. (2021). Using risk analytics to prevent accidents before they occur – the future of insurance. The Capco Institute Journal of Financial Transformation, Vol. 54 (Nov. 2021): 76-83.
By introducing us to core concepts of Niklas Luhmann’s theory of social systems, Elena Esposito shows their relevance for contemporary social sciences and the study of unsettled times. Contending that society is made not by people but by what connects them - as Luhmann does with his concept of communication - creates a fertile ground for addressing societal challenges as diverse as the Corona pandemic or the algorithmic revolution. More broadly, Esposito sees in systems theory a relevant contribution to critical theory and a genuine alternative to its Frankfurt School version, while extending its reach to further conceptual refinement and new empirical issues. Fueling such refinement is her analysis of time and the complex intertwinement between past, present and future - a core issue that runs throughout her work. Her current study on the future as a prediction caught between science and divination offers a fascinating empirical case for it, drawing a thought-provoking parallel between the way algorithmic predictions are constructed today and how divinatory predictions were constructed in ancient times.
ESPOSITO, E., SOLD, K. & ZIMMERMANN, B. (2021). Systems Theory and Algorithmic Futures: Interview with Elena Esposito. Constructivist Foundations 16(3): 356-361.
Digital prediction tools increasingly complement or replace other practices of coping with an uncertain future. The current COVID-19 pandemic, it seems, is further accelerating the spread of prediction. The prediction of the pandemic yields a pandemic of prediction. In this paper, we explore this dynamic, focusing on contagion models and their transmission back and forth between two domains of society: public health and public safety. We connect this movement with a fundamental duality in the prevention of contagion risk concerning the two sides of being-at-risk and being-a-risk. Both in the spread of a disease and in the spread of criminal behavior, a person at risk can be a risk to others and vice versa. Based on key examples, from this perspective we observe and interpret a circular movement in three phases. In the past, contagion models have moved from public health to public safety, as in the case of the Strategic Subject List used in the policing activity of the Chicago Police Department. In the present COVID-19 pandemic, the analytic tools of policing wander to the domain of public health – exemplary of this movement is the cooperation between the data infrastructure firm Palantir and the UK government’s public health system NHS. The expectation that in the future the predictive capacities of digital contact tracing apps might spill over from public health to policing is currently shaping the development and use of tools such as the Corona-Warn-App in Germany. In all these cases, the challenge of pandemic governance lies in managing the connections and the exchanges between the two areas of public health and public safety while at the same time keeping the autonomy of each.
HEIMSTÄDT, M., EGBERT, S. & ESPOSITO, E. (2021). A Pandemic of Prediction: On the Circulation of Contagion Models between Public Health and Public Safety. Sociologica, Vol. 14 n. 3: 1-24.
The new insurance business model, driven by digital technologies, is promising because it makes it possible, among many other things, to profile customers in fine detail, offering them increasingly personalized solutions. From a sociological standpoint, however, a number of social issues arise that are worth investigating further.
CEVOLINI, A. (2020): Insurtech tra rischio e mutualità. Insurance Review, 79, November: 54-57.
Big Data seems to reverse the information asymmetry between insurance companies and policyholders. Through the growing development of InsurTech, insurance companies might know more about the policyholder than the policyholder knows about herself. This reversal leads to complex issues of privacy, transparency and circularity of information. What is now called the Insurance-of-Things, moreover, could have a disruptive impact on the insurance business, marking a turning point from a reactive to a proactive approach. The information available through algorithms could allow the insurer to know future damages in advance and to move from a compensatory approach to a preventive one. In this paper, we briefly show how these changes could redefine business models, social performances and technical skills in the insurance sector.
CEVOLINI A. & ESPOSITO E. (2020). Il futuro dell'assicurazione. Opportunità e minacce delle tecnologie digitali nell'assicurazione del futuro. Futuri. Rivista Italiana di Future Studies, 13(7): 51-56.
The use of algorithmic prediction in insurance is regarded as the beginning of a new era, because it promises to personalise insurance policies and premiums on the basis of individual behaviour and level of risk. The core idea is that the price of the policy would no longer refer to the calculated uncertainty of a pool of policyholders, with the consequence that everyone would have to pay only for her real exposure to risk. For insurance, however, uncertainty is not only a problem: shared uncertainty is a resource. The availability of individual risk information could undermine the principle of risk-pooling and risk-spreading on which insurance is based. The article examines this disruptive change, first by exploring the possible consequences of the use of predictive algorithms to set insurance premiums. Will it endanger the principle of mutualisation of risks, producing new forms of discrimination and exclusion from coverage? In a second step, we analyse how the relationship between the insurer and the policyholder changes when the customer knows that the company has voluminous, and continuously updated, data about her real behaviour.
CEVOLINI, A. & ESPOSITO, E. (2020). From Pool to Profile: Social Consequences of Algorithmic Prediction in Insurance. Big Data & Society.
The common response to a global emergency is a call for coordination. The paper argues, referring to systems theory, that the problem of our functionally differentiated society is not lack of integration, but rather an excess of integration. In dealing with threats that come from the environment, the opportunities for rationality in society lie in the maintenance and exploitation of differences, not in their elimination.
ESPOSITO, E. (2020). Systemic Integration and the Need for De-Integration in Pandemic Times. Sociologica, Vol. 14 n. 1: 3-20.