Systemic Risks from General-Purpose AI
Online seminar Systemic Risks from General-Purpose AI, given by Sylvie Delacroix.
When: Monday 15 December, 4:30 p.m.
Where: Online at the following link
Speaker: Sylvie Delacroix, King's College London (UK)
Abstract: AI systems deployed in healthcare, education, and law face a critical challenge: many forms of professional uncertainty resist the probabilistic quantification that current systems privilege. A physician pondering whether to document suspected domestic abuse faces uncertainties about appropriate inquiry, patient safety risks, and trauma-informed care that cannot meaningfully be reduced to confidence scores. When diagnostic systems express high algorithmic confidence while failing to communicate these contextual uncertainties, the resulting mismatch between expressed and actual uncertainty creates cascading systemic risks. This talk examines why technical improvements in uncertainty quantification (while valuable) cannot address the fundamental epistemological mismatch between algorithmic confidence and professional judgment. Drawing on my recent analysis of healthcare and judicial contexts, I show how this mismatch risks eroding professionals' capacity to refine the tacit, experience-based judgment that distinguishes expert from novice performance: precisely the forms of situated knowing that depend on engaging with uncertainty rather than resolving it prematurely. The solution requires shifting from designer-centric to genuinely participatory approaches: building technical architectures that enable professional communities to iteratively refine how systems communicate uncertainty, treating uncertainty expression itself as evolving professional knowledge rather than a fixed algorithmic problem.
Bio: Sylvie Delacroix's work focuses on data and machine ethics. She is the Inaugural Jeff Price Chair in Digital Law at King's College London, the founding Director of the Centre for Data Futures, and a visiting professor at the Centre for Language AI Research at Tohoku University (Japan). Her work has always been animated by a commitment to bridging the gap between theory and practice, which has led her to launch or be involved in a variety of ethics and public policy initiatives. She is currently working on agency-enhancing uncertainty communication features for LLMs deployed in morally loaded contexts, and is also examining the social sustainability of the data ecosystem that makes generative AI possible.
For further information, contact:
- Professor Daniele Quercia