Category: Seminars and Conferences
Status: Current
23 February 2026 - 4:30 pm

International Law as AI Design Code: Building AI Systems on Foundations of Global Consensus

Online

Online seminar International Law as AI Design Code: Building AI Systems on Foundations of Global Consensus, given by Esther Jaromitski.

When: Monday 23 February, 4:30 pm
Where: Online at the link: http://tiny.cc/4d2z001
Speaker: Esther Jaromitski

Abstract: International law represents humanity's most ambitious attempt at codifying shared values across borders, yet it remains almost entirely absent from AI system design. This talk argues that treaties and customary international law should serve not merely as external constraints on deployed AI, but as foundational design principles from the earliest stages of development. Drawing on the UN Convention on the Rights of the Child as a case study, I examine how AI-powered toys and children's technologies could be fundamentally reimagined if developers treated international human rights frameworks as design specifications rather than compliance afterthoughts. What if Article 3's "best interests of the child" principle became a core consideration in product requirement documents, or Article 12's right to be heard inspired child consultation processes before feature lock? The approach is not about imposing public international law obligations on companies; it is about recognising that these treaties represent decades of global deliberation on human flourishing, and that this hard-won consensus is a resource the AI industry is leaving largely untapped. For companies trying to design technology that works for the world, this body of law offers a starting point: the work of defining shared human values has already been done. The Convention on the Rights of the Child, for example, offers not only substantive protections but also a methodological blueprint for AI development: meaningful stakeholder participation, including children themselves, before design decisions are locked in. These two worlds, international law and AI development, currently operate in near-total isolation. Bridging them offers a path toward AI systems built on foundations of codified global consensus rather than the implicit values of their creators. This talk explores the first steps toward bringing international law closer to technologists, so that they can start using it comfortably in their early-stage thinking.

Bio: Esther Jaromitski is a published novelist, legal scholar, and senior adviser specialising in international law, AI governance, and trust-and-safety enforcement for emerging technologies. Her work focuses on how platforms, AI-driven systems, and critical digital infrastructure can facilitate international crime, and on embedding accountability into global technology systems. She holds degrees from Leiden University College and Queen Mary University of London, where she recently completed her PhD on international criminal liability for social media platforms and algorithmic systems. Her research was funded by the Konrad-Adenauer Foundation. Previously, Esther served as a legal adviser to the EU Delegation to the United Nations in New York and conducted research for the United Nations International Law Commission and the United Nations Office on Drugs and Crime, amongst others. She has taught Internet Regulation on the Technology Law Master's programme at Queen Mary University of London. Currently, Esther is a Senior Adviser at the UK Government's Department for Science, Innovation and Technology; she speaks in a personal capacity, and none of her external work represents the views of the UK Government. Esther is an editor at the Oxford Journal of Technology and Law and the Queen Mary Law Journal. Her work has been published in the Oxford Human Rights Hub, the UNODC Library, and other outlets.

For further information, contact: Professor Daniele Quercia