Theresa Stadler

Trustworthy and Responsible AI.


I am a Senior Data Scientist at the Swiss Data Science Center in Lausanne (Switzerland) where I am part of the Innovation Team focused on data science projects in the humanitarian data space.

Before joining the SDSC, I was a lecturer and postdoctoral researcher at the Security & Privacy Engineering Lab at EPFL (Switzerland) led by Carmela Troncoso.

My work focuses on the responsible deployment of data-driven technologies. To me, "Responsible AI" is more than a buzzword. AI systems deployed in the real world affect real people, sometimes with real adverse effects on their lived experiences. To avoid these harms, I follow a set of principles in my work that I have distilled over the years:

Beyond “responsible use”, we need “responsible design”

To avoid harms, ethics needs to be baked into a project from the very first steps. It is not sufficient to only think about potential misuse just before deployment.

Responsible design starts with identifying and acknowledging risks

We can only fix what we know about. That is why a thorough risk analysis is essential.

Responsible design must consider fundamental trade-offs between risks and benefits

Certain trade-offs between benefits and risks are inherent and cannot be resolved even with the best technical mitigation measures.

My research in this area has been featured in multiple national media outlets and continues to inform policy makers at the national and European levels.

Before joining EPFL, I worked as a researcher for Privitar, a London-based start-up, where I developed enterprise software that implements privacy-enhancing technologies and aims to make these technologies available to organisations at scale. I hold a PhD in Computer Science from EPFL (Switzerland) and a Master's degree in Computational Neuroscience (Biomathematics) from the University of Tübingen (Germany).

news

Jun 15, 2025 New policy paper Purpose First: The Need for a Paradigm Shift in Privacy-preserving Data Sharing
Jun 02, 2025 Interview with Svea Eckert and Eva Wolfangel on the “They Talk Tech” podcast and for heise online about privacy engineering.
May 19, 2025 Invited talk at the Responsible AI Seminar at Nokia Bell Labs.
Nov 01, 2024 Public defence of my PhD thesis On the Fundamental Limits of Privacy Enhancing Technologies
Sep 10, 2024 Talk and panel discussion at the Synthetic Data for AI Conference organised by the European Commission.

selected publications

  1. Conference
    The Fundamental Limits of Least-Privilege Learning
    Theresa Stadler, Bogdan Kulynych, Nicolas Papernot, and 2 more authors
    In Proceedings of the 41st International Conference on Machine Learning (ICML 24), 2024
  2. Conference
    Synthetic Data – Anonymisation Groundhog Day
    Theresa Stadler, Bristena Oprisanu, and Carmela Troncoso
    In 31st USENIX Security Symposium (USENIX Security 22), 2022
  3. Preprint
    Decentralized privacy-preserving proximity tracing
    Carmela Troncoso, Mathias Payer, Jean-Pierre Hubaux, and 8 more authors
    arXiv preprint arXiv:2005.12273, 2020