The rapid developments in the field of Artificial Intelligence (AI) worry many people. Alongside very concrete fears about their own jobs, or questions about responsibility for AI-based decisions, AI also provokes more diffuse anxieties, such as a sense of lost control or helplessness. One reason for this is that many AI systems and algorithms lack transparency: the new technology can seem like an unfathomable black box. Sandra Wachter is doing something about it. She investigates how AI and other new technologies can be regulated and designed more fairly and transparently – and how to strike an appropriate balance between innovation and security in the process. To this end, she advises governments, NGOs and the United Nations and has also helped shape the regulatory discussion on the European AI Act.
Sandra Wachter holds a doctorate in law focussing on technology law and regulation; she also has an additional qualification in social science. Her research on the ethical and legal implications of new technologies, especially AI, is ground-breaking and offers approaches to tackling practical problems, many of which have already flowed into legislation and corporate processes. It thus makes a significant contribution to avoiding bias in AI systems as well as maintaining human rights, data protection and consumer protection.
Sandra Wachter is one of the founders of the research field of explainable AI. One specific example of her work is the Conditional Demographic Disparity test, which she helped to develop in 2020. It is an open-access tool that works like an alarm system for AI bias, and it has already been acquired and implemented by Amazon for its cloud computing platform, Amazon Web Services. Human prejudice unwittingly incorporated into AI training data can distort a model's outputs and thus produce harmful results. The test provides a measure of inequality within a dataset and can therefore flag discrimination in, for example, job recruitment or loan approval. And that's not all: it takes account of the standards of fairness used in European courts of law and identifies the underlying factors that may be driving the bias, delivering pointers for optimising processes.
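The intuition behind the test can be sketched in a few lines of code. The demographic disparity (DD) of a group measures whether it is over-represented among rejected outcomes relative to accepted ones; the conditional variant (CDD) averages that disparity across strata of a conditioning attribute (such as the department being applied to), weighted by stratum size, which helps separate genuine discrimination from Simpson's-paradox effects. The following is a minimal illustrative sketch, not Wachter's or Amazon's reference implementation; the record fields (`group`, `accepted`) and the stratum key are hypothetical names chosen for the example.

```python
from collections import defaultdict

def demographic_disparity(records, group):
    """DD for `group`: its share among rejected outcomes minus its share
    among accepted outcomes. Positive values mean the group is
    over-represented among rejections."""
    rejected = [r for r in records if r["accepted"] == 0]
    accepted = [r for r in records if r["accepted"] == 1]
    if not rejected or not accepted:
        return 0.0
    p_rejected = sum(r["group"] == group for r in rejected) / len(rejected)
    p_accepted = sum(r["group"] == group for r in accepted) / len(accepted)
    return p_rejected - p_accepted

def conditional_demographic_disparity(records, group, stratum_key):
    """CDD: demographic disparity computed within each stratum
    (e.g. each department), then averaged weighted by stratum size."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    n = len(records)
    return sum(len(rs) / n * demographic_disparity(rs, group)
               for rs in strata.values())
```

Used as an "alarm system", a practitioner would compute the CDD for each protected group over a hiring or lending dataset; values far from zero in either direction suggest the process deserves closer scrutiny.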
As a Humboldt Professor for Artificial Intelligence in Potsdam, Sandra Wachter will be associated with the Data and AI Cluster at the Hasso Plattner Institute and contribute to re-directing the cluster towards the fairness and explainability of AI methods. With her legal and technical expertise, she is called upon to help develop practicable regulatory and control procedures and to mediate in an interdisciplinary fashion between research areas as well as between research and practical applications. Her work is particularly relevant to the ethical design of AI systems in medical applications. Consequently, close collaboration is foreseen with the Digital Health Cluster and the HPI research division at Mount Sinai Hospital in New York, United States.
Brief bio
After completing her doctorate at the University of Vienna in 2016, Sandra Wachter first moved to the Alan Turing Institute in London and then, in 2017, to Oxford University, both in the United Kingdom, as a postdoctoral research fellow. In 2019, she became an associate professor and, in 2022, Professor of Technology and Regulation at the Oxford Internet Institute. She has twice received an O²RB Excellence in Impact Award, most recently in 2021, for her significant international impact on law, policy and business practice around the ethical use of artificial intelligence. The magazine Computer Weekly named her amongst the Top 50 Women in UK Tech Rising Stars and, in 2020, she was a finalist in the Innovator of the Year category at the AIconics Awards. Among other roles, she is a member of the Advisory Board of Oxford University Press and of the working group on IT Security, Privacy, Law and Ethics at the German AI platform "Lernende Systeme", which is funded by the BMBF (Federal Ministry of Education and Research).
Sandra Wachter has been selected for a Humboldt Professorship and is currently conducting appointment negotiations with the German university that nominated her for the award. If the negotiations end successfully, the award will be granted in 2025.