Gaël Varoquaux

Inria, National Institute for Research in Digital Science and Technology

Model evaluation, a machine-learning bottleneck



While machine-learning papers always come with amazing promises, these do not always carry over to wider settings. One reason for such shortcomings is that our evaluation is often detached from reality. As machine learners love to chase metrics, it is crucial that these metrics are chosen to reflect the application settings. In addition, conclusions drawn from an evaluation must be informed by the uncertainty associated with each model evaluation. In my talk, I will expose some seldom-discussed considerations on the choice of metrics for simple classification problems, and on evaluation uncertainty.
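The evaluation uncertainty mentioned in the abstract can be illustrated with a minimal sketch: bootstrapping the test set to put a confidence interval around a single accuracy number. The data below is synthetic (a toy classifier that is right about 80% of the time), purely for illustration; the abstract does not prescribe this particular method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy test set: labels from a hypothetical classifier that is ~80% accurate
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)

point_estimate = np.mean(y_true == y_pred)

# Bootstrap: resample the test set with replacement and recompute accuracy,
# to estimate the uncertainty due to the finite size of the test set
n_boot = 1000
accs = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    accs[i] = np.mean(y_true[idx] == y_pred[idx])

low, high = np.percentile(accs, [2.5, 97.5])
print(f"accuracy = {point_estimate:.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
```

With only 200 test samples, the resulting interval is typically several accuracy points wide, which is why comparing two models on their point estimates alone can be misleading.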



Gaël Varoquaux is a research director working on data science and health at Inria (the French national institute for research in digital science and technology). His research focuses on statistical-learning tools for data science and scientific inference, with an eye on applications in health and social science. He develops tools to make machine learning easier, with statistical models suited for real-life, uncurated data, and software for data science. For example, since 2008, he has been exploring data-intensive approaches to understanding brain function and mental health. He co-founded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python. Varoquaux has a PhD in quantum physics and is a graduate of École Normale Supérieure, Paris.



Carina Prunkl

University of Oxford, Institute for Ethics in AI

Pride and Prejudice – Implementing Ethics into AI Development



Trading, identifying, hiring, or firing – advances in machine learning drive innovation across sectors and societies. At the same time, it has become clear that significant challenges emerge from the use of AI systems. Some of these are quite general and as old as humanity itself; others are specific to machine learning, forcing us to reflect on our cultures and our values. This talk will address the ethical challenges associated with AI development, discussing the difficulties of risk prediction, responsible innovation, and the importance of communication between research cultures.



Carina Prunkl is a Research Fellow at the University of Oxford’s Institute for Ethics in AI, a Junior Research Fellow at Jesus College, Oxford, and an affiliate at Harvard University’s Black Hole Initiative. She is also a member of the Humanities Cultural Programme Steering Committee and works as an Ethics Advisor for Digital Catapult.

She works on the ethics and governance of artificial intelligence. Her main research focus is on autonomy and the ethics of automated decision-making in public-sector settings, though she is also interested in the more general question of how to implement ethical considerations into governance solutions. She currently co-teaches the undergraduate course on Ethics in AI for philosophy students, as well as Governance of AI for the EPSRC Centre for Doctoral Training in Autonomous and Intelligent Machines and Systems.

She holds a DPhil in Philosophy and an MSt in Philosophy of Physics from the University of Oxford as well as a Master’s and Bachelor’s degree in Physics from Freie Universität Berlin.