Diagnostic uncertainty calibration: towards reliable machine predictions in medical domain
We further formalize the metrics for higher-order statistics, including inter-rater disagreement, in a unified way, which enables us to assess the quality of distributional uncertainty. In addition, we propose a novel post-hoc calibration method that equips trained neural networks with calibrated distributions over class probability estimates. In a large-scale medical imaging application, we show that our approach significantly improves the quality of uncertainty estimates across multiple metrics.
The State of AI Ethics Report (June 2020)
It has never been more important to keep a sharp eye on the development of this field and how it is shaping our society and our interactions with each other. With this inaugural edition of the State of AI Ethics, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions.
Undergraduate Diagnostic Imaging Fundamentals
The structure and content of this work have been guided by the curricula developed by the European Society of Radiology, the Royal College of Radiologists, and the Alliance of Medical Student Educators in Radiology, with guidance and input from Canadian Radiology Undergraduate Education Coordinators and the Canadian Heads of Academic Radiology (CHAR).
Machine learning in medicine: a practical introduction
Following visible successes on a wide range of predictive tasks, machine learning techniques are attracting substantial interest from medical researchers and clinicians. We address the need for capacity development in this area by providing a conceptual introduction to machine learning alongside a practical guide to developing and evaluating predictive algorithms using freely available open-source software and public domain data.
Montreal AI Ethics Institute: Response to the European Commission’s white paper on AI
In February 2020, the European Commission (EC) published a white paper entitled, On Artificial Intelligence – A European approach to excellence and trust. This paper outlines the EC’s policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. We reviewed this paper and published a response addressing the EC’s plans to build an “ecosystem of excellence” and an “ecosystem of trust,” as well as the safety and liability implications of AI, the internet of things (IoT), and robotics.
Interpretable Machine Learning (A Guide for Making Black Box Models Explainable)
The book focuses on machine learning models for tabular data (also called relational or structured data) and less on computer vision and natural language processing tasks. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.