# Programmer


CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms

CARLA (Counterfactual And Recourse LibrAry) is a Python library for benchmarking counterfactual explanation methods across different data sets and different machine learning models. In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods. We have open-sourced CARLA and our experimental results on GitHub, making them available as competitive baselines. We welcome contributions from other research groups and practitioners.
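For flavor, here is a rough usage sketch following the catalog-style API shown in the project's README; exact class and method names may differ between CARLA versions, so treat this as an approximation rather than a definitive example:

```python
# Approximate CARLA usage, based on the project's README (names may vary by version)
from carla import DataCatalog, MLModelCatalog
from carla.recourse_methods import GrowingSpheres

dataset = DataCatalog("adult")          # one of the integrated benchmark data sets
model = MLModelCatalog(dataset, "ann")  # a pre-trained model from the model catalog
factuals = dataset.raw.iloc[:10]        # instances we want recourse for

gs = GrowingSpheres(model)              # one of the benchmarked recourse methods
counterfactuals = gs.get_counterfactuals(factuals)
```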


labml.ai Deep Learning Paper Implementations

This is a collection of simple PyTorch implementations of neural networks and related algorithms. The implementations are documented with explanations, and the website renders them as side-by-side formatted notes. We believe these will help you understand the algorithms better.
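As a flavor of that annotated style (this snippet is illustrative, not taken from the repository), here is a minimal scaled dot-product attention in PyTorch with the kind of inline notes the collection pairs with its code:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = q.size(-1)
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Block masked positions before the softmax
        scores = scores.masked_fill(mask == 0, float("-inf"))
    # Attention weights sum to 1 over the key dimension
    weights = F.softmax(scores, dim=-1)
    return weights @ v

# Example: batch of 2 sequences, length 5, dimension 16
q = k = v = torch.randn(2, 5, 16)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 5, 16])
```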


Explainability in Graph Neural Networks: A Taxonomic Survey

We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
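For example, one widely used metric in this literature is fidelity, which measures how much the model's prediction drops when the features an explanation marks as important are removed; in a simplified rendering of one common form:

$$ \mathrm{Fidelity}^{+} = \frac{1}{N}\sum_{i=1}^{N}\Big( f(G_i)_{y_i} - f\big(G_i^{\,1-m_i}\big)_{y_i} \Big) $$

Here $f(G)_y$ is the model's predicted probability of class $y$ for graph $G$, $m_i$ is the explanation mask, and $G_i^{1-m_i}$ is the graph with the important features removed; a large drop indicates the explanation captured what the model actually relied on.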


OpenMined: Open-Source Tools for Privacy-Preserving AI Technologies

With OpenMined, an AI model can be governed by multiple owners and trained securely on an unseen, distributed dataset. The mission of the OpenMined community is to create an accessible ecosystem of tools for private, secure, multi-owner-governed AI.
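The core idea behind multi-owner training is that each data owner computes model updates locally and only aggregated weights are shared. Below is a minimal federated-averaging sketch in plain PyTorch; this is a concept illustration, not OpenMined's actual API:

```python
import copy
import torch
import torch.nn as nn

def federated_average(global_model, owner_datasets, lr=0.01, local_steps=10):
    """One round of FedAvg: each owner trains a local copy, then weights are averaged."""
    local_states = []
    for x, y in owner_datasets:  # each owner holds (features, labels) privately
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(local(x), y)
            loss.backward()
            opt.step()
        local_states.append(local.state_dict())
    # Average parameters across owners; only model weights leave each site
    avg = {k: torch.stack([s[k] for s in local_states]).mean(0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Two hypothetical data owners with private toy datasets
model = nn.Linear(3, 1)
owners = [(torch.randn(32, 3), torch.randn(32, 1)) for _ in range(2)]
model = federated_average(model, owners)
```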


Undergraduate Diagnostic Imaging Fundamentals

The structure and content of this work have been guided by the curricula developed by the European Society of Radiology, the Royal College of Radiologists, and the Alliance of Medical Student Educators in Radiology, with guidance and input from Canadian Radiology Undergraduate Education Coordinators and the Canadian Heads of Academic Radiology (CHAR).


Medical Open Network for AI (MONAI), AI Toolkit for Healthcare Imaging

The MONAI framework is the open-source foundation being created by Project MONAI. MONAI is a freely available, community-supported, PyTorch-based framework for deep learning in healthcare imaging. It provides domain-optimized foundational capabilities for developing healthcare imaging training workflows in a native PyTorch paradigm.
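As a taste of that native-PyTorch workflow, here is a minimal sketch that builds one of MONAI's reference networks and runs a dummy 3D volume through it (parameter names vary slightly across MONAI releases, e.g. older versions use `dimensions` instead of `spatial_dims`):

```python
import torch
from monai.networks.nets import UNet

# A MONAI reference network, used like any ordinary PyTorch module
model = UNet(
    spatial_dims=3,            # 3D segmentation
    in_channels=1,             # e.g. a single-modality scan
    out_channels=2,            # background / foreground
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

x = torch.randn(1, 1, 96, 96, 96)  # dummy volume: (batch, channel, D, H, W)
logits = model(x)
print(logits.shape)  # torch.Size([1, 2, 96, 96, 96])
```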


Privacy Preserving AI – Andrew Trask, OpenMined

Learn the basics of secure and private AI techniques, including federated learning and secure multi-party computation. In this talk, Andrew Trask of OpenMined highlights the importance of privacy-preserving machine learning and shows how to use privacy-focused tools like PySyft.
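A minimal sketch of the remote-tensor workflow from PySyft tutorials of that era (the 0.2-style API; PySyft's interface has changed substantially in later releases, so treat this as a period illustration):

```python
import torch
import syft as sy

hook = sy.TorchHook(torch)              # extend torch tensors with send/get
bob = sy.VirtualWorker(hook, id="bob")  # a simulated remote data owner

x = torch.tensor([1.0, 2.0, 3.0]).send(bob)  # data now lives "on" bob
y = x + x                                    # computed remotely via pointer tensors
print(y.get())                               # retrieve result: tensor([2., 4., 6.])
```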


Interpretable Machine Learning (A Guide for Making Black Box Models Explainable)

The book focuses on machine learning models for tabular data (also called relational or structured data) and less on computer vision and natural language processing tasks. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.
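One model-agnostic technique the book covers is permutation feature importance; here is a minimal sketch with scikit-learn on a tabular dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train any black-box model on tabular data
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature and measure the drop in held-out accuracy
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```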