# General

CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization

CNN Explainer tightly integrates a model overview that summarizes a CNN’s structure, and on-demand, dynamic visual explanation views that help users understand the underlying components of CNNs. Through smooth transitions across levels of abstraction, our tool enables users to inspect the interplay between low-level mathematical operations and high-level model structures.
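
The "low-level mathematical operation" the tool lets users step through is the sliding-window convolution (strictly, cross-correlation). A minimal numpy sketch of that operation for a single channel and stride 1; this is illustrative only and not taken from CNN Explainer's source:

```python
import numpy as np

def conv2d_single(image, kernel):
    """Valid (no padding), stride-1 cross-correlation of one
    single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value: elementwise product of a kernel-sized
            # patch with the kernel, then a sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # simple vertical-edge filter
print(conv2d_single(image, edge_kernel))        # 3x3 feature map
```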

Between words and characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP

In this survey, we connect several lines of work from the pre-neural and neural eras, showing how hybrid word-and-character approaches as well as subword approaches based on learned segmentation have been proposed and evaluated. We conclude that there is no, and likely never will be, a silver-bullet solution for all applications, and that thinking seriously about tokenization remains important for many applications.
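
As a flavour of the "learned segmentation" family the survey covers, here is a toy byte-pair-encoding learner that repeatedly merges the most frequent adjacent symbol pair. It is illustrative only; production tokenizers (e.g., SentencePiece or GPT-2's BPE) differ in many details:

```python
from collections import Counter

def learn_bpe_merges(words, num_merges):
    """Toy BPE: `words` maps a word to its corpus count."""
    vocab = {tuple(w): c for w, c in words.items()}  # word as a tuple of symbols
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for symbols, count in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])  # merge the pair
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = new_vocab.get(tuple(out), 0) + count
        vocab = new_vocab
    return merges

corpus = {"lower": 5, "lowest": 2, "newer": 6, "wider": 3}
print(learn_bpe_merges(corpus, 5))  # first merge is ('e', 'r')
```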

Ethics-based auditing of automated decision-making systems: intervention points and policy implications

Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS.

AI and the Future of Skills, Volume 1

The OECD launched the Artificial Intelligence and the Future of Skills project to develop a programme that could assess the capabilities of AI and robotics and their impact on education and work. This report represents the first step in developing the methodological approach of the project.

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning

This paper provides a succinct overview of the emerging theory of overparameterized ML (henceforth abbreviated as TOPML) that explains these recent findings from a statistical signal processing perspective. We emphasize the unique aspects that define the TOPML research area as a subfield of modern ML theory and outline interesting open questions that remain.
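
A central object in this literature is the minimum-norm interpolator of an overparameterized linear model. A small numpy sketch (illustrative, not from the paper) of computing it via the pseudoinverse, so that the training data are fit exactly even with far more parameters than samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # overparameterized: more features than samples
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Minimum-l2-norm interpolating solution: w = X^+ y (Moore-Penrose pseudoinverse).
# Among all weight vectors that fit the training data exactly, this is the one
# with the smallest Euclidean norm -- the kind of interpolator studied in TOPML.
w_hat = np.linalg.pinv(X) @ y
print("train residual:", np.linalg.norm(X @ w_hat - y))   # ~0 (exact interpolation)
print("solution norm: ", np.linalg.norm(w_hat))
```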

How to avoid machine learning pitfalls: a guide for academic researchers

This document gives a concise outline of some of the common mistakes that occur when using machine learning techniques, and what can be done to avoid them. It is intended primarily as a guide for research students, and focuses on issues that are of particular concern within academic research, such as the need to do rigorous comparisons and reach valid conclusions. It covers five stages of the machine learning process: what to do before model building, how to reliably build models, how to robustly evaluate models, how to compare models fairly, and how to report results.
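
On the "compare models fairly" stage, one common pattern is to evaluate both models on identical cross-validation folds and compare fold-wise scores with a paired test. A minimal scikit-learn sketch of that pattern (illustrative only, not prescribed by the guide; such tests have their own caveats):

```python
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = KFold(n_splits=10, shuffle=True, random_state=0)   # identical folds for both models

model_a = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model_b = DecisionTreeClassifier(random_state=0)

scores_a = cross_val_score(model_a, X, y, cv=cv)
scores_b = cross_val_score(model_b, X, y, cv=cv)

# Paired comparison over the same folds, rather than two single train/test runs.
t_stat, p_value = ttest_rel(scores_a, scores_b)
print(scores_a.mean(), scores_b.mean(), p_value)
```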

Partial Differential Equations is All You Need for Generating Neural Architectures — A Theory for Physical Artificial Intelligence Systems

In this work, we generalize the reaction-diffusion equation in statistical physics, the Schrödinger equation in quantum mechanics, and the Helmholtz equation in paraxial optics into neural partial differential equations (NPDE), which can be considered the fundamental equations in the field of artificial intelligence research.
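
For reference, the three classical PDEs named in the abstract, in their standard textbook forms (the paper's unified NPDE formulation is not reproduced here):

```latex
\begin{align}
  \frac{\partial u}{\partial t} &= D\,\nabla^{2} u + R(u)
    && \text{reaction--diffusion} \\
  i\hbar\,\frac{\partial \psi}{\partial t} &=
    -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi
    && \text{Schr\"odinger} \\
  \nabla^{2} A + k^{2} A &= 0
    && \text{Helmholtz}
\end{align}
```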

Probabilistic Machine Learning: An Introduction

“In this book, we will cover the most common types of ML, but from a probabilistic perspective. Roughly speaking, this means that we treat all unknown quantities (e.g., predictions about the future value of some quantity of interest, such as tomorrow’s temperature, or the parameters of some model) as random variables, that are endowed with probability distributions which describe a weighted set of possible values the variable may have. […]”
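
A tiny illustration (not from the book) of the quoted idea: endow an unknown quantity, here tomorrow's temperature, with a distribution instead of a point estimate; the Gaussian and its parameters are made up for the example:

```python
from scipy.stats import norm

# Tomorrow's temperature in deg C as a random variable, not a single number.
temperature = norm(loc=22.0, scale=3.0)

print(temperature.mean())            # point summary: 22.0
print(temperature.interval(0.95))    # central 95% range: ~(16.1, 27.9)
print(1 - temperature.cdf(25.0))     # probability it exceeds 25 deg C: ~0.16
```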

Addressing Ethical Dilemmas in AI: Listening to Engineers

Documentation is key – design decisions in AI development must be documented in detail, potentially taking inspiration from the field of risk management. There is a need to develop a framework for large-scale testing of AI effects, beginning with public tests of AI systems and moving towards real-time validation and monitoring. Governance frameworks for decisions in AI development need to be clarified, including questions of post-market surveillance of product or system performance. Certification of AI ethics expertise would help support professionalism in AI development teams. Distributed responsibility should be a goal, resulting in a clear definition of roles and responsibilities as well as clear incentive structures for taking into account broader ethical concerns in the development of AI systems.