It has never been more important that we keep a sharp eye on the development of this field and on how it is shaping our society and our interactions with each other. With this inaugural edition of the State of AI Ethics report, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and to allow you and your organization to make informed decisions.
In February 2020, the European Commission (EC) published a white paper entitled, On Artificial Intelligence – A European approach to excellence and trust. This paper outlines the EC’s policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. We reviewed this paper and published a response addressing the EC’s plans to build an “ecosystem of excellence” and an “ecosystem of trust,” as well as the safety and liability implications of AI, the internet of things (IoT), and robotics.
A key challenge to making effective use of evolutionary algorithms (EAs) is to choose appropriate settings for their parameters. However, the appropriate parameter setting generally depends on the structure of the optimization problem, which is often unknown to the user. Non‐deterministic parameter control mechanisms adjust parameters using information obtained from the evolutionary process.
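One classic instance of such a mechanism is Rechenberg's 1/5 success rule, which adapts the mutation step size from the search's own success rate. Below is a minimal sketch of a (1+1) evolution strategy using this rule on a toy sphere function; the constants (adaptation every 20 steps, factors 1.22/0.82) are illustrative choices, not prescribed by any particular paper.

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES with the 1/5 success rule: the mutation step size
    sigma is adapted using feedback from the evolutionary process,
    so the user need not know a good setting in advance."""
    rng = random.Random(seed)
    fx = f(x)
    successes = 0
    for t in range(1, iters + 1):
        # Mutate every coordinate with Gaussian noise of scale sigma.
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy < fx:          # offspring is better: accept it
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:      # periodically adapt sigma (illustrative)
            rate = successes / 20
            sigma *= 1.22 if rate > 0.2 else 0.82
            successes = 0
    return x, fx

sphere = lambda v: sum(c * c for c in v)   # toy minimization target
best, best_f = one_plus_one_es(sphere, [5.0, -3.0, 4.0])
```

The key point is that sigma is never set by hand for the problem at hand: it grows when mutations succeed often (the search is too timid) and shrinks when they rarely succeed (the search is too bold).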
To improve the performance of U-Net on various segmentation tasks, we propose a novel architecture called DoubleU-Net, which is a combination of two U-Net architectures stacked on top of each other.
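Conceptually, the first U-Net produces a mask that gates the input image, and the gated input is passed through the second U-Net; both masks are kept as outputs. The toy sketch below shows only this data flow, with trivial stand-in functions in place of real encoder-decoder networks; it is not the authors' implementation.

```python
def double_unet(image, unet1, unet2):
    """Data flow of a stacked two-network design: the first network's
    predicted mask gates the input before the second network refines it.
    `unet1` and `unet2` are stand-ins for full U-Net architectures."""
    mask1 = unet1(image)                            # first segmentation pass
    gated = [p * m for p, m in zip(image, mask1)]   # elementwise product
    mask2 = unet2(gated)                            # refined second pass
    return mask1, mask2                             # both outputs are kept

# Toy stand-in "network": a threshold over a 1-D "image".
toy_unet = lambda x: [1.0 if v > 0.5 else 0.0 for v in x]
m1, m2 = double_unet([0.9, 0.2, 0.7, 0.4], toy_unet, toy_unet)
```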
A multi-presenter format with speakers from the current European ICT research projects AI4EU (www.ai4eu.eu) and Helios (helios-social.com/), as well as guest speakers.
Yooneeque has made digitalisation its motto: an artificial intelligence called YOONA acts as the fashion designer here. Once again during Berlin Fashion Week, the latest outputs of the software were presented.
This repository contains examples and best practices for building NLP systems, provided as Jupyter notebooks and utility functions. The focus of the repository is on state-of-the-art methods and common scenarios that are popular among researchers and practitioners working on problems involving text and language.
A database housing more than 100 Colab notebooks running ML code for various NLP tasks. Colab is an excellent destination for experimenting with the latest models, as it comes with a free GPU/TPU housed in Google's back-end servers. The database also includes a collection of more than 400 NLP datasets, together with their accompanying papers.
Harnessing the power of supercomputing and patient modelling to deliver unparalleled medical insights and predict treatment outcomes for patients.
We survey 146 papers analyzing “bias” in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing “bias” is an inherently normative process.
The objective of this guideline is to provide medical device manufacturers and notified bodies with instructions and a concrete checklist, so that they understand what notified bodies expect; to promote the step-by-step implementation of safety in medical devices that use artificial intelligence methods, in particular machine learning; and to compensate, in the interim, for the lack of a harmonized standard to the greatest extent possible.
Technologies are not neutral, neither are choices in the public procurement of AI. The AI systems we deploy today are the systems we will live with tomorrow.
A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy
This paper contributes the first human-centered observational study of a deep learning system deployed directly in clinical care with patients. Through field observations and interviews at eleven clinics across Thailand, we explored the expectations and realities that nurses encounter in bringing a deep learning model into their clinical practices. First, we outline typical eye-screening workflows and challenges that nurses experience when screening hundreds of patients. Then, we explore the expectations nurses have for an AI-assisted eye screening process. Next, we present a human-centered, observational study of the deep learning system used in clinical care, examining nurses’ experiences with the system, and the socio-environmental factors that impacted system performance. Finally, we conclude with a discussion around applications of HCI methods to the evaluation of deep learning algorithms in clinical environments.
The aim is to give a general overview of different legal regimes having an impact on the planning, development and deployment of Artificial Intelligence systems.
In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a framework to evaluate their suitability in terms of impact on the users, employed technology and governance methods.
The Joint Research Center (JRC) in cooperation with the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), European Commission’s Directorate General Communications Networks, Content and Technology (DG CNECT), and the German Institute of Standardisation (DIN), organised in Brussels on 28-29 March 2019 the Putting-Science-Into-Standards (PSIS) workshop on Quantum Technologies.
Artificial Intelligence and Machine Learning in Software as a Medical Device: Discussion Paper and Request for Feedback
Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care. The FDA is considering a total product lifecycle-based regulatory framework for these technologies.
Rapid Acceleration of #Diagnostics (RADx) is a fast-track technology development program that leverages the National Institutes of Health (NIH) Point-of-Care Technology Research Network (#POCTRN).
#IEEE Invites Companies, Governments and Other Stakeholders Globally to Expand on #Ethics #Certification Program for #Autonomous and #Intelligent #Systems (#ECPAIS) Work
The need for a system view to regulate artificial intelligence/machine learning-based software as medical device
The FDA needs to widen its scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective, from a product view to a system view, is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA that are used to regulating products, not systems. We offer several suggestions to help regulators make this challenging but important transition.
Digital solutions for healthcare open up a plethora of new possibilities. They provide a technical base for easy testing; they significantly improve the quality of service by allowing immediate access to medical data, such as test results and treatment history; they facilitate correct diagnosis through easier analytics, correlation of data, and monitoring of patients' health parameters; and they make it easier to set up appointments with appropriate doctors at a convenient time.
DIH-HERO is making available up to €5 million for robotic solutions that tackle COVID-19. Joining the Commission's AI-Robotics vs COVID-19 initiative, the project mobilises part of its Horizon 2020 funding to support healthcare professionals and save lives.
Semantic Scholar has partnered with leading research groups to release the COVID-19 Open Research Dataset (CORD-19).
The Allen Institute just published the #covid19 open research #dataset. In addition, they are sponsoring a related Kaggle competition. The dataset contains almost 30k scholarly articles related to the virus. The goal is to use #NLP to advance our understanding.
These data can be used in artificial intelligence (machine learning and deep learning) to obtain faster and more efficient results regarding the coronavirus.
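As a first, very modest NLP step over such a corpus, one might run keyword retrieval across titles and abstracts. The sketch below uses invented records that only mimic the shape of CORD-19 metadata rows (a real workflow would read the dataset's metadata.csv instead); the `search` helper is hypothetical.

```python
# Invented records mimicking the shape of CORD-19 metadata rows;
# these are NOT real entries from the dataset.
articles = [
    {"title": "Transmission dynamics of a novel coronavirus",
     "abstract": "We model droplet transmission in closed spaces."},
    {"title": "Deep learning for chest CT analysis",
     "abstract": "A neural network flags pneumonia-like opacities."},
    {"title": "Vaccine platform review",
     "abstract": "We survey mRNA and vector approaches."},
]

def search(records, *keywords):
    """Return titles of records whose title or abstract mentions
    every keyword (case-insensitive substring match)."""
    hits = []
    for rec in records:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        if all(k.lower() in text for k in keywords):
            hits.append(rec["title"])
    return hits

results = search(articles, "transmission")
```

Real work on the corpus would of course go further (tokenization, embeddings, question answering), but even this kind of filtering illustrates how textual metadata makes the collection immediately machine-readable.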