Machine learning can be used to make sense of healthcare data. Probabilistic machine learning models help provide a complete picture of observed data in healthcare. In this review, we examine how probabilistic machine learning can advance healthcare. We consider challenges in the predictive model building pipeline where probabilistic models can be beneficial including calibration and missing data. Beyond predictive models, we also investigate the utility of probabilistic machine learning models in phenotyping, in generative models for clinical use cases, and in reinforcement learning.
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity.
“The European Commission has shown its ambition in the area of artificial intelligence (AI) in its recent White Paper on Artificial Intelligence – a European approach to excellence and trust. This White Paper is at the same time a precursor of possible legislation of AI in products and services in the European Union. However, COCIR sees no need for novel regulatory frameworks for AI-based devices in Healthcare, because the requirements of EU MDR and EU IVDR in combination with GDPR are adequate to ensure that same excellence and trust.” (COCIR paper).
IEEE Use Case–Criteria for Addressing Ethical Challenges in Transparency, Accountability, and Privacy of CTA/CTT
There are substantial public health benefits gained through successfully alerting individuals and relevant public health institutions of a person’s exposure to a communicable disease. Contact tracing techniques have been applied to epidemiology for centuries, traditionally involving a manual process of interview and follow-up. This is time-consuming, difficult, and dangerous work. Manual processes are also open to incomplete information because they rely on individuals being willing and able to remember and report all contact possibilities.
The past decade has seen a remarkable series of advances in machine learning, and in particular deep learning approaches based on artificial neural networks, to improve our abilities to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks.
This paper introduces TextAttack, a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP. TextAttack builds attacks from four components: a goal function, a set of constraints, a transformation, and a search method. TextAttack’s modular design enables researchers to easily construct attacks from combinations of novel and existing components. TextAttack provides implementations of 16 adversarial attacks from the literature and supports a variety of models and datasets, including BERT and other transformers, and all GLUE tasks.
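The four-component design described above can be sketched in plain Python. This is an illustrative toy, not TextAttack's actual API: every name here (`goal_function`, `greedy_search`, the toy model, etc.) is hypothetical and only mirrors the architecture the paper describes.

```python
# Toy sketch of TextAttack's four-component attack design:
# goal function + constraints + transformation + search method.
# All names are hypothetical; this is not the TextAttack API.

def goal_function(model, text, original_label):
    """Goal: untargeted misclassification of the perturbed text."""
    return model(text) != original_label

def constraint_max_words_changed(original, perturbed, max_changed=2):
    """Constraint: limit how many words the attack may edit."""
    diffs = sum(a != b for a, b in zip(original.split(), perturbed.split()))
    return diffs <= max_changed

def transformation_swap(text, index, replacement):
    """Transformation: replace a single word by index."""
    words = text.split()
    words[index] = replacement
    return " ".join(words)

def greedy_search(model, text, label, candidates):
    """Search method: greedily try single-word swaps until the goal is met."""
    for index, replacement in candidates:
        perturbed = transformation_swap(text, index, replacement)
        if (constraint_max_words_changed(text, perturbed)
                and goal_function(model, perturbed, label)):
            return perturbed
    return None

# Toy usage: a "model" that predicts 1 iff the word "good" appears.
toy_model = lambda t: 1 if "good" in t else 0
adv = greedy_search(toy_model, "a good movie", 1, [(1, "fine")])
```

Swapping any one component (e.g. a different search method or constraint set) yields a new attack, which is the modularity the framework is built around.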
For smart mobility to be cost-efficient and ready for future needs, adequate research and innovation (R&I) in this field is necessary. This report provides a comprehensive analysis of R&I in smart mobility and services in Europe.
We further formalize the metrics for higher-order statistics, including inter-rater disagreement, in a unified way, which enables us to assess the quality of distributional uncertainty. In addition, we propose a novel post-hoc calibration method that equips trained neural networks with calibrated distributions over class probability estimates. With a large-scale medical imaging application, we show that our approach significantly improves the quality of uncertainty estimates in multiple metrics.
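For intuition about what post-hoc calibration does, the simplest standard baseline is temperature scaling (this is a well-known prior technique, not the paper's proposed method): divide the logits by a temperature T fitted on held-out data, so that over-confident probability estimates are softened without changing the predicted class.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; T > 1 softens over-confident predictions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Over-confident logits; raising T spreads the probability mass
# while leaving the argmax (the predicted class) unchanged.
logits = [4.0, 1.0, 0.5]
p_raw = softmax(logits)                    # sharply peaked
p_cal = softmax(logits, temperature=2.0)   # softer; T would be fitted on held-out data
```

The paper's contribution goes beyond this scalar recalibration of class probabilities to calibrated *distributions* over them, but the mechanics of post-hoc adjustment are the same in spirit.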
It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions.
To improve the performance of U-Net on various segmentation tasks, we propose a novel architecture called DoubleU-Net, which is a combination of two U-Net architectures stacked on top of each other.
In this work we propose to train an imitation learning based agent equipped with an attention model. The attention model allows us to understand what part of the image has been deemed most important. Interestingly, the use of attention also leads to superior performance in a standard benchmark using the CARLA driving simulator.
We survey 146 papers analyzing “bias” in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing “bias” is an inherently normative process.
Technologies are not neutral, neither are choices in the public procurement of AI. The AI systems we deploy today are the systems we will live with tomorrow.
In public health, contact tracing is the process to identify individuals who have been in contact with infected persons. Proximity tracing with smartphone applications and sensors could support contact tracing. It involves processing of sensitive personal data.
A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy
This paper contributes the first human-centered observational study of a deep learning system deployed directly in clinical care with patients. Through field observations and interviews at eleven clinics across Thailand, we explored the expectations and realities that nurses encounter in bringing a deep learning model into their clinical practices. First, we outline typical eye-screening workflows and challenges that nurses experience when screening hundreds of patients. Then, we explore the expectations nurses have for an AI-assisted eye screening process. Next, we present a human-centered, observational study of the deep learning system used in clinical care, examining nurses’ experiences with the system, and the socio-environmental factors that impacted system performance. Finally, we conclude with a discussion around applications of HCI methods to the evaluation of deep learning algorithms in clinical environments.
In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a framework to evaluate their suitability in terms of impact on the users, employed technology and governance methods.
Artificial Intelligence and Machine Learning in Software as a Medical Device: Discussion Paper and Request for Feedback
Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care. The FDA is considering a total product lifecycle-based regulatory framework for these technologies.
This report explores the current state of affairs in Encrypted Traffic Analysis and in particular discusses research and methods in six key use cases: application identification, network analytics, user information identification, detection of encrypted malware, file/device/website/location fingerprinting, and DNS tunnelling detection.
The need for a system view to regulate artificial intelligence/machine learning-based software as medical device
The FDA needs to widen its scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA that are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.
The starting point to develop the operational definition is the definition of AI adopted by the High Level Expert Group on artificial intelligence. To derive this operational definition we have followed a mixed methodology. On one hand, we apply natural language processing methods to a large set of AI literature. On the other hand, we carry out a qualitative analysis on 55 key documents including artificial intelligence definitions from three complementary perspectives: policy, research and industry.
Machine learning uses tools from a variety of mathematical fields. This document is an attempt to provide a summary of the mathematical background needed.
Using the software engineering framework of technical debt, we find that it is common to incur massive, ongoing maintenance costs in real-world ML systems.
The goal of this document is to encourage artificial intelligence researchers and product designers to shift from one-dimensional thinking about levels of automation/autonomy to a new two-dimensional HCAI framework.
Python's most notable strengths are:
- A great library ecosystem (Scikit-learn, Pandas, Matplotlib, NLTK, Scikit-image, PyBrain, Caffe, StatsModels, TensorFlow, Keras, etc.).
- A low entry barrier, flexibility, platform independence, readability, good visualization options, strong community support, and growing popularity.
The purpose of this White Paper is to set out policy options on how to achieve these objectives. It does not address the development and use of AI for military purposes. The Commission invites Member States, other European institutions, and all stakeholders, including industry, social partners, civil society organisations, researchers, the public in general and any interested party, to react to the options and to contribute to the Commission’s future decision-making in this domain.
Modern AI image classifiers have made impressive advances in recent years, but their performance often appears strange or violates expectations of users. This suggests humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence. This mismatch presents an obstacle to appropriate human-AI interaction.
EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications
Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications.
Artificial Intelligence and Public Standards (A Review by the Committee on Standards in Public Life) - GOV.UK
The Committee on Standards in Public Life published (10 February 2020) its report and recommendations to the Prime Minister to ensure that high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector. (GOV.UK).
Cybersecurity attacks have grown in both frequency and sophistication over the years. This increasing sophistication and complexity calls for continuous advancement and innovation in defensive strategies. Traditional methods of intrusion detection and deep packet inspection, while still widely used and recommended, are no longer sufficient to meet the demands of growing security threats.
An article by the Allen Institute for Artificial Intelligence highlights a pending issue: machines do not really understand what humans write or read.
The role of artificial intelligence in achieving the Sustainable Development Goals: an excellent work presented to the members of the European Alliance for AI
Our director (as a member of the European AI Alliance) uploaded this work and contributed to the documentation that the members of the European AI Alliance and the high-level expert group on AI are handling. He considers these points of view important in terms of sustainable development. Link to the post at the European AI Alliance […]
“dabl tries to reduce the turnaround time required for a quick baseline estimate of a supervised learning problem. It does so by automating the task of iterating through different techniques of data preprocessing, feature engineering, parameter tuning and model building to generate efficacious baseline models”.
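The loop dabl automates can be caricatured in a few lines of plain Python (this is a hypothetical sketch of the idea, not dabl's actual API): score several candidate models on the data and keep the best as the baseline.

```python
# Toy sketch of the baseline search dabl automates (not dabl's API):
# evaluate each candidate model and return the best-scoring one.

def accuracy(model, X, y):
    """Fraction of examples the model labels correctly."""
    return sum(model(x) == target for x, target in zip(X, y)) / len(y)

def quick_baseline(candidates, X, y):
    """Return (name, model, score) of the best candidate on (X, y)."""
    best = None
    for name, model in candidates:
        score = accuracy(model, X, y)
        if best is None or score > best[2]:
            best = (name, model, score)
    return best

# Toy usage with two trivial "models".
candidates = [
    ("always_zero", lambda x: 0),
    ("threshold", lambda x: int(x > 2)),
]
name, model, score = quick_baseline(candidates, [1, 2, 3, 4], [0, 0, 1, 1])
```

dabl additionally automates preprocessing, feature engineering, and parameter tuning inside that loop, but the payoff is the same: a reasonable baseline with minimal manual iteration.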
IBM's manifesto on the occasion of the 50th meeting of the Davos forum: 'Precision regulation for artificial intelligence'
The Davos panel, hosted by IBM CEO Ginni Rometty, explores precision regulation of artificial intelligence and emerging technology. The event formally launches the IBM Policy Lab, a new forum to promote bold, actionable policy recommendations for a digital society and to foster trust in innovation.
The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence. The study makes a number of significant findings.
Written for user experience (UX) professionals and product managers as a way to help create a human-centered approach to AI in their product teams.
A harmless, controllable, ingestible endoscopic pill with selective, distributed medication delivery and dynamic monitoring capabilities
Most technologies are made from steel, concrete, chemicals, and plastics, which degrade over time and can produce harmful ecological and health side effects. It would thus be useful to build technologies using self-renewing and biocompatible materials, of which the ideal candidates are living systems themselves. Thus, we here present a method that designs completely biological machines from the ground up: computers automatically design new machines in simulation, and the best designs are then built by combining together different biological tissues. This suggests others may use this approach to design a variety of living machines to safely deliver drugs inside the human body, help with environmental remediation, or further broaden our understanding of the diverse forms and functions life may adopt.
(Deep Neural Networks in Geophysics) This thesis investigates the fundamental properties of neural networks in geophysical applications. It includes the reuse of trained neural networks, which are excellent at identifying images, applying them to identify rock layers and geological events in geophysical images. The thesis goes deeper to evaluate whether the theory of including specific information […]
The goal of the IEEE SA's Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) is to create specifications for certification and marking processes that promote transparency, accountability, and the reduction of algorithmic bias in Autonomous and Intelligent Systems (A/IS). The goal of ECPAIS is […]
EAD, First Edition, includes commentary on how the law should respond to a series of specific ethical and legal challenges posed by the development and deployment of A/IS (Autonomous and Intelligent Systems) in contemporary life. It also focuses on the impact of A/IS on the practice of law itself. More specifically, it studies both […]