Topological Data Analysis (TDA) is an emerging field that aims to discover topological information hidden in a dataset. TDA tools have commonly been used to create filters and topological descriptors that improve Machine Learning (ML) methods. This paper proposes an algorithm that applies TDA directly to multi-class classification problems, including imbalanced datasets, without any further ML stage.
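The abstract does not spell out the algorithm itself, but the 0-dimensional persistence computation that TDA descriptors typically build on is easy to illustrate. Below is a minimal NumPy sketch of the death times of connected components in a Vietoris-Rips filtration (equivalently, the edge lengths of a minimum spanning tree); this is a generic TDA computation, not the paper's method:

```python
import numpy as np

def h0_persistence(points):
    """Death times of 0-dimensional homology classes (connected components)
    of a Vietoris-Rips filtration: every class is born at scale 0 and dies
    at a minimum-spanning-tree edge length of the point cloud."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    in_tree = [0]
    best = dist[0].copy()      # cheapest connection of each point to the tree
    deaths = []
    for _ in range(n - 1):     # Prim's algorithm for the MST
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        deaths.append(best[j])
        in_tree.append(j)
        best = np.minimum(best, dist[j])
    return sorted(deaths)

# Two well-separated clusters: the longest-lived component dies at
# roughly the inter-cluster distance.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (20, 2)),
                   rng.normal(3, 0.1, (20, 2))])
print(h0_persistence(cloud)[-1])
```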
S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training
We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) on private data from multiple sources via secret-shared secure function evaluation. In short, imagine a virtual third party to whom every data holder sends their inputs and which computes the neural network: in our case, this virtual third party is actually a set of servers that individually learn nothing, even in the presence of a malicious (but non-colluding) adversary.
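The primitive underneath such secret-shared evaluation is easy to sketch. Below is a minimal additive secret-sharing example in Python; it illustrates the general idea only, not the S++ protocol itself (which additionally needs secure multiplication, fixed-point encoding of real values, and malicious-security checks). The field modulus `P` is an illustrative choice:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; the field the shares live in (illustrative)

def share(x, n=2):
    """Split integer x into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine all shares; any proper subset reveals nothing about x."""
    return sum(shares) % P

# Each server can add its shares of two secrets locally, so linear
# operations on private values need no communication at all.
a, b = 42, 99
a_shares, b_shares = share(a), share(b)
sum_shares = [(sa + sb) % P for sa, sb in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == (a + b) % P
```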
Steganography is the science of hiding a secret message within an ordinary public message. Over the years, steganography has been used to encode a lower-resolution image into a higher-resolution image by simple methods such as LSB manipulation. We aim to use deep neural networks to encode and decode multiple secret images inside a single cover image of the same resolution.
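The classical LSB manipulation mentioned above (as opposed to the deep-learning approach the paper proposes) can be sketched in a few lines of NumPy: the least significant bit of each cover pixel is overwritten with one bit of the secret, changing the image imperceptibly:

```python
import numpy as np

def lsb_embed(cover, secret_bits):
    """Hide a bit stream in the least significant bit of each pixel."""
    flat = cover.flatten().astype(np.uint8)       # flatten() copies the cover
    flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the hidden bits back out of the stego image."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = rng.integers(0, 2, size=16, dtype=np.uint8)
stego = lsb_embed(cover, secret)
assert np.array_equal(lsb_extract(stego, 16), secret)
```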
Documentation is key: design decisions in AI development must be documented in detail, potentially taking inspiration from the field of risk management. There is a need to develop a framework for large-scale testing of AI effects, beginning with public tests of AI systems and moving towards real-time validation and monitoring. Governance frameworks for decisions in AI development need to be clarified, including questions of post-market surveillance of product or system performance. Certification of AI ethics expertise would help support professionalism in AI development teams. Distributed responsibility should be a goal, resulting in a clear definition of roles and responsibilities as well as clear incentive structures for taking into account broader ethical concerns in the development of AI systems. Spaces for discussing ethics are lacking and sorely needed, both internally within companies and externally, provided by independent organisations. Policy should ensure whistleblower protection and ombudsman positions within companies, as well as participation from professional organisations. One solution is to look to the existing EU RRI framework and to ensure multidisciplinarity in the composition of AI system development teams. The RRI framework can provide systematic processes for engaging stakeholders and ensuring that problems are better defined. The challenges of AI systems point to a general gap in engineering education: we need to ensure that technical disciplines are empowered to identify ethical problems, which requires broadening technical education programmes to include societal concerns. Engineers advocate for public transparency about adherence to standards and ethical principles for AI-driven products and services, to enable learning from each other's mistakes and to foster a no-blame culture.
The main objective of this document is to build a glossary from the lexical proposals made by the different technology bodies (ISO, IEEE, Wikipedia, and Oxford University Press). Additionally, the glossary will be structured according to the branches of knowledge of this field, exhaustively and thoroughly specifying the characteristics of the terms to be included in it, so as to offer the user a reading experience that is both friendly and efficient.
We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
Unsupervised deep clustering and reinforcement learning can accurately segment MRI brain tumors with very small training sets
“We have demonstrated a proof-of-principle application of unsupervised deep clustering and reinforcement learning to segment brain tumors. The approach represents human-allied AI that requires minimal input from the radiologist without the need for hand-traced annotation”.
Side-Channel Sensing: Exploiting Side-Channels to Extract Information for Medical Diagnostics and Monitoring
Information within systems can be extracted through side-channels: unintended communication channels that leak information. The concept of side-channel sensing is explored, in which sensor data is analysed in non-trivial ways to recover subtle, hidden, or unexpected information.
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution.
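A minimal sketch of the spectral-convolution idea behind Fourier-style neural operators, assuming a 1-D periodic grid; the complex `weights` below stand in for learned parameters. Because the layer acts on Fourier coefficients rather than on grid points, the same weights can be applied at any discretization of the input function:

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One operator layer: transform the sampled function to Fourier space,
    scale the lowest n_modes with learned complex weights, and transform
    back. The layer is resolution-independent by construction."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

# An input function sampled on a grid; in a trained operator, `weights`
# would be optimized against PDE solution data.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * x) + 0.5 * np.cos(7 * x)
rng = np.random.default_rng(0)
weights = rng.standard_normal(16) + 1j * rng.standard_normal(16)
v = spectral_conv_1d(u, weights, n_modes=16)
```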
There are two main approaches to evaluating the quality of machine-generated rationales: 1) comparing them against human rationales as a gold standard; and 2) computing automated metrics based on how rationales affect model behavior.
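The second family of metrics is commonly instantiated as comprehensiveness and sufficiency. A minimal sketch, assuming a hypothetical `predict_proba` function that maps a token list to the model's probability for the predicted class:

```python
def comprehensiveness(predict_proba, tokens, rationale_idx):
    """Drop in confidence when the rationale tokens are removed.
    High values mean the rationale was genuinely influential."""
    without = [t for i, t in enumerate(tokens) if i not in rationale_idx]
    return predict_proba(tokens) - predict_proba(without)

def sufficiency(predict_proba, tokens, rationale_idx):
    """Drop in confidence when ONLY the rationale tokens are kept.
    Low values mean the rationale alone supports the prediction."""
    only = [t for i, t in enumerate(tokens) if i in rationale_idx]
    return predict_proba(tokens) - predict_proba(only)

# Toy usage with a dummy scorer that just counts "good" tokens:
toy = lambda toks: toks.count("good") / 4
tokens = ["the", "movie", "was", "good"]
print(comprehensiveness(toy, tokens, {3}))  # 0.25: score drops without "good"
print(sufficiency(toy, tokens, {3}))        # 0.0: "good" alone preserves it
```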
Machine learning models depend on the quality of input data. As electronic health records are widely adopted, the amount of data in health care is growing, along with complaints about the quality of medical notes.
Machine learning can be used to make sense of healthcare data. Probabilistic machine learning models help provide a complete picture of observed data in healthcare. In this review, we examine how probabilistic machine learning can advance healthcare. We consider challenges in the predictive model-building pipeline where probabilistic models can be beneficial, including calibration and missing data. Beyond predictive models, we also investigate the utility of probabilistic machine learning models in phenotyping, in generative models for clinical use cases, and in reinforcement learning.
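Calibration, one of the pipeline challenges mentioned above, can be quantified by binning predicted probabilities and comparing each bin's average prediction to the observed outcome frequency. A minimal expected-calibration-error sketch in NumPy (illustrative, not taken from the review):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average gap between predicted probability and observed
    frequency across equal-width probability bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap       # weight by the bin's sample share
    return ece

# A perfectly calibrated model has ECE near 0: outcomes drawn at rate p.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = (rng.uniform(size=10_000) < p).astype(float)
print(expected_calibration_error(p, y))
```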
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity.
“The European Commission has shown its ambition in the area of artificial intelligence (AI) in its recent White Paper on Artificial Intelligence – a European approach to excellence and trust. This White Paper is at the same time a precursor of possible legislation of AI in products and services in the European Union. However, COCIR sees no need for novel regulatory frameworks for AI-based devices in Healthcare, because the requirements of EU MDR and EU IVDR in combination with GDPR are adequate to ensure that same excellence and trust.” (COCIR paper).
IEEE Use Case–Criteria for Addressing Ethical Challenges in Transparency, Accountability, and Privacy of CTA/CTT
There are substantial public health benefits to successfully alerting individuals and relevant public health institutions of a person's exposure to a communicable disease. Contact tracing techniques have been used in epidemiology for centuries, traditionally involving a manual process of interview and follow-up. This is time-consuming, difficult, and dangerous work. Manual processes are also prone to incomplete information because they rely on individuals being willing and able to remember and report all possible contacts.
The past decade has seen a remarkable series of advances in machine learning, and in particular in deep learning approaches based on artificial neural networks, that have improved our ability to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding.