
Multi-Image Steganography Using Deep Neural Networks

Steganography is the science of hiding a secret message within an ordinary public message. Over the years, steganography has been used to encode a lower resolution image into a higher resolution image by simple methods like LSB manipulation. We aim to utilize deep neural networks for the encoding and decoding of multiple secret images inside a single cover image of the same resolution.
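As a point of reference for the classical baseline mentioned above, here is a minimal sketch of LSB manipulation, assuming 8-bit grayscale arrays; the function names and the choice of k = 2 embedded bits are illustrative, not taken from the paper:

```python
import numpy as np

def lsb_encode(cover, secret, k=2):
    """Embed the top-k bits of `secret` into the bottom-k bits of `cover`."""
    cleared = cover & ~np.uint8((1 << k) - 1)   # zero out the cover's k LSBs
    payload = secret >> (8 - k)                 # keep the secret's top-k bits
    return cleared | payload

def lsb_decode(stego, k=2):
    """Recover an approximation of the secret from the k LSBs."""
    return np.uint8((stego & np.uint8((1 << k) - 1)) << (8 - k))

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (4, 4), dtype=np.uint8)
secret = rng.integers(0, 256, (4, 4), dtype=np.uint8)

stego = lsb_encode(cover, secret)
recovered = lsb_decode(stego)
# The stego image differs from the cover by at most 2**k - 1 per pixel,
# and the recovered image preserves the secret's top-k bits.
```

The capacity/quality trade-off is visible here: larger k recovers the secret more faithfully but perturbs the cover more, which is one motivation for replacing hand-designed schemes with learned encoders and decoders.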

Addressing Ethical Dilemmas in AI: Listening to Engineers

Documentation is key: design decisions in AI development must be documented in detail, potentially drawing inspiration from the field of risk management. A framework for large-scale testing of AI effects is needed, beginning with public tests of AI systems and moving towards real-time validation and monitoring. Governance frameworks for decisions in AI development need to be clarified, including questions of post-market surveillance of product or system performance. Certification of AI ethics expertise would support professionalism in AI development teams. Distributed responsibility should be a goal, with a clear definition of roles and responsibilities and clear incentive structures for taking into account broader ethical concerns in the development of AI systems. Spaces for discussing ethics are lacking yet very necessary, both internally within companies and externally, provided by independent organisations; policy should ensure whistleblower protection and ombudsman positions within companies, as well as participation from professional organisations. One solution is to draw on the existing EU RRI framework and to ensure multidisciplinarity in the composition of AI system development teams; the RRI framework can provide systematic processes for engaging stakeholders and for ensuring that problems are better defined. The challenges of AI systems also point to a general gap in engineering education: technical disciplines must be empowered to identify ethical problems, which requires broadening technical education programmes to include societal concerns. Finally, engineers advocate for public transparency about adherence to standards and ethical principles for AI-driven products and services, to enable learning from each other's mistakes and to foster a no-blame culture.

Explainability in Graph Neural Networks: A Taxonomic Survey

We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.

Fourier Neural Operator for Parametric Partial Differential Equations

The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution.
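A minimal sketch of the idea behind a Fourier-space operator layer: transform the input function to the frequency domain, apply learned weights to a fixed number of low modes, and transform back. This illustrates spectral filtering on a 1D signal with random (untrained) weights; it is not the paper's implementation:

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Filter u in Fourier space: scale the n_modes lowest frequencies,
    discard the rest, and return to physical space."""
    u_hat = np.fft.rfft(u)                        # to frequency domain
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # learned per-mode weights
    return np.fft.irfft(out_hat, n=len(u))        # back to physical space

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)               # sample input function

rng = np.random.default_rng(0)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = spectral_conv_1d(u, w, n_modes=8)
```

Because the weights act on Fourier modes rather than grid points, the same layer can in principle be evaluated on inputs sampled at different resolutions, which is part of what makes operator learning attractive for PDEs.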

Evaluating and Characterizing Human Rationales

Two main approaches for evaluating the quality of machine-generated rationales are: 1) using human rationales as a gold standard; and 2) automated metrics based on how rationales affect model behavior.
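The second approach can be made concrete with faithfulness-style metrics such as sufficiency and comprehensiveness, which compare model confidence on the full input, the rationale alone, and the input with the rationale removed. The sketch below uses a toy keyword-based classifier as a stand-in for a real model; the vocabulary and scoring are illustrative assumptions, not from the paper:

```python
import numpy as np

def model_prob(tokens):
    """Toy positive-class probability: sigmoid of a keyword score."""
    score = sum(1.0 for t in tokens if t in {"great", "excellent"}) \
          - sum(1.0 for t in tokens if t in {"awful", "boring"})
    return 1.0 / (1.0 + np.exp(-score))

def sufficiency(tokens, rationale_idx):
    """Confidence drop when keeping ONLY the rationale (lower is better)."""
    kept = [tokens[i] for i in rationale_idx]
    return model_prob(tokens) - model_prob(kept)

def comprehensiveness(tokens, rationale_idx):
    """Confidence drop when REMOVING the rationale (higher is better)."""
    keep = set(rationale_idx)
    rest = [t for i, t in enumerate(tokens) if i not in keep]
    return model_prob(tokens) - model_prob(rest)

tokens = "a great and excellent film despite a boring start".split()
rationale = [1, 3]   # indices of "great" and "excellent"
# A good rationale keeps the prediction intact on its own (low sufficiency
# gap) and hurts the prediction when removed (high comprehensiveness).
```

These behavior-based metrics complement agreement with human rationales: a rationale can match human annotations yet have little influence on the model's actual decision.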

Probabilistic Machine Learning for Healthcare

Machine learning can be used to make sense of healthcare data. Probabilistic machine learning models help provide a complete picture of observed data in healthcare. In this review, we examine how probabilistic machine learning can advance healthcare. We consider challenges in the predictive model building pipeline where probabilistic models can be beneficial including calibration and missing data. Beyond predictive models, we also investigate the utility of probabilistic machine learning models in phenotyping, in generative models for clinical use cases, and in reinforcement learning.
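The calibration challenge mentioned above can be quantified with a standard metric such as expected calibration error (ECE): group predictions into confidence bins and average the gap between confidence and empirical accuracy. A minimal sketch on synthetic data (the data-generating process is an assumption for illustration):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            confidence = probs[mask].mean()
            accuracy = labels[mask].mean()
            ece += mask.mean() * abs(accuracy - confidence)
    return ece

# Synthetic, perfectly calibrated predictions: each label is drawn with
# probability equal to the predicted score, so ECE should be near zero.
rng = np.random.default_rng(0)
probs = rng.uniform(0.5, 1.0, 10_000)
labels = (rng.uniform(size=10_000) < probs).astype(int)
ece = expected_calibration_error(probs, labels)
```

In a clinical setting, a low ECE means a predicted risk of 0.8 corresponds to roughly 80% observed event frequency, which is essential if probabilities are to inform treatment decisions.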

Report on Publications Norms for Responsible AI

The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity.

The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design

The past decade has seen a remarkable series of advances in machine learning, and in particular deep learning approaches based on artificial neural networks, to improve our abilities to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks.