Documentation is key: design decisions in AI development must be documented in detail, potentially taking inspiration from the field of risk management. A framework for large-scale testing of AI effects is needed, beginning with public tests of AI systems and moving towards real-time validation and monitoring. Governance frameworks for decisions in AI development need to be clarified, including questions of post-market surveillance of product or system performance. Certification of AI ethics expertise would help support professionalism in AI development teams. Distributed responsibility should be a goal, resulting in a clear definition of roles and responsibilities, as well as clear incentive structures for taking into account broader ethical concerns in the development of AI systems.
Law and Ethics (Article)
“The European Commission has shown its ambition in the area of artificial intelligence (AI) in its recent White Paper on Artificial Intelligence – a European approach to excellence and trust. This White Paper is at the same time a precursor of possible legislation of AI in products and services in the European Union. However, COCIR sees no need for novel regulatory frameworks for AI-based devices in Healthcare, because the requirements of EU MDR and EU IVDR in combination with GDPR are adequate to ensure that same excellence and trust.” (COCIR paper).
Rob wants to argue that if intent is linked to an incorrect assessment of identity, and is thus not central to an ethics of behaviour, then this opens up an actionable set of actors actually at play in the digital domain (IoT, 5G, AI), namely: objects with added connectivity (such as NFC), machines with built-in connectivity, animals and plants (as ecosystems), and humans alike, as all can be treated as entities.
In February 2020, the European Commission (EC) published a white paper entitled, On Artificial Intelligence – A European approach to excellence and trust. This paper outlines the EC’s policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. We reviewed this paper and published a response addressing the EC’s plans to build an “ecosystem of excellence” and an “ecosystem of trust,” as well as the safety and liability implications of AI, the internet of things (IoT), and robotics.