The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. To ensure that the science and technology of AI are developed in a humane manner, we must establish research publication norms informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it is difficult to create a set of publication norms for responsible AI because the field is currently fragmented in how the technology is researched, developed, and funded. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of producing a clear set of recommendations and ways forward for publishers.
MAIEI. "The Montreal AI Ethics Institute is an international, non-profit research institute dedicated to defining humanity's place in a world increasingly characterized and driven by algorithms. We do this by creating tangible and applied technical and policy research in the ethical, safe, and inclusive development of AI. The goal is to build public competence and understanding of the societal impacts of AI and to equip and empower diverse stakeholders to actively engage in the shaping of technical and policy measures in the development and deployment of AI systems. It is a digital-first civil society organization that brings together a diversity of individuals from different disciplines, areas of expertise, and geographic regions." (Source: montrealethics.ai/about)