
Paper page: arxiv.org/abs/2006.03347
Abstract
“Current deep learning based autonomous driving approaches yield impressive results also leading to in-production deployment in certain controlled scenarios. One of the most popular and fascinating approaches relies on learning vehicle controls directly from data perceived by sensors. This end-to-end learning paradigm can be applied both in classical supervised settings and using reinforcement learning. Nonetheless the main drawback of this approach as also in other learning problems is the lack of explainability. Indeed, a deep network will act as a black-box outputting predictions depending on previously seen driving patterns without giving any feedback on why such decisions were taken. While to obtain optimal performance it is not critical to obtain explainable outputs from a learned agent, especially in such a safety critical field, it is of paramount importance to understand how the network behaves. This is particularly relevant to interpret failures of such systems. In this work we propose to train an imitation learning based agent equipped with an attention model. The attention model allows us to understand what part of the image has been deemed most important. Interestingly, the use of attention also leads to superior performance in a standard benchmark using the CARLA driving simulator”.
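To make the idea concrete, below is a minimal sketch (in PyTorch) of an end-to-end driving network with a learned spatial attention map, in the spirit of the approach described in the abstract. The backbone, layer sizes, control head, and training loop are assumptions for illustration, not the authors' exact architecture; the point is that the softmax attention map can be visualized to show which image regions the network deemed most important for its prediction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionDrivingNet(nn.Module):
    """Toy end-to-end driving model with visual attention (illustrative only)."""

    def __init__(self, n_controls: int = 3):
        super().__init__()
        # Small convolutional backbone producing a spatial feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv scores every spatial location; a softmax over locations
        # turns the scores into an attention map over the image.
        self.attn_score = nn.Conv2d(128, 1, kernel_size=1)
        # Control head regressing e.g. steering, throttle, brake.
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_controls),
        )

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)                       # (B, 128, H, W)
        b, c, h, w = feats.shape
        scores = self.attn_score(feats).view(b, -1)        # (B, H*W)
        attn = F.softmax(scores, dim=1).view(b, 1, h, w)   # attention map
        # Attention-weighted pooling of the feature map.
        pooled = (feats * attn).flatten(2).sum(dim=2)      # (B, 128)
        controls = self.head(pooled)
        # Return the attention map as well, so it can be overlaid on the
        # input frame to explain which regions drove the prediction.
        return controls, attn


# Imitation-learning step (sketch): regress expert controls from camera frames.
model = AttentionDrivingNet()
images = torch.randn(4, 3, 96, 96)       # dummy batch of camera frames
expert_controls = torch.randn(4, 3)      # dummy expert labels (hypothetical)
pred, attn_map = model(images)
loss = F.mse_loss(pred, expert_controls)
loss.backward()
```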
Authors
Luca Cultrera, Associate Researcher at Cornell University
Lorenzo Seidenari, Assistant Professor at the University of Florence
Federico Becattini, Researcher at MICC, University of Florence
Pietro Pala, Professor of Informatics Engineering at the School of Engineering, University of Florence
Alberto Del Bimbo, Professor of Computer Engineering, University of Florence