Explaining Autonomous Driving by Learning End-to-End Visual Attention




Paper page: arxiv.org/abs/2006.03347


“Current deep learning based autonomous driving approaches yield impressive results and have even led to in-production deployment in certain controlled scenarios. One of the most popular and fascinating approaches relies on learning vehicle controls directly from data perceived by sensors. This end-to-end learning paradigm can be applied both in classical supervised settings and using reinforcement learning. Nonetheless, the main drawback of this approach, as in other learning problems, is the lack of explainability. Indeed, a deep network acts as a black box, outputting predictions that depend on previously seen driving patterns without giving any feedback on why such decisions were taken. While explainable outputs from a learned agent are not critical to obtaining optimal performance, in such a safety-critical field it is of paramount importance to understand how the network behaves. This is particularly relevant for interpreting failures of such systems. In this work we propose to train an imitation learning based agent equipped with an attention model. The attention model allows us to understand what part of the image has been deemed most important. Interestingly, the use of attention also leads to superior performance on a standard benchmark using the CARLA driving simulator.”
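For readers who want a concrete picture of the idea, below is a minimal PyTorch sketch of an end-to-end driving network with soft spatial attention: a small CNN backbone produces a feature map, a 1x1 convolution scores each spatial location, the softmax-normalized scores weight the features used to predict vehicle controls, and the same weights can be upsampled into a heatmap showing which parts of the image the network relied on. The module name, layer sizes, and three-control output are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of soft spatial attention in an end-to-end driving net.
# Hypothetical architecture; NOT the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDrivingNet(nn.Module):
    def __init__(self, n_controls: int = 3):  # e.g. steer, throttle, brake
        super().__init__()
        # Small CNN backbone producing a spatial feature map (B, C, H, W).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # A 1x1 conv scores each spatial location for attention.
        self.att_score = nn.Conv2d(128, 1, kernel_size=1)
        self.head = nn.Linear(128, n_controls)

    def forward(self, img):
        feats = self.backbone(img)                   # (B, C, H, W)
        b, c, h, w = feats.shape
        scores = self.att_score(feats).view(b, -1)   # (B, H*W)
        att = F.softmax(scores, dim=-1)              # weights sum to 1 over locations
        # Attention-weighted sum of spatial features -> (B, C).
        pooled = (feats.view(b, c, -1) * att.unsqueeze(1)).sum(dim=-1)
        controls = self.head(pooled)                 # predicted vehicle controls
        # Upsample the attention map for overlaying on the input image.
        att_map = att.view(b, 1, h, w)
        att_vis = F.interpolate(att_map, size=img.shape[-2:],
                                mode="bilinear", align_corners=False)
        return controls, att_vis

# Usage: control predictions plus an inspectable attention heatmap.
net = AttentionDrivingNet()
frame = torch.randn(1, 3, 128, 128)                  # dummy camera frame
controls, heatmap = net(frame)
print(controls.shape, heatmap.shape)                 # (1, 3) and (1, 1, 128, 128)
```

Because the attention weights are normalized to sum to one over image locations, the heatmap directly shows the relative importance the network assigned to each region, which is what makes the agent's decisions inspectable, e.g. when diagnosing a failure case.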


Luca Cultrera. Associate Researcher at Cornell University

Lorenzo Seidenari. Assistant Professor at University of Florence

Federico Becattini. Researcher at MICC

Pietro Pala. Professor of Informatics Engineering at the School of Engineering of the University of Florence

Alberto Del Bimbo. Professor of Computer Engineering, University of Florence
