Autonomous vehicles


YOLOX: Exceeding YOLO Series in 2021

We switch the YOLO detector to an anchor-free manner and adopt other advanced detection techniques, i.e., a decoupled head and the leading label assignment strategy SimOTA, to achieve state-of-the-art results across a large range of model scales: for YOLOX-Nano with only 0.91M parameters and 1.08G FLOPs, we get 25.3% AP on COCO, surpassing NanoDet by 1.8% AP; for YOLOv3, one of the most widely used detectors in industry, we boost it to 47.3% AP on COCO, outperforming the current best practice by 3.0% AP; for YOLOX-L, with roughly the same number of parameters as YOLOv4-CSP and YOLOv5-L, we achieve 50.0% AP on COCO at 68.9 FPS on a Tesla V100, exceeding YOLOv5-L by 1.8% AP.
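The decoupled head mentioned above can be illustrated with a minimal numpy sketch (not the authors' implementation; the channel counts and the 1x1-convolution stand-in are illustrative assumptions). A coupled head predicts classes, boxes, and objectness from one branch; a decoupled head gives each task its own branch over the same shared feature map:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is a matrix multiply over the channel axis.
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

# Hypothetical 256-channel FPN feature map, 80 classes, 4 box coordinates.
feat = rng.standard_normal((256, 20, 20))

# Coupled head (YOLOv3-style): one branch predicts everything at once.
w_coupled = rng.standard_normal((80 + 4 + 1, 256))
coupled_out = conv1x1(feat, w_coupled)            # (85, 20, 20)

# Decoupled head (YOLOX-style): separate branches for classification,
# box regression, and objectness, all reading the same shared features.
w_cls = rng.standard_normal((80, 256))
w_reg = rng.standard_normal((4, 256))
w_obj = rng.standard_normal((1, 256))
cls_out = conv1x1(feat, w_cls)                    # (80, 20, 20)
reg_out = conv1x1(feat, w_reg)                    # (4, 20, 20)
obj_out = conv1x1(feat, w_obj)                    # (1, 20, 20)
```

In the anchor-free setting, each of the 20x20 grid cells predicts a single box directly instead of offsets to several preset anchors, which is why the regression branch needs only 4 channels.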


YOLOP: You Only Look Once for Panoptic Driving Perception

A panoptic driving perception system is an essential part of autonomous driving. A high-precision, real-time perception system can assist the vehicle in making reasonable decisions while driving. We present a panoptic driving perception network (YOLOP) that performs traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders that handle the specific tasks. Our model performs extremely well on the challenging BDD100K dataset, achieving state-of-the-art results on all three tasks in terms of accuracy and speed. We also verify the effectiveness of our multi-task model for joint training via ablation studies.
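The one-encoder/three-decoder layout can be sketched in a few lines of numpy (a toy stand-in, not YOLOP's actual backbone or heads; the downsampling factor, channel counts, and 1x1-projection decoders are assumptions for illustration). The point is that the expensive encoding is computed once and reused by all three task heads:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(img):
    # Stand-in for the shared backbone + neck: 8x spatial downsample
    # by average pooling, then a projection to 64 feature channels.
    c, h, w = img.shape
    pooled = img.reshape(c, h // 8, 8, w // 8, 8).mean(axis=(2, 4))
    proj = rng.standard_normal((64, c))
    return (proj @ pooled.reshape(c, -1)).reshape(64, h // 8, w // 8)

def decoder(feat, out_ch):
    # Stand-in task decoder: a 1x1 projection to the task's output channels.
    c = feat.shape[0]
    w = rng.standard_normal((out_ch, c))
    return (w @ feat.reshape(c, -1)).reshape(out_ch, *feat.shape[1:])

img = rng.standard_normal((3, 64, 64))    # toy RGB input
shared = encoder(img)                     # computed once, reused below

det  = decoder(shared, 85)  # detection: 80 classes + 4 box coords + objectness
area = decoder(shared, 2)   # drivable-area segmentation: drivable / background
lane = decoder(shared, 2)   # lane-line segmentation: lane / background
```

Sharing the encoder is what makes joint real-time inference feasible: three independent networks would triple the cost of the feature extraction that dominates runtime.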

Research and innovation in smart mobility and services in Europe

For smart mobility to be cost-efficient and ready for future needs, adequate research and innovation (R&I) in this field is necessary. This report provides a comprehensive analysis of R&I in smart mobility and services in Europe.

Explaining Autonomous Driving by Learning End-to-End Visual Attention

In this work we propose to train an imitation-learning-based agent equipped with an attention model. The attention model lets us see which parts of the image the agent deems most important. Interestingly, the use of attention also leads to superior performance on a standard benchmark using the CARLA driving simulator.
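The mechanism behind such visual attention can be sketched in numpy (a generic soft-attention sketch under assumed shapes, not the paper's specific architecture): a learned vector scores each spatial location of a CNN feature map, a softmax turns the scores into a weight map that can be visualized as a heatmap, and the weighted sum becomes the context vector fed to the driving policy:

```python
import numpy as np

rng = np.random.default_rng(0)

feat = rng.standard_normal((64, 12, 12))   # hypothetical CNN feature map
w_att = rng.standard_normal(64)            # hypothetical learned scoring vector

# One attention logit per spatial location: dot product over channels.
logits = np.tensordot(w_att, feat, axes=(0, 0))      # (12, 12)

# Softmax over all locations: the weight map sums to 1 and can be
# rendered over the input image to show where the agent "looks".
att = np.exp(logits - logits.max())
att /= att.sum()

# Attended context vector: input to the control policy (steering, throttle).
context = (feat * att).sum(axis=(1, 2))              # (64,)
```

Because the weight map is an explicit intermediate tensor, interpretability comes for free: overlaying `att` on the camera frame shows which regions drove the predicted control command.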

When Autonomous Vehicles Are Hacked, Who Is Liable?

Who might face civil liability if autonomous vehicles (AVs) are hacked to steal data or inflict mayhem, injuries, and damage? How will the civil justice and insurance systems adjust to handle such claims? RAND researchers addressed these questions to help those in the automotive, technology, legal, and insurance industries prepare for the shifting roles and responsibilities that the era of AVs may bring.