Efficient and Robust LiDAR-Based End-to-End Navigation

Zhijian Liu*, Alexander Amini*, Sibo Zhu, Sertac Karaman, Song Han, Daniela L. Rus
Massachusetts Institute of Technology (MIT)
(* indicates equal contributions)


Deep learning has been used to demonstrate end-to-end learning of autonomous vehicle control directly from raw sensory input. While LiDAR sensors provide reliably accurate information, existing end-to-end driving solutions are mainly based on cameras, since processing 3D data requires a large memory footprint and high computation cost. On the other hand, increasing the robustness of these systems is also critical; however, even estimating the model's uncertainty is very challenging due to the cost of sampling-based methods. In this paper, we present an efficient and robust LiDAR-based end-to-end navigation framework. We first introduce Fast-LiDARNet, which is built on sparse convolution kernel optimization and hardware-aware model design. We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass and then fuses the control predictions intelligently. We evaluate our system on a full-scale vehicle and demonstrate lane-stable driving as well as navigation capabilities. In the presence of out-of-distribution events (e.g., sensor failures), our system significantly improves robustness and reduces the number of takeovers in the real world.
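The exact Hybrid Evidential Fusion implementation is not given on this page, but the underlying idea of evidential regression can be sketched briefly: a network head predicts the parameters of a Normal-Inverse-Gamma distribution in a single forward pass, the epistemic uncertainty follows in closed form, and multiple control predictions can then be fused by inverse-uncertainty weighting. The function names below (`evidential_uncertainty`, `fuse_predictions`) are hypothetical, and the weighting scheme is a simplified assumption rather than the paper's full fusion rule.

```python
import numpy as np

def evidential_uncertainty(gamma, nu, alpha, beta):
    """From Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)
    predicted by a network head in one forward pass, return the control
    prediction E[mu] = gamma and its epistemic uncertainty
    Var[mu] = beta / (nu * (alpha - 1)), as in deep evidential regression."""
    prediction = gamma
    epistemic = beta / (nu * (alpha - 1.0))
    return prediction, epistemic

def fuse_predictions(preds, uncertainties):
    """Fuse several control predictions by inverse-variance weighting,
    so low-uncertainty branches dominate the fused steering command."""
    w = 1.0 / np.asarray(uncertainties, dtype=float)
    return float(np.sum(w * np.asarray(preds, dtype=float)) / np.sum(w))

# Example: two branches predict steering 0.1 and 0.5; the first is far
# more certain, so the fused command stays close to 0.1.
fused = fuse_predictions([0.1, 0.5], [0.01, 1.0])
```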




Citation

@inproceedings{liu2021efficient,
  title={Efficient and Robust LiDAR-Based End-to-End Navigation},
  author={Liu, Zhijian and Amini, Alexander and Zhu, Sibo and Karaman, Sertac and Han, Song and Rus, Daniela},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021}
}

Acknowledgments: This work was supported by the National Science Foundation and the Toyota Research Institute. We gratefully acknowledge NVIDIA for the donation of the V100 GPU and the Drive AGX Pegasus.