TY - GEN
T1 - VRF
T2 - 22nd Annual International Conference on Mobile Systems, Applications and Services, MOBISYS 2024
AU - Khan, Kaleem Nawaz
AU - Khalid, Ali
AU - Turkar, Yash
AU - Dantu, Karthik
AU - Ahmad, Fawad
N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).
PY - 2024/6/4
Y1 - 2024/6/4
N2 - Autonomous vehicles and human drivers are prone to line-of-sight limitations. Road-side mounted 3D sensors like LiDARs can augment a vehicle's on-board perception. However, this entails fusing 3D frames at low latency and high accuracy. Road-side and vehicle 3D frames are captured from different viewpoints. This adversely affects alignment accuracy and can be computationally expensive. To this end, VRF optimizes for both latency and accuracy by decoupling the alignment process into indirect and direct alignments. First, VRF indirectly aligns the 3D frames by aligning them to a common reference point, i.e., a vehicle's on-board 3D map. Then, it directly aligns the two point clouds to refine this alignment. To ensure high accuracy, it incorporates novel offline registration and alignment accuracy forecasting modules. To ensure low latency, it uses a fast fusion pipeline that caches previous and offline computations. To our knowledge, VRF is the first vehicle road-side cooperative system to ensure cm-level accuracy and end-to-end latency less than 20 ms. Most importantly, its latency is below the 100 ms threshold required for autonomous vehicles to react to external events. Finally, VRF can improve reaction time to external events by as much as 5 seconds.
AB - Autonomous vehicles and human drivers are prone to line-of-sight limitations. Road-side mounted 3D sensors like LiDARs can augment a vehicle's on-board perception. However, this entails fusing 3D frames at low latency and high accuracy. Road-side and vehicle 3D frames are captured from different viewpoints. This adversely affects alignment accuracy and can be computationally expensive. To this end, VRF optimizes for both latency and accuracy by decoupling the alignment process into indirect and direct alignments. First, VRF indirectly aligns the 3D frames by aligning them to a common reference point, i.e., a vehicle's on-board 3D map. Then, it directly aligns the two point clouds to refine this alignment. To ensure high accuracy, it incorporates novel offline registration and alignment accuracy forecasting modules. To ensure low latency, it uses a fast fusion pipeline that caches previous and offline computations. To our knowledge, VRF is the first vehicle road-side cooperative system to ensure cm-level accuracy and end-to-end latency less than 20 ms. Most importantly, its latency is below the 100 ms threshold required for autonomous vehicles to react to external events. Finally, VRF can improve reaction time to external events by as much as 5 seconds.
KW - autonomous cars
KW - cooperative perception
KW - infrastructure-assisted autonomous driving
UR - https://www.scopus.com/pages/publications/85196159028
U2 - 10.1145/3643832.3661874
DO - 10.1145/3643832.3661874
M3 - Conference contribution
T3 - MOBISYS 2024 - Proceedings of the 2024 22nd Annual International Conference on Mobile Systems, Applications and Services
SP - 547
EP - 560
BT - MOBISYS 2024 - Proceedings of the 2024 22nd Annual International Conference on Mobile Systems, Applications and Services
PB - Association for Computing Machinery, Inc
Y2 - 3 June 2024 through 7 June 2024
ER -