Using Edge to navigate LiDAR Data: from the surface to the bottom in minutes

7th Mar 2024 dRISK

By Federico Arenas Lopez and Katarina Cimesa

 

Assessing the surface: Introduction

 

LiDAR-based perception plays an increasingly important role in deploying safe autonomous vehicles and ADAS systems. In this short blog post, we illustrate how dRISK Edge can help LiDAR developers analyze the performance of their sensors under challenging circumstances, in a matter of minutes, with little pre-processing or setup, entirely in the cloud.

Edge is the only tool that lets users navigate and grasp all of their data at once without losing granularity. In this blog post we show (1) how Edge’s unique way of showing you all of your data at once enables LiDAR developers to easily find unknown, potentially hazardous objects in their data, and (2) how Edge’s crisp interactivity enables LiDAR developers to dive deep into individual data points coming from different sensors.

Edge’s highly scalable data visualization and interactive exploration capabilities are particularly useful for finding patterns in point cloud data. dRISK supports many different data formats, including the ones found in the Cirrus Dataset, which we’ll use for demonstration purposes. Below you can see the three files that make up a single frame, loaded into the same session in Edge.

Cirrus Dataset point cloud loaded into Edge for a single frame
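For readers who want to poke at the raw files outside Edge, here is a minimal Python sketch of assembling one such frame. The file names, formats, and field layout are assumptions for illustration only (two sensor sweeps stored as (N, 4) arrays of x, y, z, intensity, plus a JSON annotation file); adapt them to your local copy of the dataset.

# Minimal sketch (outside Edge) of assembling one Cirrus-style frame in Python.
# File names, formats, and field layout are assumptions -- adapt to your copy
# of the dataset.
import json
import numpy as np

def load_frame(gaussian_path, uniform_path, annotation_path):
    """Load both sensor sweeps and the annotation for a single frame."""
    gaussian_points = np.load(gaussian_path)   # assumed shape (N, 4): x, y, z, intensity
    uniform_points = np.load(uniform_path)     # assumed shape (M, 4): x, y, z, intensity
    with open(annotation_path) as f:
        annotations = json.load(f)             # assumed: list of labelled objects
    return gaussian_points, uniform_points, annotations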

 

Navigating the surface: Finding unknown objects in a sea of LiDAR data

 

To assess the performance of their perception system, developers can use Edge to find frames with traffic situations that are particularly challenging for LiDAR. For example, unidentified objects on busy highways form a specific cluster of risk. Edge lets you load the complete dataset’s annotations at once and makes it easy to find and visualize the objects annotated with the label “unknown”.

Complete Cirrus dataset 1 loaded into a single view on the left, and a single JSON annotation on the right
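A rough Python equivalent of this “find everything labelled unknown” query is sketched below. The annotation schema is an assumption: each entry is taken to have a "label" field.

# Sketch (outside Edge) of filtering annotations for the "unknown" label.
# The schema of each annotation entry is an assumption for illustration.
def find_unknown_objects(annotations):
    """Return every annotated object whose label is 'unknown'."""
    return [obj for obj in annotations if obj.get("label", "").lower() == "unknown"]

# Hypothetical usage over a whole dataset of per-frame annotation files:
# unknowns = {path: find_unknown_objects(json.load(open(path))) for path in annotation_paths}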

 

Edge lets users leverage the graph structure and keep all of their data in one place, making it trivial to surface outliers such as a chair, presumably fallen off a vehicle shortly before, lying in the middle of the highway.

Edge lets users grasp large datasets without losing touch with the individual data points. Using Edge, we can inspect each data point from the frame above and see what the underlying object type is. All of them are vehicles, apart from one object annotated as “unknown”. The data points can be colored by LiDAR intensity, and the “unknown” chair stands out.

Identifying the unidentified object as a chair enriched by the JSON annotation, the LiDAR point cloud, and the camera feed
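The intensity colouring can be re-created outside Edge with a short matplotlib sketch like the one below. It assumes the point cloud is an (N, 4) array of x, y, z, intensity, as in the earlier loading example.

# Hypothetical re-creation of the intensity colouring, using matplotlib
# instead of Edge. `points` is assumed to be an (N, 4) array of x, y, z, intensity.
import matplotlib.pyplot as plt

def plot_intensity_topdown(points):
    """Top-down (x, y) scatter of the point cloud, coloured by LiDAR intensity."""
    fig, ax = plt.subplots(figsize=(8, 4))
    sc = ax.scatter(points[:, 0], points[:, 1], c=points[:, 3], s=1, cmap="viridis")
    fig.colorbar(sc, ax=ax, label="intensity")
    ax.set_xlabel("X [m]")
    ax.set_ylabel("Y [m]")
    plt.show()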

 

Edge allows for comprehensive exploration of any kind of data. In this case, by searching for “unknown” entities in the Cirrus Dataset, developers can quickly understand the limitations of the dataset for training and validation purposes, and come up with strategies to counter these limitations.

 

Diving to the seabed: Assessing Performance of LiDAR settings

 

Edge lets users go from coarse to granular analysis of a dataset in a matter of minutes, for example to understand and compare the long-range detection performance of different scan patterns in fine detail. Using the same frame as an example, we can see how the number of hits on the vehicles in this highway scene drops by about 5x beyond 120 m for the Gaussian scan pattern setting.

Using Edge’s cross-highlighting functionality to characterize the Gaussian scan pattern in multiple dimensions
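A back-of-the-envelope version of this range analysis is sketched below: count how many points land inside each annotated vehicle box and compare the counts for vehicles nearer and farther than 120 m. The box representation (axis-aligned min/max corners) is an assumption about the annotation schema, not the dataset’s actual format.

# Sketch of the range analysis: hits per vehicle, near vs. far.
# The box representation (axis-aligned min/max corners) is an assumption.
import numpy as np

def hits_per_object(points, box):
    """Count points falling inside an axis-aligned box given as (min_xyz, max_xyz)."""
    lo, hi = np.asarray(box[0]), np.asarray(box[1])
    inside = np.all((points[:, :3] >= lo) & (points[:, :3] <= hi), axis=1)
    return int(inside.sum())

def hit_drop_beyond(points, vehicle_boxes, range_cutoff=120.0):
    """Ratio of mean hits on vehicles nearer vs. farther than `range_cutoff` metres."""
    near, far = [], []
    for box in vehicle_boxes:
        centre = (np.asarray(box[0]) + np.asarray(box[1])) / 2.0
        hits = hits_per_object(points, box)
        (near if np.linalg.norm(centre) < range_cutoff else far).append(hits)
    if not near or not far:
        return float("nan")
    return np.mean(near) / np.mean(far)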

 

Moreover, by overlaying the two point clouds taken from the two sensors used in the Cirrus Dataset, we can easily characterize the strengths and weaknesses of each scan pattern, Uniform and Gaussian. In the example below, notice how the dark blue distribution consistently shows a higher number of hits at distances beyond a couple of meters (along X [m]), showing that the Gaussian scan pattern outperforms the Uniform scan pattern for long-distance sensing. However, looking at the top-left and bottom-right plots, the Uniform scan pattern is able to fill the blind spots across a wider field of view.

 

Overlay of point clouds from the Uniform (dark blue dots) and Gaussian (light green dots) scan patterns; “# of hits” is the number of LiDAR data points falling at a given position along the stated axis
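The overlaid “# of hits” comparison can be approximated outside Edge by histogramming both sweeps along the X axis on the same plot, as in the sketch below. The colours follow the figure caption; the bin width is an arbitrary choice.

# Sketch of the overlaid "# of hits" comparison along X for the two scan patterns.
import matplotlib.pyplot as plt
import numpy as np

def compare_hits_along_x(uniform_points, gaussian_points, bin_width=5.0):
    """Overlay per-bin point counts along X for the two scan patterns."""
    x_min = min(uniform_points[:, 0].min(), gaussian_points[:, 0].min())
    x_max = max(uniform_points[:, 0].max(), gaussian_points[:, 0].max())
    bins = np.arange(x_min, x_max + bin_width, bin_width)
    plt.hist(uniform_points[:, 0], bins=bins, alpha=0.6, color="darkblue", label="Uniform")
    plt.hist(gaussian_points[:, 0], bins=bins, alpha=0.6, color="lightgreen", label="Gaussian")
    plt.xlabel("X [m]")
    plt.ylabel("# of hits")
    plt.legend()
    plt.show()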

 

With just a few clicks, Edge enables ADAS developers to find the optimal LiDAR setting for their downstream application by iterating over many different sensor configurations.

 

Coming back to the surface: Conclusion

 

We covered how, in a matter of minutes, users can go from coarse to in-depth analysis of their data, from finding unknown entities in a dataset to comparing LiDAR sensor scan patterns, using the Cirrus Dataset as an example. These workflows extend trivially to other use cases where a complete understanding of large datasets is needed without losing granularity. If you are interested in understanding large datasets in a complete way, in about as long as it took you to read this article, try dRISK Edge for yourself now at demo.drisk.ai.

 

 

Legal notice

  1. Luminar Technologies, Inc. is the sole and exclusive owner of the Cirrus dataset.
  2. The dataset is licensed under CC BY-SA 4.0
  3. Any public use, distribution, display of this data set must contain this notice in its entirety.