TriEye

Breaking Barriers: How Points Per Second is Transforming Automotive Safety

September 6, 2022

One of the most important functions of Advanced Driver Assistance Systems (ADAS) is the ability to detect objects, classify them, and decide how to maneuver around any potential hazards. Obtaining data about the environment with sufficient resolution and frame rate is key to safe decision-making for ADAS and autonomous vehicles (AVs).


Understanding Points Per Second

Points per second (PPS) is the number of points a sensor can measure and output in one second. It is a common indicator of LiDAR system performance and is calculated by multiplying the number of points captured per frame (the horizontal and vertical fields of view divided by the angular sampling resolution in each axis) by the frame rate. PPS is an important metric for detection.
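
To make that calculation concrete, here is a minimal sketch in Python; the field of view, sampling step, and frame rate below are purely illustrative values, not the specifications of any particular sensor.

```python
def points_per_second(h_fov_deg, v_fov_deg, h_step_deg, v_step_deg, frame_rate_hz):
    """PPS = points per frame (FOV divided by the angular sampling step in
    each axis) multiplied by the frame rate."""
    points_per_frame = (h_fov_deg / h_step_deg) * (v_fov_deg / v_step_deg)
    return points_per_frame * frame_rate_hz

# Purely illustrative numbers: 120 x 25 degree FOV, 0.1 degree sampling step, 10 Hz
print(points_per_second(120, 25, 0.1, 0.1, 10))  # about 3 million points per second
```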


Why Does PPS Matter?

Systems with a higher PPS gather more information about the environment each second, providing perception systems with more actionable data. For example, if there is a small object on the road, such as a fallen pallet, a sensor with higher PPS (at the same spatial resolution per point) can determine that an object is present at a longer range and provide the AI with a detailed image of the object, so that it can detect the hazard and, in some cases, classify it for better trajectory planning. The perception system can then determine how to maneuver around the object.

When looking at how to improve L2-L3 functions such as emergency braking, spatial resolution, range, and frame rate are significant metrics for ADAS systems. If we improve the system's ability to see at longer distances without improving PPS, the system will be able to detect that an object is present farther away, but it will not have enough information to classify the object and decide how to act accordingly. With higher PPS and high spatial resolution, ADAS systems gain enriched data that enables adequate reactions in shorter times. This also results in improved target recognition, tracking, obstacle detection, and positioning, which translates into functions such as detecting vulnerable road users, trajectory planning, hazard/debris classification, open-door automatic emergency braking, emergency vehicle classification, and more.


LiDAR Limitations

A LiDAR system is limited in how many points it can return per second by its scanning mechanics and by the speed of light. Because a LiDAR scans the scene with Micro-Electro-Mechanical System (MEMS) mirrors or solid-state beam steering, gaps exist between the points returned to the sensor, limiting the amount of data that can be captured. These gaps mean the sensor misses crucial information about the object, and therefore does not provide the details needed for object classification or small-object detection. Without enough resolution between points, the LiDAR cannot determine whether there is a small box on the road ahead. Because only a low number of relevant points is collected, the same lack of resolution makes it impossible to classify a vulnerable road user (VRU) as a pedestrian, cyclist, etc. beyond a certain range. This leads to ambiguity in the expected vehicle behaviour.
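
As a rough back-of-the-envelope illustration (the pallet size, distance, and angular sampling steps below are our own assumptions, not measured values), the number of points that land on a small object shrinks quickly with range and with coarser sampling:

```python
import math

def points_on_target(width_m, height_m, distance_m, h_step_deg, v_step_deg):
    """Rough estimate of how many sample points land on a flat target:
    its angular extent in each axis divided by the angular sampling step."""
    h_extent_deg = math.degrees(2 * math.atan(width_m / (2 * distance_m)))
    v_extent_deg = math.degrees(2 * math.atan(height_m / (2 * distance_m)))
    return (h_extent_deg / h_step_deg) * (v_extent_deg / v_step_deg)

# Assumed pallet edge of 1.2 m x 0.15 m viewed at 125 m
print(points_on_target(1.2, 0.15, 125, 0.1, 0.1))    # ~3.8 points with 0.1 deg sampling
print(points_on_target(1.2, 0.15, 125, 0.02, 0.02))  # ~95 points with 0.02 deg sampling
```

A handful of returns may be enough to flag that something is present, but not enough to tell whether it is a box, a pallet, or a pedestrian.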

As LiDARs already provide lower spatial resolution than cameras, it is harder for them to detect and classify objects and pedestrians, and in adverse conditions the situation worsens. 


The Ultimate Solution

TriEye’s SEDAR (Spectrum Enhanced Detection And Ranging) is the ultimate solution, providing HD imaging and 3D depth data simultaneously in all adverse weather and lighting conditions. Because the SEDAR uses a pixel array with 1284 x 960 resolution, it can capture more points at once, delivering a higher points-per-second figure. TriEye’s SEDAR can capture over 20 million points per second, compared to standard LiDARs, which capture up to 1 million PPS.
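
As a quick sanity check on those figures (assuming every pixel in the array contributes a depth point each frame; the frame rate here is our own illustrative assumption, not a published specification):

```python
points_per_frame = 1284 * 960   # full-resolution depth frame
print(points_per_frame)         # 1,232,640 points per frame
print(points_per_frame * 17)    # about 21 million points per second at an assumed 17 fps
```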

TriEye’s SEDAR is based on two world-first innovations: TriEye’s Raven, an HD CMOS-based SWIR photosensor, and the UltraBlaze, TriEye’s ultra-high-power, eye-safe SWIR illumination source.

Harnessing the SWIR spectrum’s physical advantages, the SEDAR simultaneously provides both SWIR HD image data and a detailed depth map in all visibility conditions, combining the benefits of a camera, a night-vision camera, and a LiDAR in one system. Leveraging TriEye’s CMOS-based SWIR sensor and proprietary illumination module, the SEDAR enables high-resolution imaging and ranging that is cost-effective, scalable, and comes in a small form factor.

The UltraBlaze enables the SEDAR to illuminate the scene and perceive a detailed representation of the surrounding area even under adverse conditions, which a regular camera or LiDAR cannot do. Leveraging the HD resolution of the Raven sensor, the SEDAR provides significantly more detail per frame, offering high spatial resolution and enabling deterministic identification.

Figure 1: Performance Comparison LiDAR vs. SEDAR – Wood Pallet at 125 m

Distance measurement (ground truth: 125 m)
  SEDAR: 124 m (99.2% accuracy)
  LiDAR: 121 m

Points on pallet
  SEDAR (SWIR image): 324
  LiDAR: 4

Figure 2: Performance Comparison LiDAR vs. SEDAR – Wood Pallet at 180 m

Distance measurement (ground truth: 180 m)
  SEDAR: 178 m (98.88% accuracy)
  LiDAR: not detected

Points on pallet
  SEDAR (SWIR image): 152
  LiDAR: not detected