Stanford University researchers have developed a laser-based system that can reconstruct images of objects hidden from view. The technology could enhance the safety of self-driving vehicles and could serve other functions as well.
Autonomous cars rely on LiDAR (light detection and ranging) technology, which uses light pulses to scan for visible objects via direct reflections in the immediate environment. While LiDAR can help cars see obstacles on the road, non-line-of-sight (NLOS) imaging could extend LiDAR's capabilities by reconstructing the shape and albedo of objects hidden from the driver. Until now, however, NLOS efforts have been held back by the weak signals produced by multiply scattered light and by the huge processing requirements of existing algorithms for reproducing the final image of the hidden object.
In an improvement over similar research efforts, the Stanford engineers built their system around a confocal scanning procedure, which strengthens the returning signal and extends the system's range, particularly for retroreflective objects. The system requires significantly fewer computational and memory resources than previous reconstruction methods, and it images hidden objects at unprecedented resolution.
“It sounds like magic, but the idea of non-line-of-sight imaging is actually feasible,” said Gordon Wetzstein, assistant professor of electrical engineering and senior author of the paper describing this work, published March 5 in Nature.
What makes the system unique is its confocal setup: a laser is placed next to a highly sensitive photon detector, with both pointed at the same spot on a wall off which the laser pulses are bounced. Previous systems have used a non-confocal approach.
In the Stanford lab, laser pulses were bounced off the wall at an angle to hit a toy rabbit hidden behind a screen perpendicular to the wall. The light then reflected from the rabbit back to the wall, and from there back to the sensor, where a so-called single-photon avalanche diode, or SPAD, detected the faint return signal.
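The round-trip timing in this setup is what pins down where the hidden surface is. In a confocal arrangement, the laser and detector share a wall point, so a photon's total travel time confines the hidden surface to a sphere around that point. A minimal sketch of the geometry (the function name and numbers below are illustrative, not from the paper):

```python
# Illustrative sketch of confocal NLOS time-of-flight geometry.
# Assumes a co-located laser and SPAD aimed at a single wall point.

C = 299_792_458.0  # speed of light, m/s

def hidden_radius(photon_time_s, wall_dist_m):
    """Distance from the illuminated wall point to the hidden surface.

    The photon's round trip is laser -> wall -> object -> wall -> detector,
    so after subtracting the two known laser-to-wall legs, the remaining
    wall-to-object leg is (c*t - 2*wall_dist) / 2.
    """
    return (C * photon_time_s - 2.0 * wall_dist_m) / 2.0

# A photon returning after ~20 ns with the wall 1 m away implies the
# hidden surface is roughly 2 m from the illuminated wall point.
r = hidden_radius(20e-9, 1.0)
```

Each recorded photon thus constrains the hidden object to lie somewhere on a sphere of radius `r`; combining many wall points and arrival times narrows those spheres down to a shape.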
The confocal scans of hidden objects took between two minutes and an hour, depending on lighting conditions and the reflectivity of the hidden object. The technique also lessens the computational power needed to reproduce the image: the algorithm quickly crunches the data, cuts out noise such as ambient light, and produces a 3D image of the hidden object.
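The paper's actual speed comes from a closed-form deconvolution the authors call the light-cone transform; the far simpler backprojection sketch below only illustrates the underlying idea of turning photon arrival times into a 3D volume. Everything here is hypothetical: a tiny simulated scan grid, one point scatterer standing in for the hidden object, and normalized units with the speed of light set to 1.

```python
import numpy as np

C = 1.0     # normalized units: speed of light = 1
BIN = 0.05  # time-bin width of the (simulated) SPAD histogram

# Hypothetical 5x5 confocal raster of wall points on the z = 0 plane.
xs = np.linspace(-1, 1, 5)
scan = [(x, y) for x in xs for y in xs]

# Simulate histograms for a single hidden scatterer at (0.2, -0.1, 1.0):
# each wall point records one photon at its round-trip arrival time.
target = np.array([0.2, -0.1, 1.0])
n_bins = 200
hists = []
for (x, y) in scan:
    d = np.linalg.norm(target - np.array([x, y, 0.0]))
    h = np.zeros(n_bins)
    h[int(round(2 * d / (C * BIN)))] = 1.0
    hists.append(h)

# Backproject: every echo votes for all voxels at the matching radius
# from its wall point (the sphere constraint from the timing geometry).
g = np.linspace(-1, 1, 21)
z = np.linspace(0.5, 1.5, 21)
X, Y, Z = np.meshgrid(g, g, z, indexing="ij")
vol = np.zeros(X.shape)
for (x, y), h in zip(scan, hists):
    d = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + Z ** 2)
    b = np.round(2 * d / (C * BIN)).astype(int)
    vol += np.where(b < n_bins, h[np.clip(b, 0, n_bins - 1)], 0.0)

# The spheres from all 25 wall points intersect at the true scatterer,
# so the brightest voxel lands near it.
i, j, k = np.unravel_index(np.argmax(vol), vol.shape)
est = np.array([g[i], g[j], z[k]])
```

Backprojection like this scales poorly with scene size; the light-cone transform avoids the per-voxel loop by recasting the confocal measurements as a 3D convolution that can be inverted in closed form, which is what makes the push-button processing times possible.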
“You can push a button on your laptop and process these images in a second,” Stanford electrical engineer David Lindell told Wired, “whereas before it took hours on compute-intensive hardware to be able to do this.”
“We believe the computation algorithm is already ready for LiDAR systems,” said Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. “The key question is if the current hardware of LiDAR systems supports this type of imaging.”
While the Stanford engineers have performed outdoor experiments with their system, successfully detecting hidden objects under indirect sunlight, hurdles remain.
“The biggest challenge is the amount of signal lost when light bounces around multiple times,” said O’Toole. “This problem is compounded by the fact that a moving car would need to measure this signal under bright sunlight, at fast rates, and from long range.”
And were the system installed in a car today, the researchers say, it could easily detect retroreflective objects such as road signs, safety vests, or road markers, but it would struggle to track a moving person wearing non-reflective clothing.
Still, with further development, the technology could soon be integrated into LiDAR systems. Its applications extend beyond self-driving cars, too: rescue teams could use it to find people trapped under collapsed walls and rubble, or hidden beneath thick foliage. It could even find its way into diagnostic and medical imaging devices.