UC Berkeley researchers developed a more compact and higher-resolution light detection and ranging, or LiDAR, system, which is used by self-driving cars and other autonomous machines to detect surrounding objects.
According to Ming Wu, campus electrical engineering and computer science professor and leader of this research, LiDAR is a sensor that maps 3D landscapes by emitting a laser and measuring the time the light takes to return to the device. However, previous LiDAR designs have been expensive and bulky. The new design, featured in Nature on Wednesday, could make LiDAR technology cheaper and smaller.
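The time-of-flight principle Wu describes reduces to a one-line calculation. A minimal sketch, where the example round-trip time is an illustrative value not taken from the article:

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
# The division by 2 accounts for the pulse traveling out and back.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_round_trip(t_seconds: float) -> float:
    """Return the target distance in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT * t_seconds / 2

# Example: a pulse returning after ~66.7 nanoseconds corresponds to a
# target roughly 10 meters away.
print(distance_from_round_trip(66.7e-9))
```

Repeating this measurement while the beam scanner steers the laser across the scene is what produces the 3D map.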
“Normal cameras involve a 2D sensor sensing an image independent of distance and compressed on a plane,” Wu said. “For many applications, from self-driving cars to robotics to drone navigation, it’s important to have that 3D landscape to avoid obstacles and plan their path.”
Xiaosheng Zhang and Kyungmok Kwon, co-first authors of the study, emphasized the need to shrink the size of LiDAR while maintaining its performance.
One of the major challenges with this task, according to Zhang and Kwon, is making an integrated beam scanner, which "steers" the output laser light in different directions to scan across the scene. On many self-driving cars, the beam scanner is the spinning unit mounted on the roof.
“We made a LiDAR with an integrated beam scanner on a silicon photonics chip of 10 mm x 11 mm, which we call a focal plane switch array,” Zhang and Kwon said in an email. “Focal plane switch array … can miniaturize and integrate (the) LiDAR system into a single chip.”
Wu added that the single chip makes LiDAR resemble a smartphone camera. LiDAR is more complex than a smartphone camera, however, because each pixel must not only receive light but also transmit a laser that "bounces off a target and returns to the LiDAR camera."
The group's design uses tiny microelectromechanical system, or MEMS, switches instead of more common thermo-optic switches. MEMS switches physically move the waveguide, a structure that guides light along the chip, up and down to route the light to each pixel, Zhang and Kwon said in the email.
According to Wu, this shrinks the pixel size from several hundred micrometers to 50 micrometers. The group was subsequently able to increase the number of pixels on a one-square-centimeter chip from 512 to 16,384 while consuming significantly less power.
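The resolution gain follows from simple arithmetic. A sketch of the scaling, where the 300-micrometer figure for the older pitch is an illustrative stand-in for "several hundred micrometers":

```python
# Smaller pixels mean more pixels fit on the same chip area.
NEW_PITCH_UM = 50    # MEMS-switch pixel size, per the article
OLD_PITCH_UM = 300   # "several hundred micrometers" -- illustrative assumption

CHIP_EDGE_UM = 10_000  # 10 mm, the shorter edge of the 10 mm x 11 mm chip

# Pixels that fit along one chip edge at each pitch
new_per_edge = CHIP_EDGE_UM // NEW_PITCH_UM   # 200
old_per_edge = CHIP_EDGE_UM // OLD_PITCH_UM   # 33

# Shrinking the pitch 6x multiplies areal pixel density roughly 36x,
# consistent in scale with the reported jump from 512 to 16,384 pixels.
print(16_384 / 512)
```

The reported jump works out to a 32-fold increase in pixel count, in line with what a roughly sixfold reduction in pixel pitch makes possible.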
“This is the power of technology evolution,” Wu said. “Using similar silicon technology, we are on the path of scaling the technology and can reach even higher resolutions in the future.”
Wu said LiDAR is only one of many systems used to identify objects, especially in self-driving cars, so the sensors need to work well together.
For example, if a separate sensor cannot distinguish between a truck and the blue sky, LiDAR can provide critical information when integrated well.
“Making critical components more accessible and cheaper will enable it to become ubiquitous,” Wu said. “Digital cameras used to be very expensive … but very high-performance cameras are now very cheap to the point that people will not hesitate to add a camera anywhere they need it.”
Zhang and Kwon noted they often see self-driving cars being tested in San Francisco — many of which use LiDAR. By improving this technology, they hope to benefit self-driving cars and LiDAR’s many other applications.
The next challenge for Wu's team is bringing this technology from the lab to the market, a task that includes determining how to reproduce the device reliably in large quantities.
“We are ready to take on that challenge,” Wu said.