Lidar (light detection and ranging) rangefinders, which are common tools in surveying and in autonomous-vehicle control, among other applications, gauge depth by emitting short bursts of laser light and measuring the time it takes for reflected photons to arrive back and be detected.
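The time-of-flight principle behind this is simple: light travels to the target and back, so the measured round-trip time corresponds to twice the depth. A minimal sketch (the constant and function name here are illustrative, not from any particular lidar system):

```python
# Illustrative sketch of time-of-flight ranging: depth is half the
# round-trip distance traveled by the reflected photon.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_round_trip(t_seconds):
    """Convert a photon's round-trip travel time to depth in meters."""
    return C * t_seconds / 2.0

# A photon detected ~66.7 ns after the pulse fired implies a target
# roughly 10 m away.
d = depth_from_round_trip(66.7e-9)
```

Note the nanosecond scale: resolving depth to centimeters requires timing photon arrivals to within tens of picoseconds, which is why lidar detectors and timing electronics are specialized hardware.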
Now, researchers from MIT’s Research Laboratory of Electronics have developed a new lidar-like system that can gauge depth when only a single photon is detected from each location.
Since a conventional lidar system would require about 100 times as many photons to make depth estimates of similar accuracy under comparable conditions, the new system could yield substantial savings in energy and time — which are at a premium in autonomous vehicles trying to avoid collisions.
The system can also use the same reflected photons to produce images of a quality that a conventional imaging system would require 900 times as much light to match — and it works much more reliably than lidar in bright sunlight, when ambient light can yield misleading readings.
All the hardware it requires can already be found in commercial lidar systems; the new system just deploys that hardware in a manner more in tune with the physics of low-light-level imaging and natural scenes.
In a conventional lidar system, the laser fires pulses of light toward a sequence of discrete positions, which collectively form a grid; each location in the grid corresponds to a pixel in the final image, a technique called raster scanning.
The laser generally fires a large number of times at each grid position, until the measured intervals between emitted pulses and detected reflections are consistent enough to rule out the misleading signals produced by stray photons.
The MIT researchers’ system, by contrast, fires repeated bursts of light from each position in the grid only until it detects a single reflected photon; then it moves on to the next position.
A highly reflective surface — one that would show up as light rather than dark in a conventional image — should yield a detected photon after fewer bursts than a less-reflective surface would. So the MIT researchers’ system produces an initial, provisional map of the scene based simply on the number of times the laser has to fire to get a photon back.
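This relationship can be sketched with a toy simulation. Treating each pulse as an independent trial whose detection probability scales with surface reflectivity (an assumption for illustration; the function names below are hypothetical), the number of pulses until the first detected photon follows a geometric distribution, so its average is inversely proportional to reflectivity:

```python
import random

def pulses_until_first_photon(p_detect, rng):
    """Fire pulses until one reflected photon is detected.

    Each pulse is an independent trial with detection probability
    p_detect, so the count is a geometric random variable with
    mean 1 / p_detect.
    """
    n = 0
    while True:
        n += 1
        if rng.random() < p_detect:
            return n

rng = random.Random(0)
# A bright surface (higher per-pulse detection probability) returns a
# photon after far fewer pulses, on average, than a dark one.
bright = sum(pulses_until_first_photon(0.2, rng) for _ in range(2000)) / 2000
dark = sum(pulses_until_first_photon(0.02, rng) for _ in range(2000)) / 2000
# Reflectivity can then be estimated as proportional to 1 / pulse count,
# which is what lets the pulse tally double as a provisional intensity map.
```

In this toy model the bright surface averages about 5 pulses per detection and the dark one about 50, mirroring the roughly reflectivity-proportional savings the researchers exploit.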
The photon registered by the detector could, however, be a stray photodetection generated by background light. Fortunately, the false readings produced by such ambient light can be characterized statistically; they follow a pattern known in signal processing as “Poisson noise.”
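The statistical fingerprint of Poisson noise is that the variance of the counts equals their mean, and that is what makes background detections modelable. A small sketch (the per-window background rate is an assumed illustrative value, and the sampler is the standard Knuth multiplication method, not anything specific to the researchers' system):

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson-distributed count (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
lam = 3.0  # assumed mean number of background photons per detection window
counts = [poisson_sample(lam, rng) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Poisson fingerprint: the variance of the background counts tracks
# their mean, so false detections can be modeled and filtered out
# statistically rather than averaged away with many more pulses.
```

Because the background's statistics are known in this closed form, a detection that is improbable under the Poisson model can be flagged as noise even when only one photon has been recorded.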
Simply filtering out noise according to the Poisson statistics would produce an image that would probably be intelligible to a human observer. But the MIT researchers’ system goes one step further: it guides the filtering process by assuming that adjacent pixels will, more often than not, have similar reflective properties and will occur at approximately the same depth. That assumption enables the system to filter out noise in a more principled way.
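The neighbor-similarity assumption can be illustrated with a deliberately simple stand-in for the researchers' filtering step: a 3x3 median filter, which replaces each depth value with the median of its neighborhood. This is not the authors' actual algorithm, just a minimal sketch of how spatial smoothness suppresses isolated false readings while leaving consistent regions untouched:

```python
def median_filter_3x3(depth):
    """Replace each pixel with the median of its 3x3 neighborhood.

    Isolated outliers (e.g. a depth produced by a stray background
    photon) disagree with all their neighbors and are voted out;
    smooth regions, where neighbors agree, pass through unchanged.
    """
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            window = [depth[ny][nx]
                      for ny in range(max(0, y - 1), min(h, y + 2))
                      for nx in range(max(0, x - 1), min(w, x + 2))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A flat scene 10 m away, with one false depth from a background photon.
scene = [[10.0] * 5 for _ in range(5)]
scene[2][2] = 42.0  # stray-photon outlier
cleaned = median_filter_3x3(scene)
```

The outlier at the center disagrees with all eight of its neighbors, so the median vote restores the 10-meter depth; a genuine depth edge, by contrast, is shared by several adjacent pixels and survives the filter.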