Core Technology

LightCode Photonics’ active solid-state 3D cameras are based on the direct Time-of-Flight (dToF) measurement principle and incorporate an additional 2D RGB camera, enabling high-confidence, computationally efficient object detection either as a standalone solution or as part of a 2D/3D fusion perception system.

Direct Time-of-Flight

LightCode Photonics’ cameras are based on direct ToF technology, which uses a sub-nanosecond electronic stopwatch and a pulsed light source to measure the flight time of an emitted and subsequently detected light pulse (Figure 1), eliminating the need for computationally heavy processing. Because light travels roughly 30 cm per nanosecond, the device measuring the arrival time of a photon must have a timing resolution below 1 nanosecond, as this resolution directly determines the accuracy of the distance measurement. Advances in photodetector technology have given rise to highly sensitive silicon-based single-photon avalanche diodes (SPADs). SPAD temporal response (timing) properties are well suited to dToF-based 3D imaging systems, and the detector’s single-photon sensitivity allows for increased detection ranges and a higher probability of detection.

Keeping the transmitted laser pulses as short as possible is beneficial: it increases range accuracy, and different objects within the field of view of one pixel are better resolved along the depth axis. In addition, shorter pulses with high peak power increase the signal-to-noise ratio over ambient background noise, as the laser pulse reflections appear as more prominent peaks in the detected signal, while the time-averaged power of the laser is kept low, ensuring compliance with laser safety standards. LightCode has patent-pending technology for creating sub-nanosecond laser pulses from VCSEL diodes; these pulses are several times shorter than those of competing systems. Direct ToF technology is typically employed in scanning 2D and 3D LiDARs and is rare among 3D cameras.
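As a rough illustration of why shorter pulses matter, the sketch below applies the commonly used two-way relation ΔR ≈ c·τ/2 between pulse width τ and the minimum separation at which two returns within one pixel can still be resolved. The pulse widths used are assumed example values, not LightCode specifications.

```python
# Illustrative only: relation between pulse width and the ability to
# separate two returns along the depth axis (delta_R ~ c * tau / 2).
# The pulse widths below are assumed example values, not LightCode specs.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(pulse_width_s: float) -> float:
    """Approximate minimum separation (m) at which two returns
    within one pixel can still be distinguished."""
    return C * pulse_width_s / 2.0

for tau_ns in (2.0, 1.0, 0.5):
    dr = range_resolution(tau_ns * 1e-9)
    print(f"{tau_ns:.1f} ns pulse -> ~{dr * 100:.1f} cm depth resolution")
```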

Figure 1. Direct ToF detection utilizes a high-speed electronic timer that is started simultaneously with the emission of a narrow, high-peak-power light pulse. The emitted pulse scatters off objects in the scene and is reflected back to the receiver side of the sensor, where the returning light is registered and linked to its flight time. As the speed of light in the relevant medium is known, the distance between the camera and the target object can be calculated as D = (T * c) / 2, where D is the object distance, T is the measured flight time, and c is the speed of light.
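The distance formula in the caption can be stated compactly in code. The sketch below is a minimal illustration of that calculation; the example flight time is an assumed value, not a measured one.

```python
# Minimal sketch of the dToF distance calculation from Figure 1.
# The flight time below is an assumed example value.

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_flight_time(t_seconds: float) -> float:
    """Object distance D = (T * c) / 2; the division by two accounts
    for the round trip of the pulse (camera -> object -> camera)."""
    return t_seconds * C / 2.0

# A return registered 66.7 ns after emission corresponds to roughly 10 m.
print(f"{distance_from_flight_time(66.7e-9):.2f} m")
```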

Time Correlated Single Photon Counting

LightCode Photonics’ cameras use time-to-digital converters (TDCs), electronic devices (stopwatches) capable of measuring the arrival times of individual photons with high temporal resolution (below 0.1 nanosecond). For each detector pixel, the arrival time (timestamp) of each photon is added to a histogram (Figure 2); after a series of detected light pulses, a full waveform containing depth data of the scene is obtained. This timestamping method is called time-correlated single-photon counting (TCSPC). TCSPC improves the sensor’s resilience to other active sensors (e.g. LiDARs) by correlating only the desired light pulses, increasing data reliability in a multi-camera setup. At the same time, LightCode’s improved TDC architecture and feedback systems make the sensor more resilient to sunlight. Lastly, TCSPC full-waveform acquisition enables simple discrimination of objects at multiple distances, while also increasing the probability of detecting difficult-to-see objects, such as small, dark, or (semi)transparent objects, and reducing the false positive rate.

Figure 2. Photon timing histogram acquired with TCSPC for the scene in Figure 1. A dToF receiver can differentiate return pulses from many different objects in its field of view (FoV), as long as the temporal resolution of the system is sufficient and an adequate amount of light is reflected from the objects. In this case, up to three peaks are distinguishable from the background signal.
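To make the histogramming step concrete, the sketch below accumulates photon timestamps for a single pixel into fixed-width time bins over a series of pulses and converts the strongest peak into a distance estimate. The bin width and timestamps are illustrative assumptions, not device parameters.

```python
# Illustrative TCSPC-style histogramming for a single pixel.
# Bin width and timestamps are made-up example values, not device parameters.
from collections import Counter

C = 299_792_458.0     # speed of light in vacuum, m/s
BIN_WIDTH_S = 0.1e-9  # assumed 0.1 ns timing bins

def build_histogram(timestamps_s):
    """Accumulate photon arrival times (seconds after pulse emission)
    into fixed-width time bins, as done over a series of laser pulses."""
    return Counter(int(t / BIN_WIDTH_S) for t in timestamps_s)

def peak_distance(histogram):
    """Take the most populated bin as the dominant return and convert
    its round-trip time to a distance (D = T * c / 2)."""
    peak_bin, _ = max(histogram.items(), key=lambda kv: kv[1])
    t = (peak_bin + 0.5) * BIN_WIDTH_S
    return t * C / 2.0

# Example: signal photons clustered near 20 ns plus scattered background hits.
arrivals = [20.0e-9, 20.1e-9, 19.9e-9, 20.0e-9, 5.3e-9, 42.7e-9]
hist = build_histogram(arrivals)
print(f"Estimated distance: {peak_distance(hist):.2f} m")
```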

RGB + Depth data

LightCode Photonics integrates a global-shutter 2D RGB camera into its active solid-state 3D cameras. This enables multi-purpose usage: the camera can serve advanced machine learning tasks, support teleoperation, or act as a monitoring camera. Because the two sensing technologies are integrated into a single device, alignment between them is tightly controlled, which facilitates high-confidence, computationally efficient object detection and SLAM solutions. The camera can output RGBD data (Figure 3), where each color pixel contains depth information in addition to RGB color. This data can easily be combined with mature 2D machine learning algorithms to improve the quality of predictions, or used in advanced visual SLAM algorithms.

Figure 3. Illustrative image with RGBD data, where each color block represents the object distance with respect to the LightCode camera, with blue being the closest and red the farthest object.
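As an illustration of how aligned per-pixel depth can be paired with the output of a 2D detector, the sketch below takes a bounding box from a hypothetical RGB object detector and reads the corresponding region of the depth map to estimate the object's distance. The image size, depth values, and bounding box are assumptions made for the example.

```python
# Illustrative use of aligned RGBD data: estimate the distance to an
# object found by a 2D detector. Image size, depth values and the
# bounding box are made-up examples; the detector itself is hypothetical.
import numpy as np

H, W = 480, 640                                # assumed sensor resolution
rgb = np.zeros((H, W, 3), dtype=np.uint8)      # RGB frame (placeholder)
depth = np.full((H, W), 5.0, dtype=np.float32) # per-pixel depth in metres
depth[200:280, 300:380] = 1.8                  # pretend an object sits at 1.8 m

# Bounding box as it might come from any mature 2D detector (x0, y0, x1, y1).
box = (300, 200, 380, 280)

def object_distance(depth_map: np.ndarray, box) -> float:
    """Median depth inside the box; the median is robust to background
    pixels and dropouts that fall inside the detection."""
    x0, y0, x1, y1 = box
    region = depth_map[y0:y1, x0:x1]
    return float(np.median(region[region > 0]))

print(f"Detected object at ~{object_distance(depth, box):.2f} m")
```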

Product Lineup

TrueSight: a medium-range 3D dToF camera designed for indoor service robotics, with features such as simultaneous dual-resolution output and multiple returns.

Future products

Upcoming generations of TrueSight 3D cameras will add outdoor operation, increased range and field of view, and a significantly reduced form factor.