LiDAR (also written as LIDAR, Lidar, and LADAR) is a surveying method that measures the distance to a target by illuminating it with pulsed laser light and measuring the reflected pulses with a sensor.
A LiDAR system consists of two main parts: a transmitter and a receiver. A vertical-cavity surface-emitting laser (VCSEL) emits a beam of infrared light toward the object, which is captured by a CMOS image sensor after reflection. The travel time of the beam is called its time of flight (ToF).
LiDAR is commonly used for topographic measurement.
Time-of-flight measurement comes in two forms: direct time of flight (dToF) and indirect time of flight (iToF). A dToF device emits light pulses at the target and directly measures the time a photon takes from departure to return; an iToF device emits a series of modulated light waves and measures the flight time by detecting the phase difference between the outgoing and returning waves.
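The iToF phase relationship can be sketched in a few lines. This is an illustrative model only: the function name and the 20 MHz modulation frequency are assumptions for the example, not the specifications of any particular sensor.

```python
# Sketch of the iToF principle: the phase shift between the emitted and
# returned modulated wave encodes the round-trip time (illustrative values).
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance recovered from the phase difference of a modulated wave.

    The result is unambiguous only up to c / (2 * f_mod); beyond that
    range the phase wraps around and aliases to a nearer distance.
    """
    return (C * phase_shift_rad) / (4 * math.pi * mod_freq_hz)

# A quarter-cycle shift (pi/2) at 20 MHz modulation:
d = itof_distance(math.pi / 2, 20e6)  # ~1.87 m
```

The wrap-around noted in the docstring is one reason iToF struggles at longer ranges, which ties into the dToF comparison below.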
Knowing the speed of light, we can get the distance to the target object through a simple calculation. This ranging principle is similar to radar. However, because LiDAR's pulsed lasers have shorter wavelengths and higher frequencies than the radio waves used by radar, it can produce higher-resolution images, with accuracy down to centimeters or even millimeters.
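The "simple calculation" for dToF is just the round-trip time multiplied by the speed of light and halved. A minimal sketch (the function name is an illustrative assumption):

```python
# dToF: distance from a directly measured round-trip time.
C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_s: float) -> float:
    # The light travels out to the target and back, so halve the total path.
    return C * round_trip_s / 2

# A 33.3-nanosecond round trip corresponds to roughly 5 m:
d = dtof_distance(33.3e-9)  # ~4.99 m
```

The nanosecond scale of these round trips is why dToF sensors need very fast photon detectors, while iToF sidesteps the timing problem by measuring phase instead.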
Therefore, LiDAR is also used in cartography, self-driving cars, surveying, and the protection of cultural heritage sites. The question is: what can this scanner do on a mobile device, specifically the iPhone 12?
This is not the first time Apple has introduced LiDAR in its products. As early as March this year, the iPad Pro 2020 was equipped with a LiDAR scanner using direct time-of-flight (dToF) technology. Compared with iToF, it has lower power consumption and is less affected by ambient light, allowing for longer-range measurements.
In use, the emitter projects a 9 × 64 matrix of beams to capture and map the depth of the scene at distances of up to 5 meters. The difference is evident in the photos you take.
Optically, a shallow depth of field is achieved by opening up the camera's aperture and focusing on a subject close to the lens. Today, with the help of multiple cameras and machine learning algorithms, this kind of background blur can easily be reproduced on a mobile phone.
First of all, neural networks and machine learning tell the phone what we are shooting and how far away the subject is. For the average person it is easy to judge an object's extent and its distance relationships from a photograph, but a machine needs considerable training to do the same, relying on cues such as color distribution, object brightness, and size.
Just like the photo above, with uniform exposure and a clear subject, the camera can easily distinguish points closer to the lens from those farther away. But once there are more complex relationships in a photograph, things get complicated.
Human or animal hair, intricate color patterns, and clothing close in color to the background are all susceptible to being mistakenly judged by the machine as background, especially at the edges of the subject, producing an unnatural blurring effect.
The addition of LiDAR provides more accurate depth information, so the iPhone can understand the relationships between objects, build a spatial depth map through its algorithms, and segment the image. Based on this depth map, the iPhone divides the photo into layers almost instantly; the farther the layer, the stronger the blur, creating a simulated portrait with a shallow depth of field.
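The layering idea can be sketched as follows. This is a minimal illustration, not Apple's actual pipeline: the helper names are invented for the example, and a crude box blur stands in for a proper lens-blur model.

```python
# Illustrative sketch: split an image into depth layers and blur each
# layer more strongly the farther it is, mimicking a shallow depth of field.
import numpy as np

def box_blur(img, k):
    """Simple k x k mean filter using edge padding (k odd)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            out += padded[p + dy : p + dy + img.shape[0],
                          p + dx : p + dx + img.shape[1]]
    return out / (k * k)

def layered_blur(image, depth, n_layers=3):
    """Blur each depth layer progressively; the nearest layer stays sharp."""
    image = image.astype(float)
    out = np.zeros_like(image)
    # Bin depths into n_layers bands (epsilon keeps the max in the last bin).
    edges = np.linspace(depth.min(), depth.max() + 1e-9, n_layers + 1)
    for i in range(n_layers):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        k = 2 * i + 1  # kernel grows with depth: 1, 3, 5, ...
        blurred = image if k == 1 else box_blur(image, k)
        out[mask] = blurred[mask]
    return out
```

A real pipeline would also blend layer boundaries and model bokeh shapes, but the depth-binned structure is the core of the effect described above.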
At the same time, in low light the iPhone's focusing speed improves significantly: it can accurately find people in the frame even in dim conditions, laying the foundation for portrait photos in Night mode.
Of course, how much LiDAR improves the iPhone 12 Pro's camera will depend on real-world shooting tests. Another area where it can be expected to shine is augmented reality, especially in titles built on the technology.
With LiDAR, an in-game 3D model can be anchored to the ground, a tabletop, or other planes. By measuring image depth and spatial relationships across the entire camera field of view, the system can dynamically adjust the model's lighting and shadows, providing a more realistic gaming experience.