DEPTH MAPPING

Stereoscopic 3D Depth Map Creation

Corephotonics’ depth algorithm is based on stereoscopic vision, much like human binocular vision: we use our dual cameras to produce a dense, detailed and accurate depth map of the scene.

In stereo vision, an object close to the cameras appears at noticeably different positions in the two images; this positional offset is called disparity. As the object moves farther from the cameras, the disparity shrinks until, at infinity, the object appears at the same place in both images. A stereoscopic depth map is computed from these disparities.
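
The geometry behind this is the standard pinhole-stereo relation: depth Z = f * B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity in pixels. The sketch below is a generic illustration of that relation, not Corephotonics’ implementation; the function name and the example numbers are ours:

    # Standard pinhole-stereo relation; a generic sketch, not Corephotonics' code.
    def disparity_to_depth(disparity_px: float, focal_px: float, baseline_mm: float) -> float:
        """Z = f * B / d: depth grows as disparity shrinks."""
        if disparity_px <= 0:
            return float("inf")  # zero disparity corresponds to a point at infinity
        return focal_px * baseline_mm / disparity_px

    # Example with an assumed 1400 px focal length and 10 mm baseline:
    print(disparity_to_depth(20.0, 1400.0, 10.0))  # 700.0 mm: close object, large disparity
    print(disparity_to_depth(2.0, 1400.0, 10.0))   # 7000.0 mm: far object, small disparity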

The Corephotonics depth map algorithm is uniquely designed to overcome several challenges while placing minimal load on the application processor. These challenges, illustrated generically in the sketch after this list, include:

  1. Optical aberrations: Differences between the optical characteristics of the two lenses can make the registration process more challenging.
  2. Depth bleeding: A phenomenon in which the transition between a close object and a far object is smooth and gradual instead of sharp and clear.
  3. Occlusions: Depending on the scene, areas within the field of view of one camera may be occluded from the other camera’s field of view. To build a depth map from two images, the algorithm must find matching pixels between the left and right images, and occlusions impede this matching.
  4. Texture-less planes: Registration relies on local image content to match pixels between the two views; regions with little or no texture provide few such cues, making it hard to distinguish depths within them.
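
For illustration only, the sketch below shows how some of these challenges surface in an off-the-shelf stereo matcher, OpenCV’s semi-global block matching. It is a generic baseline under our own parameter choices, not Corephotonics’ algorithm:

    # Generic OpenCV stereo baseline; NOT Corephotonics' algorithm.
    import cv2
    import numpy as np

    def compute_disparity(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
        """Dense disparity from a rectified 8-bit grayscale stereo pair.

        Rectification (not shown) compensates for per-lens optical
        differences (challenge 1).
        """
        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=64,   # disparity search range; must be divisible by 16
            blockSize=5,
            P1=8 * 5 * 5,        # small smoothness penalty: propagates depth across
                                 # texture-less regions (challenge 4)
            P2=32 * 5 * 5,       # large penalty preserves sharp depth edges,
                                 # limiting depth bleeding (challenge 2)
            disp12MaxDiff=1,     # left-right consistency check: invalidates
                                 # mismatches, typically occlusions (challenge 3)
            uniquenessRatio=10,  # rejects ambiguous matches
        )
        # OpenCV returns fixed-point disparities scaled by 16
        disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disp[disp < 0] = np.nan  # pixels the matcher could not match reliably
        return disp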

The algorithm

The algorithm is highly optimized not only for accuracy but also for real-time execution on smartphone application processors, at up to 30 fps, while maintaining a low memory footprint.

In turn, such depth maps can be used for various applications, including:

  • Applying a Bokeh effect (also known as Portrait Mode), whereby the main subject (usually a person) remains in sharp focus while the rest of the scene is blurred, giving the image the shallow depth-of-field look typically produced by DSLRs (a minimal sketch follows this list).
  • Augmented Reality (AR) and Virtual Reality (VR) applications, enabling such camera-based devices to accurately sense their environment for the most immersive user experience.
  • Refocusing the scene on different objects at different distances from the camera. Such an effect can be applied post-capture, enhancing the camera user experience “after the fact”.
  • Accelerating the auto-focus camera function by augmenting existing focus technologies with depth-based focusing.
  • Scene analysis and object recognition based on depth maps.
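
As a hedged example of the first application above, the sketch below blends a sharp image with a blurred copy, weighted by each pixel’s depth distance from the subject. The function, its falloff parameter and the single Gaussian pass are our simplifications; production portrait modes typically use layered, lens-shaped blur kernels:

    # Hypothetical depth-driven Bokeh sketch; names and parameters are ours.
    import cv2
    import numpy as np

    def apply_bokeh(image: np.ndarray, depth: np.ndarray, subject_depth_mm: float,
                    falloff_mm: float = 1000.0, kernel: int = 21) -> np.ndarray:
        """Blur grows with depth distance from the in-focus subject.

        Assumes a dense, finite depth map (holes filled beforehand)
        aligned with `image`, both of shape (H, W[, 3]).
        """
        blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)
        # Weight is 0 at the subject's depth and saturates at 1 once the
        # depth difference reaches `falloff_mm`.
        weight = np.clip(np.abs(depth - subject_depth_mm) / falloff_mm, 0.0, 1.0)
        if image.ndim == 3:
            weight = weight[..., None]  # broadcast over color channels
        return (image * (1.0 - weight) + blurred * weight).astype(image.dtype)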
 

Image quality

Image quality testing
UI/UX testing
Testing benchmark design and integration

Camera hardware

Compact lens design
Micro-electro-mechanical systems (MEMS) for zoom, auto-focus and optical image stabilization
Diverse actuator technologies and control systems
Environmental and reliability testing in preparation for ultra-high-volume manufacturing

Computer Vision

Deep computer vision models for scene understanding; object detection, recognition and tracking; classification; depth analysis
Stereo vision and depth mapping
Image fusion
Dynamic multi-aperture calibration
Heterogeneous computing (multi-threaded CPU, GPU, DSP, unified-memory architecture)
Mobile camera software architecture
UI/UX design for camera applications