Industry News

June 06, 2017

Dual Cameras for Image Fusion

How best could we utilize the information coming from dual cameras to enhance the overall image quality?

By Roy Fridman and Oded Gigushinski

Dual camera smartphones are here, faster and in larger volumes than analysts expected.

Smartphone manufacturers integrate a second camera for several reasons, primarily to improve image quality and to extract depth information for applications such as a DSLR-like shallow depth-of-field effect (bokeh).

Adding a second camera brings forth new challenges: how to calibrate the two cameras with respect to each other, how to switch between them in a way that enhances the user experience, and how to optimize the image quality of this new and innovative mobile imaging hardware using advanced algorithms and software tools.

In this article, we wish to focus on the last of these: how do we best utilize the information coming from two cameras to enhance the overall image quality? One such approach is called Image Fusion.

Introducing Image Fusion

Image Fusion is the process of combining two or more input images into a single image. The main reason for combining the images is to get a more informative output image.

In mobile dual cameras, Image Fusion comes into play in several ways. The first involves a dual camera that pairs a color sensor with a monochromatic sensor (one with the Bayer filter removed). The monochromatic sensor captures roughly 2.5 times more light and therefore achieves better resolution and SNR. By fusing the images coming from both cameras, the output image gains SNR and resolution, especially in low-light conditions.
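
To make the first case concrete, here is a minimal sketch of one common approach, luma-chroma fusion: take luminance detail from the monochrome sensor and chrominance from the color sensor. It assumes the two frames are already registered and the same size; the function name and weighting are our own illustration, not Corephotonics' algorithm.

```python
import cv2
import numpy as np

def fuse_color_mono(color_bgr, mono, weight=0.7):
    """Blend a monochrome frame into the luma channel of a color frame.

    Assumes the frames are registered and the same size. `weight` sets
    how much of the cleaner mono signal replaces the noisier
    color-derived luma.
    """
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    # Luma (detail, SNR) comes mostly from the mono sensor; chroma stays
    # entirely with the Bayer sensor.
    ycrcb[..., 0] = weight * mono.astype(np.float32) + (1.0 - weight) * ycrcb[..., 0]
    fused = ycrcb.clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```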

The second is a zoom dual camera: a wide field-of-view camera coupled with a narrow field-of-view telephoto camera. In this case, Image Fusion also improves SNR and resolution, from 1x zoom up to the point where the telephoto camera's field-of-view becomes dominant. In the example images below, it is easy to see the resolution improvement of the fused image versus standard digital zoom (the images were taken with a 3x optical zoom camera).

Image Fusion of Wide and Tele Frames Results in Higher Resolution
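
As a back-of-the-envelope helper for the zoom case, the fraction of the output frame covered by the tele camera at a given zoom factor follows directly from the focal ratio. The function below is our own illustration, assuming a 3x wide/tele ratio as in the example images.

```python
def tele_coverage(zoom: float, tele_ratio: float = 3.0) -> float:
    """Per-axis fraction of the output frame covered by the tele image.

    At zoom factor `zoom`, the wide frame is cropped by 1/zoom, while the
    tele field-of-view is 1/tele_ratio of the wide one. Once
    zoom >= tele_ratio, the tele frame covers the whole output and the
    camera can switch to tele entirely.
    """
    return min(zoom / tele_ratio, 1.0)

# At 1.5x zoom with a 3x tele, the tele frame spans half the output
# width, so fusion sharpens the center and must blend out toward the edges.
print(tele_coverage(1.5))  # 0.5
```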

The algorithmic flow of Image Fusion includes four steps: rectification, global registration, local registration and parallax correction, and decision and fusion. A simplified code sketch follows the list below.

Image Fusion Algorithm Flow

  • Rectification – As a first step, the algorithm rectifies the two input images, correcting for the distortion, scale and shift that may be introduced by the optics and the autofocus (AF) mechanism. For this purpose, it uses pre-computed rectification data stored in the sensors’ OTP memory. After this step, corresponding points in the two images lie on epipolar lines that are parallel to either the x axis or the y axis, depending on the camera module configuration.
  • Global registration – The second step is global registration: the algorithm calculates and compensates for global differences between the two cameras, which can be attributed to dynamic changes between images and to the specific properties of each camera.
  • Local registration and parallax correction – Following the global registration, a fine local registration step determines the parallax (i.e. the shift in the x and y dimensions, which depends on object distance) for each pixel in the input images.
  • Decision and fusion – In the last two steps, the algorithm fuses the two images according to the local parallax found in the previous step. A sophisticated decision block handles occlusions, corrects for registration errors and adaptively eliminates artifacts from the final image; the fusion block then blends the two original images together seamlessly.
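
The article does not disclose the production implementation, but the four steps can be illustrated with a toy flow built from standard OpenCV blocks. This sketch assumes grayscale, pre-rectified frames of equal size, and all parameter values are placeholders.

```python
import cv2
import numpy as np

def fuse_wide_tele(wide, tele):
    """Toy version of the four-step flow above, not a production algorithm."""
    # Global registration: estimate a coarse homography from ORB matches.
    orb = cv2.ORB_create(1000)
    kw, dw = orb.detectAndCompute(wide, None)
    kt, dt = orb.detectAndCompute(tele, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dt, dw)
    src = np.float32([kt[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kw[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    tele_warped = cv2.warpPerspective(tele, H, wide.shape[::-1])

    # Local registration / parallax: per-pixel disparity via block matching.
    disparity = cv2.StereoSGBM_create(0, 64, 9).compute(wide, tele_warped)

    # Decision: trust the tele content only where disparity was resolved;
    # a real decision block would also handle occlusions and artifacts.
    weight = cv2.GaussianBlur((disparity > 0).astype(np.float32), (31, 31), 0)

    # Fusion: blend the sharper tele detail into the wide frame.
    return (weight * tele_warped + (1.0 - weight) * wide).astype(np.uint8)
```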

Performing image fusion presents several algorithmic challenges, among them occlusions, lens imperfections and transitions between overlapped and non-overlapped areas. Next, we review each of these challenges.

Image Fusion Challenges: Occlusions

How do we best handle areas that each camera sees differently? Cover one eye at a time, and it becomes clear that our two eyes do not view the world from the same perspective. The same applies to dual cameras: depending on the scene, there can be areas within the field of view of one camera that are occluded from the other camera’s field of view.

To fuse two images, the algorithm needs to match pixels between the left and right images. This is possible only for areas seen by both cameras; when an area is occluded from one camera, no matching point can be found for it. If not handled correctly by the fusion algorithm, this results in an artifact in the output image and an unnatural viewing experience around the occluded area.
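
One standard way to flag such areas (not necessarily the one used here) is a left-right consistency check: follow each pixel's disparity into the other image and back, and mark pixels whose round trip is inconsistent. A minimal NumPy sketch, with illustrative names of our own:

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, tol=1.0):
    """Left-right consistency check, a standard occlusion test.

    A pixel in the left image is marked occluded if following its
    disparity into the right image and reading the disparity back does
    not agree with the starting value. Inputs are per-pixel horizontal
    disparities of equal shape.
    """
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Where does each left pixel land in the right image?
    x_right = np.clip(xs - np.round(disp_left).astype(int), 0, w - 1)
    # Disparity reported by the right image at that landing point.
    back = disp_right[ys, x_right]
    # An inconsistent round trip means the pixel is likely occluded,
    # so it should be excluded from fusion.
    return np.abs(disp_left - back) > tol
```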

In the example images below, there is a noticeable difference between what the wide and tele cameras see. The fused output image clearly shows an artifact in the area occluded from the tele camera. Proper decision-making and fusion algorithms can eliminate such artifacts.

Image Fusion Must Deal with Dual Camera Occlusions

Image Fusion Challenges: Lens Imperfections

How do the optics of the lens affect image fusion? In a variety of ways.

We will address two that can be overcome by robust image fusion algorithms. The first is the optical distortion of the lens, which makes registration more difficult. The second is differences in depth of field: areas that are in focus in one camera may be out of focus in the other. As with distortion, registration becomes much trickier because it is difficult to find feature points and matching pixels between a sharp region and a blurred one.
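
Distortion is usually corrected before registration using per-module calibration. A minimal sketch with OpenCV's standard distortion model follows; the intrinsics and coefficients are placeholders, not real calibration data.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients; in practice these
# come from per-module calibration, e.g. stored in the sensor's OTP memory.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.08, 0.02, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort(frame):
    """Remove lens distortion so feature matching sees undistorted geometry."""
    return cv2.undistort(frame, K, dist)
```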

In the example images below it is easy to see the difference in depth of field between the wide camera and the tele camera.

Wide and Tele Images with Different Depth of Field

This results in an area in which the wide camera is in focus while the tele camera is out of focus, as seen below. The output image is in focus throughout, thanks to successful decision-making and image fusion.

Smart Decision Making is Crucial for Image Fusion

Finally, the images below show the resolution benefits of image fusion over the standard wide camera at a zoom factor of 1.5x:

Image Fusion Challenges: Overlap Transition

Last among the example challenges is the transition between the overlapped and non-overlapped areas.

The difference in field-of-view and resolution means one image is inevitably included inside the other (usually the wide image includes the tele image). Fusion therefore cannot cover the entire field of view, and the algorithm has to adjust the fusion “strength” gradually to provide a natural user experience in the transition area between the images. A good image fusion algorithm must be able to address this challenge.
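
One illustrative way to taper the fusion strength is a distance-based ramp near the boundary of the tele frame; this is our own sketch, not a disclosed algorithm.

```python
import numpy as np

def fusion_weight(dist_to_edge, ramp_px=100):
    """Taper fusion strength from full inside the tele FOV to zero outside.

    `dist_to_edge` holds each pixel's distance (in pixels) to the boundary
    of the tele image within the wide frame: positive inside the overlap,
    negative outside. A linear ramp over `ramp_px` pixels avoids a
    visible seam at the transition.
    """
    return np.clip(dist_to_edge / ramp_px, 0.0, 1.0)
```

The resulting per-pixel weight multiplies the tele contribution, so the blend fades smoothly to pure wide-camera content outside the overlap.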

Image Fusion in Real Life

Major OEMs, including Apple, OPPO and Huawei, are already utilizing image fusion techniques.

Apple, which puts a lot of emphasis on image quality, already uses image fusion in the iPhone 7 Plus, combining its wide and tele images into a single output image.

In the figure below, red wires were placed in front of a chart, causing the iPhone’s image fusion to mistakenly duplicate the wires (an artifact of incorrectly fusing the wide and tele images). Corephotonics’ Image Fusion (depicted on the right) handles such complex scenes cleanly.

Image Fusion Artifacts in Apple’s iPhone 7 Plus

In another case, a leading OEM uses Image Fusion to fuse color and monochrome images in various lighting conditions. The figure below shows three images of a star chart taken in low light: the left image is without fusion, the middle uses the smartphone’s built-in fusion, and the right uses Corephotonics’ fusion (all are based on the same monochrome/color input frames, so the comparison is purely algorithmic).

Proper Image Fusion Yields Resolution Benefits

Aside from the noticeable artifact in the leading OEM’s fusion, an MTF measurement shows a significant resolution benefit for Corephotonics’ fusion versus a negligible benefit for the leading OEM’s.

OPPO raised the bar at MWC 2017, showcasing 5x zoom technology that uses a folded tele camera and supports Image Fusion. As zoom factors in dual cameras grow, so does the intermediate zoom range before the camera fully switches to the tele field-of-view; hence, the significance of image fusion will only increase. Once the camera switches completely to the tele module at high zoom factors, the user benefits mostly from the hardware improvement that comes with the tele lens. In the range between 1x zoom and the full tele field-of-view, image fusion can significantly improve image quality and the overall camera user experience.

OPPO Unveils 5x Dual Camera Zoom Technology at MWC 2017

Our mobile devices accompany us everywhere we go, and the significance of the camera is clear.

Dual cameras allow us to bridge the gap between DSLR cameras and smartphone cameras in terms of low-light performance, optical zoom and depth-of-field.

With the help of Image Fusion, image quality and user experience can be significantly enhanced in both wide + tele and color + monochrome dual camera setups. Done right, Image Fusion becomes a key contributor to changing the way we take pictures with our mobile devices.

— This piece was co-authored by Roy Fridman, director of product marketing at Corephotonics, and Oded Gigushinski, director of algorithms at the same company. Fridman leads product marketing and business development in mobile imaging, while Gigushinski is responsible for planning and designing innovative imaging products.
