Recent flagship smartphones integrate dual cameras, enhancing the camera user experience in various ways. To improve image quality, the two images produced by these cameras must be combined into a single image, offering higher resolution, better low-light performance, lower noise and other photography features not attainable with a single camera. The process of combining the two images into one is known as image fusion.
In smartphones, dual camera image fusion comes into play in several ways. The first approach pairs a color sensor with a monochrome sensor (the Bayer filter removed). The monochrome sensor captures 2.5 times more light and therefore achieves better resolution and signal-to-noise ratio (SNR). By fusing the images from both cameras, the output image gains resolution and SNR, especially in low-light conditions.
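As a rough illustration of mono/color fusion, one common approach (an assumption here, not necessarily Corephotonics' actual pipeline) is to keep the chrominance of the color frame while replacing its luminance with the cleaner monochrome signal. The function name and the BT.601-style luma weights below are illustrative choices:

```python
def fuse_mono_color(color_rgb, mono_luma):
    """Replace each color pixel's luminance with the monochrome value,
    keeping the color pixel's chroma. Inputs are flat lists of equal
    length: color_rgb holds (r, g, b) tuples, mono_luma holds floats.
    Assumes the two frames are already registered pixel-to-pixel."""
    fused = []
    for (r, g, b), y_mono in zip(color_rgb, mono_luma):
        # BT.601-style luma of the color pixel
        y_color = 0.299 * r + 0.587 * g + 0.114 * b
        # Shift the RGB triple so its luma matches the cleaner mono luma
        delta = y_mono - y_color
        fused.append((r + delta, g + delta, b + delta))
    return fused

# A gray color pixel paired with a brighter, lower-noise mono reading:
print(fuse_mono_color([(100, 100, 100)], [120.0]))
```

In a real pipeline the luma transfer would be done in a proper color space and only after sub-pixel registration; this sketch shows only the core idea of borrowing detail and SNR from the monochrome frame.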
The second approach is a zoom dual camera: a wide field-of-view camera coupled with a narrow field-of-view telephoto camera. Here, image fusion improves SNR and resolution from no zoom up to the zoom factor at which the telephoto field of view becomes dominant. In this low zoom range, the fusion exploits the fact that several Tele pixels are mapped onto each Wide pixel: each output pixel combines one Wide pixel with a number of Tele pixels, resulting in better SNR and higher resolution.
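The SNR gain from mapping several Tele pixels onto one Wide pixel can be sketched as a simple averaging step. This is a toy model assuming registered images and uncorrelated noise; the helper name is hypothetical:

```python
import statistics

def fuse_wide_tele(wide_pixel, tele_block):
    """Combine one Wide pixel with the k Tele pixels that map onto it.
    For uncorrelated noise, averaging k + 1 samples cuts the noise
    variance roughly by a factor of k + 1."""
    samples = [wide_pixel] + list(tele_block)
    return statistics.fmean(samples)

# One Wide pixel covered by a 2x2 block of Tele pixels:
print(fuse_wide_tele(100.0, [98.0, 102.0, 101.0, 99.0]))  # -> 100.0
```

A production fusion would weight the samples by local registration confidence rather than average them blindly, but the variance-reduction principle is the same.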
Performing image fusion presents several algorithmic challenges:
- Occlusions: Since the two images are taken from slightly different points of view, some areas visible in one image may be occluded in the other.
- Optical distortion: A lens aberration that deviates the projected image from an ideal rectilinear projection. This distortion makes determining the parallax for each pixel more difficult and requires the algorithm to compensate for it.
- Differences in depth of field: Areas may be in focus in one camera and out of focus in the other, making registration much trickier.
- Transition between overlapping and non-overlapping areas: The differences in field of view and resolution mean that one image inevitably contains the other (usually the Wide image contains the Tele image), so the fused result must transition smoothly between the region covered by both cameras and the region covered by only one.
- ISP differences: Since each of the two images is processed by a different ISP, the input images may differ in sharpness, noise level, contrast and color.
- Wide/Tele resolution differences: By optical design, the Tele image has higher spatial resolution than the Wide image. Because of this difference, small objects resolved by the Tele camera might not be detected by the Wide camera at all.
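To make the parallax and occlusion challenges concrete, the correspondence search at the heart of registration can be sketched as brute-force 1-D block matching. This is a toy sketch, far simpler than any production registration, and all names are illustrative:

```python
def disparity_1d(left, right, max_disp):
    """For each pixel in `left`, find the horizontal shift d
    (0..max_disp) such that right[x - d] best matches left[x] by
    absolute difference. Pixels near the border, or occluded in
    `right`, get unreliable matches -- exactly the failure mode a
    real fusion algorithm must detect and handle."""
    disp = []
    for x in range(len(left)):
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            if x - d < 0:
                break  # shift would fall outside the right image
            cost = abs(left[x] - right[x - d])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

# A scene shifted by 2 pixels between the two views:
left = [10, 20, 30, 40, 50]
right = [30, 40, 50, 0, 0]   # right[x] = left[x + 2] where defined
print(disparity_1d(left, right, max_disp=3))
```

The interior pixels recover the true shift of 2, while the border pixels cannot, which hints at why occlusions and field-of-view transitions are listed among the hard cases above.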
Corephotonics’ image fusion algorithm addresses the above challenges while keeping processing load minimal, run time fast and application processor resource utilization efficient. A mature, well-tested algorithm significantly reduces the chance of image artifacts, which are unacceptable in today’s uncompromising mobile imaging world.