Recent flagship smartphones are integrating dual cameras, enhancing the camera user experience in various ways. To achieve improved image quality, the two images generated by these dual cameras need to be combined into a single image, providing higher resolution, better low-light performance, lower noise levels and other photography features not attainable from a single image. The process of combining these two images into a single image is known as image fusion.
In smartphones, dual camera image fusion comes into play in several ways. The first employs a dual camera with one color sensor and one monochrome sensor (with the Bayer filter removed). The monochrome sensor captures 2.5 times more light and thus achieves better resolution and signal-to-noise ratio (SNR). By fusing the images coming from both cameras, the output image has better resolution and SNR, especially in low-light conditions.
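As a rough illustration of this idea, the sketch below blends a monochrome frame into the luminance channel of a color frame. It assumes the two images are already registered, the same size, and normalized to [0, 1]; the function name, the blend weight alpha, and the use of BT.601 color conversion are illustrative assumptions, not the actual Corephotonics algorithm.

```python
import numpy as np

def fuse_mono_color(color_rgb: np.ndarray, mono: np.ndarray,
                    alpha: float = 0.7) -> np.ndarray:
    """Illustrative mono/color fusion: blend the higher-SNR monochrome
    frame into the luminance channel of the color frame. Inputs are
    assumed pre-registered, same size, float in [0, 1]."""
    # RGB -> YCbCr (BT.601 coefficients)
    y  = 0.299 * color_rgb[..., 0] + 0.587 * color_rgb[..., 1] + 0.114 * color_rgb[..., 2]
    cb = (color_rgb[..., 2] - y) * 0.564
    cr = (color_rgb[..., 0] - y) * 0.713
    # Replace the noisier color luma with a blend favoring the mono frame
    y_fused = alpha * mono + (1.0 - alpha) * y
    # YCbCr -> RGB
    r = y_fused + 1.403 * cr
    g = y_fused - 0.344 * cb - 0.714 * cr
    b = y_fused + 1.773 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

In practice the chroma channels would also be denoised and the registration handled explicitly, but the luma-blend structure conveys why the monochrome sensor's extra light translates into a cleaner fused image.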
The second approach is a zoom dual camera: a wide field-of-view (Wide) camera coupled with a narrow field-of-view telephoto (Tele) camera. In this case, image fusion improves SNR and resolution from no zoom up to the zoom factor at which the Tele camera's field of view becomes the dominant one. In this low zoom-factor range, the fusion exploits the fact that several Tele pixels map onto each pixel of the fused image: each output pixel is composed from one Wide pixel and multiple Tele pixels, resulting in better SNR and higher resolution.
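To make this pixel mapping concrete, here is a minimal sketch assuming grayscale, pre-registered frames, a Tele field of view centered inside the Wide frame, and a hypothetical 2x Tele magnification; the actual fusion pipeline is considerably more elaborate.

```python
import numpy as np

def fuse_wide_tele(wide: np.ndarray, tele: np.ndarray,
                   tele_ratio: int = 2) -> np.ndarray:
    """Illustrative Wide/Tele fusion at zoom factor 1. The Tele camera
    magnifies by `tele_ratio`, so a tele_ratio x tele_ratio block of Tele
    pixels maps onto each Wide pixel inside the Tele field of view."""
    fused = wide.astype(np.float64).copy()
    h, w = wide.shape
    th, tw = tele.shape
    # Size of the Tele field of view expressed in Wide-grid pixels
    oh, ow = th // tele_ratio, tw // tele_ratio
    top, left = (h - oh) // 2, (w - ow) // 2  # Tele FOV assumed centered
    # Average each tele_ratio x tele_ratio Tele block down to one Wide-grid pixel
    tele_blocks = tele[:oh * tele_ratio, :ow * tele_ratio].reshape(
        oh, tele_ratio, ow, tele_ratio).mean(axis=(1, 3))
    # Inside the overlap, combine 1 Wide pixel with tele_ratio**2 Tele pixels,
    # weighted by sample count, so noise drops with the total number of samples
    n_tele = tele_ratio ** 2
    fused[top:top + oh, left:left + ow] = (
        fused[top:top + oh, left:left + ow] + n_tele * tele_blocks) / (1 + n_tele)
    return fused
```

Outside the Tele field of view the output falls back to the Wide image alone, which is why the SNR and resolution gains are concentrated in the low zoom-factor range described above.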
Performing image fusion presents several algorithmic challenges. Corephotonics' image fusion algorithm addresses these challenges while requiring minimal processing load, fast run time and optimal resource utilization of the application processor. A mature, well-tested algorithm significantly reduces the chance of image artifacts, which are unacceptable in today's uncompromising mobile imaging world.