Our image fusion algorithm registers two frames from two cameras and fuses information from both to improve the output image. It can increase output resolution while enhancing noise performance, and is applicable to Bayer-Mono as well as Wide-Tele dual cameras.
The image fusion algorithm copes with a wide range of scenes and lighting conditions, handling:
- Parallax, occlusions and point-of-view shift
- Handshake and movement in the scene
- Details appearing in one image but not in the other
- Noise in low-light situations
- Wear and tear, manufacturing imperfections
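One item above, fusing details that appear in only one frame, can be illustrated with a simple per-pixel selection rule: after registration, keep the pixel from whichever frame shows more local detail. This is a minimal sketch under that assumption; the function names and the detail measure are illustrative, not Corephotonics' actual method.

```python
# Minimal sketch of detail-preserving fusion of two already-registered
# grayscale frames. Each pixel is taken from whichever frame has more
# local detail (higher gradient energy). Illustrative only.

def local_energy(img, y, x):
    """Sum of squared differences to the 4-neighbours (a crude detail measure)."""
    h, w = len(img), len(img[0])
    e = 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            e += (img[y][x] - img[ny][nx]) ** 2
    return e

def fuse(img_a, img_b):
    """Per-pixel selection: keep the pixel from the frame with more detail."""
    h, w = len(img_a), len(img_a[0])
    return [
        [img_a[y][x] if local_energy(img_a, y, x) >= local_energy(img_b, y, x)
         else img_b[y][x]
         for x in range(w)]
        for y in range(h)
    ]

# Frame A is flat; frame B carries a detail that A misses.
flat = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
edge = [[10, 10, 10], [10, 50, 10], [10, 10, 10]]
fused = fuse(flat, edge)
print(fused[1][1])  # prints 50: the detailed pixel from B survives fusion
```

A production fusion engine would of course use multi-scale detail measures and smooth blending rather than hard selection, but the principle of detail-driven per-pixel weighting is the same.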
The result is an artifact-free image of high quality. The algorithm is embedded in the device's application processor and uses heterogeneous computing (multiple CPU cores, GPU and DSP) to ensure an efficient on-device implementation.
Our image fusion algorithm produces an image with higher dynamic range and significantly reduced noise, making it well suited to the most challenging scenes. In a dual-camera zoom setup, it enables continuous video zoom, smoothly fusing and transitioning from one camera to the other.
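The smooth camera-to-camera transition during zoom can be pictured as a blending weight that ramps up across a band of zoom factors. The sketch below assumes a hypothetical 1.5x-2.5x transition band and a simple linear ramp; the actual band and blending law used in the product are not stated in the text.

```python
# Illustrative sketch of a smooth Wide-to-Tele handover during continuous
# zoom. The band limits (1.5x-2.5x) and the linear ramp are assumptions
# for illustration only.

def tele_weight(zoom, lo=1.5, hi=2.5):
    """0.0 = pure Wide output, 1.0 = pure Tele output."""
    if zoom <= lo:
        return 0.0
    if zoom >= hi:
        return 1.0
    return (zoom - lo) / (hi - lo)

def blend_pixel(wide_px, tele_px, zoom):
    """Linear blend of co-registered Wide and Tele pixel values."""
    w = tele_weight(zoom)
    return (1.0 - w) * wide_px + w * tele_px

print(tele_weight(1.0))            # prints 0.0 -> Wide only
print(tele_weight(2.0))            # prints 0.5 -> mid-transition
print(blend_pixel(100, 200, 2.0))  # prints 150.0
```

Ramping the weight gradually, rather than switching cameras at a single zoom factor, is what hides the point-of-view and colorimetry differences between the two modules from the viewer.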
As part of the algorithm’s pipeline, local registration is performed and a dense depth map is generated. This depth map can be used to further improve the image, refocus it to a different distance, control its depth of field, change its background, measure distance and enable AR/VR applications.
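One use of the dense depth map, background replacement, reduces to segmenting pixels by depth. This is a minimal sketch under assumed data and a hypothetical depth threshold; the real pipeline generates the depth map alongside local registration, as described above.

```python
# Minimal sketch of depth-based background replacement. Pixels closer than
# a foreground threshold are kept; the rest are swapped for a new background
# value. Data and threshold are illustrative.

def replace_background(image, depth, new_bg, fg_max_depth):
    """Keep pixels closer than fg_max_depth; swap the rest for new_bg."""
    return [
        [image[y][x] if depth[y][x] <= fg_max_depth else new_bg
         for x in range(len(image[0]))]
        for y in range(len(image))
    ]

image = [[200, 200, 200],
         [200,  50, 200],
         [200, 200, 200]]
depth = [[9.0, 9.0, 9.0],   # metres; the centre pixel is a near subject
         [9.0, 1.2, 9.0],
         [9.0, 9.0, 9.0]]
out = replace_background(image, depth, new_bg=0, fg_max_depth=2.0)
print(out)  # only the centre (foreground) pixel keeps its value
```

Synthetic refocus and depth-of-field control follow the same pattern, except that background pixels are blurred with a depth-dependent kernel instead of being replaced outright.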
To read the Corephotonics White Paper about Image Fusion, please click here.