Why Are Dual Cameras Better?
By Roy Fridman
By using dual cameras, smartphone manufacturers are able to support extremely advanced imaging features while keeping the solution slim (below 5mm height), lightweight and robust.
September 7, 2016 is going to mark a significant milestone in the mobile imaging domain. One of the most anticipated revolutions in the upcoming iPhone 7 lies in its camera complex. According to recent rumors, the larger sibling of the iPhone 7 will carry a dual camera structure, with the key objective of creating higher-quality images.
Dual cameras come in various forms and types, including symmetrical (e.g. 13Mpixel + 13Mpixel), asymmetrical (e.g. 16Mpixel + 13Mpixel), Bayer + Mono (color sensor + B/W sensor), wide + tele (wide-lens camera + tele/zoom-lens camera) and others. In all cases, such dual camera structures replace the larger sensors, optics and moving parts normally found in advanced DSLRs, which is how manufacturers deliver these advanced imaging features in such a thin, light and robust package.
Soon we will better understand what the dual camera in the iPhone 7 will be used for. In the meantime, let’s run through the various enhancements a dual camera can provide.
Improved Low Light Performance
One of the most challenging scenarios users face when taking an image is the low-light scene – whether we want to savor a moment strolling down a colorful night market, or take a picture in a dimly lit room. To deliver good image quality, the image sensor needs to absorb as much light as possible.
So how can different dual camera types help us improve low light performance?
Mono (black and white) + Bayer (color) dual cameras rely mainly on the additional light captured by the mono sensor. Because the color filter array is removed from the mono sensor, every pixel sums the R, G and B components instead of filtering out all but one specific color, so much more light reaches the sensor. Fusing the B/W image with the color image significantly enhances the result, even in extreme low-light conditions. Another interesting fact is that the mono pixels are sampled across the full spatial frequency range, as opposed to the Bayer (R, G, B) pixels, which are attenuated according to the color filter they pass through. Sampling the full frequency range yields higher sampling density and higher resolution.
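The fusion idea can be illustrated with a minimal sketch: take the detail-rich luminance from the mono frame and the chrominance from the color frame. This is not Corephotonics' actual algorithm – real fusion pipelines also handle registration, occlusions and noise – just a toy model of the principle, with all names and parameters chosen for illustration:

```python
import numpy as np

def fuse_mono_bayer(mono, rgb):
    """Toy mono + Bayer fusion: luminance detail from the mono frame,
    chrominance from the (already demosaiced) color frame.
    mono: HxW float array in [0, 1]; rgb: HxWx3 float array in [0, 1].
    Assumes the two frames are already registered (aligned)."""
    # Luminance of the color frame (ITU-R BT.601 luma weights)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Chrominance expressed as color relative to its own luminance
    eps = 1e-6  # avoid division by zero in dark pixels
    chroma = rgb / (y[..., None] + eps)
    # Recombine: the cleaner mono luminance carries the detail
    return np.clip(chroma * mono[..., None], 0.0, 1.0)
```

In this model, a noisy color frame contributes only the color ratios, while the sharper, better-exposed mono frame sets the brightness of every pixel.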
Interestingly, Mono + Bayer dual cameras can also help reduce motion blur. Because 2.5x more light reaches the mono image sensor, its exposure time can be much shorter. Blurriness caused by fast-moving objects, or simply by an unstable camera, is therefore reduced, especially in low-light scenes.
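The exposure-time arithmetic is straightforward – to gather the same total signal with 2.5x more light per unit time, the shutter needs only 1/2.5 of the exposure. A one-line sketch (the 2.5x gain is the figure quoted above; the function name is illustrative):

```python
def equivalent_mono_exposure(bayer_exposure_ms, light_gain=2.5):
    """With light_gain times more light reaching the mono sensor
    (no color filter array), the same signal level is reached in
    1/light_gain of the Bayer exposure time."""
    return bayer_exposure_ms / light_gain
```

So a 30 ms exposure on the Bayer sensor maps to a 12 ms exposure on the mono sensor, cutting the window in which motion can smear the image.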
Optical Zoom
Imagine your daughter on her graduation day, standing on the podium while you watch from a distance. Don’t we all wish we could capture a crystal-clear, perfect image to savor such moments? However, not all of us are photography buffs who carry a supersized DSLR camera with a heavy lens everywhere we go. Instead, we all carry mobile phones, which have in fact become one of the most accessible items we possess. When we capture an image with our existing smartphones, zoom is achieved digitally, which makes both the experience and the resulting image quality less than optimal.
Dual camera zoom uses a combination of wide and tele lenses to achieve real optical zoom, similar to the electromechanical zoom used in professional DSLR cameras.
This is a huge step forward in mobile photography as it uncovers a new realm of possibilities for the smartphone users who want to enjoy DSLR quality images with the comfort of using their mobile device.
In order to achieve a wide range of zoom options, up to 5x, without adding any thickness to the smartphone itself, a great deal of smart engineering needs to be developed – for example, the optics of the tele lens, the actuator responsible for autofocus and image stabilization, and power-efficient image fusion.
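One piece of that smartness is deciding which camera to read at a given zoom factor. A plausible scheme – sketched here under the assumption of a 2x tele lens, which the source does not specify – serves low zoom factors from the wide camera with a digital crop, and switches to the tele camera once its optical magnification is exceeded:

```python
def pick_camera(zoom, tele_ratio=2.0):
    """Hypothetical camera-selection logic for a wide + tele pair.
    tele_ratio: the tele lens's optical magnification relative to
    the wide lens (2x assumed here purely for illustration).
    Returns (camera_name, residual_digital_crop)."""
    if zoom < tele_ratio:
        # Below the tele's native magnification: crop the wide frame
        return "wide", zoom
    # At or above it: start from the tele frame, crop the remainder
    return "tele", zoom / tele_ratio
```

With a 2x tele lens, a 5x request becomes only a 2.5x digital crop on the tele frame instead of a full 5x crop on the wide frame – which is where the image-quality win over pure digital zoom comes from.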
Another very interesting characteristic borrowed from the world of professional photography is the optical Bokeh effect. A lens with a shallow depth of field keeps only the subject in focus while the rest of the scene, at other depths, is blurred. This effect stems from a tele lens with a shallow depth of field, as opposed to the wide lens currently used in smartphones.
The wide + tele dual camera topology brings the user the best of both worlds: a wide-angle view in which much of the scene is in focus, or real optical zoom that keeps the desired object sharp while the rest of the scene is blurred (Bokeh) – just as in professional cameras.
Depth Sensing
Another major advantage of dual cameras is the ability to sense depth – similar to the way our eyes operate. Using advanced algorithms with the appropriate sensors, we can extract depth information and use it in various applications.
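The core of this eye-like depth sensing is stereo triangulation: a nearby object shifts more between the two views than a distant one, and the classic relation Z = f·B/d converts that pixel shift (disparity) into depth. A minimal sketch, with placeholder values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic stereo triangulation: Z = f * B / d.
    disparity_px: pixel shift of a feature between the two cameras
    focal_px:    focal length expressed in pixels
    baseline_mm: distance between the two camera centers
    Returns depth in mm. All values here are illustrative."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

Note how the small baseline of a smartphone (roughly 10 mm between lenses) limits depth accuracy at long range: halving the disparity doubles the estimated depth, so distant objects produce sub-pixel disparities that are hard to measure.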
What can be done with depth information?
Refocus – earlier we discussed the optical qualities that give the user the desired Bokeh effect. With depth information, the same effect can also be achieved digitally: the user chooses the object in focus and everything else is digitally blurred.
Segmentation is the ability to partition an image into multiple segments, allowing a more convenient way to analyze the image – for example, determining object boundaries. Depth information is the basis for efficient segmentation: it makes it easier to separate objects and enables various implementations such as panorama view (looking at an image from different angles), iris recognition, face detection and many more.
Augmented Reality (AR), as its name suggests, takes the real image and augments it with virtual objects or other data. For an added object to coexist with the scene in a realistic way, accurate depth information is crucial. The Pokémon Go craze gave us a glimpse of how viral such applications can become, but also demonstrated the low quality of existing AR, largely due to the lack of accurate depth maps.
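The digital refocus application described above can be sketched in a few lines once a depth map is available: keep pixels near the chosen focus depth sharp, and blur the rest. This toy version uses a crude 3x3 box blur and illustrative parameter names; a real pipeline would vary the blur strength with distance from the focal plane:

```python
import numpy as np

def synthetic_refocus(image, depth, focus_depth, tolerance=0.1):
    """Toy depth-based refocus: pixels whose depth is within
    `tolerance` of `focus_depth` stay sharp; the rest are replaced
    by a crude 3x3 box-blurred version (edges wrap via np.roll).
    image: HxW grayscale float array; depth: HxW depth map in the
    same units as focus_depth. All names here are illustrative."""
    # Cheap 3x3 box blur built from shifted copies of the image
    blurred = sum(np.roll(np.roll(image, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    in_focus = np.abs(depth - focus_depth) <= tolerance
    return np.where(in_focus, image, blurred)
```

Segmentation falls out of the same masking step: the `in_focus` boolean map is already a depth-based partition of the image, which is why depth maps make separating foreground from background so much easier.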
Bringing it all together
Whether for optical zoom, depth maps or enhanced low-light performance, the toughest part of any dual-sensor technology is encapsulating it all into a real-life product that is both robust and delivers consistently high-quality images. This requires an unusual mix of disciplines, including optics (being able to design your own lenses), actuators (unique support for focus and image stabilization in dual sensors), image fusion algorithms (be it for depth, zoom or others) and system know-how (application processors, image sensors, ISPs, etc.). This is where Corephotonics excels – intersecting multiple domain specialties, working with various supply chain vendors and supporting smartphone vendors, each with its own unique needs.
— Roy Fridman is a project manager at Corephotonics. He is in charge of OEM projects, responsible for leading market engagements, understanding customers’ product needs, and translating them into R&D product requirements.