In the previous posts I covered the basics of the lens assembly, how a CMOS Image Sensor works, and, briefly, the Bayer Filter. In this post I plan to cover the basic image processing done by the camera phone to produce the image you see on the screen. To understand what happens, we first need to understand how the image is captured by the CMOS Sensor. The image on the left is the original scene that we will use as our example. As covered in the previous posts, this scene is imaged on the CMOS Imager and broken apart by the individual pixels that are laid out in the Bayer Grid. If you remember, I stated that 50% of the pixels in the Bayer Filter are green, and there is a reason for this: the human eye is more sensitive to green than to any other color, so more information is recorded in the green channel. With that in mind, you can see in the figure on the right how the CMOS Imager records the original scene. As you can see, the recorded image is mostly green. You are probably wondering how a mostly green picture ends up looking normal on your screen. That is a good question, and the answer is a process called Demosaicing.
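To make the mosaic idea concrete, here is a small sketch in plain Python that simulates what an RGGB Bayer sensor records. The function name `bayer_mosaic` and the RGGB layout are my own illustrative choices (real sensors vary in pattern orientation): each pixel keeps only one of the three color values, and half of all pixels keep green.

```python
def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: keep one color value per pixel.

    rgb: a list of rows, each row a list of (r, g, b) tuples.
    Returns a 2-D list of single intensity values (the "raw" mosaic).
    """
    raw = []
    for y, row in enumerate(rgb):
        raw_row = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:
                raw_row.append(r)   # red site
            elif y % 2 == 1 and x % 2 == 1:
                raw_row.append(b)   # blue site
            else:
                raw_row.append(g)   # green site: half of all pixels
        raw.append(raw_row)
    return raw

# A 2x2 full-color patch collapses to one value per pixel:
patch = [[(10, 20, 30), (40, 50, 60)],
         [(70, 80, 90), (100, 110, 120)]]
print(bayer_mosaic(patch))  # [[10, 50], [80, 120]]
```

Notice that two thirds of the original color information is simply thrown away at capture time; recovering it is exactly the job of demosaicing, described next.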
So what is demosaicing? The short answer: it is an algorithm that interpolates the complete picture from the partial raw data recorded by the CMOS Image Sensor. For many this may not make sense yet, so let's go back to the original Bayer Grid, shown on the left here. Looking at the grid, let's break it up into 2x2 grids, each composed of 1 red, 1 blue and 2 green pixels, similar to the figure on the right. We now assume that each of these individual grids describes one full-color cavity, which gives the camera phone half the image in the vertical and horizontal directions. So how do we get the rest of the full image? Simple: if we create grids that overlap, as shown in the figure on the left, we can generate the information for the full image. Now, I am sure many of you are wondering how we deal with the pixels at the edge of the image, since we don't have enough information at those points for a full-color cavity. Basically, they are cropped out of the image; most of our camera phones have plenty of pixels already, so we can afford to do that. I have gone over a very basic demosaicing technique to give you a better idea of what is going on. In reality there are numerous techniques, and many are proprietary to each camera phone vendor.
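The overlapping 2x2 technique can be sketched in a few lines of Python. This is my own illustrative implementation, not any vendor's actual algorithm: every overlapping 2x2 window on an RGGB mosaic contains exactly one red sample, one blue sample, and two green samples, so we emit one full-color pixel per window (averaging the two greens). The output is one row and one column smaller than the input, which is the edge cropping mentioned above.

```python
def demosaic_2x2(raw):
    """Demosaic an RGGB raw mosaic with overlapping 2x2 windows.

    raw: 2-D list of single intensity values (H x W).
    Returns an (H-1) x (W-1) list of (r, g, b) tuples: one full-color
    pixel per window; the last row and column are cropped.
    """
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(h - 1):
        out_row = []
        for x in range(w - 1):
            r = b = g_sum = 0
            # Classify each of the four samples in the window by its
            # position (parity) in the global RGGB pattern.
            for dy in (0, 1):
                for dx in (0, 1):
                    v = raw[y + dy][x + dx]
                    if (y + dy) % 2 == 0 and (x + dx) % 2 == 0:
                        r = v            # the window's red sample
                    elif (y + dy) % 2 == 1 and (x + dx) % 2 == 1:
                        b = v            # the window's blue sample
                    else:
                        g_sum += v       # one of the two green samples
            out_row.append((r, g_sum // 2, b))
        out.append(out_row)
    return out

# A 2x2 raw patch (R=10, G=50, G=80, B=120) yields one full-color pixel:
print(demosaic_2x2([[10, 50], [80, 120]]))  # [[(10, 65, 120)]]
```

Because the windows overlap, neighboring output pixels share samples, which is what lets this scheme recover (almost) full resolution from the mosaic; it is also why errors at the sensor's resolution limit smear into the artifacts discussed below.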
Now, there are issues related to demosaicing, and one of them concerns fine details in the image. If you look at the image above, you will notice many fine details. When this image is captured by a camera phone and then demosaiced, something changes, as you can see in the image below.
If you look closely, each section of the original image has developed unique artifacts; these are referred to as moiré patterns. Some typical characteristics of moiré are repeating patterns, color artifacts, or pixels arranged in unrealistic maze-like patterns, all of which can be seen in the image above. This typically happens when we have reached the resolution limit of the sensor, and as a result the demosaicing algorithm creates these moiré patterns.
That wraps up the basic image processing that happens on your camera phone in order to take a picture. I hope you guys enjoyed these past three posts on how a camera phone works. I am sure you are wondering what is coming next: in the next few days I will begin posting the techniques I use to analyze the performance of various camera phones.