LRGB Color Production

Generally, a color image is a combination of several pictures acquired through color filters and merged together on the computer. CCD cameras do not automatically differentiate the colors of light; they are simply sensitive to a particular range of wavelengths. The ST10XME CCD camera that was used to acquire the images in the gallery can detect colors (wavelengths) of light from the infrared right through the visible spectrum. So, when light is incident on the chip, the CCD detector does not care whether a photon ("particle" of light) is red or blue- it just counts off "PHOTON DETECTED IN PIXEL blah blah..." (yes... this is actually what the camera is thinking :) ).

In order to discriminate between the different wavelengths of light, a picture of an object is taken through three filters which together span the full range of wavelengths the CCD detector is sensitive to. The reddish wavelengths are passed by the Red filter, the greenish wavelengths in the middle of the range by the Green filter, and the bluish wavelengths by the Blue filter. See the images below:




The above are the Red, Green and Blue images of the "Ring Nebula" (M57). Of course, each image is still black and white; it is only later that the computer combines them. Notice how different the nebula appears in each image- there is a striking difference between the red and blue images. Also note the star in the center of the nebula (it is responsible for making the nebula glow): it appears brightest in the blue image and dimmest in the red. This star is therefore not emitting much reddish light and will look bluish in the final color image (which in this case means it is a very hot star). Anything that is equally bright in all three images will appear white in the final color image.
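The channel comparison above can be sketched in a few lines of NumPy. This is only an illustration- the pixel values are made up, and a real pipeline would read FITS frames rather than hard-coded arrays. It shows a pixel dominated by one filter rendering as that color, and a pixel equally bright in all three rendering as gray/white:

```python
import numpy as np

# Hypothetical pixel counts from the three filtered exposures
# (one row, two pixels each).
red   = np.array([[900., 500.]])
green = np.array([[100., 500.]])
blue  = np.array([[200., 500.]])

# Stack the grayscale channels into an RGB cube (H x W x 3)
# and scale to the 0..1 range expected for display.
full_scale = 1000.0
rgb = np.dstack([red, green, blue]) / full_scale

# Pixel 0 is dominated by red -> renders reddish;
# pixel 1 is equal in all channels -> renders gray/white.
print(rgb[0, 0])
print(rgb[0, 1])
```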

These images are low-resolution (binned 2x2) pictures. Combining them right now would yield a low-resolution color image. However, we are interested in a high-resolution image! See below:

The above is an image taken through a clear filter, which lets all colors of light pass. This image is called the Luminance layer because it determines how bright to make the color information. Let us say that the Luminance image records the central star as having 1,000 units of brightness. This factor is multiplied by the values recorded in the color images: in the Red image the central star is not bright (1,000 x a small number), but it is very bright in the Blue image (1,000 x a larger number), and so on. When these four images are combined on the computer, the detail comes from the Luminance layer, with the color pictures painting in the pixels accordingly. See the image below for the result:

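The multiplication described above can be sketched for the single central-star pixel. The luminance value of 1,000 comes from the text's example; the per-filter fractions are made-up numbers chosen to match the story (dim in red, brightest in blue):

```python
# Luminance value for the central star (from the example in the text).
L = 1000.0

# Hypothetical fractions from the low-res color images, scaled to 0..1.
r_frac, g_frac, b_frac = 0.1, 0.4, 0.9

# The Luminance layer sets the brightness; the color frames set the hue.
red_out   = L * r_frac   # little red light
green_out = L * g_frac
blue_out  = L * b_frac   # strongest channel -> the star renders bluish
print(red_out, green_out, blue_out)
```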
The final question is WHY? Why not just take a high-resolution (binned 1x1) image through each of the R, G and B filters and combine them? The most persuasive reason to take the L image is that it actually saves time. Let us pretend that a 20-minute high-resolution (binned 1x1) exposure through the Red filter records stars of 20th magnitude (very dim stars). Then you take another 20-minute exposure through the Green filter. Finally you take a 30-minute exposure through the Blue filter (longer because the CCD chip is less sensitive to blue light). Total time = 70 minutes to produce a high-resolution color image. Now, instead, take a 20-minute Luminance image (clear filter), which easily detects objects as dim as 20th magnitude. A 5-minute binned 2x2 exposure through the Red filter will also detect stars of 20th magnitude because of the larger pixels (follow the link if you have not already!), but at the cost of lower resolution. Then expose through the Green filter for 5 minutes and the Blue filter for 10 minutes. Total time = 40 minutes!
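The reason a short binned exposure reaches the same depth is that each binned 2x2 pixel sums the photons of four physical pixels. A minimal NumPy sketch of the binning operation (the frame values are made up):

```python
import numpy as np

# A hypothetical 4x4 high-resolution frame of photon counts.
frame = np.arange(16, dtype=float).reshape(4, 4)

# 2x2 binning: sum each 2x2 block of pixels into one larger pixel.
# Each binned pixel collects 4x the photons, so a faint star crosses
# the detection threshold in a much shorter exposure (per the text's
# example), at the cost of half the spatial resolution.
binned = frame.reshape(2, 2, 2, 2).sum(axis=(1, 3))
print(binned)
```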

Although the color information is at a lower resolution, low-res color applied to a high-res black-and-white image looks like a hi-res color picture- but it is completed in almost half the time! For amateur astronomers who aspire to take high-quality color images, this method of imagery is truly remarkable.
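The time savings can be tallied directly from the exposure times in the example (all values in minutes):

```python
# High-resolution color only: every frame binned 1x1.
rgb_only = {"R": 20, "G": 20, "B": 30}

# LRGB: one clear-filter Luminance frame, color frames binned 2x2.
lrgb = {"L": 20, "R": 5, "G": 5, "B": 10}

print(sum(rgb_only.values()))  # 70
print(sum(lrgb.values()))      # 40
```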

Updated: 01/01/2005