
RGB Images

The images which you have looked at so far have been displayed in a ``gray'' scale: the intensity of the light on the screen is related to the signal from the CCD recorded in the file, and all colors are scaled equally. The relationship may be linear (a $\gamma$ of 1.0) or non-linear (a $\log$ scale, for example), depending on your choice when using ds9. The scaling might have mapped a wide dynamic range in the data into a more restricted range on the display. We may add color as well as intensity by controlling the three primary colors individually for each pixel. This is useful when additional information is available about each pixel, such as the signal strength at different wavelengths, that may be represented by color.
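The choice of transfer function can be sketched in a few lines of Python. This is only an illustration: ds9's internal formulas may differ, and the contrast parameter `a = 1000` used for the log stretch here is an assumption, not taken from ds9's source.

```python
import math

def stretch(x, mode="linear", gamma=1.0, a=1000.0):
    """Map a normalized intensity x in [0, 1] to a display value in [0, 1].

    mode="linear" with gamma=1.0 reproduces the simple gray scale;
    mode="log" is one common logarithmic stretch (the exact formula and
    the contrast parameter a are assumptions, not necessarily ds9's).
    """
    if mode == "linear":
        return x ** gamma                 # gamma of 1.0 is a straight line
    if mode == "log":
        return math.log(a * x + 1.0) / math.log(a + 1.0)
    raise ValueError("unknown mode: " + mode)
```

The log stretch compresses the bright end and expands the faint end: a pixel at 1% of full scale maps to roughly 35% of full display brightness, which is why faint nebulosity becomes visible without saturating the stars.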

Each pixel in the image is associated with a number representing (ideally) the number of photons received at that point in the image during the exposure. Typically this number requires 16 bits to store, and represents decimal values from 0 to 65535. Each bit is a power of 2, and 16 bits includes $2^0$ to $2^{15}$:

0      0000000000000000
1      0000000000000001
2      0000000000000010
3      0000000000000011
4      0000000000000100
256    0000000100000000
32767  0111111111111111
32768  1000000000000000
32769  1000000000000001
65535  1111111111111111
...
If negative numbers are needed:
  1. write down the binary form of the positive number
  2. change all 1's to 0's and all 0's to 1's
  3. add 1
(the result is called a 2's complement number)
...
0      0000000000000000
-1     1111111111111111
-2     1111111111111110
-3     1111111111111101
-32768 1000000000000000
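The rule above is easy to check in Python. As a sketch (the function names are illustrative), masking with `0xFFFF` yields the same 16-bit pattern that the invert-and-add-one recipe produces:

```python
def twos_complement_16(n):
    """16-bit binary string for n; negatives follow the 2's complement rule."""
    return format(n & 0xFFFF, "016b")   # & 0xFFFF keeps only the low 16 bits

def invert_and_add_one(n):
    """The textbook recipe: flip every bit of the positive value, then add 1."""
    flipped = ~n & 0xFFFF               # change all 1's to 0's and 0's to 1's
    return format((flipped + 1) & 0xFFFF, "016b")
```

For example, `twos_complement_16(-3)` and `invert_and_add_one(3)` both give `'1111111111111101'`, matching the table above.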
The most significant bit ($2^{15}$) is ``turned on'' as a flag to indicate negative numbers. You may have noticed that some of the over-exposed stars in the images had negative values at their centers and in the ``blooming'' trails caused by excess charge on the CCD. This happens when ds9 interprets a very large positive integer as a negative number instead. Since the CCD detector saturates above 20,000 counts, we cannot effectively use the highest order bit in the image anyway.
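The misreading works like this: if software treats the raw unsigned count as a signed 16-bit integer, any value at or above 32768 wraps around to a negative number. A minimal sketch:

```python
def as_signed_16(raw):
    """Interpret an unsigned 16-bit count as a signed (2's complement) value."""
    return raw - 65536 if raw >= 32768 else raw
```

A heavily bloomed pixel recorded as 40000 counts, for instance, would be reported as `as_signed_16(40000)`, i.e. -25536, on a display that assumes signed integers.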

Unlike the CCD in our camera, the screen display and most printers represent each color at a pixel with only 8 bits, so the gray scale intensity on the screen must fall in the range from 0 to 255. The display software ds9, and image processing programs such as ``Gimp'' or ``Photoshop'', provide ways of controlling rather precisely how the data are converted to intensities on the screen.
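A linear version of that 16-bit to 8-bit conversion might look like the sketch below. The clipping limits and the function name are choices made here for illustration, not anything fixed by ds9 or Gimp:

```python
def to_display(value, lo, hi):
    """Linearly map counts in [lo, hi] onto the 0-255 display range,
    clipping values that fall outside the chosen limits."""
    if hi <= lo:
        raise ValueError("hi must exceed lo")
    scaled = round(255 * (value - lo) / (hi - lo))
    return max(0, min(255, scaled))
```

With `lo = 0` and `hi = 20000` (the saturation level of our CCD), a saturated pixel maps to 255 and any spurious negative value is clipped to 0.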

The color in a displayed image is represented by three numbers, usually controlling the primaries red, green, and blue, and referred to as an rgb value. To render a full color image on the screen, we must have data for the rgb values at each point in the image, and a protocol for converting them to the range of values the display hardware can reproduce. Here we will work with red, green, and blue images, each taken through a different filter, and add them to produce a full color image.

Since our original data are 16-bit numbers, and the display hardware uses 8-bit numbers for each color, there are two ways we might approach the image processing problem. We could convert three 16-bit images to three 8-bit images, and then combine them to produce an rgb image with 8 bits per color; or, we could combine the three 16-bit images into an rgb image with 16 bits per color, then convert it to an 8-bit rgb image. Note that a 16-bit rgb image takes $3\times16=48$ bits to store since there is a red, green, and a blue value to deal with at each pixel; an 8-bit rgb image takes 24 bits to store. Image processing programs used for photographs usually work with these 24-bit sets of numbers, and thus cannot handle the full range of the original 48-bit data. To use them, you must carefully guide the mapping of the original red, green, and blue images to new images that are only 8 bits deep, and then combine these to produce the desired full color image. The results can be quite beautiful and informative, but the production process is tedious.
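The first route (scale each 16-bit image to 8 bits, then combine) can be sketched for flattened pixel lists. Real images are two-dimensional and the scaling limits would normally be chosen separately for each filter, so treat this only as an outline:

```python
def combine_rgb(red, green, blue, lo=0, hi=65535):
    """Scale three 16-bit single-color images to 8 bits each and zip them
    into one list of (r, g, b) triples -- 24 bits per pixel in place of 48."""
    def scale8(v):
        s = round(255 * (v - lo) / (hi - lo))
        return max(0, min(255, s))      # clip to the 8-bit display range
    return [(scale8(r), scale8(g), scale8(b))
            for r, g, b in zip(red, green, blue)]
```

For a single pixel with full-scale red, no green, and half-scale blue, `combine_rgb([65535], [0], [32768])` gives `[(255, 0, 128)]`.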

Here, instead, we will use the rgb feature of ds9 to build the full color image interactively, and then export an rgb color image in a 24-bit (8-bit per color) format that photographic image programs can read. There is a fine line here between scientifically useful visualization and artwork, so it is important to keep in mind that the goal is to convey the data in a way that is useful for understanding the physics of the object we are studying.

In this set of observations we have data on the Orion Nebula recorded through filters that enable color images, so we will illustrate the technique with this example. The same method would apply to any other set of images, and there are other useful techniques for producing color images from filtered CCD data that we will not have time to explore here. The available RGB filters transmit:

Filter   Passband
------   ------------
Red      612 - 670 nm
Green    488 - 670 nm
Blue     392 - 508 nm
Clear    350 - 1100 nm
Open     no filter



John Kielkopf
2004-11-30