The steps for colorimetric processing are the following:

1. Acquire the three fundamental components of the RGB image,
2. Convert to the HSI space,
3. Modify one or more components of the HSI image,
4. Return to the RGB space,
5. Visualize the color image.
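As a minimal per-pixel sketch of this loop (Python's colorsys module offers HSV rather than HSI, used here as a stand-in; the pixel values are arbitrary):

```python
import colorsys

# One RGB pixel with components in [0, 1] (arbitrary example values).
r, g, b = 0.8, 0.5, 0.3

# Step 2: convert to a hue-based space (HSV standing in for HSI).
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Step 3: modify one component, here a saturation boost.
s = min(1.0, s * 1.5)

# Step 4: return to RGB for visualization.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
```

Note that the hue and the intensity of the pixel are untouched; only the saturation changes.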

We will illustrate this process with an example. Frequently you may have images in only two spectral bands (R & G, R & B, etc.). In this case, the following strategy can be used:

1. Calculate the spectral ratio of the two images (one is divided by the other). The result is the H image,
2. The I component is one of the monochromatic images (or the mean of the two images),
3. The S component is set to a constant level (1, for example),
4. Transform from HSI to RGB, then visualize the resulting trichromatic image.
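A per-pixel sketch of this strategy (HSV from Python's colorsys stands in for HSI, and the mapping of the spectral ratio to a hue value is an arbitrary choice for illustration):

```python
import colorsys

# Hypothetical pixel values from two monochrome frames (e.g. R and B bands).
red_px, blue_px = 0.72, 0.48

# Step 1: the spectral ratio gives the H component.
ratio = red_px / blue_px
hue = min(1.0, ratio / 2.0)          # arbitrary mapping of the ratio into [0, 1]

# Step 2: I is the mean of the two monochrome images.
intensity = (red_px + blue_px) / 2.0

# Step 3: S is set to a constant level.
saturation = 1.0

# Step 4: back to RGB for display.
r, g, b = colorsys.hsv_to_rgb(hue, saturation, intensity)
```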

The result is an image whose level represents the albedo of the object, and whose color represents the spectral signature (here a spectral ratio).

Figure 5. Left, a monochromatic Moon image taken with a 400 nm interference filter (blue spectral band). Instrumentation: Takahashi 5-inch refractor at f/10 + KAF-1600 directly at the focus. Right, a true-color image taken with 400, 560 and 910 nm interference filters. Contrast and hue are accentuated considerably by using the mathematical properties of the HSI space. Yes, the Moon is a coloured object! This gives a unique and very useful framework for studying lunar geology. Click on the images to see the original format!

Figure 6. IGB (Infrared, Green, Blue) images of the center of comet Hyakutake. The figure shows the frames in the B, G and I spectral bands respectively, which were used to create the tricolor image of figure 7. These filters are not well suited to comet observation (the V filter corresponds roughly to the C2 lines, the B filter to the plasma tail if any, and the I filter to the continuum). Note the structures in the G band produced by dust and gas jets emanating from the rotating nucleus. A spiral structure very close to the nucleus is visible in the G & B bands. Images captured with a 5-inch fluorite Takahashi refractor at f/5.9 and a KAF-1600 in 1x1 binning and windowing mode on 1996/03/22 around 23h52 UT.

Figure 7. This tricolor image shows the result of combining the B, G & I frames. The color saturation has been enhanced with the HSI technique. Note that the coma is very green (C2 lines) whereas the regions near the nucleus are redder. Note also a fragment detaching from the main nucleus (it is obvious only in the G frame).

The hue and saturation components define the chromaticity of a colour. It is important to note that the chromaticity and the intensity of a colour can be considered independently. The great advantage of the HSI space is that it decouples hue and saturation from brightness. This is why we can replace, in HSI space, the I component computed by the RGB-to-HSI conversion algorithm with a new high-quality I image. When returning to the RGB domain, we preserve the original colors, but with a boost in detail, a much cleaner image and a much higher S/N ratio compared to the initial true-color image.
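This I-replacement idea can be sketched per pixel (again with HSV from Python's colorsys standing in for HSI; the values are illustrative):

```python
import colorsys

# One pixel of the noisy true-color image, and the same pixel from a
# higher-quality luminance frame (hypothetical values).
r, g, b = 0.40, 0.55, 0.30
lum = 0.80

# Keep the chromaticity (H, S), discard the original intensity.
h, s, _ = colorsys.rgb_to_hsv(r, g, b)

# Return to RGB with the clean luminance as the intensity component.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, lum)
```

The hue of the output pixel is identical to that of the input, while the brightness now comes entirely from the luminance frame.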

For further information on the LRGB technique, visit these links:

For other applications of the LRGB algorithm, click here (Iris software tutorial).
For a discussion about CMY techniques, click here.

2 - Drizzling technique

The drizzle algorithm performs an optimal stacking of a sequence of images as far as resolution is concerned. The principle is that, at the sub-pixel level, the shifts between individual input images are nearly randomly distributed. For example, a star in the first image may be centered perfectly in the middle of a pixel, whereas it will straddle two pixels in the second one, and so on. Since it is easy to know the exact shift between the images, it is possible to create an output image with finer sampling, in which resolution is increased with respect to each input image. In effect, the energy from each input pixel is dropped onto the output image, and the whole process may be compared to a drizzle of rain.

Drizzling is suited to undersampled images, for example when the telescope focal length is too short for the pixel size. One may consider that the system is undersampled when the FWHM is smaller than 2 pixels. In this situation, much of the information lost to undersampling can be restored.

Before using the drizzling technique, it is necessary to know the exact shift between the images. It is also very important that all the input images are acquired under the same conditions: same exposure time, same sky background level. If this is not the case, you have to adjust the offset and gain prior to applying the drizzle algorithm.
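A sketch of such an adjustment, assuming the sky backgrounds and exposure times of both frames have been measured (all values hypothetical):

```python
# Frame A is the reference; frame B was taken with a shorter exposure
# and under a brighter sky (hypothetical measured values).
sky_a, sky_b = 120.0, 150.0     # measured sky background levels (ADU)
exp_a, exp_b = 60.0, 30.0       # exposure times (s)

gain = exp_a / exp_b            # scale B's signal to A's exposure

pixels_b = [150.0, 210.0, 180.0]

# Remove B's sky offset, rescale, then restore A's sky level.
matched = [gain * (p - sky_b) + sky_a for p in pixels_b]
```

After this adjustment, a pixel at B's sky level maps exactly to A's sky level, and real signal is scaled in proportion to the exposure ratio.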

The drizzling algorithm step by step (see figure 8):

Step 1: Shrink (by calculation!) the size of the pixels in the input image, but preserve the same interval between pixels.
Step 2: Project the shrunken pixels onto the fine grid of the final image after a geometrical transformation (taking into account, if necessary, shifts, rotations and optical distortions).
Step 3: Calculate the fraction of each pixel projected into each cell of the grid of the final image and add this fraction to the current value of the output pixel.
Step 4: Repeat from step 1 for each input image.

The "shrunken" pixel size in step 1 is crucial. We define pixfrac as the ratio of the linear size of the shrunken pixel to the linear size of the original input pixel. If pixfrac=0 the drizzle algorithm is equivalent to interlacing, while the traditional shift-and-add is equivalent to pixfrac=1. One must choose a pixfrac value that is small enough to avoid degrading the final image, but large enough that, when all the images have been dropped, the coverage of the output image is fairly uniform. We typically choose pixfrac between 0.5 and 0.7.

Figure 8. Schematic representation of the drizzling technique. In this particular case, the central output pixel receives no information from the input image. This will not necessarily be the case for the following images of the sequence, and so on. "Dark" output pixels are not a concern as long as there are enough input frames with different sub-pixel dither positions to fill in the output image. The ratio between the input grid size and the output grid size defines the "scale factor" parameter.

Figure 9. In blue, the area fraction of the input pixels dropped into the output pixels.

Mathematical formulation of drizzling:

        i = intensity of the projected input pixel
        w = weight of this pixel
        a = fraction of the pixel projected in a cell of the output grid (fractional pixel overlap 0 < a < 1)
        I = current intensity in the output pixel
        W = current average weight in the output pixel
        I' = resulting intensity in this output pixel
        W' = resulting weight of this output pixel


        W' = a . w + W
        I' = (a.i.w + I.W) / W'

The weight w of a pixel can be zero if it is a bad pixel (hot pixel, dead pixel, cosmic-ray event, ...), or it can be adjusted according to the local noise (the value is then inversely proportional to the variance map of the input image).
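The update above, applied each time part of an input pixel is dropped onto an output cell, can be written directly (a sketch; the variable names follow the formulation above):

```python
def drizzle_update(I, W, i, w, a):
    """Accumulate fraction a of an input pixel (value i, weight w)
    onto an output cell currently holding intensity I and weight W.
    Returns the new (I', W')."""
    W_new = a * w + W
    I_new = (a * i * w + I * W) / W_new
    return I_new, W_new

# An empty output cell half-covered by a pixel of value 100:
I, W = drizzle_update(0.0, 0.0, 100.0, 1.0, 0.5)
# A second frame drops a quarter of a pixel of value 80 onto the same cell:
I, W = drizzle_update(I, W, 80.0, 1.0, 0.25)
```

The output intensity is always a weighted average of the contributions received so far, which is what preserves the photometry.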

Remember that the algorithm is effective only if the images are really undersampled (FWHM of 1 to 2 pixels). The displacements, and more generally the geometric distortions, between the individual input images must be known very precisely (to 1/10 of a pixel typically). The number of input images must be large (10 or more) to avoid holes in the final image. Most important, the displacement between the input images (dithering technique) must be random along both axes. So, it is necessary to shift the telescope arbitrarily between each exposure during deep-sky sessions. The amplitude of the shift can be a few fractions of a pixel in a random direction. At the processing stage, the relative shifts between images are precisely determined by calculating the centroids of stars (PSF fitting between common stars, or cross-correlation between a reference image and the input images). The registration parameters are fundamental quantities for the drizzling method.
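For instance, the sub-pixel shift of a star can be estimated from its intensity-weighted centroid on a small stamp (a simple sketch with made-up values; real pipelines use PSF fitting or cross-correlation):

```python
def centroid(stamp):
    """Intensity-weighted centroid (x, y) of a small 2-D star stamp,
    given as a list of rows."""
    total = sum(sum(row) for row in stamp)
    cx = sum(x * v for row in stamp for x, v in enumerate(row)) / total
    cy = sum(y * sum(row) for y, row in enumerate(stamp)) / total
    return cx, cy

# A star centered on pixel (1, 1), then the same star with some of its
# light spilled toward larger x (a sub-pixel shift):
centered = [[0, 1, 0], [1, 4, 1], [0, 1, 0]]
shifted  = [[0, 1, 1], [1, 4, 2], [0, 1, 1]]
dx = centroid(shifted)[0] - centroid(centered)[0]
```

Here dx comes out as a fraction of a pixel, which is exactly the registration precision drizzling needs.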

Main performances:

1. The resolution gain can be up to 2.
2. Combining a sequence of images produces high resolution without sacrificing the final signal-to-noise ratio.
3. Conservation of photometric quality.
4. Preservation of astrometric accuracy.
5. Effective removal of bad pixels (cosmic rays, traps, etc.).
6. Optimal compositing if the weight function is properly chosen relative to the local noise.
7. Very good geometrical correction of the images (important for photometry).

The Iris software implements a version of the drizzle procedure. The algorithm was developed by Richard Hook and Andrew Fruchter to produce the Hubble Deep Field, the deepest optical image of the universe yet taken. It is now used in many other fields.

For more information about the drizzle algorithm:

To show the effectiveness of the resolution improvement obtained by combining undersampled frames, we will process dithered images acquired at the Pic du Midi Observatory (French Pyrénées) during the summer of 1999. The instruments used are simple photographic lenses (55 to 80 mm focal length) and an Audine CCD camera.

Figure 10. The Audine camera installed on a Takahashi EM200 German equatorial mount at the Pic du Midi Observatory.