We describe methods for fusing imagery of different spatial and spectral resolutions to create a multispectral image with high spatial resolution. Classically, the fusion process consists of combining three (or more) monochromatic images of an object, taken through distinct colored filters, with a high-quality wide-band grayscale image of the same object. The technique is well known in the fields of broadcast compression and satellite remote sensing (colorization of a high-resolution panchromatic image with low-resolution color images). A special case of fusion is the LRGB, i.e. the combination of Red (R), Green (G) and Blue (B) images with a Luminance image (L). Ideally, the L image is a wide spectral band image (a panchromatic frame) with better spatial resolution, covering the spectral domains of the R, G and B images.
The R, G and B images should, if possible, have a high signal-to-noise ratio. On the other hand, these images need not have excellent spatial resolution (typically they are acquired in 2x2 binning mode in order to increase the SNR). The L image has a high spatial resolution, acquired under very good seeing and/or in 1x1 binning, or by using a large instrument. Another, less efficient, strategy consists of adding all the separate R, G and B frames into a single high signal-to-noise ratio L image and deconvolving the latter.
The final (L)RGB image associates the point-like appearance of the L image with the high-SNR color content of the RGB images. The result is a more aesthetic image and, above all, one containing appreciably more useful information. It is important to note that the low resolution of the RGB images has little apparent impact on the final LRGB composite.
Consider this small crop of an image of the Orion constellation (Messier 76 and a part of Barnard's Loop):
Canon EOS 350D (internal IR-cut filter removed), KG3 filter, 50 mm lens stopped down to f/2.8, and a stack of 18 x 4-minute exposures.
For a demonstration of the (L)RGB technique, separate the fundamental color planes into distinct files (RGB separation command of the Digital photo menu, or the console command SPLIT_RGB):
Synthesize a panchromatic image, i.e. a broad spectral band image. To first order, it is equivalent to an image taken without any colored filter. For this, compute the mean of the stacked R, G and B channels:
The image pan.pic is the panchromatic frame:
The 16-bit panchromatic image (gray levels).
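The synthesis step above amounts to a simple per-pixel average of the three channels. A minimal numpy sketch (the function name and the tiny arrays are illustrative, not Iris commands):

```python
import numpy as np

def synthesize_panchromatic(r, g, b):
    """Approximate a broad-band (panchromatic) frame as the
    per-pixel mean of the three color channels."""
    return (r.astype(np.float64) + g + b) / 3.0

# Tiny worked example on 2x2 "images":
r = np.array([[30, 60], [90, 120]], dtype=np.uint16)
g = np.array([[60, 90], [120, 150]], dtype=np.uint16)
b = np.array([[90, 120], [150, 180]], dtype=np.uint16)
pan = synthesize_panchromatic(r, g, b)
# pan[0, 0] == (30 + 60 + 90) / 3 == 60.0
```

Averaging (rather than summing) keeps the result within the original dynamic range of the 16-bit data.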
Open the (L)RGB dialog box of the View menu, then enter
Check the Luminance button and enter the name of the panchromatic image. Click Apply.
The LRGB image.
The LRGB image is very similar to the original RGB image.
Now, degrade the resolution of the R, G and B images. Apply, for example, a Gaussian convolution:
The original FWHM of the stars is 2.0 pixels. In the degraded images (after Gaussian filtering) the FWHM is 4.0 pixels. The spatial resolution is thus reduced by a factor of two.
It is no surprise that, if a true-color image is constructed using the degraded channels, the result is also significantly degraded:
>TRICHRO RR GG BB
or from the (L)RGB dialog box:
Blurred color image.
Now, we fuse the blurred RGB images with the full-resolution panchromatic image:
The original appearance is recovered in spite of the blur introduced into the images carrying the color information. This is the key principle of image fusion: only the panchromatic component needs to be acquired at the highest possible resolution. The resolution of the RGB images can be degraded, for example because they are acquired with an optimized dedicated instrument or in 2x2 binning for a better signal-to-noise ratio.
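Iris's internal (L)RGB algorithm is not documented here, but a common ratio-based scheme conveys the principle: keep the color ratios of the low-resolution R, G, B planes and impose the per-pixel intensity of the high-resolution L frame. A minimal numpy sketch under that assumption (names are illustrative):

```python
import numpy as np

def lrgb_fuse(l, r, g, b, eps=1e-6):
    """Chrominance-preserving fusion: preserve the R:G:B ratios of the
    blurred color planes, but take the intensity from the sharp L frame."""
    l = l.astype(np.float64)
    rgb = np.stack([r, g, b]).astype(np.float64)   # shape (3, H, W)
    lum = rgb.mean(axis=0)                         # luminance implied by RGB
    scale = l / np.maximum(lum, eps)               # per-pixel intensity correction
    return rgb * scale                             # fused (3, H, W) result

# One-pixel example: blurred color (10, 20, 30) has mean 20; L says 40,
# so the pixel is scaled by 2 while its color ratios are unchanged.
out = lrgb_fuse(np.array([[40.0]]),
                np.array([[10.0]]), np.array([[20.0]]), np.array([[30.0]]))
# out[:, 0, 0] -> [20.0, 40.0, 60.0]
```

Because the fine spatial detail enters only through the scale factor derived from L, blur in the RGB planes mostly affects large-scale color, which the eye tolerates well.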
For a better rendition of the star colors in the LRGB image, it is possible to desaturate the result slightly. For that, adjust the small saturation slider of the (L)RGB dialog box, then click Apply:
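Desaturation is conceptually a blend of each pixel toward its own gray value. A minimal sketch of that operation (again an illustrative implementation, not the exact formula used by Iris's slider):

```python
import numpy as np

def desaturate(rgb, amount):
    """Blend each pixel toward its own gray value.
    amount=0.0 leaves the colors unchanged; amount=1.0 gives pure gray."""
    rgb = rgb.astype(np.float64)                 # shape (3, H, W)
    gray = rgb.mean(axis=0, keepdims=True)       # per-pixel gray value
    return (1.0 - amount) * rgb + amount * gray

# One-pixel example: (10, 20, 30) half-desaturated moves toward its gray value 20.
pixel = np.array([[[10.0]], [[20.0]], [[30.0]]])
half = desaturate(pixel, 0.5)
# half[:, 0, 0] -> [15.0, 20.0, 25.0]
```

A slight desaturation tames the exaggerated color fringes that the fusion can produce around bright stars.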