LRGB Processing Workflow using PixInsight

My LRGB workflow has evolved over time, both as I gain more experience and as the PixInsight tools mature. There are many paths to a good astro image processing result - this is merely one.

  1. Assuming that I have a good inventory of LRGB frames, I start with an overview evaluation using CCDInspector. There are a number of tools for evaluating image quality based on star FWHM (Full Width Half Maximum) and Aspect, but I find CCDInspector fast and easy to use. I load each set of images separately (L, R, G, and B) and compute FWHM for each member of the group. I sort on FWHM, looking for obvious outliers due to image acquisition problems or poor seeing, and I also review the Aspect values. I separate L, R, G, and B because I may have acquired them on different nights, and because the statistics can vary with the filter being used. In my location, a FWHM of 2" or less is quite good, and I can see results of 3-4" at times. If an entire group of images is around 4", I tend to keep them and make the best of it, or else decide that I need to reimage under better conditions. However, if the average FWHM is around 2.5" and I see a few at 4", I move those frames into a "deleted" folder and generally don't include them in further processing. I also make note of the Luminance frame with the best FWHM, for later reference.
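    If you prefer to script this kind of triage, here is a minimal Python sketch of the same screening logic. The file names, FWHM values, and the 1.4x-median rejection threshold are all illustrative assumptions, not CCDInspector output:
    ```python
    import numpy as np

    # Hypothetical per-frame FWHM measurements in arcseconds.
    fwhm = {"L_001.fit": 2.3, "L_002.fit": 2.4, "L_003.fit": 4.1, "L_004.fit": 2.6}

    values = np.array(list(fwhm.values()))
    median = np.median(values)

    # Keep a uniformly mediocre set; reject only clear outliers relative
    # to the group (the 1.4x-median threshold is an arbitrary choice).
    for name, f in sorted(fwhm.items(), key=lambda kv: kv[1]):
        verdict = "reject" if f > 1.4 * median else "keep"
        print(f'{name}: FWHM {f:.1f}"  -> {verdict}')

    # The sharpest frame becomes the StarAlignment reference later on.
    print("reference frame:", min(fwhm, key=fwhm.get))
    ```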
  2. I also now use the new Blink process in PixInsight to quickly review and discard images.
  3. Once the images are reviewed, I start with processing the Luminance images in PixInsight. I begin with standard Image Calibration, using the PI Calibration process. I use Vicent Peris' procedure for acquisition and processing of Bias, Dark, and Flat frames. That means that when running the Calibration module, the Calibrate check box is unchecked for the Master Bias (no calibration required, because I don't have an overscan area on my camera), checked for the Master Dark (because the darks were not calibrated for Bias during Master Dark integration), and unchecked for the Master Flat (because the flats were calibrated during Master Flat integration). I also have Optimize checked for the Master Dark.
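    To make the checkbox logic concrete, here is a rough Python sketch of the arithmetic those settings imply. The function is my own illustrative stand-in, and k is a placeholder for the dark scaling factor that PI computes itself when Optimize is checked:
    ```python
    import numpy as np

    def calibrate(light, master_bias, master_dark, master_flat, k=1.0):
        # The master dark is bias-subtracted here because its Calibrate
        # box was checked (it was not calibrated during integration).
        dark_current = master_dark - master_bias
        # The master flat was already calibrated, so it is used as-is,
        # normalized to unity gain.
        flat_norm = master_flat / np.mean(master_flat)
        # k stands in for PI's dark-optimization scaling factor.
        return (light - master_bias - k * dark_current) / flat_norm
    ```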
  4. For all PI processes, I use 32 bit floating point files. Unless you are working with images that have an extreme dynamic range, or are building an HDR image, 32 bit floating point is plenty.
  5. Continuing with the Luminance images, after Calibration, I use StarAlignment to register all of the frames. I accept all of the defaults for this process, and have had very good luck, even with images that require a 180 degree rotation due to a meridian flip during acquisition. I don't use the first image of the session as the Reference Image for registration - I use the Luminance frame identified in Step 1 with the best FWHM. This frame establishes the frame of reference for the entire image, as we will see later. While StarAlignment is running, I review the pixel fit, image scale, and rotation values in the Process Console window - it helps to pick out images with potential problems for later analysis.
  6. Next is Image Integration, one of the most critical steps of the entire workflow. This PI process has a lot of options and parameters, and is one where I spend a little time, rerunning with different settings and evaluating the outcomes. Hint - PI caches all of the input images (to a point), so rerunning Integration with different parameters is very fast, as the images don't need to be reloaded from disk.
    I recommend taking some time to review Jordi Gallego's presentation Image Integration Techniques - it really opened my eyes to techniques for reducing the integrated image's noise as much as possible. He shows a number of trials in PI with different Pixel Rejection options and numbers of images, and analyzes the outcome in each case.
    I use an Average Combination, with Additive Normalization, and Noise Evaluation for the weights. I normally use the Winsorized Sigma Clipping pixel rejection algorithm, with Scale + zero offset Normalization (a toy sketch of the rejection idea follows this item). I set the same reference image selected in the steps above as the reference image for Integration. It doesn't affect the final results, but I think it helps put the statistics into a common context. Setting the High and Low Sigma Clip parameters sometimes requires a bit of experimentation - take a look at Jordi's presentation for a good explanation of how to analyze the results. One of the things that I like best about PI is that it always tells you a lot about what it's doing - be sure to take advantage of all of the data in the Processing Console as you work through a project.
    After Integration is complete, I always look at the High and Low rejection images, just to see if there's anything strange going on. Of course, I also take a critical look at the integrated image to see if I'm on the right track. I use the Screen Transfer Function to stretch all of these images for a first look. Use the "A" button for an automated stretch that usually gives you a pretty good first cut. I just keep the Screen Transfer Function open at all times on my PI desktop, so that I can easily use it whenever I want.
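    For anyone curious what Winsorized rejection actually does, here is a toy Python version over a stack of registered frames. This shows only the rejection-and-combine concept; the real ImageIntegration also applies the normalization and noise-based weighting described above, and the parameter names here are my own:
    ```python
    import numpy as np

    def winsorized_combine(stack, sigma_low=4.0, sigma_high=3.0, iterations=5):
        # stack has shape (frames, height, width)
        s = stack.astype(np.float64).copy()
        for _ in range(iterations):
            mean = s.mean(axis=0)
            sd = s.std(axis=0)
            lo = mean - sigma_low * sd
            hi = mean + sigma_high * sd
            s = np.clip(s, lo, hi)  # winsorize: pull outliers to the bounds
        # Reject original pixels outside the final bounds, average the rest.
        keep = (stack >= lo) & (stack <= hi)
        return np.nanmean(np.where(keep, stack, np.nan), axis=0)
    ```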
  7. At this point, the basic Luminance master is done, so I save it using a simple naming convention. If I'm working on NGC7331, I save it as NGC7331_L. Even though I will crop it in the next step, I always save this version in case I need to come back and recrop the image. Also, I leave the processing Icons on the desktop and save the Project, which keeps all the images and processes together for later revisiting.
  8. The next step is to crop the Luminance master to eliminate incomplete edges due to dithering and framing differences. It's important to crop now, before using DBE and Stretching as explained below. I do apply a strong screen stretch before cropping so that I can clearly see the edges that need to be removed. The Dynamic Crop gets saved in the image History so that it can be used later to crop the RGB images as well. Note: in order for the reuse of the Crop to work, the R, G, and B need to be Star Aligned on the same Luminance reference frame! Also, the PI Project should be saved with the images on the desktop (in iconized form if you like) so that all image History is maintained.
  9. It's optional at this point whether or not you want to save the Cropped image, and each subsequent processed step. I sometimes save each image stage (NGC7331_L_Crop for example). However, if you save all your processing steps in the Project as you go along, you can always recreate the intermediate steps as needed.
  10. Next is one of the best tools in the PI Toolbox - DBE, or Dynamic Background Extraction. DBE is the best tool I've found for removing gradients caused by light pollution and vignetting. It's a dynamic process, so you need to have the cropped L image open to proceed. I choose a fairly dense sample point pattern, then manually add points where needed, and remove points that are getting too close to faint nebulosity, etc. I run the process and look at the generated background model to see if it looks OK. Then the model is applied to the L image with either subtraction for removing light gradients, or division to remove multiplicative effects such as vignetting. I recently processed a set of images for which I had no flats, and DBE did a very credible job of removing the vignetting. I save the DBE Model to the desktop so that I can reuse it with the RGB images.
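    Conceptually, DBE builds a smooth model of the background from the sample points and then removes it. Here is a very rough Python stand-in, assuming hand-picked (x, y, value) background samples and using a low-order polynomial surface where DBE uses a much more sophisticated interpolation:
    ```python
    import numpy as np

    def background_model(shape, samples, degree=2):
        # Fit a 2D polynomial surface to (x, y, value) background samples.
        xs, ys, vs = np.asarray(samples, dtype=float).T
        terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
        A = np.column_stack([xs**i * ys**j for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, vs, rcond=None)
        # Evaluate the fitted surface over the whole frame.
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        return sum(c * xx**i * yy**j for c, (i, j) in zip(coef, terms))

    # corrected = image - model                   # subtraction: additive gradients
    # corrected = image / (model / model.mean())  # division: vignetting
    ```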
  11. At this point I usually generate a Star Mask from the Luminance image, for later use. I use the Star Mask tool, playing with the Midtone Slider to make sure that I include all of the smaller stars. I save the generated StarMask image.
  12. A note about PixInsight Projects - this very useful tool was added in a recent update to PI. In a Project, you can save the entire state of your processing at any point - all images, processes, workspaces, etc. I always save completed processes on the PI Desktop, and frequently save the Project as well. Note that the History of processing for each image is stored as well, and when you return to a Project and re-open an image that was saved in a workspace, you can see, modify, and rerun each step of the processing from the History Explorer Window.
  13. Next is Deconvolution. In the case of 1x1 Luminance and 2x2 binned RGB, I apply Deconvolution solely to the Luminance image. I start by generating and saving a custom Point Spread Function (PSF) with the new Dynamic PSF tool, and then applying approximately 50 iterations of Regularized Lucy-Richardson Deconvolution. Prior to application, I mask the image with either the inverse of the Star Mask or else a Histogram Transformed Luminance Clone, in order to protect the background and apply the Decon just to the stars and structures. Deconvolution is tricky, and I've written a little tutorial on that process.
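    To get a feel for what Lucy-Richardson iteration does, here is a hedged sketch using scikit-image's plain Richardson-Lucy implementation with a synthetic Gaussian PSF. PI's version adds regularization and deringing, and the real PSF would come from the Dynamic PSF tool; also note that num_iter is the keyword in recent scikit-image releases (older ones used iterations):
    ```python
    import numpy as np
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size=15, fwhm=3.0):
        # Synthetic Gaussian PSF; FWHM = 2.355 * sigma.
        sigma = fwhm / 2.355
        r = np.arange(size) - size // 2
        xx, yy = np.meshgrid(r, r)
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    # luminance: 2D float array scaled to [0, 1]; ~50 iterations as above.
    # deconvolved = richardson_lucy(luminance, gaussian_psf(), num_iter=50)
    ```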
  14. Finally (whew!) I apply some noise reduction to the Luminance image. Lately I've been using the MultiScale Median Transformation (MMT) tool on the first 2 or 3 wavelet layers of the image, to good effect. I mask the image first with the Luminance Clone so that noise reduction is restricted to the background. Masks are a very important and powerful part of PixInsight, and you'll end up using one or more of them in every processing project.
  15. At this point, the linear Luminance image is done, and I save everything again. Although this process sounds long and complex, it isn't really, and I can perform all of these steps a lot faster than I can type them here. An important note - the Luminance image has been linear throughout all of this processing. A Screen Transfer Function has been applied so that I can see what's happening, but the histogram of the image has not been stretched at this point.

RGB Processing

  1. Now that the Luminance is processed, the RGB frames are quite simple. Using the saved Calibration, Star Alignment, and Image Integration processes, I just update the frames for R, G, and B, and process. Note that if you have created RGB flats, you need to select those in the Calibration process. Also, be sure to select the same L frame for Star Alignment as was used in the Luminance processing. You want all of the R, G, and B masters to be aligned exactly the same as the Luminance.
  2. Once I have the RGB masters, I combine them into the RGB image using the LRGB Combination process. I don't apply any noise reduction at this time.
  3. Using the Screen Transfer Function, I Autostretch the RGB image to get an idea of what it looks like. It will probably have a strong color cast at this point, so I unlock the RGB components in the Screen Transfer (click on the little RGB symbol to unlock the channels), then do another Autostretch. This gives a pretty good idea of what the data look like, although they will be noisy and there will still be a background gradient.
  4. Next comes a very important step - cropping the RGB image so that it will overlay the Luminance properly. Since I used the same L frame to register everything, the combined RGB will overlay the L once it's cropped. To crop it the same as the L, simply drag the Dynamic Crop process from the L History Window onto the RGB image. In a recent project, I did this step and found that the RGB frames were sufficiently different that even after the initial crop to the L dimensions, there were still some border areas without R, G, or B coverage. You need to remove all of this before DBE and Color Balancing, so what to do? Simple - I cropped the already-cropped RGB image a second time to remove those borders, then dragged that second Dynamic Crop from the RGB image's history onto the already-cropped Luminance. Now everything fits, and there are no edges to worry about.
  5. Now I apply DBE to the cropped RGB image. Remember that everything is still linear at this point - I haven't done any stretching of the histogram yet. If the image is cropped the same as the L frame was when I applied DBE to it, I can reuse that DBE process. Otherwise, I simply regenerate the sample pattern and run DBE again.
  6. After DBE, I color balance the RGB image with the Color Calibration process. You can also apply Background Neutralization, but in theory this should not be necessary after DBE. PixInsight approaches color calibration differently than many other packages. As the Color Calibration documentation states:

    For the color calibration process to yield coherent results, two conditions must be met: The background reference image must provide a good sample of the true background. In general, this means that the background reference image should be a view or subimage whose pixels are strongly dominated (in the statistical sense) by the sky background, as it has been recorded on the image. The white reference image must provide a sample of a sufficiently rich set of objects, in the colorimetric sense. With a sufficiently large and varied set of objects included in the white reference image, no particular color will be favored and hence our spectral agnosticism will be preserved.

  7. Finally it's time to stretch the Luminance and RGB images using a Histogram Transformation. It's important to do this properly, and Vicent Peris suggests a very good method in his LRGB Tutorial:

    The main task performed by the LRGB combination process is to replace the implicit lightness component of the RGB color image with the delinearized luminance. For this task to work correctly, it is essential that both images be compatible in terms of overall brightness and contrast. A very effective way to achieve the necessary compatibility is by applying nonlinear stretching parameters based on the same statistical properties, but computed separately for both images. This is exactly what the STF tool does with its automatic screen stretch, so we are going to transfer automatic screen stretch parameters from the STF tool to the HistogramTransformation tool.

    We click the New Instance icon on STF and drag it to the HistogramTransformation window. Now we have set the corresponding shadows clipping and midtones balance points. Before applying the histogram transformation, we disable STF visualization; then we apply these parameters to the image. We follow the same steps for the luminance.

    It is very important to understand that from now on our images are nonlinear, and that this change is irreversible. This means that relative pixel intensities in these images no longer correspond to relative signal levels in the original data. No reliable color correction can be applied to nonlinear images, so color balance should be essentially correct before applying this initial nonlinear transformation.
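    In code terms, the autostretch that gets transferred to HistogramTransformation boils down to a shadows clipping point derived from the median and MAD, plus a midtones transfer function (MTF). A Python sketch follows; the constants (0.25 target background, -2.8 x MAD shadows clip) are the commonly cited STF defaults and should be treated as assumptions:
    ```python
    import numpy as np

    def mtf(m, x):
        # PixInsight's midtones transfer function; maps x = m to 0.5.
        return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

    def autostretch_params(image, target_bg=0.25, shadow_sigma=-2.8):
        # Shadows clip and midtones balance for a linear image (median < 0.5).
        med = np.median(image)
        madn = 1.4826 * np.median(np.abs(image - med))   # normalized MAD
        shadows = float(np.clip(med + shadow_sigma * madn, 0, 1))
        midtones = float(mtf(target_bg, med - shadows))  # median -> target_bg
        return shadows, midtones

    def stretch(image, shadows, midtones):
        # The equivalent HistogramTransformation: clip shadows, then apply MTF.
        x = np.clip((image - shadows) / (1.0 - shadows), 0.0, 1.0)
        return mtf(midtones, x)
    ```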

  8. At last, it's time to create the combined LRGB image. I use the LRGB Combination process again on the RGB image, adding the L component. I usually apply Chrominance Noise Reduction at this point, and also increase the Saturation by decreasing the value in the Saturation MTF slider. This image represents the basic LRGB product, so I save it with LRGB in the name.
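    Under the hood, the combination amounts to swapping the lightness of the color image for the processed luminance. A compact illustration in Python using scikit-image's CIE L*a*b* conversions; PI's LRGB Combination does considerably more, including the chrominance noise reduction and saturation controls just mentioned:
    ```python
    import numpy as np
    from skimage.color import rgb2lab, lab2rgb

    def lrgb_combine(rgb, luminance):
        # rgb: float array in [0, 1], shape (H, W, 3); luminance: (H, W).
        # Both must already be stretched (nonlinear), as described above.
        lab = rgb2lab(rgb)
        lab[..., 0] = luminance * 100.0   # the L* channel runs from 0 to 100
        return np.clip(lab2rgb(lab), 0.0, 1.0)
    ```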

Final LRGB Processing

  1. Although a complete color-balanced LRGB image was produced by all of the preceding steps, there is still a lot of tweaking that can be applied to bring out all of the detail and color in the image. The first step I usually take is to tweak the Histogram a little. There's usually a little space at the black end that can be removed, making sure not to clip too many pixels. I also move the midtone point. Note that you shouldn't really change the individual color curves too much at this point, as changing color balance after the images are no longer linear tends to introduce problems.
  2. The final steps are really dependent on the image structure, data quality, and personal preference. Generally, I want to accomplish several things at this point - sharpen structures a bit and reduce background noise in the low SNR areas. Lately I've been using the new Multi Scale Median Transform (MMT) process for noise reduction and the HDR Multiscale Transform to bring out structure and enhance contrast. MMT should be used with a mask to protect the high SNR areas, and HDRMT with the mask inverted to protect the low SNR areas. I sometimes use ACDNR for noise reduction, as it seems to work very well with some images.
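    The way a mask modulates any of these processes can be written in one line. Here is a minimal Python illustration, with a median filter standing in for the real noise reduction; the blending rule follows the PI convention of white = processed, black = protected:
    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def masked_denoise(image, mask, size=3):
        # mask is a float array in [0, 1]: 1.0 lets the process through,
        # 0.0 protects the pixel (invert it for the HDRMT case above).
        smoothed = median_filter(image, size=size)
        return mask * smoothed + (1.0 - mask) * image
    ```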
  3. Remember that all of these enhancement steps can be tested using smaller Previews of the image, to speed things up.
  4. If there's any hint of a green color cast in the image, I apply SCNR, which limits the green values.
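    SCNR's "average neutral" option has a simple closed form - green is capped at the mean of red and blue. A quick sketch; the amount parameter mirrors SCNR's Amount slider:
    ```python
    import numpy as np

    def scnr_average_neutral(rgb, amount=1.0):
        # Cap green at the mean of red and blue, blended by amount in [0, 1].
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        g_capped = np.minimum(g, 0.5 * (r + b))
        out = rgb.copy()
        out[..., 1] = (1.0 - amount) * g + amount * g_capped
        return out
    ```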
  5. I might also apply a subtle amount of additional contrast and saturation enhancement, using the Curves process.
  6. Remember - don't overdo it! In my opinion, a slightly underprocessed image is preferable to one that's had the stuffing beaten out of it.