Hydrogen Alpha Processing Workflow

I've found that I need subs of at least 10 minutes (20 would be better) for good Hα image quality, especially if I'm imaging faint nebular detail. I've also found that dithering during image acquisition is important. Even though the QSI583 camera has very low noise, and that noise is well handled by standard calibration techniques, hot pixels can be a problem with some Hα images, because of the very low signal-to-noise ratio and the extreme stretching that's required to bring out image detail.

  1. I start with standard Image Calibration, using the PI Calibration Process. As described on my Image Acquisition page, I use Vicent Peris' procedure for acquisition and processing of Bias, Dark, and Flat frames. That means that when running the Calibration Module, the Calibrate Check Box is unchecked for the Master Bias (no calibration required, because I don't have an overscan area on my camera), checked for Master Dark (because the darks were not calibrated for Bias during Master Dark Integration) and unchecked for Master Flat (because flats were calibrated during Master Flat Integration). I also have Optimize checked for the Master Dark. If you have any questions about generating good calibration frames, I strongly suggest that you read Vicent's tutorial.
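    For readers who like to see the arithmetic, here's a rough sketch of what this configuration amounts to. This is my own simplification with my own variable names, not PI's actual ImageCalibration code:
    ```python
    import numpy as np

    def calibrate_light(light, master_bias, master_dark, master_flat, k=1.0):
        """Simplified calibration matching the settings above: the master
        dark still contains the bias signal (it was not bias-calibrated
        during integration), and the master flat was already calibrated
        when it was built. k is the dark scaling factor that the Optimize
        option estimates automatically; k=1.0 means no optimization."""
        dark_current = master_dark - master_bias          # thermal signal only
        calibrated = light - master_bias - k * dark_current
        return calibrated / (master_flat / np.mean(master_flat))  # unit-mean flat
    ```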
  2. For all PI Processes, I use 32-bit floating point files. Unless you are working with images that have an extreme dynamic range, or are building an HDR image, 32-bit floating point is plenty.
  3. After Calibration, I use StarAlignment to register all of the frames. I accept all of the defaults for this Process, and have had very good luck, even with images that require a 180 degree rotation due to a meridian flip during acquisition. I don't use the first image of the session as the Reference Image for registration - I pick one from later in the evening, when the sky is really dark and guiding is well established. While StarAlignment is running, it's sometimes helpful to review the pixel fit, image scale, and rotation values - they help pick out images with potential problems for later analysis.
  4. Before Integration, I load the calibrated and aligned images into the very useful Animation Script, located under the Script/Analysis menu. Once they're all loaded, I expand the image view size, and step through the images looking for gross errors. These might include bad tracking or guiding (rare with the AP900), extreme aircraft trails (the midnight flight from SFO to MFR has appeared in many of my frames), clouds, or anything else that might render an image unfit for later integration. However, I have found that with the powerful weighting and pixel rejection algorithms of PI, I tend to keep more images for integration than I did in the past.
  5. Next is Image Integration, one of the most critical steps of the entire workflow. This PI Process has a lot of options and parameters, and it's one where I spend a little time running with different options and evaluating the outcomes. Hint - PI caches all of the input images (up to a point), so rerunning Integration with different parameters is very fast, as the images don't need to be reloaded from disk.
    I recommend taking some time to review Jordi Gallego's presentation Image Integration Techniques - it really opened my eyes to techniques for reducing the integrated image's noise as much as possible. He shows a number of trials in PI with different Pixel Rejection options and numbers of images, and analyzes the outcome in each case.
    I use an Average combination, with Additive Normalization, and Noise Evaluation for the weights. I normally use the Winsorized Sigma Clipping pixel rejection algorithm, with Scale + zero offset Normalization. Setting the High and Low Sigma Clip parameters sometimes requires a bit of experimentation - take a look at Jordi's presentation for a good explanation of how to analyze the results, and see the sketch below for the basic idea behind Winsorization. One of the things that I like best about PI is that it always tells you a lot about what it's doing - be sure to take advantage of all of the data in the Processing Console as you work through a project.
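    As a rough illustration of what Winsorized rejection does, here's a toy version with my own parameter names - it is not PI's implementation, but it shows the idea: while the per-pixel statistics are estimated, outliers are clamped ("Winsorized") to the clipping bounds rather than discarded, which gives more robust mean and sigma values, and the final rejection is then made against those estimates.
    ```python
    import numpy as np

    def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iterations=5):
        """stack: registered frames along axis 0. Returns the average of
        each pixel's surviving values after Winsorized sigma clipping."""
        data = stack.astype(np.float64).copy()
        for _ in range(iterations):
            m, s = data.mean(axis=0), data.std(axis=0)
            # Winsorize: clamp outliers to the clipping bounds, don't discard
            data = np.clip(data, m - sigma_low * s, m + sigma_high * s)
        m, s = data.mean(axis=0), data.std(axis=0)
        reject = (stack < m - sigma_low * s) | (stack > m + sigma_high * s)
        return np.nanmean(np.where(reject, np.nan, stack), axis=0)
    ```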
    After Integration is complete, I always look at the High and Low rejection images, just to see if there's anything strange going on. Of course, I also take a critical look at the integrated image to see if I'm on the right track. I use the Screen Transfer Function to stretch all of these images for a first look. Use the "A" button for an automated stretch that usually gives you a pretty good first cut. I just keep the Screen Transfer Function open at all times on my PI desktop, so that I can easily use it whenever I want.
  6. OK, now I have my baseline image, the one that is ready for all of the various stretching, enhancement, and noise reduction processes. The first step I take is to use DynamicCrop to crop the image down to the area with complete data. Especially if I've been imaging over several nights, there are image alignment issues around the borders, where there isn't complete data. If you don't crop this stuff off, the lack of data around the edges will interfere with some of the processes to come.
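    Here's a toy sketch of the idea behind the crop, in case you want an automated starting point: treat zero-valued pixels as missing registration data, and trim border rows and columns that fall below a coverage threshold. DynamicCrop itself is interactive; the threshold test here is my own assumption.
    ```python
    import numpy as np

    def crop_incomplete_edges(img, coverage=0.95):
        """Trim border rows/columns where less than `coverage` of the
        pixels carry real (non-zero) data."""
        valid = img > 0
        rows = valid.mean(axis=1) >= coverage
        cols = valid.mean(axis=0) >= coverage
        r0, r1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
        c0, c1 = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
        return img[r0:r1, c0:c1]
    ```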
  7. After cropping, I apply the Histogram Transform process to stretch the image. Note that some of the following image enhancement processes (such as AWT) can be applied to both linear and non-linear images. In the case of Hα images, especially where the signal levels are very low, I prefer to perform the Histogram Transform first. I open a Real-Time Preview window inside the HT process, and start moving the Shadow and Midtone sliders to stretch the image. I make sure that I minimize the clipping at the black point, and I never clip at the highlight end of the scale. I use an aggressive horizontal zoom in the top window so that I can make very small adjustments to the sliders. Once I have a stretch that I'm happy with, I apply it to the image, and save it as my baseline non-linear image.
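    For the curious, the heart of this stretch is the midtones transfer function (MTF). Here's a minimal sketch with the three HT sliders as parameters (the variable names are mine):
    ```python
    import numpy as np

    def histogram_transform(x, shadows=0.0, highlights=1.0, midtones=0.5):
        """Rescale [shadows, highlights] to [0, 1], then apply the MTF.
        midtones < 0.5 brightens the image; 0.5 is the identity."""
        y = np.clip((x - shadows) / (highlights - shadows), 0.0, 1.0)
        m = midtones
        return ((m - 1.0) * y) / ((2.0 * m - 1.0) * y - m)
    ```
    Note how pixels below the shadows point get clipped to zero - that's exactly the black point clipping I try to minimize.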
  8. Next I construct several masks for use in the enhancement and noise reduction steps. The goal is to reduce noise in the low SNR areas (generally the background) and to enhance structure definition in the high SNR areas, without messing up the stars. This requires 2 masks - a large scale structure mask, and a star mask, which will be used in combination with the large scale mask. Working with masks in PI is very easy, as long as you remember the old Photoshop saying: "Black conceals, white reveals". In other words, the dark parts of the mask will protect the underlying image from the effects of image processing operations, and the light parts of the mask will allow the processing to take effect, proportional to the lightness or darkness of the pixels in the mask.
    I generate the star mask with the StarMask process and immediately save it. I then clone my baseline image (just drag the Image ID Tag on the left margin of the image onto the desktop), and use the AWT process to turn off the fine scale image layers, usually scales 1-4 at a minimum. This leaves a nice feathered image containing just the large scale structures. However, it's usually too weak to be an effective mask, so I use the Histogram Transform process to increase its contrast - don't be afraid to clip here. Once the mask effectively contains all the brighter large scale structures, I save it as well.
    The next step is to combine the Star Mask and the Large Scale mask into a single mask that will protect both the stars and the background during structure enhancement. The Large Scale mask is already what we want - the black areas protect the background. However, we need to invert the Star Mask so that the stars become black circles - this is easily done with Image/Invert. Then I combine the 2 masks using the "min" function in Pixel Math, creating a new, third mask that takes the minimum of the 2 inputs (see the sketch below). This, finally, is the mask that will protect all the stars and the background during enhancement. Although this process sounds a bit complicated, in reality it only takes a couple of minutes once you become familiar with the tools. Note that I saved the Large Scale mask earlier - I'll also use it in a later step.
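    In Pixel Math terms, the combination is just min() applied to the inverted star mask and the large scale mask. Here's a NumPy equivalent, with my own identifiers:
    ```python
    import numpy as np

    def combine_masks(star_mask, large_scale_mask):
        """Invert the star mask so stars become black (protected), then
        take the per-pixel minimum: a pixel is only open to processing
        where BOTH masks allow it - bright structure that isn't a star."""
        star_protect = 1.0 - star_mask        # the Image/Invert step
        return np.minimum(star_protect, large_scale_mask)
    ```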
    An alternative to all this work is to simply clone the original image and apply a Histogram Transform to it, to create a mask in a single step. However, I find that this doesn't protect the stars as well as the 2-step process.
  9. I'm now ready to use the AWT process to enhance the large scale structures in the image. Inside AWT, with the combined mask applied to the image, I apply a bit of Bias to the first 2 or 3 detail layers, using the B3 Spline Scaling Function. Note that I am not applying any Noise Reduction at this stage. I find that Bias amounts of 0.20 or smaller are all that are needed at the pixel scales I'm normally working at. Larger amounts can introduce some ugly over-enhancements - be careful! A good approach is to create several representative Preview Images and apply the AWT enhancement to them, using the Undo/Redo Preview arrow toggle on the toolbar to flip back and forth between the Preview with and without the enhancement. Once I find a level that I like, I apply it to the entire image.
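    To make the layer idea concrete, here's a toy à trous decomposition with the B3 spline kernel, plus a small bias applied to the finer scales. PI's AWT offers far more control; the bias arithmetic here is my own simplification:
    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3 spline kernel

    def atrous_layers(img, n_layers=4):
        """Return detail layers 1..n_layers plus the smooth residual."""
        smooth, layers = img.astype(float), []
        for j in range(n_layers):
            k = np.zeros(4 * 2**j + 1)        # kernel 'with holes' at scale 2**j
            k[::2**j] = B3
            blurred = convolve1d(convolve1d(smooth, k, axis=0), k, axis=1)
            layers.append(smooth - blurred)   # detail at scale 2**j
            smooth = blurred
        return layers, smooth

    def apply_bias(img, bias=(0.2, 0.2, 0.0, 0.0)):
        """Boost each detail layer by its bias and reassemble the image."""
        layers, residual = atrous_layers(img, n_layers=len(bias))
        return residual + sum((1.0 + b) * d for b, d in zip(bias, layers))
    ```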
  10. Next is an application of the ACDNR process to reduce noise in the background. I remove the mask from the AWT step, and apply the inverted Large Scale Structure Mask that was created and saved in Step 8. This mask, inverted, protects the high SNR areas, and allows the Noise Reduction to be applied to the low SNR image background areas. I have also used AWT for noise reduction, but find that ACDNR seems to work better for Hα images. As with AWT, I apply ACDNR cautiously to preview areas, in small amounts, until I find a level that reduces the background noise without damaging the image, and then apply ACDNR to the entire image.
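    Conceptually, any masked process in PI behaves like a per-pixel blend between the original image and the processed result. A toy sketch, with a plain Gaussian blur standing in for ACDNR (which is far more edge-aware than this):
    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def masked_noise_reduction(img, large_scale_mask, sigma=1.5):
        """Blend a smoothed image back through the inverted mask: white
        (background) gets full noise reduction, black (structure) none."""
        inverted = 1.0 - large_scale_mask     # the inversion of Step 8's mask
        denoised = gaussian_filter(img, sigma)
        return img * (1.0 - inverted) + denoised * inverted
    ```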
  11. Depending on the original data and the subject, I might consider the image finished at this point. However, I frequently apply a small Curves adjustment to pull out the bright areas a bit more, or a Local Histogram Enhancement. In other words, season to taste! In both cases, use a mask to protect the background - you don't want to bring any noise into the low SNR areas.
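    As with the noise reduction step, a masked Curves or LHE adjustment is just a blend through the mask. A minimal sketch, with a simple gamma curve standing in for whatever Curves shape you dial in:
    ```python
    import numpy as np

    def masked_brighten(img, mask, gamma=0.8):
        """Apply a gentle brightening curve only where the mask is white,
        so the noisy low SNR background (black in the mask) is untouched."""
        curved = np.power(np.clip(img, 0.0, 1.0), gamma)   # gamma < 1 lifts midtones
        return img * (1.0 - mask) + curved * mask
    ```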