PixInsight Deconvolution Fun

Like many of you, I'm somewhat familiar with the principles of Deconvolution, and have seen some very powerful applications of the technique in other people's images. The PixInsight website has several good Deconvolution resources, including this Forum contribution from Juan Conejero, and this very detailed Processing Example from Juan on the PI website. However, in spite of these examples, I never seemed to have very good success with my own application of the tool. Living in the Pacific Northwest, I get lots of time in the Winter to play with data and try new techniques, because clear nights are few and far between. Also, the addition of Dynamic PSF (Point Spread Function) to PI adds another tool that should make Deconvolution easier and more accurate, so I decided to invest some time and see what I could learn. The following are my results.

What is Deconvolution?

The following description is extracted from Juan's Processing Example linked above. Why fool with the words of the Master?

Convolution is a mathematical operation that causes a function to adopt the shape of another function. For example, if we convolve an image with a Gaussian function, all brightness variations in the image will be smeared according to the shape of a Gaussian distribution, which is very smooth. This has a blurring effect, hence the name Gaussian blur associated with this kind of convolution filter. Conversely, if we convolve the image with a function that has a hard transition from negative to positive values, all brightness variations will be intensified locally, which causes an edge enhancement effect, also known as a sharpening filter.

So our observed image is just the result of a convolution of the real image with a point spread function (PSF). The PSF represents all instrumental imperfections and any fact or accident that might have affected the observing process, causing the observed image to differ from the real image. For ground-based astronomical observations, perturbations due to the Earth's atmosphere (seeing) make the greatest contribution to the PSF. Under ideal observing conditions (no atmosphere, a perfect telescope, and a perfect tracking system), the observed image would be diffraction-limited, which means that the PSF would be just the Airy disc corresponding to the instrument's aperture.

Deconvolution works by undoing the smearing effect caused to the original image by a previous convolution with the PSF. In principle, if we happened to know the exact PSF, and there were no noise at all in the observed image, we could apply a direct deconvolution to obtain something very close to the actual image.

So, in its simplest terms, the image is smeared by mostly atmospheric effects, and if we can model the smearing, we can undo its effects. Of course, it isn't that simple, because there is noise in the image and uncertainty in the PSF, but we can approximate the process well enough to improve the image. I'll use some moderately poor data that I recently acquired to illustrate my attempts to apply Deconvolution.
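
To see the idea in miniature before tackling real data, here's a quick Python sketch (numpy and scipy, nothing to do with PixInsight itself) that smears a synthetic star field with a Gaussian PSF and then attempts a direct deconvolution by dividing in the Fourier domain. Every name and parameter here is purely illustrative:

```python
# A toy convolution/deconvolution experiment; all values illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# "True" image: a dark field with three point sources (stars).
true = np.zeros((128, 128))
true[32, 32] = true[64, 96] = true[96, 48] = 1.0

# Convolution with a Gaussian PSF smears each point into a soft blob.
observed = gaussian_filter(true, sigma=2.5)
observed += rng.normal(0.0, 1e-3, observed.shape)  # a little noise

# The same PSF, rendered as a centered kernel image.
psf = np.zeros_like(true)
psf[64, 64] = 1.0
psf = gaussian_filter(psf, sigma=2.5)

# Direct deconvolution: divide by the PSF's Fourier transform.
eps = 1e-6  # crude stabilizer against division by ~zero
restored = np.real(np.fft.ifft2(
    np.fft.fft2(observed) / (np.fft.fft2(np.fft.ifftshift(psf)) + eps)))
```

With the noise line commented out, `restored` recovers the three points almost exactly; with noise, the division amplifies every frequency where the PSF's transform is nearly zero. That instability is why practical deconvolution, including PI's, is iterative and regularized rather than a direct division.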

The Image Data

Here are the details on the M1 test image:

- Planewave CDK 12.5"
- QSI 583wsg
- AP900
- 10 x 300-second luminance frames, binned 1x1
- Image scale = 0.44"/pixel (way over-sampled, I know)
- Guided and acquired with Maxim DL
- Poor seeing, strong jet stream, some worsening high haze, stars jumping around
- FWHM average from CCDInspector approximately 3.5"
- Background average from CCDInspector approximately 600
- Calibrated and integrated in PI

Here's a central crop of the integrated but unprocessed image with a Screen Transfer Function applied:


As you can see, the stars are soft and the core of M1 is blurred as well. The CDK is in pretty good collimation, and prior images were much better (sub-2" FWHM), so I put this down mostly to seeing conditions. You might ask why I even bothered - well, if I wait for perfect conditions here in the Winter, I won't get much done!

Determining the PSF

As mentioned above, the first step in Deconvolution is determining an estimate of the Point Spread Function (PSF) that has smeared the image. Luckily, PI now includes a Dynamic Point Spread Function process that makes this straightforward. You simply select a good sample of stars by clicking on them, rejecting any that don't fit an average shape and orientation. If you look closely at the image below you can see some small differences in orientation, but I kept all these sampled stars for computing my PSF. I used Auto for my PSF Model Function, and all of the fits used the Moffat model. Also, the average FWHM was around 3.5", which confirmed the CCDInspector estimate.
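
For the curious, the heart of what DynamicPSF does for each sampled star can be sketched with astropy's modeling package. This is only an illustration of the idea, not PI's actual code, and `cutout` is a hypothetical small 2-D array holding one star from the linear image:

```python
# Fit a Moffat profile to one star and report its FWHM in pixels.
import numpy as np
from astropy.modeling import models, fitting

def moffat_fwhm(cutout):
    ny, nx = cutout.shape
    y, x = np.mgrid[:ny, :nx]
    init = models.Moffat2D(amplitude=cutout.max(),
                           x_0=nx / 2, y_0=ny / 2,
                           gamma=2.0, alpha=3.0)  # rough starting guesses
    fit = fitting.LevMarLSQFitter()(init, x, y, cutout)
    # Standard Moffat FWHM; multiply by the image scale
    # (0.44"/pixel here) to get arcseconds.
    return 2 * fit.gamma.value * np.sqrt(2 ** (1 / fit.alpha.value) - 1)
```

Roughly speaking, DynamicPSF averages fits like this over all of the selected stars to build its model.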

An important note: the PSF sampling and Deconvolution should be done on a linear image (not stretched). That's because the PSF is only valid in the linear image - a histogram stretch is nonlinear, so the effective PSF changes with star brightness and can no longer be modelled and applied uniformly across the image.
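
A tiny numerical experiment shows why. The sketch below applies PixInsight's midtones transfer function (the nonlinear curve behind the histogram and screen stretches) to a faint and a bright copy of the same Gaussian star profile; the midtones balance value is arbitrary:

```python
# Why deconvolution wants linear data: a nonlinear stretch reshapes
# faint and bright stars differently.
import numpy as np

def mtf(x, m=0.25):
    # PixInsight's midtones transfer function: mtf(0)=0, mtf(m)=0.5, mtf(1)=1.
    return (m - 1) * x / ((2 * m - 1) * x - m)

r = np.linspace(-5, 5, 101)
profile = np.exp(-r ** 2 / 2)               # one PSF shape for all stars
faint, bright = 0.05 * profile, 0.9 * profile

ratio_linear = bright / faint               # constant 18: same shape
ratio_stretched = mtf(bright) / mtf(faint)  # varies with radius
```

In the linear data the two stars differ only by a scale factor, so a single PSF shape describes both; after the stretch the ratio varies across the profile, and no one PSF fits all the stars.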

Here's a look at the sampled stars in the cropped image:

Once you're happy with your PSF modelling, choose Export a Synthetic PSF. Here's my result:


Deconvolution

OK, we're finally ready to try the magic of deconvolution. Here are the default parameters in the Deconvolution tool, with my External PSF selected:

So, I'll just apply the defaults to get started, and 10 iterations is enough to see what's happening:

Well, I see some improvement, but there's a lot of ringing around the stars, which is not unexpected. So, I'll just turn on deringing, which should fix that:

Well, that doesn't look too good! It turns out that Deconvolution and deringing are very sensitive to the process parameters. This is the point where I have usually given up in the past, but this week I had time to try a lot of different parameter combinations, and I found one that worked very well.
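
As an aside, the iterative engine behind the tool is the Richardson-Lucy algorithm, and scikit-image ships a plain, unregularized implementation that makes a handy sandbox for building intuition about iterations and ringing. In this sketch `observed` stands for the linear luminance scaled to [0, 1], and the Moffat parameters are placeholders for whatever DynamicPSF reported:

```python
# Plain Richardson-Lucy as a stand-in for PI's Deconvolution tool.
import numpy as np
from skimage.restoration import richardson_lucy

def synthetic_moffat_psf(size=31, gamma=3.2, alpha=3.0):
    # Render a Moffat kernel like the exported synthetic PSF,
    # normalized to unit sum so flux is preserved.
    c = size // 2
    y, x = np.mgrid[:size, :size]
    r2 = (x - c) ** 2 + (y - c) ** 2
    psf = (1 + r2 / gamma ** 2) ** (-alpha)
    return psf / psf.sum()

# 10 iterations, as in the trial above. Plain RL has no deringing or
# regularization, so dark rings around bright stars are expected.
restored = richardson_lucy(observed, synthetic_moffat_psf(), 10)
```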

Here's the cropped M1 again after 10 iterations:

Here are the revised deringing settings that worked for this image:

First I created a simple star mask with the Star Mask process, and used it for Local Deringing Support. The main changes were to the Global dark and Global bright settings. For example, Global bright did not work well at 0.05 or 0.07, so it's a very sensitive setting. Only time will tell whether these settings are good starting points for my typical images. I left the Wavelet Regularization parameters the same as the defaults.
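
If you're wondering what such a mask contains, a rough stand-in for the StarMask process is easy to sketch: threshold the brightest structures above the background, grow the result a little, and soften the edges. None of these numbers are StarMask's actual parameters:

```python
# A crude star mask: bright cores, dilated and feathered.
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def simple_star_mask(img, k=5.0, grow=2, soften=1.5):
    bg = np.median(img)                     # background estimate
    noise = np.std(img)                     # crude noise estimate
    core = img > bg + k * noise             # pick out star cores
    core = binary_dilation(core, iterations=grow)
    return np.clip(gaussian_filter(core.astype(float), soften), 0.0, 1.0)
```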

M1 is looking a lot better - the stars are tighter and the nebula better defined. However, the large bloated stars have an unpleasant effect around them, probably due to their poor definition in the original image. It might be possible to shrink them with a Morphological Transformation, but for this example I defined another Star Mask that protected just the large stars from the Deconvolution. I also applied 50 iterations of the Regularized Richardson-Lucy algorithm for this "final" version.
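
In blend-and-mask terms, protecting the big stars amounts to something like the following sketch, where `big_star_mask` is a hypothetical array that is 1 over the stars being protected and 0 elsewhere, and `psf` is the synthetic Moffat kernel from the earlier sketch:

```python
# Deconvolve the whole frame, then keep the original pixels under
# the bloated stars.
from skimage.restoration import richardson_lucy

deconvolved = richardson_lucy(observed, psf, 50)   # 50 iterations
final = big_star_mask * observed + (1 - big_star_mask) * deconvolved
```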


Original:

Is the image perfect? Of course not, but it's a lot better than the original, especially considering the poor data that I started with. I still see some slight dark ringing around the stars in the nebula, and I'll continue to see if I can eliminate that. Once I can complete the RGB acquisition, I'll post a final color image, and then I'll really see whether or not the Deconvolution has been worthwhile.

Addendum

Steve Leshin has pointed out that it's a good idea to mask the background, so that the low signal-to-noise areas are protected from the deconvolution, which could otherwise increase the background noise. Fortunately, that's extremely simple in PixInsight:

  1. Drag the Image ID tab on the top left side of the image border to create a clone of the Luminance.
  2. Drag and drop the Image ID tab of the clone onto the left border of the image that you are deconvolving. You know you're in the right place when the left arrow changes to a little hollow square, indicating that it's OK to drop the image to be used as a mask.
  3. If you want to make sure the mask is OK, turn the mask visible using the Show Mask command, and you should see something like this:

Remember that "white reveals and red conceals": the white areas of the mask allow the processing to affect the image underneath, while red (protected) areas block it. Also, you can apply a Histogram Transformation to the mask to change its density if needed, and two or more masks can be added together using PixelMath. Remember also that masks are applied whether or not they are visible, so always check your mask status before proceeding to additional processing steps.
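
In numerical terms the mask conventions boil down to a weighted blend, and the Histogram Transformation and PixelMath tricks have simple analogues. A sketch, with all names illustrative:

```python
# Mask arithmetic: 1 (white) lets the processing through,
# 0 (protected) keeps the original pixel.
import numpy as np

def apply_masked(original, processed, mask):
    return mask * processed + (1 - mask) * original

def adjust_density(mask, gamma=0.7):
    # Stands in for a Histogram Transformation on the mask:
    # gamma < 1 lightens (less protection), gamma > 1 darkens.
    return np.clip(mask, 0.0, 1.0) ** gamma

def combine_masks(m1, m2):
    # Stands in for the PixelMath expression m1 + m2.
    return np.clip(m1 + m2, 0.0, 1.0)
```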

I hope you found this little tutorial useful. If you have comments or suggestions, please add them to the Forum discussion at PixInsight.