PIXINSIGHT: Deconvolve with a "moving" radial PSF function? [Deep Sky] Processing techniques · Jérémie

JO_FR_94
Hi All,

Maybe it has been asked many times before: is there a way to "script" this, or maybe an already existing function in PI, to deconvolve a picture using a "moving" radial PSF function?

Let me explain: we can use PSF functions with an aspect ratio different from 1, for example to reduce the effect of motion blur. But this PSF is constant and does not depend on where you are in the image.

Would it be possible to deconvolve using a kernel that is a function of space (of the coordinates in the image)?

My guess is "NO", because it would probably be computationally expensive (and maybe not even mathematically possible?). But if it is possible, it would be helpful for reducing the coma introduced, for example, by adding a bad-quality Barlow to a flat-field corrected refractor (guess what: that's why I am looking for this!). In this case, it seems to me that the kernel / PSF is a radial function of space: the aspect ratio grows with the distance to the center of the frame, and the orientation follows the polar angle of the point in the image...
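To make the geometry concrete, here is a rough Python sketch (not PixInsight code, and all numbers are made up) of the kind of spatially varying PSF I have in mind: an elongated Gaussian whose long axis points away from an assumed optical center and whose elongation grows with radius.

```python
# Illustrative radially varying PSF model: elongation grows with distance from the
# optical center, and the long axis points along the radial direction.
import numpy as np

def radial_psf(x, y, cx, cy, size=15, sigma=1.5, elong_per_px=2e-3):
    """Return a normalised elongated-Gaussian PSF for image position (x, y)."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)                     # orientation follows the polar angle
    sig_major = sigma * (1.0 + elong_per_px * r)   # aspect ratio grows with radius
    sig_minor = sigma
    half = size // 2
    u, v = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    ur = u * np.cos(theta) + v * np.sin(theta)     # rotate grid so major axis is radial
    vr = -u * np.sin(theta) + v * np.cos(theta)
    psf = np.exp(-0.5 * ((ur / sig_major) ** 2 + (vr / sig_minor) ** 2))
    return psf / psf.sum()

# e.g. the PSF near the corner of a 4000x3000 frame:
corner_psf = radial_psf(3900, 2900, cx=2000, cy=1500)
```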

Thanks for any insight!
jhayes_tucson
In general, both convolution and deconvolution are mathematical operations that assume that a) the process is linear, and b) the process is shift-invariant. That second assumption implies that the kernel is constant over the entire field. Having said that, we know that in the real world, optical systems are not shift-invariant. Field aberrations like 3rd order coma (which varies linearly with field angle) and 3rd order astigmatism (which varies with the square of the field angle) change across the field. To handle those variations, it seems possible to construct a deconvolution process that models the field as a mosaic of isoplanatic patches, each with its own unique PSF (which forms the local kernel). Linearity and shift invariance would be maintained within each patch, and the patch size (and shape) would be determined by the size of the region where the PSF "doesn't change by very much." I'm not aware of any software that enables this process, but it certainly should be possible. Coming up with a completely new deconvolution algorithm to handle a smoothly varying kernel over the field is a completely different mathematical exercise, and I believe that approach would be considerably more difficult.
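For illustration only, a rough Python sketch of what that tiling idea could look like (this is not an existing PI feature; psf_for_tile is a placeholder for whatever per-patch PSF measurement you would supply, and scikit-image's Richardson-Lucy stands in for the per-patch deconvolution):

```python
# Sketch of the "mosaic of isoplanatic patches" approach: deconvolve overlapping
# tiles with their own local PSF, then feather them back together with a smooth
# window so the seams between patches are hidden.
import numpy as np
from skimage.restoration import richardson_lucy

def deconvolve_tiled(img, psf_for_tile, tile=256, overlap=64, iterations=30):
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    win1d = np.hanning(tile + 2 * overlap)
    win = np.outer(win1d, win1d) + 1e-6                # smooth blending window
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            ya, yb = max(0, y0 - overlap), min(h, y0 + tile + overlap)
            xa, xb = max(0, x0 - overlap), min(w, x0 + tile + overlap)
            patch = img[ya:yb, xa:xb]
            psf = psf_for_tile(y0 + tile // 2, x0 + tile // 2)  # local PSF (placeholder)
            dec = richardson_lucy(patch, psf, iterations, clip=False)
            wpatch = win[: yb - ya, : xb - xa]
            out[ya:yb, xa:xb] += dec * wpatch
            weight[ya:yb, xa:xb] += wpatch
    return out / weight
```

Shrinking the tiles approximates the smoothly varying case, at the cost of more PSF measurements and more compute.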

John
JO_FR_94
Hi John,
Your response is very interesting.
To be honest I haven't practiced this sort of maths for a long time (I studied engineering... a long time ago), so I can't really challenge the basic hypotheses of convolution (it is just an integral of the product of two functions, with a change of variable in one of them to make it slide along the integration range :-) ).
At first I think I could do sort of what you said: calculate several deconvolutions of the same image with various kernels, and combine the results with a proper mask function. It is probably mathematically incorrect, as Deconv(image, kernel 1 + kernel 2) is probably different from Deconv(image, kernel 1) + Deconv(image, kernel 2) - I haven't checked :-). On top of that, it may work for a kernel that varies in one dimension (you would interpolate between two "useful" extremes), but for a kernel that varies in two dimensions you need four extremes, which complicates the combination a bit more.
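Something like this rough sketch is what I mean by combining several deconvolutions with a mask (the inverse-distance weights and the use of scikit-image's Richardson-Lucy are just illustrative choices):

```python
# Sketch of the "several deconvolutions + mask" idea: deconvolve the whole frame
# once per representative PSF, then blend the results with weights that fall off
# with distance from each PSF's reference point.
import numpy as np
from skimage.restoration import richardson_lucy

def blend_deconvolutions(img, psfs_at_points, iterations=30):
    """psfs_at_points: list of ((y, x), psf) pairs sampled across the field."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    results, weights = [], []
    for (py, px), psf in psfs_at_points:
        results.append(richardson_lucy(img, psf, iterations, clip=False))
        d2 = (yy - py) ** 2 + (xx - px) ** 2
        weights.append(1.0 / (d2 + 1.0))        # simple inverse-distance-squared weights
    weights = np.array(weights, dtype=float)
    weights /= weights.sum(axis=0)              # normalise so the weights sum to 1
    return np.sum(np.array(results) * weights, axis=0)
```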
I will make some tests, just to see...
Out of curiosity I have had a look at the Richardson-Lucy algorithm: at each iteration you use the kernel to compute a new value for each pixel, from the surrounding pixels of the previous iteration's result weighted by the kernel. I guess it would be possible to call a function that gives you a specific kernel depending on the pixel you are updating? This function would be calibrated prior to applying the deconvolution, just like when you measure your own PSF from your image before using the deconvolution function of your software. It would use the optical center of the image and a few extreme PSFs in a few directions, and the kernels would be interpolated in between...
Not sure, of course, that it wouldn't mess up the convergence of the algorithm toward the correct / maximum-likelihood solution, but I think it is worth a test: at least for me to understand why it is wrong, if it is :-)
I was wondering what my next little Python project would be, now I know!
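Something like this naive (and very slow) sketch is what I have in mind; psf_at(y, x) would be the calibrated function returning the local kernel, and whether the crude "flipped kernel" correction step still converges nicely is exactly what the test should show:

```python
# Naive Richardson-Lucy where the kernel is fetched per pixel from psf_at(y, x).
# Purely illustrative: the per-pixel Python loops are far too slow for real images,
# and the flipped-kernel correction is only an approximation of the true adjoint
# of a shift-variant blur.
import numpy as np

def varying_blur(img, psf_at, ksize):
    """Blur where each output pixel uses its own local kernel of shape (ksize, ksize)."""
    half = ksize // 2
    padded = np.pad(img, half, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = padded[y:y + ksize, x:x + ksize]
            out[y, x] = np.sum(patch * psf_at(y, x))
    return out

def richardson_lucy_varying(observed, psf_at, ksize=15, iterations=20):
    observed = observed.astype(float)
    estimate = np.full_like(observed, observed.mean())
    flipped = lambda y, x: psf_at(y, x)[::-1, ::-1]   # crude stand-in for the adjoint
    for _ in range(iterations):
        blurred = varying_blur(estimate, psf_at, ksize) + 1e-12
        estimate = estimate * varying_blur(observed / blurred, flipped, ksize)
    return estimate
```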
barrabclaw
In radiation therapy, one method to calculate the dose delivered to the patient's tissue by a medical linear accelerator photon beam is to perform a convolution of a dose deposition kernel with the total energy released in matter by the photon beam. The dose deposition kernel essentially represents how electrons liberated by the primary photon interaction deposit their energy in the patient's tissue. However, since people are not homogeneous chunks of water, this will be inaccurate, because a shift-invariant dose deposition kernel will not account for tissue inhomogeneities. Electrons interact with bone differently than they interact with muscle. They won't travel as far in bone, so if you use a shift-invariant dose deposition kernel, your calculation will show the dose spread out over a larger volume than in reality.

So, what you can do instead, is use a kernel that varies based on the voxels surrounding the initial primary photon interaction. Electrons won't travel as far in bone, so the kernel would shrink. Electrons travel further in lung, so the kernel is expanded.
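Just to make the analogy concrete, the idea in toy Python terms (all names and numbers invented) is simply that the kernel applied at each location is rescaled by the local density:

```python
# Toy illustration of density-dependent kernel scaling (radiotherapy analogy):
# the kernel shrinks where electrons travel less far (bone) and spreads out
# where they travel further (lung).
import numpy as np
from scipy.ndimage import zoom

def local_kernel(base_kernel, density, water_density=1.0):
    scale = water_density / density     # > 1 in lung (spread out), < 1 in bone (shrink)
    k = zoom(base_kernel, scale, order=1)
    return k / k.sum()
```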

This is going in the opposite direction from what you want to do, but the inverse would be possible as well.

More related to astrophotography, I found this paper. It seems to touch on exactly what you are interested in.

Fast Approximations of Shift-Variant Blur (archives-ouvertes.fr)
JO_FR_94
Thanks @barrabclaw, very interesting paper, I will have a deeper look at it later. I don't know whether they talk about modeling a PSF function that is practical to compute, as they say, or about the algorithm that would use such a function.
By the way, I have seen that they mention what @John Hayes suggested: splitting the image into areas where the PSF is considered constant, applying deconvolution with a proper PSF per region, and compositing the results by interpolating between regions to avoid artifacts at the boundaries. This is what I clumsily tried to explain when talking about combining them with a proper mask function, and it seems this method has been explored and is sometimes used. I think it is worth a try on my images, but the polar / radial configuration of my PSF does not make it practical, I have to think about it.
Thanks again, definitely worth diving into!
barrabclaw
@Jérémie Ochin I briefly scanned the paper, but mostly just wanted to point out that your idea is not crazy and it could be done. Whether or not anyone has done something like that in a commercial piece of software is a different story, though.

My point with the radiation therapy thing was just that what you are inquiring about is done in other fields - shift variant kernels are used constantly to help accurately calculate patient dose.

@John Hayes's method is also good, especially as you shrink each panel. It is less computationally expensive than using a shift-variant kernel, but more labor-intensive for the human.
JO_FR_94
@barrabclaw I plan to run some tests with both methods, but in any case the first step is to model the variation of the PSF across the field of view.
I will use PixInsight to measure and save those PSFs as images, and see how I can distort the center one using regular OpenCV rotation and skew matrices to obtain the other ones quickly (and therefore have a function that calculates the PSF on the fly given the center PSF, the optical-center coordinates and the position at which we evaluate the function). I also need to find an approximation law for that purpose: I guess that for coma something proportional to the square of the distance from the optical center will be required.
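A first sketch of that "warp the center PSF" function could look like this (the linear growth law and all numbers are placeholders to be calibrated against the measured PSFs):

```python
# Sketch: derive an off-axis PSF by stretching the on-axis PSF along one axis and
# rotating it so the elongation points radially away from the optical center.
import cv2
import numpy as np

def warp_center_psf(center_psf, x, y, cx, cy, stretch_per_px=2e-3):
    h, w = center_psf.shape
    r = np.hypot(x - cx, y - cy)
    angle_deg = np.degrees(np.arctan2(y - cy, x - cx))
    # stretch along the x axis, then rotate so the long axis points radially
    stretched = cv2.resize(center_psf, (int(w * (1 + stretch_per_px * r)), h),
                           interpolation=cv2.INTER_LINEAR)
    M = cv2.getRotationMatrix2D((stretched.shape[1] / 2, stretched.shape[0] / 2),
                                angle_deg, 1.0)
    warped = cv2.warpAffine(stretched, M, (stretched.shape[1], stretched.shape[0]))
    return warped / warped.sum()
```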

I was also wondering if the PSF is the correct approach for coma: when you deconvolve an image produced by a corrected / flat-field optical device, you mainly try to compensate for the dispersion of light in the lenses and for diffraction, I guess? Maybe a tiny bit of defocus? But the physical image at least matches the sensor plane, while with coma the "metric" is changed, as the image is formed on a curved surface but recorded / projected onto a flat plane. I am wondering if, prior to deconvolution, one should first try to compensate for that "metric" change? Like when you make a planar projection of a spherical map? I think that can be done very quickly by geometrically and non-linearly stretching the image toward the optical center, and resampling the result. And then stick to a classical PSF?
It may be completely stupid, as I am by no means an optics expert nor a mathematician, but I would love to read your comments.
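For what it's worth, the resampling part of that idea is easy to prototype with cv2.remap; the radial correction law below is purely invented, and whether such a geometric remap is even the right model for coma is exactly my open question:

```python
# Sketch of "stretch the image toward the optical center and resample".
# r_src = r * (1 + k * r**2) is an invented correction law; img should be a
# float32 or uint8 numpy array for cv2.remap.
import cv2
import numpy as np

def radial_resample(img, cx, cy, k=1e-8):
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xx - cx, yy - cy
    scale = 1.0 + k * (dx ** 2 + dy ** 2)   # how far out to sample for each output pixel
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```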
jhayes_tucson
Let me add something here. First, convolution is a pretty simple mathematical operation that could very easily be adapted to a spatially dependent kernel. Using a spatially dependent kernel would, however, prevent performing the operation in frequency space, which is the simple (and fast) way to do the computation on large data sets like images. The real problem with a spatially varying kernel occurs with deconvolution, which conceptually comes across as simply a way to "reverse" the convolution operator, but which in general is an ill-posed problem--even with a constant kernel. Deconvolution is particularly unstable in the presence of even small amounts of noise. That's why most successful deconvolution algorithms are iterative. The idea is to make a small step toward what might be a solution and to evaluate an error function that is fed back into the next guess at the solution. The algorithm then loops until the error becomes acceptably small. Deconvolution with a kernel that varies continuously over space might be an especially challenging problem in general.
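A quick way to see why the constant-kernel case is so convenient: with a fixed PSF the direct and frequency-space computations give the same answer, and the FFT route is much faster on large images; that shortcut is exactly what a spatially varying kernel takes away.

```python
# With a constant kernel, direct convolution and FFT-based convolution agree.
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
img = rng.random((128, 128))
psf = np.outer(np.hanning(9), np.hanning(9))
psf /= psf.sum()

direct = convolve2d(img, psf, mode="same")
via_fft = fftconvolve(img, psf, mode="same")
print(np.allclose(direct, via_fft))   # True: same result, the FFT route is just faster
```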

My suggestion to approach the problem using a "tiling" process could be totally automated, so that it generates the size and shape of each tile as well as the PSF for each region. This strikes me as a problem well suited to an AI solution--both for selecting the tiling regions and kernels and for finding the most likely deconvolved result. I bet that a literature search would show that I'm not the first person to make that suggestion...and that the approach has already been implemented!

John
Gunshy61
Just to join the conversation, your suggestion of using a different PSF for different parts of the image is a very good one, and it is completely doable. It just takes the time and effort to work out how to determine and interpolate the PSF over the various areas of the image, and then to actually code it. Of course we would then have to measure PSFs over different parts of the picture, which would be more time-consuming.

For my fellow geeks, deconvolution is simply trying to determine what the signal really is, given the response that we record in our image, assuming that the image is a linear sum of the responses to all of the individual signals over the field. In other words, if we see a response (brightness in a pixel), the only assumption is that all of the various potential contributors (i.e. stars, nebula) in the signal contributed to the overall response in a linear way. For example, the brightness of a given pixel may have a 4% contribution from the sky, a 40% contribution from a star to the east, and a 56% contribution from a star to the west. Now I just made these numbers up, but the key to deconvolution is determining what these numbers are. Again, the only assumption is that the contributions of every signal (every star) add up (linearly) to the total response that is seen in the image.

To perform deconvolution, we need to know what the response would be to a single, known signal. The details of how the transformation (convolution) occurs are not important. Once we know that, we can "deconvolve" what the signal needs to be to give the response in our image. In other words, what combination of "stars" or signals do we need that, when turned into a response by our atmosphere, telescope and camera and all added together, yields the result we see in the image.

In engineering this is often done via a calibration: put a known, simple, single signal through whatever thing it is you are investigating and see what the response is like. For most engineering examples, this "calibration" involves putting a simple signal (point source of light, pressure pulse, Dirac delta, etc.) through a process and seeing what the response looks like.

In astronomy, we are fortunate to already have such calibrations in our image. The signal is a star, which is a single point source of light (close enough to having zero diameter, yet emitting photons) and which should appear as a single pixel of a given brightness in our image if our optical system, camera, atmosphere, and whatever else were perfect. The response we actually get is a fuzzy circle that is brightest in the centre and tails off with distance from it. So we know the signal - a single pixel of given brightness - and the response - a fuzzy circle we call the PSF. That is, the machine (our telescope and camera) shows us the point source of light as a fuzzy circle (the PSF). Note that our PSFs also have noise added, but we can get rid of most of the noise by averaging the calibration - using multiple responses, i.e. averaging the PSFs from multiple stars. Deconvolution doesn't care why the response is fuzzy or oval or wide, only how fuzzy, oval, or wide it is.
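That averaging step is easy to sketch in Python: cut out a few isolated stars (positions assumed to come from something like PixInsight's DynamicPSF or another star detector), normalise each cutout, and average them into a low-noise empirical PSF.

```python
# Sketch: build an empirical PSF by averaging normalised cutouts of isolated stars.
import numpy as np

def empirical_psf(img, star_xy, size=21):
    half = size // 2
    stamps = []
    for x, y in star_xy:                  # integer pixel coordinates of isolated stars
        stamp = img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        if stamp.shape != (size, size):
            continue                      # skip stars too close to the border
        stamp -= np.median(stamp)         # crude background subtraction
        if stamp.sum() > 0:
            stamps.append(stamp / stamp.sum())
    psf = np.mean(stamps, axis=0)         # averaging suppresses the per-star noise
    return psf / psf.sum()
```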

Finally, knowing that any brightness (other than noise) in an image must be the sum of all the contributions from all of the stars in the image, we can determine what the original signal must look like to generate the image (response) from the camera. In a perfect world, the deconvolved image would only show stars that occupy a single pixel, and the image would be a perfect recording of the signal. Instead, for a host of reasons, what we get in practice is an image with smaller stars, sharper nebulae, etc., that is more representative of the actual signal - still not perfect, but closer to the signal than the original image. The "host of reasons" includes residual noise, camera resolution, numerical dispersion, non-linearity of the camera response, light diffraction (non-linear), etc.

Using different PSFs for different parts of the picture is no different than using a different PSF for a different picture, a different filter, or a different camera - as long as the signal-response relationship, or calibration distance, or PSF diameter, is small compared to the overall scale of the picture. Does a point source of light give a circular image or an oval image, and how much spread (standard deviation) results? Any nebulosity in the image is just the linear sum of all of the "point sources" that make up the nebula.

The beauty of deconvolution is that it doesn't need to know anything about the nature of what happens to the signal to produce the response, and it doesn't need to know anything about what created the signal - just what the transformation from signal to response is, and that is the PSF.

Hope this entertained you a little.

Dave
Annehouw
As mentioned before, this could be done ("quite easily"), but the devil may well be in the details.

The suggestion of dissecting the image into fixed tiles, calculating the PSF for each tile, doing a classic deconvolution on each tile and putting the deconvolved tiles back together is a simple one that might be doable by writing a script in PI where x sub-images are generated, an automatic PSF fit (an existing script already does this) is performed on each, etc. What would still be needed, though, is a feedback mechanism for choosing the right parameters. I believe these ideas are already in this thread. There are papers on anisotropic deconvolution, but that might be a bit more than writing a script around existing processes.

What would make this feasible is very fast deconvolution, and that is where GPUs come into play. StarTools has an implementation of deconvolution that is GPU-assisted. Since there is also an identical version without GPU support, it is possible to get an idea of the speed increase, and it is impressive (a bit dependent on your current CPU and GPU, of course).
From what I have read, PI is working on GPU processing in earnest right now, and convolution is mentioned as one of the first processes to get this support.