Tim Hawkes:
Thanks for your reply and the learning. I certainly didn't know most of that. One further question though. Surely part of the point of BlurXT deconvolution is that it can beat the seeing, provided that your sampling rate supports a higher resolution. In a short frame you get atmosphere-distorted star PSFs, but distortions that are at least reasonably consistent within any given region of the frame and over a short time. BlurXT (I presume) iteratively calculates the correct local compensatory correction and then applies it. So while it is clearly always better to start from a near-perfect image before applying deconvolution, in my experience at least BlurXT takes you a long way even when the star shapes are not perfect. In my M51 picture above, average eccentricity was up at maybe 0.55 prior to correction and deconvolution. Maybe consistency of blur is more important to the end product than lack of blur as a starting point to apply deconvolution to? Tim
Russ had a genius idea for BXT and he had to solve a lot of the details to make it work as well as it does. At a high level, the concept is actually pretty straightforward. I should add here that Russ hasn't given me any inside information, but here's my guess about how he might have implemented it. It is simply a neural network that is loaded with NxN patches of Hubble images that have been mathematically blurred (probably with just a Gaussian blur function). N might be a value that ranges from 32 to maybe 512, depending on how Russ chose to set it up. There might be anywhere from 300,000 to 1,000,000 samples in the training set, and the network is trained to match each blurred patch back to the original data that produced it. The training can include a lot of different parameters, including the amount of blurring, asymmetry in the blurring (smear), and noise levels.

When you sharpen your own image, the data is subdivided into NxN patches so that each patch in your data can be identified with the "most likely" fit to a solution. Once identified, the information in that patch is replaced with the original image data that created the best-fit blurred data. Note that this is not the same as simply inserting Hubble images directly into your image. The image patches are small enough that the Hubble data serves mostly as a way of supplying a nearly limitless source of "sharpened patterns" that can be used to show what your more blurry data might look like without the blurring mechanism.

I believe that the process for de-blurring the stars is similar, but it may be different enough that it runs as a separate process from the structure sharpening. That's something that Russ would have to address. I could imagine that the star-correction NN could be loaded with mathematically computed Moffat data that has been filtered through a range of aberrations as well as image translations.
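To make John's guess concrete, here is a toy sketch of the training-pair generation he describes: sample small NxN patches from a sharp reference image, Gaussian-blur each one, and keep (blurred, sharp) pairs for a network to learn the inverse mapping. This is illustrative only, not Russ's actual code; the patch size, blur width, and function names are all assumptions.

```python
import math
import random

N = 8        # patch size for this toy; John guesses BXT's real N is 32-512
SIGMA = 1.5  # blur width; real training would vary this (and smear, noise) per sample

def gaussian_kernel(radius, sigma):
    """1-D Gaussian weights, normalized to sum to 1 (the 2-D blur is separable)."""
    w = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def blur_patch(patch, sigma=SIGMA, radius=2):
    """Separable Gaussian blur of a 2-D list-of-lists patch (edges clamped)."""
    k = gaussian_kernel(radius, sigma)
    n = len(patch)
    # horizontal pass
    h = [[sum(k[r + radius] * row[min(max(x + r, 0), n - 1)]
              for r in range(-radius, radius + 1)) for x in range(n)]
         for row in patch]
    # vertical pass
    return [[sum(k[r + radius] * h[min(max(y + r, 0), n - 1)][x]
                 for r in range(-radius, radius + 1)) for x in range(n)]
            for y in range(n)]

def make_training_pairs(sharp_image, n_samples):
    """Sample random NxN patches and pair each with its blurred version."""
    H, W = len(sharp_image), len(sharp_image[0])
    pairs = []
    for _ in range(n_samples):
        y = random.randrange(H - N + 1)
        x = random.randrange(W - N + 1)
        sharp = [row[x:x + N] for row in sharp_image[y:y + N]]
        pairs.append((blur_patch(sharp), sharp))
    return pairs
```

The network then sees only the blurred half as input and the sharp half as the target, which is how it can later "undo" a blur it has never been told the kernel for.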
One of the tricky parts to all of this is to get everything normalized properly so that the results all fit together seamlessly.
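One hedged guess at what "normalized so the results fit together seamlessly" could mean in practice: process overlapping patches, weight each with a tapered window, and divide by the accumulated weights so seams between neighboring patches disappear. This is a generic overlap-blend technique, not known BXT internals.

```python
import math

def hann_window(n):
    """1-D raised-cosine weights that taper to zero at the patch edges."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def blend_patches(patches, positions, n, height, width):
    """Overlap-add NxN patches at (y, x) positions with a 2-D Hann taper,
    then normalize by the total weight at each pixel."""
    w1 = hann_window(n)
    acc = [[0.0] * width for _ in range(height)]
    wgt = [[0.0] * width for _ in range(height)]
    for patch, (py, px) in zip(patches, positions):
        for dy in range(n):
            for dx in range(n):
                w = w1[dy] * w1[dx]
                acc[py + dy][px + dx] += w * patch[dy][dx]
                wgt[py + dy][px + dx] += w
    # weighted average; pixels no patch covered stay at zero
    return [[acc[y][x] / wgt[y][x] if wgt[y][x] else 0.0 for x in range(width)]
            for y in range(height)]
```

Because each patch's influence fades to zero at its border, adjacent patches cross-fade into one another instead of butting together with visible edges.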
So nothing in BXT is like the traditional deconvolution process that requires a PSF kernel. BXT has the ability to solve for the seeing conditions, but Russ didn't choose to work that into the solution. Regardless, BXT doesn't have to know anything about the seeing to work well. It just uses mathematically blurred data, and since the process is applied patch-wise across the field, it can effectively correct for field aberrations (which vary with position) as well as for motion blur.
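A toy stand-in for the trained network makes the patch-wise "best-fit" step above concrete: treat the training set as a lookup table and, for each blurred patch of your image, return the sharp half of the (blurred, sharp) pair whose blurred half is the closest match. A real network generalizes rather than memorizing, but the input/output contract is the same. All names here are illustrative.

```python
def restore_patch(blurred_patch, training_pairs):
    """Return the 'sharp' patch whose blurred counterpart best fits the input.

    training_pairs is a list of (blurred, sharp) 2-D list-of-lists pairs.
    """
    def dist(a, b):
        # sum of squared pixel differences between two equal-sized patches
        return sum((av - bv) ** 2
                   for ra, rb in zip(a, b)
                   for av, bv in zip(ra, rb))
    best = min(training_pairs, key=lambda pair: dist(pair[0], blurred_patch))
    return best[1]
```

Note that no PSF kernel appears anywhere: the blur model is implicit in the training pairs, which is why this kind of approach can absorb position-dependent aberrations and motion smear simply by including them among the blurred samples.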
John
Thanks so much for this likely explanation, John. I had just assumed that it started with normal local deconvolution but used the NN to better define what the iteration should be minimised to. Your explanation is simpler and makes more sense. I am still interested to know how factors such as sampling rate feed into the level of detail that BlurXT can get down to, so I am doing experiments varying parameters out of interest, to see what happens and what is critical for the final result. BXT almost always does better than normal deconvolution, except on one object, the Cat's Eye Nebula, which I guess was too far outside Russ's training set? Tim