Blurxterminator… a game changer? [Deep Sky] Processing techniques · Steeve Body

HegAstro 12.28
If one likes the tool, awesome: use it and have fun. That's what the name of the game is here in AP. If one doesn't like the tool, that's fine. Don't use it. More importantly though, also don't spread incorrect information about it, and don't shame those who do elect to use it.


I think the effectiveness of the tool speaks for itself. In the end, as results from its usage become more widely known, I expect its usage will be the norm and only a small minority will choose not to use it. Which will be fine.
TimH
Andy Wray:
If you are looking for feedback on your image you may wish to post that in its own thread.


I'm not sure I want feedback on the image, because I know it is pretty crap; however, BlurXterminator did improve it a bit, and that was my point.

On the contrary, Andy. Your image is now near perfect, and in a way that is both the interest and the controversy in this thread. At some point BlurXT will generate artifacts -- the question, for any given image, is whether that happens at 0.5, 0.7 or 0.9 sharpening, and how can you tell?

Tim
andymw 11.01
Tim Hawkes:
At some point BlurXT will generate artifacts -- the question, for any given image, is whether that happens at 0.5, 0.7 or 0.9 sharpening, and how can you tell?


Agreed ... I dialed this back to 0.6 for non-stellar sharpening.  In hindsight I should probably have gone with 0.5 on this image as it looks a tiny bit too sharp for something taken from my back yard on a hazy night or two.

What I would say is that I know my collimation was off, my guiding wasn't perfect, and I think I have a pinched primary. Using a tool like this will not stop me from striving to fix those issues. This tool helps me get the best out of the data I can capture today with the minimum time wasted; however, it will not stop me striving for perfection in other parts of my system.
Bobinius 9.90
·  1 like
Hi,

First time post and sorry if this has been answered (here or elsewhere).....

The results are indeed impressive. However, is this really detail lost to "blurring" that has been recovered by deconvolution, or is there an element of "detail" that looks good but isn't really a true representation? I am not going down the "fakery" route here, simply querying how accurate the recovered detail is. Also, I am not trying to question deconvolution as a concept; my query is specific to BlurXTerminator (as it does look so impressive).

I don't have the tool and wondered if anyone has taken a picture with good detail (e.g. Hubble or JWST), convolved it, and then run BlurXTerminator on it to see if it gives a faithful recovery of the detail?

To me, at the moment, it's a bit like the facial reconstructions in forensic TV series. The skulls reconstructed into faces often look good, but I never really see a comparison with an actual photo once the person has been identified.

If it did do a good job of deconvolving a convolved photo, I'd certainly pay for this.

Just a query....

Paul

Good point Paul, you're putting your finger on an essential aspect. I thought about it when I first heard about the neural training on Hubble images. From what's mentioned in the documentation, it was trained with Hubble images as the output but with ground-based amateur images as the input (from where, we don't know, but there are a few big sources of free amateur images online...). Meaning that it was not trained by blurring the Hubble images and recovering the detail. Also specified in the documentation is that it does not work on Hubble star profiles.

Of course, if you apply uniform or artificially produced noise to Hubble images, the network will find the solution pretty fast. And you're entitled to ask how accurate the recovered detail is. "It looks good" or "it looks better than X" does not mean it is accurate. It means it is sharpened and you find it visually convincing, which is because this particular network was trained to produce nebular or galactic detail, especially without additional noise. Topaz was trained to sharpen animal and outdoor photos, so it can produce hair-like and feather-like details on the blurry photo of your dog. And if you sharpen it with Topaz, the photo of your dog will look much better, or sharper, even if the hairs were not where they actually were. But judging the degree of accuracy of your dog's hairs is very tough for a human. So even if some hairs are not where they should be, you'll feel pretty confident it is well sharpened.

But hoping to use BlurXT for your facial investigation is doomed to fail : ) . It was not trained for that. You can try the convolve-then-deconvolve test on a Hubble image though, and see what it produces.
TimH
Wei-Hao Wang:
Based on things I have seen thus far on the internet in the past few days and my own trial of it, I feel it is a good, convenient tool in many ways. I probably will run it on every image from now on. I think the most powerful part of it is the capability to shrink stars and to correct for minor aberration. The fact that it does this differently in different areas of an image means it can deal with minor focal plane tilt and off-axis aberrations. These can all be done with traditional methods, but that was highly tedious work. Now it's basically just one click and a couple of minutes of wait. In some sense, you are upgrading your optics (and your collimation skills) with just a sub-$100 plugin. For those who use PI, this is a bargain. (Sorry to those who don't use PI. I really hope there is a Photoshop version.)

On the sharpening part, I am less excited than many other people. (Still excited, but just not that much.) My criterion is that I don't want deconvolution artifacts. As a professional astronomer, I have seen enough images from HST and 10-m class telescopes. I know what real high-resolution, sharp images should look like. And previous amateur images that have undergone deconvolution are just not like that. There are either lots of artifacts, or they just look unnatural (compared to real high-resolution images). Because of this, I was never a fan of deconvolution for amateur astrophotography. So, the question I would like to ask about BXT is: when the strength of BXT is tuned down to a point where there are no clear indications of artifacts and the look remains natural, is BXT still better than a standard sharpening tool? My answer to this question at this moment is: it's better, but not "gamechangingly" better. I probably would say that BXT can do 20% to 40% better (don't ask me how to define this) than traditional, skillful sharpening. This 20% to 40% difference alone may not justify the cost (plus the requirement of PI). However, to do that with traditional sharpening, a lot of steps are required and one really has to be careful and "skillful." Now it's just one click and it's 20% to 40% better. I definitely will not say no to it.

These are my temporary conclusions thus far.  After I try more and see more, my thoughts may change.

That is a very interesting point that you raise in the first paragraph. As long as the geometric errors across an image are consistent for all the frames integrated together (e.g. some tilt), then maybe software can compensate for slight alignment errors and correct accurately enough with respect to star shape.

Similarly with deconvolution to recover image sharpness? If the PSF can be calculated accurately enough across the entire image, accounting for all the local tilt distortions etc. -- and, most importantly, is consistent enough -- then will it be possible to iterate to (almost) as sharp an image from a slightly misaligned telescope as from a perfectly optically aligned telescope?
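
To make the "locally varying PSF" idea concrete, here is a toy sketch (pure NumPy on a synthetic star field; nothing from BXT or PixInsight, and the tile count and half-max trick are arbitrary choices) of measuring a local PSF width tile by tile, which is the kind of spatially varying information a non-shift-invariant correction would need:

```python
# Illustrative sketch only (not BXT/PixInsight): measure a *local* PSF width
# tile by tile, the kind of spatially varying information a
# non-shift-invariant deconvolution would need. Pure NumPy, synthetic star field.
import numpy as np

rng = np.random.default_rng(0)
H = W = 512
yy, xx = np.mgrid[0:H, 0:W]
image = np.zeros((H, W))

# Synthetic field: star FWHM grows from ~2.5 px on the left to ~4.5 px on the
# right, mimicking tilt or an off-axis aberration.
for _ in range(300):
    x0, y0 = rng.uniform(10, W - 10), rng.uniform(10, H - 10)
    fwhm = 2.5 + 2.0 * (x0 / W)
    sigma = fwhm / 2.355
    flux = rng.uniform(100, 500)
    image += flux / (2 * np.pi * sigma**2) * np.exp(
        -((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma**2))
image += rng.normal(0.0, 0.5, image.shape)          # background/read noise

def local_fwhm(tile, box=12):
    """Crude FWHM: count pixels above half the peak of the brightest star."""
    cy, cx = np.unravel_index(np.argmax(tile), tile.shape)
    cut = tile[max(cy - box, 0):cy + box + 1, max(cx - box, 0):cx + box + 1]
    cut = cut - np.median(tile)                      # remove local background
    npix = (cut >= 0.5 * cut.max()).sum()            # area of the half-max disk
    return 2.0 * np.sqrt(npix / np.pi)               # diameter of that disk

# One number per tile; a real tool would fit a full 2-D PSF model per region.
n = 4
for i in range(n):
    print(" ".join(
        f"{local_fwhm(image[i*H//n:(i+1)*H//n, j*W//n:(j+1)*W//n]):4.1f}"
        for j in range(n)))
```

On the synthetic field the printed grid shows the estimated FWHM growing from the left column to the right one, mirroring the injected tilt-like aberration.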
jhayes_tucson 22.82
·  1 like
Bogdan Borz:
Hi John,

This is impressive of course. I had the same result when applying BlurXT to my NGC 7331, maybe even more impressive since my data was shot from the city, with seeing incomparable to that of the Chilean skies (and it took me around 30 seconds). And while on your previous Top Pick version of the image I can clearly see that you used classical Deconvolution, it is impossible to tell that you applied BlurXT to the current version.

The appropriate and interesting comparison would be between the BlurXT final sharpened version and your final classical processing. Comparing a raw, unprocessed linear image with an image sharpened by an AI capable of working on linear data is always going to produce a difference, for all of our images. Agreed, it looks like a final processing, since that is what the AI has been trained to produce.

My fundamental question is: if your BlurXT version is more detailed than your manual version, how do you know that the supplementary details that appear come from the information contained in your image and are not inferred/produced by the AI model based on its Hubble training? And how can we tell? Especially when you don't have a Hubble image to compare it to. The difficult part is also how we analyse it, since the difference appears in small-scale details that are pretty cumbersome to analyse correctly by eye.

In my case, BlurXT produced supplementary details in the galaxy core of my NGC 7331 (I have to look for the full project before presenting you guys the images). It does a better HDR, but since it is linear I risk maybe losing some data while stretching (or maybe it got lost in my processed version too). How do I tell whether the new details are real, i.e. actually present in the information captured by my image?

And if the AI managed to sharpen the galaxy better than I did (or if you think it managed to sharpen it better than you were able to), do we need to replace our usual processing effort with the application of a BlurXT process?

CS,

Bogdan

Bogdan,
My current image of NGC 1365 does not use BXT at all.  The example that I posted above shows the effect of BXT on my raw Lum stack and it is quite a bit sharper than what I could do with standard deconvolution.  The standard deconvolution method is limited by a number of issues including convergence control and ringing, which limit how much sharpening you can achieve.  In fact, in order to maximize sharpening on small details, it is entirely possible to actually increase the apparent size of the bright stars due to de-ringing control.  

I'm not sure how Russ trained the AI system used in BXT but using Hubble data makes perfect sense.  I'm not an expert on AI but I gained a pretty good understanding of how a neural net works when I wrote a scientific paper on the subject with Gaston Baudat about the AI technology used in his SkyWave product.  The one thing that surprised both of us was just how sensitive a neural net can be to small differences and I think that is one of the factors that makes BXT so effective.  The trick is to start with known data--such as Hubble data--and then to apply a mathematically correct blurring function (taking into account both optical aberrations and seeing values) to build a large set of training data (like 400,000 to 1,000,000 images).  This is what enables the NN to pick a best estimate of the original details that created the blurred image within a given image patch.  The algorithm is then applied piecewise over the field to handle field aberration.  (This is called a non-shift-invariant imaging system.)  The data has to be properly scaled and normalized to produce properly scaled output so there's a little "trickiness" to getting this all to work properly.  Like all algorithms, this method may produce errors.  Remember that the neural net is simply finding the best estimate of what the original scene looked like in order to produce the original blurred data.  It can't find an exact solution.  Even very small errors in the original data can cause the estimate to veer off course.  So the better the input, the better the output.  That's why it's a good idea to closely examine the output to make sure that the sharpened details match features showing in the original image, which is what you should do with ANY sharpening tool.
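
As an illustration of the kind of training-data generation described above (an assumption-laden toy, not Russell Croman's actual pipeline: the Moffat seeing kernel, noise level, and synthetic "ground truth" patch are all stand-ins), building (blurred, sharp) pairs might look like this:

```python
# Toy illustration of building (blurred, sharp) training pairs as described above.
# Assumptions, not RC-Astro's pipeline: a Moffat "seeing" kernel, Gaussian noise,
# and a synthetic sharp patch standing in for Hubble data.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

def moffat_psf(size=25, fwhm=4.0, beta=2.5):
    """Moffat profile, a common model for seeing-dominated stellar PSFs."""
    alpha = fwhm / (2 * np.sqrt(2 ** (1 / beta) - 1))
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    psf = (1 + (x**2 + y**2) / alpha**2) ** (-beta)
    return psf / psf.sum()

def sharp_patch(size=128, n_clumps=40):
    """Stand-in for a 'ground truth' patch: small bright clumps on a gradient."""
    patch = np.outer(np.linspace(0.1, 0.3, size), np.linspace(0.2, 0.4, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_clumps):
        cy, cx = rng.uniform(0, size, 2)
        s = rng.uniform(0.7, 1.5)                     # sub-seeing structure
        patch += rng.uniform(0.2, 1.0) * np.exp(
            -((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * s**2))
    return patch

def make_pair(fwhm_range=(2.5, 6.0), noise=0.01):
    """One (input, target) pair: target is sharp, input is blurred + noisy."""
    target = sharp_patch()
    psf = moffat_psf(fwhm=rng.uniform(*fwhm_range))   # random seeing per sample
    blurred = fftconvolve(target, psf, mode="same")
    blurred += rng.normal(0.0, noise, blurred.shape)  # photon/read noise stand-in
    return blurred.astype(np.float32), target.astype(np.float32)

# A real training set would repeat this hundreds of thousands of times.
x, y = make_pair()
print(x.shape, y.shape, float(x.std()), float(y.std()))
```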

John
jhayes_tucson 22.82
·  1 like
Andreas:
John Hayes:
BlurXTerminator is indeed a revolutionary implementation of AI and I think that Russell trained it well.  I tried it on the Lum' data from my most recent NGC 1365 image and the result totally blows away what I was able to do with the standard deconvolution tool.  I've examined the details up close by blinking between the original data and the sharpened data--and I've made comparisons with images taken with much larger scopes to see how well the details correlate--and it looks quite good.  For a very long time, the holy grail of image processing has been to reconstruct what the image should look like in the absence of the atmosphere and diffraction and it looks like Russell has made a huge step forward with this approach.  It is certainly possible that like all processing algorithms, there may be some artifacts; but from what I can see, this algorithm does a better job than anything else that I've ever used.  Here is what a VERY zoomed view of the raw stacked data looks like followed by the processed data.  The results are extremely impressive!  Clearly, I'm going to have to completely reprocess this data set.

....

John

Hi John,

this really looks fantastic. I would be interested to know what the standard deconvolution produces with this image. In my experience, the details of the nebulae are slightly better with BXT than with standard deconvolution. I see the strength of BXT so far mainly in the correction of the stars.

Many greetings
Andreas

Andreas,
The standard deconvolution that I did is nowhere near as sharp as the example that I posted.  From what I've seen so far, BXT does a FAR superior job on both the details and the stars.  One challenge with BXT lies in producing a color image.  With too much sharpening, some minor details don't line up between the RGB channels, which can lead to weird color artifacts.  I'm still working on that one so I can't add much more about it until I have a little more time to fiddle with it.

John
jhayes_tucson 22.82
·  2 likes
Die Launische Diva:
Hello John,

Your result is indeed impressive. Thank you, and all of the expert members who offer their experience in this sensitive but highly intellectually stimulating matter.

I would like to hear your opinion about the lack of sharpening of the diffraction spikes. I understand that many astrophotographers are not fond of diffraction spikes, but I am not sure the effect on the diffraction spikes can be said to be the result of a deconvolution algorithm, classical or machine learning. Of course, maybe I am nitpicking and/or I have a misconception of what the term deconvolution means.

I think my wetware neural network has already learned to identify BXT images just by looking at the diffraction spikes.

A neural net could easily be trained to remove diffraction spikes but that's not a part of BXT.  Likewise it could be trained to sharpen the spikes but again, that wasn't a part of the training.  I don't mind the diffraction spikes so I'm fine with the way it is.

John
Alan_Brunelle
·  1 like
I just posted data on the "BlurXterminator technique and usage thread" that I think is also relevant to this forum.  If someone can tell me how to link to that, it might make it easier, but I am sure I am not supposed to duplicate posts.

In any case, the gist of the message was that I got poor data on NGC 891 last week and worked up a comparison of how well BXT non-stellar sharpening and NXT Detail work with the data, using the galactic dust streams as a "fingerprint" to decide whether there is any funny business going on with AI sharpening. The image was deeply affected by pinched optics, yet the sharpening was still faithful to the Hubble image, to the degree expected for a 12-inch aperture scope.

Thanks
Alan
Bobinius 9.90
·  1 like
John Hayes:
Bogdan,
My current image of NGC 1365 does not use BXT at all.  The example above shows the effect of BXT on my raw Lum stack and it is quite a bit sharper than what I could do with standard deconvolution.  The standard deconvolution method is limited by a number of issues including convergence control and ringing, which limit how much sharpening you can achieve.  In fact, in order to maximize sharpening on small details, it is entirely possible to actually increase the apparent size of the bright stars due to de-ringing control.  

I'm not sure how Russ trained the AI system used in BXT but using Hubble data makes perfect sense.  I'm not an expert on AI but I gained a pretty good understanding of how a neural net works when I wrote a scientific paper on the subject with Gaston Baudat about the AI technology used in his SkyWave product.  The one thing that surprised both of us was just how sensitive a neural net can be to small differences and I think that is one of the factors that makes BXT so effective.  The trick is to start with known data--such as Hubble data and then to apply a mathematically correct blurring function (taking into account both optical aberrations and seeing values) to load a large set of training data (like 400,000 to 1,000,000 images).  This is what enables the NN to pick a best estimate of the original details that created the blurred image within a given image patch.  The algorithm is then applied piecewise over the field to handle field aberration.  (This is called a non-shift invariant imaging system.)  The data has to be properly scaled and normalized to produce properly scaled output so there's a little "trickiness" to getting this all to work properly.  Like all algorithms, this method may produce errors.  Remember that the neural net is simply finding the best estimate of what the original scene looked like in order to produce the original blurred data.  It can't find an exact solution.  Even very small errors in the original data can cause the estimate veer off course.  So the better the input, the better the output.  That's why it's a good idea to closely examine the output to make sure that the sharpened details match features showing in the original image, which is what you should do with ANY sharpening tool.

John


Thanks John. Yes, I know your current/published image did not use BXT, but it seems to me you used the classical Deconvolution in PI on it; I was referring to that one. Correct me if I'm wrong. Hm, if I read you correctly, you think the Hubble image data set was blurred in order to train the NN? I was thinking this at first, but from the documentation and his very laconic replies, the Hubble image was considered the "true deconvolved" version of the ground image, and the ground-based blurred images had to be deconvolved by the NN in order to minimize the loss function compared to the final Hubble image. Of course, Russell won't reveal what kind of validation or generalization set he used, nor the accuracy of his model. Image recognition is increasingly applied in medicine (from what I've seen in the literature there was interest in the mid 90s and again in the last 5-10 years), but the difference is that in medicine we usually use labels as outputs, or continuous variables like age. I consider this a much more intelligible definition of the "true result" than a "true deconvolved image" without unreal artifacts. If the neural network interprets your ECG as "arrhythmia" and you're in normal rhythm, it's false. Pretty clear. Some teams trained them to recognize implanted pacemaker or defibrillator brands from a chest radiograph: device identification. Well, they have an accuracy of 99% (for other tasks it is much lower). But the outcome is the model and brand name. Much simpler to comprehend than what it means to say the resulting image is accurate: how much may it deviate from reality and still be considered true, the "good outcome" of the model? Especially when it has so many structures displayed.

I'm not sure that training on 'deconvolved' Hubble images as the reference is appropriate, since the reference has much higher resolution than the blurred system. But Russell thinks that this is actually an advantage: "[...] trained using extremely high-resolution images acquired by instruments such as the Hubble and James Webb space telescopes. It 'understands' what astronomical structures actually look like at finer scales than can be resolved using amateur equipment." A perfectly deconvolved C11 is a C11 with no (or minimal) atmospheric interference, not Hubble, even downscaled. We don't have much choice (perhaps training blurred systems on images taken under the best planetary skies), but the NN seems to have been trained to produce outcomes above the capacity of the system.

I totally subscribe to the principle of checking for matched features in the original, but my difficulty, especially with this tool, is that it sharpens details that I cannot see in the original. Not to mention the raw image, where the jump in detail is huge. I compared my NGC 7331 to Hubble and BXT is clearly good at keeping the main details. The difference will be at very, very high zoom, in minute details and filaments, honestly really tough to discriminate. If the AI does a good enough job though, it will render our sharpening processing effort pretty superfluous.
CCDnOES 5.61
·  1 like
Steeve Body:
Blaine Gibby:
When can we expect this to be released for Photoshop so us simpletons can use it?

I may be wrong, but this may be a PixInsight exclusive...?

I am pretty sure that is correct; it has to work on linear images, so I do not think Russ plans a Photoshop version.
phsampaio 3.61
John Hayes:
Bogdan,
My current image of NGC 1365 does not use BXT at all.  The example above shows the effect of BXT on my raw Lum stack and it is quite a bit sharper than what I could do with standard deconvolution.  The standard deconvolution method is limited by a number of issues including convergence control and ringing, which limit how much sharpening you can achieve.  In fact, in order to maximize sharpening on small details, it is entirely possible to actually increase the apparent size of the bright stars due to de-ringing control.  

I'm not sure how Russ trained the AI system used in BXT but using Hubble data makes perfect sense.  I'm not an expert on AI but I gained a pretty good understanding of how a neural net works when I wrote a scientific paper on the subject with Gaston Baudat about the AI technology used in his SkyWave product.  The one thing that surprised both of us was just how sensitive a neural net can be to small differences and I think that is one of the factors that makes BXT so effective.  The trick is to start with known data--such as Hubble data and then to apply a mathematically correct blurring function (taking into account both optical aberrations and seeing values) to load a large set of training data (like 400,000 to 1,000,000 images).  This is what enables the NN to pick a best estimate of the original details that created the blurred image within a given image patch.  The algorithm is then applied piecewise over the field to handle field aberration.  (This is called a non-shift invariant imaging system.)  The data has to be properly scaled and normalized to produce properly scaled output so there's a little "trickiness" to getting this all to work properly.  Like all algorithms, this method may produce errors.  Remember that the neural net is simply finding the best estimate of what the original scene looked like in order to produce the original blurred data.  It can't find an exact solution.  Even very small errors in the original data can cause the estimate veer off course.  So the better the input, the better the output.  That's why it's a good idea to closely examine the output to make sure that the sharpened details match features showing in the original image, which is what you should do with ANY sharpening tool.

John


Thanks John.  Yes, I know your current/published image did not use Bxt, but it seems to me you used the classical Deconvolution in PI on it, I was referring to that one. Correct me if I'm wrong. Hm, if I read you correctly, you think the Hubble image data set was blurred in order to train the NN? I was thinking this at first, but from the documentation and his very laconic replies, the Hubble was considered the "true deconvoluted" version of the ground image and the ground blurred images had to be deconvoluted by the NN in order to minimize the loss function compared to the final Hubble image. Of course, Russell won't reveal what kind of validation set or generalization set did he use, neither what was the accuracy of his model. Image recognition is increasingly applied in medicine (from what I've seen in the literature there seemed to be an interest in the mid 90's and now in the 5-10y), but the difference is that in medicine we usually use labels as outputs, or let's say continuous variables like age. I consider this a much more intelligible classification of "true result" compared to a "true deconvolved image" without unreal artifacts. If the neural network interprets your ECG as "arythmia" and you're in normal rhythm, it's false. Pretty clear. Some teams trained them to recognize implanted pacemakers or defibrillator brands based on your chest radiograph image, device identification. Well, they have an accuracy of 99% (for other tasks is much lower). But the outcome is the model and brand name. Much more simple to comprehend compared to what does it mean to say the resulting image is accurate, how much should it deviate from reality to be considered true or the "good outcome" of the model? Especially when it has so many structures displayed. 

I'm not sure that the training on 'deconvolved' Hubble images as reference is appropriate, since the reference has much higher resolution than the blurred system. But Russell thinks that this is actually an advantage: " [...] trained using extremely high-resolution images acquired by instruments such as the Hubble and James Webb space telescopes. It "understands" what astronomical structures actually look like at finer scales than can be resolved using amateur equipment."  A perfectly deconvolved C11 is a C11 without atmospheric interference or minimal, not Hubble, even downscaled. We don't have much choice (perhaps training blurred systems on images taken under the best planetary skies), but the NN seems to have been trained to produce outcomes above the capacity of the system. 

I totally subscribe to the principle of checking for matched features in the original, but my difficulty especially with this tool is that it sharpens details that I cannot see in the original. Not to talk if it is the raw image, the jump in detail is huge. I compared my NGC 7331 to Hubble and Bxt is clearly good in keeping the main details. The difference will be at very very high zoom and minute details and filaments, honestly really tough to discriminate.  If the AI does a good enough job tough, it will render our sharpening processing effort pretty superfluous.



From what I could gather, I don't think we have evidence that BXT creates more detail than what the telescope could achieve solely by its aperture.

I say this because the recovery of sharpness and detail is markedly more pronounced the larger the aperture of the telescope. Those big telescopes are much more seeing-limited than smaller ones. I've yet to see an image from a small refractor that gained more than a good deconvolution could already achieve (see my previous comment about my image of the Dragons of Ara). Are the results impressive? Sure, and way less tedious than the whole Deconvolution process. But it's not creating Hubble-like images from a 100 mm refractor.

If BXT's NN was trained to imitate the HST images, why would it work for some apertures and not others? The program doesn't know the aperture or pixel scale of a given image.
CCDnOES 5.61
·  1 like
Comparison on a small galaxy with the same data. BXT and NXT used instead of LR and MMT. Also spectrophotometric color calibration instead of standard, so better color. Clearly fewer artifacts.
[Attached images: NGC 4216 Old Crop / NGC 4216 New Crop]
Alan_Brunelle
·  1 like
Bill McLaughlin:
Comparison on a small galaxy with the same data. BXT and NXT used instead of LR and MMT. Also spectrophotometric color calibration instead of standard, so better color. Clearly fewer artifacts.
[Attached images: NGC 4216 Old Crop / NGC 4216 New Crop]

I'm not sure which panel is which. The top panel shows what can be interpreted as typical over-deconvolution artifacts: snake-like bright features in areas on either side of the center and along the lower rim. The lower image lacks these artifacts and is much improved over the top one, but I don't really see sharpening. Perhaps show an unprocessed image?

You might have a look at my comparison of BXT and NXT sharpening vs. Hubble and an unprocessed image of NGC 891 on the sister BXT thread, at the bottom of the first page.
CCDnOES 5.61
·  1 like
Alan Brunelle:
Bill McLaughlin:
Comparison on a small galaxy with the same data. BXT and NXT used instead of LR and MMT. Also spectrophotometric color calibration instead of standard, so better color. Clearly fewer artifacts.
[Attached images: NGC 4216 Old Crop / NGC 4216 New Crop]

I'm not sure which panel is which.

The top panel is the original version with RL deconvolution, which is why it shows the artifacts. The bottom is the BXT version. It's a fairly small object (about half the size of NGC 891) and the seeing was only fair.
jhayes_tucson 22.82
·  3 likes
Bogdan Borz:
Thanks John.  Yes, I know your current/published image did not use Bxt, but it seems to me you used the classical Deconvolution in PI on it, I was referring to that one. Correct me if I'm wrong. Hm, if I read you correctly, you think the Hubble image data set was blurred in order to train the NN? I was thinking this at first, but from the documentation and his very laconic replies, the Hubble was considered the "true deconvoluted" version of the ground image and the ground blurred images had to be deconvoluted by the NN in order to minimize the loss function compared to the final Hubble image. Of course, Russell won't reveal what kind of validation set or generalization set did he use, neither what was the accuracy of his model. Image recognition is increasingly applied in medicine (from what I've seen in the literature there seemed to be an interest in the mid 90's and now in the 5-10y), but the difference is that in medicine we usually use labels as outputs, or let's say continuous variables like age. I consider this a much more intelligible classification of "true result" compared to a "true deconvolved image" without unreal artifacts. If the neural network interprets your ECG as "arythmia" and you're in normal rhythm, it's false. Pretty clear. Some teams trained them to recognize implanted pacemakers or defibrillator brands based on your chest radiograph image, device identification. Well, they have an accuracy of 99% (for other tasks is much lower). But the outcome is the model and brand name. Much more simple to comprehend compared to what does it mean to say the resulting image is accurate, how much should it deviate from reality to be considered true or the "good outcome" of the model? Especially when it has so many structures displayed. 

I'm not sure that the training on 'deconvolved' Hubble images as reference is appropriate, since the reference has much higher resolution than the blurred system. But Russell thinks that this is actually an advantage: " [...] trained using extremely high-resolution images acquired by instruments such as the Hubble and James Webb space telescopes. It "understands" what astronomical structures actually look like at finer scales than can be resolved using amateur equipment."  A perfectly deconvolved C11 is a C11 without atmospheric interference or minimal, not Hubble, even downscaled. We don't have much choice (perhaps training blurred systems on images taken under the best planetary skies), but the NN seems to have been trained to produce outcomes above the capacity of the system. 

I totally subscribe to the principle of checking for matched features in the original, but my difficulty especially with this tool is that it sharpens details that I cannot see in the original. Not to talk if it is the raw image, the jump in detail is huge. I compared my NGC 7331 to Hubble and Bxt is clearly good in keeping the main details. The difference will be at very very high zoom and minute details and filaments, honestly really tough to discriminate.  If the AI does a good enough job tough, it will render our sharpening processing effort pretty superfluous.

Yes, the NGC 1365 image that I posted had a little bit of deconvolution applied to it.  In simple terms, an image is given by the irradiance distribution of the scene convolved with the point spread function (PSF) of the optical system.  That process is ALWAYS a blurring process, which means that the image can never be as sharp as the scene (aka, the object).  The holy grail in image processing is to figure out how to “undo” that process by starting with an image and using some mathematics to “deconvolve” the image to get back to the original scene.  That means that (in principle) the goal is to recover the scene from the blurred image in a way that exceeds the diffraction limit.  Remember that exceeding the diffraction limit is what the original idea of deconvolution is all about.  The ultimate challenge is that deconvolution can’t be written in terms of a closed form mathematical solution so it is typically done as an iterative process, which leads to other problems such as convergence and ringing artifacts.  Of course, the PSF isn’t just determined by the optical system.  For ground based telescopes, the average PSF is given by the time integrated form of the instantaneous PSF that varies with time due to atmospheric scintillation.  That means that the PSF may not be shift invariant over the field (due to the atmosphere and due to field aberrations) so the “simple” method of a global de-convolution algorithm might not be very effective. 
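
For readers who haven't seen the iterative process mentioned above, the classical Richardson-Lucy update is only a few lines. The toy sketch below (a synthetic example, not the PixInsight Deconvolution tool) shows the scheme and why iteration count becomes a convergence-versus-ringing trade-off:

```python
# Toy Richardson-Lucy deconvolution to illustrate the iterative scheme (and why
# too many iterations amplify noise/ringing). Not the PixInsight implementation.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

# Scene: two point sources separated by less than the PSF FWHM, plus a faint one.
scene = np.zeros((64, 64))
scene[32, 30] = 200.0
scene[32, 34] = 150.0
scene[20, 45] = 40.0

# Gaussian PSF with FWHM ~ 5 px, normalized to unit sum.
y, x = np.mgrid[-12:13, -12:13]
sigma = 5.0 / 2.355
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()

observed = fftconvolve(scene, psf, mode="same") + 1.0   # + sky pedestal
observed = rng.poisson(observed).astype(float)          # photon noise

def richardson_lucy(data, psf, n_iter):
    """Classic multiplicative RL update; the estimate stays non-negative."""
    est = np.full_like(data, data.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = data / np.clip(conv, 1e-12, None)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

for n in (5, 30, 200):
    est = richardson_lucy(observed, psf, n)
    # The estimate grows sharper (higher peaks) with more iterations, but noise
    # and ringing grow along with it.
    print(f"{n:4d} iterations: max={est.max():8.1f}  std={est.std():6.2f}")
```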

In order to train the neural-net (NN), you need to start with "ground truth" of what a sharp image looks like and that's why it makes sense to start with Hubble or JWST data.  That's not "perfect" data but as far as small, ground based imaging systems are concerned it's close enough.  The blurring that I referred to is required to train the NN so that it can find the nearest match in your input image to a blurred image in the training data.  Once it finds that match, it can then replace the original data with a normalized version of the sharp (ground truth) data.  To be clear, the algorithm does not actually contain Hubble data but it's set up to analyze the small variations in input data so that it can provide a "best guess" about the sharp features that would create the same blurry pattern in your data--all based on the training data set.

BlurXterminator (BXT) is an AI based solution to deconvolution, which eliminates convergence and ringing limitations.  Properly implemented, AI solutions can also be extraordinarily sensitive to small signal variations, which can make them much more effective than iterative solutions.  Still, they need to be used with caution because it is still possible to generate small artifacts (just like other methods.)

John
the_blue_jester 0.00
Bogdan Borz:
Good point Paul, you're putting the finger on an essential aspect. I thought about it when I first heard about the neural training on Hubble images. From what's mentioned in the documentation, it has been trained on the Hubble images as the output, but used ground based amateur images as an input (from where we don't know, but there a few big sources of free amateur images online...). Meaning that it was not trained by blurring the Hubble image and recovering the detail. Also specified in the documentation is that it does not work of Hubble star profiles. 

Of course, if you apply a uniform or artificially produced noise on Hubble images, the network will find the solution pretty fast. And you're entitled to ask how accurate the recovered detail is. "It looks good" "it looks better than x" does not mean it is accurate. It means it is sharpened and you find it visually convincing. Which is because this particular network was trained to produce nebular or galactic details. Especially without additional noise. Topaz was trained to sharpen animal photos and outdoors, so it can produce hair like details and feather like details on the blurry photo of your dog. And if you sharpen it with Topaz, the photo of your dog will look much better or sharper; even if the hairs where not where they actually were. But to judge the degree of accuracy on your dog hairs is very tough for a human. So even if some hairs are not where they should be, you'll feel pretty confident it is well sharpened. 

But hoping to use Blur XT for your facial investigation is doomed to fail : ) . It was not trained for that.  You can try the convolution reverse on Hubble though, see what it produces.

Bogdan,

As far as the face goes, I used it as an analogy, wondering how the output compared to "reality" rather than suggesting BlurXTerminator be used for faces.

As to your first part, I can see that it is a complex implementation. There is still a nagging query for me: is it "creating" some detail rather than simply recovering it?

I'll likely still use it as the results so far are impressive.

I think there may be a temptation to see it overused though. If you remember when PixInsight was starting to gain traction, there were photos coming out that looked cartoonish, with colours that were too intrusive and "overprocessed".

To me, I suspect it'll end up being a tool to tweak things, but I do think we'll see it overused on images where the basic data is probably less than ideal to start with. A lot of the images here are from experienced, high-level astrophotographers whose images were good to start with. For them I think this will be a fantastic tool to take their images up a notch, to where we amateurs could never have reached before.

Paul
Gamaholjad 3.31
Well done, that man, for creating a topic which has divided the community. Personally I think it's an amazing piece of coded AI, and it makes it easier to finalise images. Bottom line, kids: think about what this hobby was like 20 years ago. Unaffordable, and the cameras needed were stupidly expensive. Where are we at right now? We have software to do all parts of processing; bravo to the coders that have made this enjoyable for all. There will always be good and bad reviews, that's a given. But like I said, what was it like 20 years ago? To those that are voicing your opinions: have them, it's good. You want to do it the old way, feel free, and those doing it the new way, you go ahead and do it; the end result is YOUR way, not what others think it should be. Once again, well done Mr RC, great piece of software. Everyone enjoy the datasets that you all gather; whatever you put in, it delivers the image however you choose. Anyhow, Merry Christmas to all in the community.
morefield 11.37
·  4 likes
John Hayes:
Andreas:
John Hayes:
BlurXTerminator is indeed a revolutionary implementation of AI and I think that Russell trained it well.  I tried it on the Lum' data from my most recent NGC 1365 image and the result totally blows away what I was able to do with the standard deconvolution tool.  I've examined the details up close by blinking between the original data and the sharpened data--and I've made comparisons with images taken with much larger scopes to see how well the details correlate--and it looks quite good.  For a very long time, the holy grail of image processing has been to reconstruct what the image should look like in the absence of the atmosphere and diffraction and it looks like Russell has made a huge step forward with this approach.  It is certainly possible that like all processing algorithms, there may be some artifacts; but from what I can see, this algorithm does a better job than anything else that I've every used.  Here is what a VERY zoomed view of the raw stacked data looks like followed by the processed data.  The results are extremely impressive!  Clearly, I'm going to have to completely reprocess this data set.

....

John

Hi John,

this really looks fantastic. I would be interested to know what the standard deconvolution produces with this image. In my experience, the details of the nebulae are slightly better with BXT than with standard deconvolution. I see the strength of BXT so far mainly in the correction of the stars.

Many greetings
Andreas

Andreas,
The standard deconvolution that I did is nowhere near as sharp as the example that I posted.  From what I've seen so far, BXT does a FAR superior job on both the details and the stars.  One challenge with BXT lies in producing a color image.  With too much sharpening, some minor details don't line up between the RGB channels, which can lead to weird color artifacts.  I'm still working on that one so I can't add much more about it until I have a little more time to fiddle with it.

John

John,

I just uploaded my version of 1365, and I used BXT on the RGB master for just the reason you mention here: better matching of color and luminance details. It is possible, given the different inputs, that BXT would produce different details for each, but I think the differences were small and certainly much smaller than with another method I can think of to resolve the issue.

I think the right answer would be to run BXT on a linear LRGB combined image but combining RGB and Luminance in linear space is not in my current process flows.  Need a solution to that!

Kevin
neverfox 2.97
·  3 likes
John Hayes:
In order to train the neural-net (NN), you need to start with "ground truth" of what a sharp image looks like and that's why it makes sense to start with Hubble or JWST data. That's not "perfect" data but as far as small, ground based imaging systems are concerned it's close enough. The blurring that I referred to is required to train the NN so that it can find the nearest match in your input image to a blurred image in the training data. Once it finds that match, it can then replace the original data with a normalized version of the sharp (ground truth) data. To be clear, the algorithm does not actually contain Hubble data but it's set up to analyze the small variations in input data so that it can provide a "best guess" about the sharp features that would create the same blurry pattern in your data--all based on the training data set.


I think Bogdan's issue isn't that Hubble/JWST data is used as the ground truth for training. Rather it's that the blurred input data isn't blurred Hubble/JWST data but rather ground-based data. So the concern expressed is that it's not just learning to guess what the blurring is hiding but also learning how to guess how the hidden stuff itself would look if you had an optical system with a better diffraction limit.
HegAstro 12.28
·  2 likes
Roman Pearah:
Rather it's that the blurred input data isn't blurred Hubble/JWST data but rather ground-based data. So the concern expressed is that it's not just learning to guess what the blurring is hiding but also learning how to guess how the hidden stuff itself would look if you had an optical system with a better diffraction limit.


I don't think the theoretical resolution increase, even with traditional deconvolution, is limited by diffraction. These same questions were asked when traditional deconvolution was introduced. For example, see this paper by Lucy:

https://adsabs.harvard.edu/full/1992A%26A...261..706L

Given that, it seems perfectly reasonable to train an AI algorithm on images that exceed the diffraction limit of the instrument used to take the image being deconvolved.
Alan_Brunelle
Tim Hawkes:
Wei-Hao Wang:
Based on things I have seen thus far on the internet in the past few days and my own trial of it, I feel it is a good convenient tool in many ways. I probably will run it on every image from now on.  I think the most powerful part of it is the capability to shrink stars and to correct for minor aberration.  The fact that it does this differently in different areas of an image means it can deal with minor focal plane tilt and off-axis aberrations.  These can all be done with traditional methods, but those were highly tedious works.  Now it's basically just one click and a couple minutes of wait.  In some sense, you are upgrading your optics (and your collimation skills) with just a sub-$100 plugin.  For those who uses PI, this is a bargain.  (Sorry to those who don't use PI.  I really hope there is a Photoshop version.)

On the sharpening part, I am less excited than many other people.  (Still excited, but just not that much.).  My criterion is that I don't want deconvolution artifacts.  As a professional astronomer, I have seen enough images from HST and 10-m class telescopes.  I know what real high-resolution, sharp images should look like.  And previous amateur images undergone deconvolution are just not like that. There are either lots of artifacts, or just look unnatural (comparing to real high-resolution images). Because of this, I was never a fan of deconvolution for amateur astrophotography. So, the question I would like to ask for BXT is: when the strength of BXT is tuned down to a point where there are no clear indications of artifacts and the look remains natural, is BXT still better than a standard sharpening tool?  My answer to this question at this moment is: it's better, but not "gamechangingly" better.  I probably would say that BXT can do 20% to 40% better (don't ask me how to define this) than a traditional skillful sharpening.  This 20% to 40% difference alone may not justify the cost (plus the requirement of PI). However, to do that with traditional sharpening, a lot of steps are required and one really has to be careful and "skillful."  Now it's just one click and its 20% to 40% better.  I definitely will not say no to it.

These are my temporary conclusions thus far.  After I try more and see more, my thoughts may change.

That is a very interesting point that you raise in the first paragraph.   As long as the geometric errors across an image are consistent for all the frames integrated together (e.g some tilt) then maybe software can compensate for slight alignment errors etc and correct accurately enough wrt star shape.

Similarly with deconvolution to recover image sharpening?  If the PSF for can be calculated accurately enough across the entire image accounting for all the local tilt distortions etc  ---and most importantly is consistent enough - then will it be possible to iterate to (almost)  just as sharp an image from a slightly misaligned telescope as from a perfectly optically aligned telescope?

I wanted to comment on what @Wei-Hao Wang and @Tim Hawkes stated in the above exchange. First, I agree with what they say. If one is trying to assess whether BXT maintains a sense of accuracy and reality in the outcome of its process, I think the best approach is to compare its output to images that have a bona fide foundation of structure better than what can be achieved with the scope you are using. You will learn both what BXT can do and what it cannot, and, typically, what strength of application accomplishes what you are looking for in your image (and what not to do, if you so care!). I will repost the image that I posted in the other BXT thread to illustrate what I found with BXT. I agree with Wei-Hao that deconvolution often yields a less-than-desirable outcome, and I often see images that overuse the method or use it incorrectly. One should never be able to tell, at the level of a full-frame view of the image, that deconvolution has been used. And deconvolution can take a lot of iteration to get right, if it can work on any particular image at all. Tim is correct that BXT's advantage is that it works locally to assess the optical defect and corrects the image locally. I don't know what the scale of the "looking" is, but it is better than the single PSF for the whole frame that Deconvolution uses. And BXT does it a whole lot faster. I do wish that, once executed, BXT reported the PSF diameter used when the automatic PSF checkbox is ticked. This could be helpful to anyone wanting to try the manual PSF mode.

Below is my comparison of the BXT result against NXT "Detail" and against an unprocessed image. No star sharpening or noise reduction was chosen. These are compared to the Hubble image of the same parts of NGC 891. The best way to know what is going on is to compare images that are linear and processed as instructed (in the linear mode). I disagree with the suggestion above that the best way to compare is after the full processing run has been completed for each image. BXT should be judged on what it does at the point in the processing where it does its job. Assessing BXT at the end of a processing run gives way too many other PI functions a chance to spoil what BXT did well, and leaves too much to the skill of the imager who processed it.
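
This "fingerprint" comparison can also be made semi-quantitative. A rough sketch of the idea follows: assuming the sharpened result and a Hubble reference crop have already been registered and resampled to the same pixel grid (in practice the hard part), correlate only their small-scale structure. Genuinely recovered dust lanes should raise that correlation; invented detail should not. The demo data below is synthetic and the numbers are only illustrative:

```python
# Rough sketch of making the "compare to Hubble" check semi-quantitative.
# Assumes the two images are already registered and resampled to the same pixel
# grid; the demo arrays below are synthetic stand-ins, and only the correlation
# idea is the point.
import numpy as np
from scipy.ndimage import gaussian_filter

def small_scale_correlation(img, ref, sigma=4.0):
    """Pearson correlation of the high-pass (small-scale) components only."""
    hp_img = img - gaussian_filter(img, sigma)      # remove large-scale glow
    hp_ref = ref - gaussian_filter(ref, sigma)
    a, b = hp_img.ravel(), hp_ref.ravel()
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Synthetic demo: "ref" stands in for fine dust-lane structure; "honest" recovers
# a smoothed version of it, "invented" is equally sharp but unrelated detail.
rng = np.random.default_rng(3)
ref = gaussian_filter(rng.normal(size=(256, 256)), 1.5)
honest = gaussian_filter(ref, 1.0) + rng.normal(0, 0.02, ref.shape)
invented = gaussian_filter(rng.normal(size=(256, 256)), 1.5)

print("recovered detail vs reference:", round(small_scale_correlation(honest, ref), 2))
print("invented detail  vs reference:", round(small_scale_correlation(invented, ref), 2))
```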

The Data Comparison:
This is a very heavily enlarged image. Also: 12 inch Newtonian, f/4, ASI0710 MC. Zero processing other than a crop prior to the application of BXT or NXT, both of which acted on the debayered color image. HT was done only after each function, to be able to show the results here. The quality of this slide image is affected somewhat by my slide-making skills and the resolution limits of the program's output.
[Attached image: Slide for undrizzled data]

This was posted on the other thread discussing how to use BXT. Hopefully reposting is not a waste, but since it was posted I have seen others asking to see a comparison of the BXT effects against Hubble. If you want to read my assessment of what this image shows, go to that post and read it. I believe that this object is one of the best for assessing BXT functionality, because the dust threads, if generated falsely, could never by chance duplicate all the real features. This should disprove the idea that BXT is simply randomly creating features. Yet both BXT and NXT can do so if pushed too hard, though it is very hard to do so with BXT (data not shown, for lack of my time).

Also, do not ignore the fact that the "Detail" function of NXT can do a lot of what BXT does!

Alan
Die_Launische_Diva 11.14
Arun H:
Roman Pearah:
Rather it's that the blurred input data isn't blurred Hubble/JWST data but rather ground-based data. So the concern expressed is that it's not just learning to guess what the blurring is hiding but also learning how to guess how the hidden stuff itself would look if you had an optical system with a better diffraction limit.


I don't think the theoretical resolution increase even by traditional deconvolution  is limited by diffraction. These same questions were asked when traditional deconvolution was introduced. For example, see this paper by Lucy:

https://adsabs.harvard.edu/full/1992A%26A...261..706L

Given that, it seems perfectly reasonable to train an AI algorithm on images that exceed the diffraction limit of the instrument used to take the image being deconvolved.

Shouldn't the diffraction limit set a lower bound on the resolution uncertainty of the recovered image?
HegAstro 12.28
Die Launische Diva:
Shouldn't the diffraction limit set a lower bound on the resolution uncertainty of the recovered image?


Nope... read the paper attached. The fundamental factor limiting superresolution is not diffraction but noise. In our case, for good images, the limiting noise is photon statistics. Therefore, well-taken images from large-aperture instruments or with long integration times will be capable of greater resolution increases.
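
That photon-statistics argument can be illustrated with a small numerical experiment: compare the expected blurred profiles of one point source versus two sources separated by less than the PSF FWHM, and ask when Poisson noise allows the two cases to be told apart. This is only a toy illustration of the noise argument, not a reproduction of Lucy's analysis:

```python
# Toy 1-D illustration: whether structure below the PSF/seeing scale can be
# distinguished depends on photon statistics, and the discrimination SNR grows
# as sqrt(N). Not a reproduction of Lucy (1992).
import numpy as np

x = np.arange(201)
sigma = 10.0 / 2.355                       # PSF FWHM = 10 px
psf = np.exp(-((x - 100) ** 2) / (2 * sigma**2))
psf /= psf.sum()

def blurred(positions, total_counts):
    """Expected counts/pixel for equal-flux sources at the given pixel positions."""
    scene = np.zeros_like(x, dtype=float)
    for p in positions:
        scene[p] += total_counts / len(positions)
    return np.convolve(scene, psf, mode="same")

for total_counts in (1e2, 1e4, 1e6):
    one = blurred([100], total_counts)              # single source
    two = blurred([96, 103], total_counts)          # pair closer than the FWHM
    # Discrimination SNR between the two hypotheses under Poisson noise.
    snr = np.sqrt(np.sum((two - one) ** 2 / np.maximum(one, 1e-12)))
    print(f"{total_counts:8.0e} photons -> one vs. two sources separable at SNR ~ {snr:5.1f}")
```

With few photons the two hypotheses are statistically indistinguishable; collect enough photons and the same sub-FWHM separation becomes easy to tell apart, which is the sense in which noise rather than diffraction sets the limit.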
 