Blurxterminator… a game changer? [Deep Sky] Processing techniques · Steeve Body

TimH
Shawn:
I use both StarXTerminator and NoiseXTerminator. If I understand it correctly, SXT distinguishes which pixels belong to a star and then replaces those pixels with something based on the surrounding background, so that the image is smooth and continuous. Does it create any structure underneath the stars? I don't think so, but I am not sure. I also think NXT tries to decide which pixels are noise and which are not, then smooths out the noisy pixels and preserves what it thinks are the actual details. It does not create details, but it can enhance them.

What about BlurXT? Does it generate real nebulosity details because Hubble has imaged them, even though my blurry image has no power to resolve those details? I mean, if I feed the center of my Eagle Nebula image to BlurXT, will it look like the iconic Pillars of Creation image? Will the details be scientifically correct (after all, the training data includes Hubble's Pillars of Creation), or scientifically incorrect but realistic-looking?

I understand BlurXT to be basically an enhanced neural-net version of (probably) Richardson-Lucy deconvolution, which many of us use already. So while it is certainly an improvement and will be easier to use, it should probably be seen as incremental rather than groundbreakingly new. While the neural net is informed in a general way by the entire zoo of astro objects, the only actual data it should be pulling the final image from is what you yourself have presented it with. In other words, if I understand it correctly, there should be zero need to worry about it ever pulling data towards any specific image used in its training set.

Deconvolution itself is of course a perfectly respectable mathematical process for recovering details that are buried, but nevertheless really are present, in what may appear to be just a relatively blurry image. The deconvolution trick is to use real data (e.g. to actually measure the PSF of stars as they appear in the target image) to derive a model of exactly how your particular setup (telescope etc.) has been distorting the data on a given occasion within a particular part of the image, i.e. to characterise the particular distorted PSF that is local to the parts of the image you wish to deconvolve. The relatively blurry image can then be modelled as a function of the real underlying image and of the modelled distorting effects of your setup. The process works iteratively to shift the PSF of the target parts of the image towards a more 'normal' expected shape.
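To make that concrete, the basic iterative scheme (Richardson-Lucy, which is what PI's tool and probably BlurXT build on) can be sketched in a few lines of numpy. This is a toy sketch with a synthetic Gaussian PSF, not anyone's actual implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=21, sigma=2.0):
    """Synthetic Gaussian PSF; in practice you would measure this from
    stars in the image (e.g. with DynamicPSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Bare-bones Richardson-Lucy: repeatedly re-blur the current estimate,
    compare it with the observed data, and correct the estimate."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + eps)  # where is the estimate too blurry?
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Real tools add regularisation, noise handling and stopping criteria; run too many iterations of this bare loop on noisy data and it happily amplifies the noise into ringing, which is the classic artifact failure mode.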

Obviously standard deconvolution can't do miracles, and it creates artifacts if you push it too far. All the usual limitations apply: if you have sampled at, say, 1.5 arcsec/pixel, then don't expect deconvolution to find details down below 2 arcsec. Similarly, conventional deconvolution only really works well on regions of high SNR, and it doesn't like point sources or discontinuous functions like stars, so normally stars and noisy parts of the image are masked off. If your image is blurry and noisy, it won't work. But when used properly, plain PI Deconvolution can uncover and resolve details in a really quite dramatic way, without creating artifacts, which is of course something you can always check afterwards against standard professional images.
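For illustration, the star masking mentioned above is often just a robust threshold plus a little dilation. One simple toy sketch in numpy/scipy (the 5-sigma threshold and 2-pixel growth here are arbitrary choices, not anyone's recommended settings):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def star_mask(img, k=5.0, grow=2):
    """Flag pixels more than k robust-sigmas above the background,
    then grow the mask a little so star halos are covered too."""
    med = np.median(img)
    sigma = 1.4826 * np.median(np.abs(img - med))  # MAD-based robust sigma
    return binary_dilation(img > med + k * sigma, iterations=grow)
```

Deconvolution would then be applied only outside the mask (and, in practice, also weighted towards high-SNR regions).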

Of course I am expecting BlurXT to be a significant advance over the above because of the neural-net feature, but the same limitations on SNR and sampling will apply. One of the basic problems with deconvolution, which BlurX does appear to address, is getting a better handle on exactly what 'normal' the iterative improvement should minimise towards, and that is where the neural-net aspect should bring a big improvement. In addition, watching Adam Block's video, the program also includes some specific and powerful tools for dealing with stars, and separately with the stars themselves and their haloes. I am pretty excited about it.

Tim
CCDnOES 5.61
Scotty Bishop:
For those who are saying that BXT replaces stuff, here is an easy enough test


I think you misunderstand what those critics mean by "replaces". I think most people feel that the "replacement" might be by way of creating processing artifacts, not some (online or other) library of object detail.  For what it is worth, on the three images I have processed so far using BX (Abell 12, NGC 660, and SH2-115 - see my page), I have not seen anything I would consider to be significant artifacts.

Correct me if I am wrong, but my understanding is that Russ apparently used Hubble and other professional data when he developed the plugin, but only to verify that the amateur images he was processing were revealing detail that was actually there. He did that by comparing the BX-processed images to the much higher resolution images from professional instruments.
AccidentalAstronomers 11.51
Bill McLaughlin:
Correct me if I am wrong but my understanding is that Russ apparently used Hubble and other professional data


Only to train the neural network. There's no Hubble data whatsoever in the product.
phsampaio 3.61
I agree with the general sentiment that BlurXTerminator is a game changer. I have only used it on a single image (my latest M42 picture), but the results are amazing.

The nebulosity was deconvolved pretty well, though the stars were the main thing that wowed me. The ability to de-emphasize stars in a linear state through a deconvolution process that leaves almost no artifacts is amazing. I could control the stars so well in the linear state that I didn't even need to use StarXTerminator or StarNet to separate the stars from the nebulosity. Besides, with my setup, using SXT or StarNet always left several artifacts, especially on big and bright stars.
neverfox 2.97
Ironically, given what I take to be the case about what BXT actually is, it's the traditional iterative deconvolution methods that are more likely to end up less faithful to the image data. If there's anything to be bugged by with the release of BXT it's that we accepted those results. They may have been, in a sense, more explainable or "reversible," but I think it would be wrong to say they were, on balance, more faithful.
AndreVilhena 4.42
Roman Pearah:
Ironically, given what I take to be the case about what BXT actually is, it's the traditional iterative deconvolution methods that are more likely to end up less faithful to the image data. If there's anything to be bugged by with the release of BXT it's that we accepted those results. They may have been, in a sense, more explainable or "reversible," but I think it would be wrong to say they were, on balance, more faithful.


@Roman Pearah   Actually, I have been having similar thoughts. The current deconvolution process is very trial-and-error, and one has to have a lot of experience and/or spend a lot of time to get the optimal result. Even so, unless you do a design of experiments or use Taguchi methods, I really doubt anyone can ever reach that optimum. In more extreme cases, we get quite far off and end up with artifacts.
Therefore, it is not surprising that a more automated and mathematical approach yields better results and is *probably* less prone to artifacts (provided the training is properly done). To a certain extent we are taking away the human factor: we will probably lose some of the artistry but gain some faithfulness, as you mention.
All in all, what these kinds of tools are doing is raising the bar. Some years ago we'd struggle to get a decent image; now that NR and decon are more effective, we can go after other things: fainter objects, deeper photos of the same objects, etc. Who knows what we may find?
TimH
Once we all start using BlurXT routinely, as an improved and optimised deconvolution process, I wonder to what extent it will be able to 'fix' geometrically poor but nevertheless consistent data collected in the past.

What I mean is: suppose we have had a tilt problem, for example. Provided it was the same in all of the frames integrated together, will BlurXT deconvolution (which I think is optimised locally across the image) at least partly fix the problem by applying different and appropriate PSF corrections across the image?
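The premise can at least be checked directly: with tilt, the PSF width really does vary across the field, and it is measurable per region. A toy numpy sketch with synthetic stars (all numbers made up) that estimates a local Gaussian width from image second moments:

```python
import numpy as np

def add_star(img, x, y, sigma, amp=1000.0):
    """Paint a synthetic Gaussian star of peak amplitude amp onto img."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    img += amp * np.exp(-((xx - x)**2 + (yy - y)**2) / (2 * sigma**2))

def moment_sigma(cutout):
    """Estimate a circular-Gaussian sigma from the cutout's second
    moments (FWHM is ~2.355 times this)."""
    yy, xx = np.mgrid[0:cutout.shape[0], 0:cutout.shape[1]]
    total = cutout.sum()
    cx = (cutout * xx).sum() / total
    cy = (cutout * yy).sum() / total
    var = (cutout * ((xx - cx)**2 + (yy - cy)**2)).sum() / total
    return np.sqrt(var / 2)

# Simulated tilt: sharp stars on the left of the field, soft on the right.
field = np.zeros((64, 192))
add_star(field, x=32, y=32, sigma=1.5)
add_star(field, x=160, y=32, sigma=3.0)

sigma_left = moment_sigma(field[8:56, 8:56])      # ~1.5 px
sigma_right = moment_sigma(field[8:56, 136:184])  # ~3.0 px
```

A locally optimised deconvolution would then use a different PSF model per tile, which is presumably roughly what BlurXT's local correction amounts to.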
jhayes_tucson 22.82
BlurXTerminator is indeed a revolutionary implementation of AI, and I think that Russell trained it well. I tried it on the Lum data from my most recent NGC 1365 image, and the result totally blows away what I was able to do with the standard deconvolution tool. I've examined the details up close by blinking between the original data and the sharpened data, and I've made comparisons with images taken with much larger scopes to see how well the details correlate, and it looks quite good. For a very long time, the holy grail of image processing has been to reconstruct what the image would look like in the absence of the atmosphere and diffraction, and it looks like Russell has made a huge step forward with this approach. It is certainly possible that, like all processing algorithms, it may produce some artifacts; but from what I can see, this algorithm does a better job than anything else I've ever used. Here is what a VERY zoomed view of the raw stacked data looks like, followed by the processed data. The results are extremely impressive! Clearly, I'm going to have to completely reprocess this data set.

Screen Shot 2022-12-20 at 11.28.53 AM.png
Raw stacked Lum data with an average of about 1.7" FWHM



Screen Shot 2022-12-20 at 11.29.25 AM.png
BlurXTerminator processed...wow!


John
rockstarbill 11.02
John Hayes:
BlurXTerminator is indeed a revolutionary implementation of AI and I think that Russell trained it well.  I tried it on the Lum' data from my most recent NGC 1365 image and the result totally blows away what I was able to do with the standard deconvolution tool.

....

John



Great image and example John. BXT is definitely a killer tool for image processing.
BenKolt 1.43
I'd like to add to what John Hayes just posted.  This is indeed game changing in my view.  To the limited extent to which I have applied it to my data thus far, I've seen far better deconvolution from BlurXTerminator than anything I've ever conjured up using PI's Deconvolution method.

Now that I have some time off for Christmas, I hope to put this new process to the test on more of my data, and if I have anything interesting to add to this discussion, I certainly will.

Having just started using NoiseXTerminator and StarXTerminator, my PI workflow has undergone monumental changes in a short time!

Ben
rnshagam 0.00
Wei-Hao Wang:
On the Photoshop issue, there is some discussion about this on CN. There I argued that linear data are not absolutely necessary, and someone tried it on stretched images and confirmed what I said.  Furthermore, you can absolutely work with linear data in Photoshop, using adjustment layers.


I use StarXT/NoiseXT in Affinity Photo, with stacking and preprocessing in SiriL. I agree that there, too, one can work on linear data in a non-destructive layer. Too bad RC isn't interested in releasing a PS/AP version. I really don't want to have to learn PixInsight right now just to get the sharper images, even though it is the Cadillac of processing programs.
whwang 11.64
Based on what I have seen on the internet in the past few days and my own trial of it, I feel it is a good, convenient tool in many ways. I will probably run it on every image from now on. I think the most powerful part of it is the capability to shrink stars and to correct for minor aberration. The fact that it does this differently in different areas of an image means it can deal with minor focal-plane tilt and off-axis aberrations. These can all be done with traditional methods, but that was highly tedious work. Now it's basically just one click and a couple of minutes of waiting. In some sense, you are upgrading your optics (and your collimation skills) with just a sub-$100 plugin. For those who use PI, this is a bargain. (Sorry to those who don't use PI. I really hope there is a Photoshop version.)

On the sharpening part, I am less excited than many other people. (Still excited, just not that much.) My criterion is that I don't want deconvolution artifacts. As a professional astronomer, I have seen enough images from HST and 10-m class telescopes. I know what real high-resolution, sharp images should look like. And previous amateur images that have undergone deconvolution just don't look like that: there are either lots of artifacts, or they look unnatural (compared to real high-resolution images). Because of this, I was never a fan of deconvolution for amateur astrophotography. So the question I would like to ask of BXT is: when the strength of BXT is tuned down to the point where there are no clear indications of artifacts and the look remains natural, is BXT still better than a standard sharpening tool? My answer at this moment is: it's better, but not "gamechangingly" better. I would say BXT can do 20% to 40% better (don't ask me how to define this) than traditional skillful sharpening. This 20% to 40% difference alone may not justify the cost (plus the requirement of PI). However, to do that with traditional sharpening, a lot of steps are required and one really has to be careful and "skillful." Now it's just one click and it's 20% to 40% better. I definitely will not say no to it.

These are my temporary conclusions thus far.  After I try more and see more, my thoughts may change.
jhayes_tucson 22.82
Richard Shagam:
Wei-Hao Wang:
On the Photoshop issue, there is some discussion about this on CN. There I argued that linear data are not absolutely necessary, and someone tried it on stretched images and confirmed what I said.  Furthermore, you can absolutely work with linear data in Photoshop, using adjustment layers.


I use StarXT/NoiseXT on Affinity Photo with stacking and preprocessing on SiriL.  I agree that there, too, one can work in linear data in a non-destructive layer.  Too bad RC isn't interested in releasing a PS/AP version--I really don't want to have to learn PixInsight right now, even though it is the Cadillac of processing programs just to get the sharper images.  

Rich,
There are SO many tutorials, books, live classes, and videos now that when you finally do get around to trying PI, you'll wonder why you waited so long.  I won't sugar coat it.   There is definitely a learning curve but it is a world class piece of code and I predict that you'll really like it once you get over the initial hump.  There is a good reason that it's become the standard for serious processing.  That's why I want to give you a gentle bump to give it a try.

John
Bobinius 9.90
Hi John,

This is impressive, of course. I had the same result when applying BlurXT to my NGC 7331, maybe even more impressive since my data was shot from the city, with seeing incomparable to that of the Chilean skies (and it took me around 30 seconds). And while on your previous Top Pick version of the image I can clearly see that you used classical Deconvolution, it is impossible to tell that you applied BlurXT to the current version.

The appropriate and interesting comparison would be between the BlurXT final sharpened version and your final classical processing. Comparing a raw, unprocessed linear image with an image sharpened by an AI capable of working on linear data is always going to produce a difference, for all of our images. Agreed, it looks like a final processing, since that is what the AI has been trained to produce.

My fundamental question is: if your BlurXT version is more detailed than your manual version, how do you know that the supplementary details come from information contained in your image and were not inferred/produced by the AI model based on its Hubble training? And how can we tell, especially when you don't have a Hubble image to compare to? The difficult part is also how we analyse it, since the differences appear at small scales, which are cumbersome to analyse correctly by eye.

In my case, BlurXT produced supplementary details in the galaxy core of my NGC 7331 (I have to look for the full project before presenting you guys the images). It does a better HDR, but since it is linear, I risk losing some data while stretching (or maybe it got lost in my processed version too). How do I tell if the new details are real, i.e. actually present in the information captured by my image?

And if the AI managed to sharpen the galaxy better than I did (or if you think it managed to sharpen it better than you were able to), do we need to replace our usual processing effort with the application of a BlurXT process?

CS,

Bogdan
TheSpice 0.90
John Hayes:
BlurXTerminator is indeed a revolutionary implementation of AI and I think that Russell trained it well.  I tried it on the Lum' data from my most recent NGC 1365 image and the result totally blows away what I was able to do with the standard deconvolution tool.  I've examined the details up close by blinking between the original data and the sharpened data--and I've made comparisons with images taken with much larger scopes to see how well the details correlate--and it looks quite good.  For a very long time, the holy grail of image processing has been to reconstruct what the image should look like in the absence of the atmosphere and diffraction and it looks like Russell has made a huge step forward with this approach.  It is certainly possible that like all processing algorithms, there may be some artifacts; but from what I can see, this algorithm does a better job than anything else that I've every used.  Here is what a VERY zoomed view of the raw stacked data looks like followed by the processed data.  The results are extremely impressive!  Clearly, I'm going to have to completely reprocess this data set.

....

John

Hi John,

This really looks fantastic. I would be interested to know what standard deconvolution produces with this image. In my experience, the details of the nebulae are slightly better with BXT than with standard deconvolution. So far I see the strength of BXT mainly in the correction of the stars.

Many greetings
Andreas
Die_Launische_Diva 11.14
Hello John,

Your result is indeed impressive. Thank you, and all of the expert members who offer their experience in this sensitive but intellectually stimulating matter.

I would like to hear your opinion about the lack of sharpening on the diffraction spikes. I understand that many astrophotographers are not fond of diffraction spikes but I am not sure if the effect on the diffraction spikes can be said to be the result of a deconvolution algorithm, classical or machine learning. Of course maybe I am nitpicking and/or I have a misconception of what the term deconvolution means.

I think my wetware neural network has already learned to identify BXT images just by looking at the diffraction spikes
ks_observer 1.81
I've made comparisons with images taken with much larger scopes to see how well the details correlate--and it looks quite good.

Thank you for this input -- it is very helpful in my evaluation as to what this new tool is doing!
Your images are amazing as always!
the_blue_jester 0.00
Hi,

First time post and sorry if this has been answered (here or elsewhere).....

The results are indeed impressive. However, is this really detail lost to blurring that has been recovered by deconvolution, or is there an element of "detail" that looks good but isn't really a true representation? I am not going down the "fakery" route here, simply querying how accurate the recovered detail is. Also, I am not trying to question deconvolution as a concept, but specifically this BlurXTerminator (as it does look so impressive).

I don't have the tool, and wondered if anyone has taken a picture with good detail (e.g. Hubble or JWST), convolved it, and then run BlurXTerminator to see if it gives a faithful recovery of the detail?

To me, at the moment, it's a bit like the facial reconstructions on forensic series. The skulls reconstructed into faces often look good, but I never really see a comparison to an actual photo once the person has been identified.

If it did do a good job of deconvolving a convolved photo, I'd certainly pay for this.

Just a query....

Paul
phsampaio 3.61
I began a reprocessing spree after the release of BXT, and got interesting results.

First, the tool is very dependent on the quality of the data it is working on. For instance, my latest M42 image benefited a lot from BXT. But that's one of the brightest nebulae out there, so high SNR in the nebulosity is to be expected. On the other hand, my M33 image was only an hour or so of integration, very noisy and with much lower SNR. Even cranking up the deconvolution, I couldn't get anything more than a tiny bit of sharpening, and nothing more than I could already do with a regular Deconvolution process. The big difference was that BXT did not leave any artifacts, while decon did.

Also, I tried to reprocess my Dragons of Ara nebula to see if I could bring out more detail, but the results were essentially the same as I had with my deconvolution process back then. This image had very high SNR and lots of integration time, and was the most ambitious project I had done yet; to process it, I spent maybe a few hours fiddling with all the deconvolution options. So, the same results, but much simpler and quicker. And it also de-emphasizes the stars, which is a huge plus.

My preliminary conclusion is that BXT gives similar results to a well-configured (i.e. optimized) Deconvolution process, given data of good quality (high SNR, long integration times, etc.). But the key here is that while BXT uses a neural network to approximate the disturbances that blurred the image, a normal Deconvolution process needs a lot of tinkering to actually work, and sometimes it's just frustrating and time-consuming to find the perfect deconvolution parameters.
CCDnOES 5.61
Timothy Martin:
Bill McLaughlin:
Correct me if I am wrong but my understanding is that Russ apparently used Hubble and other professional data


Only to train the neural network. There's no Hubble data whatsoever in the product.

I thought that is what I said: to train, but also to verify that the detail was real and not created by the algorithm. The OP seemed to think that Hubble data was somehow used in the images themselves, which is totally wrong.
andymw 11.01
OK, I succumbed ... I have purchased BlurXTerminator, as it is both a really easy way to apply gentle deconvolution to my images and to tame my stars at the same time. I tested it on a few images and I like the results. Below is a slight reprocess of my Wizard image using BXT at the linear stage. I find it much easier than other deconvolution methods, although I may have overdone the sharpening of the nebula a tad.

CombinedStarsd.png

Full image here:

Fiddling with my scope and captured the Wizard (Added a splash of colour)
rockstarbill 11.02
BlurXTerminator has definitely created a lot of discussion, rightfully so, but some of it has been extremely negative, driven by competitiveness, and has brought highly incorrect rumors about what the tool is or is not doing into the discourse, which has made discussion of this tool not very enjoyable.

If one likes the tool, awesome: use it and have fun. That's the name of the game here in AP. If one doesn't like the tool, that's fine; don't use it. More importantly, though, don't spread incorrect information about it, and don't shame those who do elect to use it.
rockstarbill 11.02
Andy Wray:
OK, I succumbed ... I have purchased BlurXterminator as it is both a really easy way to apply gentle deconvolution to my images and tame my stars at the same time.  Tested on a few images and I like the results.  Below is a slight reprocess of my wizard image using BXT at the linear stage.  I find it much easier than other doconvolution methods, although I may have overdone the sharpening of the nebula a tad.

If you are looking for feedback on your image you may wish to post that in its own thread.
TimH
Just another test, this time on relatively poor data. I thought I'd try directly comparing BlurXT with normal PI deconvolution on a starless Ha image of the "running dog" part of the Heart Nebula, IC 1805.

At the default 0.9 setting for sharpening, BlurXT does a much more impressive job than my attempt at PI deconvolution (in which the PSF was derived from dynamic measurement of stars local to the part of the image shown).

image.png

image.png

image.png

The starting linear image derives from integrating about 140 frames, taken over more than a year, at an average FWHM of ~2.5 arcsec and sampled at 1.05 arcsec/pixel. So it is in no way an image ideal for deconvolution; the PSF would be some sort of average, while the correct procedure would have been to deconvolve the frames from each imaging session separately, each with their own dynamic PSF, and then combine. At 1.05 arcsec/pixel it would be unreasonable to expect sharpening to do better than Nyquist allows ... so maybe an improvement from 2.5 to ~1.8 arcsec is the best possible?
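For concreteness, the back-of-envelope version of that sampling argument, using the common ~2 pixels-per-FWHM rule of thumb (the exact factor is debatable, which is why such estimates differ):

```python
# Rough sampling sanity check for the data described above.
pixel_scale = 1.05   # arcsec per pixel
measured_fwhm = 2.5  # arcsec, averaged across all sessions

# Rule of thumb: you need roughly 2 pixels across a FWHM, so the
# smallest FWHM this sampling can honestly represent is ~2 pixels.
fwhm_floor = 2.0 * pixel_scale               # 2.1 arcsec
recoverable_blur = measured_fwhm - fwhm_floor  # ~0.4 arcsec of headroom
```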

BlurXT certainly seems to do a remarkably good job even on this far-from-ideal starting image, significantly better than the deconvolution tried here.

I suppose the key question is: has it hit the artifact point when run at 0.9?
andymw 11.01
If you are looking for feedback on your image you may wish to post that in its own thread.


I'm not sure I want feedback on the image, because I know it is pretty crap, however BlurXterminator did improve it a bit and that was my point.