RC BlurXTerminator: poor star shape/eccentricity of input images has little deleterious impact on the quality of the final output? [Deep Sky] Acquisition techniques · Tim Hawkes

TimH
Just interested to know if folks are on the same page when using this tool?

I have been working through hundreds of recent and past image files to better understand which parameters of the input images are most critical to getting the best possible final images whenever RC BlurXTerminator is included in the workflow.

Starting from integrated images, the standard test method here was to a) crop to remove drift/dithering edges, b) use the PixInsight GRAD tool, and c) run dynamic background extraction before running BlurXTerminator.  BXT was then first run with star correction only, followed by non-stellar sharpening only at a setting of 0.6 (no stellar size reduction at all).

Broad conclusions thus far -

1)  Image scale/sampling is very important.  Given an image scale of < 0.5 arcsec/pixel, BlurXT can do a superb job.  Some of the processed images - for example of bright galaxies - appear to be at an equivalent resolution of about 1-1.2 arcsec and clearly show most of the features identifiable in NASA/ESA HST pictures of the same objects, which is pretty impressive.  However, at high sampling rates there is also a price to pay in terms of imaging time, etendue etc. to deliver input images with sufficient SNR.  On the other hand, when sampling at an image scale of > 1.5 arcsec/pixel - and especially under good skies - it is quite likely that BlurXT will deliver relatively little improvement over the original image.  All this is perhaps unsurprising given Nyquist (a rough sketch of the arithmetic follows this list).

2)  The (PI-measured) FWHM resolution of the input image matters.  The very best final images (judged qualitatively, combined with an estimate of final FWHM sharpness) were derived from BlurXT processing of the sharpest input images (typically here with FWHM 1.9-2.1).  This seems unsurprising - although the perception of improvement was often greater from more blurry starting points.

3)  Eccentricity (at least up to a level of about 0.65) and star-shape distortions (e.g. coma from imprecise collimation) seemed to matter hardly at all?!  Following the 'correction' step, BlurXT seemed to go on to deliver an image that appeared - to me at least - about as good as it could be.  In fact, probably the best final image - based on comparison with HST images - was derived from an image with average eccentricity 0.62.

4)  As with any image, the quality isn't just a question of resolution.  SNR is always key as well.
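To make the sampling point in 1) concrete, here is that rough sketch of the plate-scale / Nyquist arithmetic (a sketch only - the pixel size and focal length below are made-up example numbers, not a description of my actual setup):

```python
def image_scale_arcsec_per_px(pixel_size_um: float, focal_length_mm: float) -> float:
    """Plate scale: 206.265 * pixel size (microns) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

def nyquist_limited_fwhm(scale_arcsec_per_px: float, samples_per_fwhm: float = 2.0) -> float:
    """Smallest FWHM (arcsec) the sampling can honestly represent,
    assuming roughly two samples across the FWHM (a Nyquist-style criterion)."""
    return samples_per_fwhm * scale_arcsec_per_px

# Hypothetical example: 3.76 micron pixels on a 1900 mm focal length scope
scale = image_scale_arcsec_per_px(3.76, 1900.0)      # ~0.41 arcsec/px
print(f"Image scale: {scale:.2f} arcsec/px")
print(f"Nyquist-limited FWHM: {nyquist_limited_fwhm(scale):.2f} arcsec")
# At ~0.4 arcsec/px the floor is ~0.8 arcsec, so sharpening to ~1 arcsec is plausible;
# at 1.5 arcsec/px the floor is ~3 arcsec, which is why BXT has little room to work.
```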


The third conclusion did seem surprising - but it just seems that whatever correction process BlurXT runs is very good, and perhaps the NN is just very good at accounting for the range of common distortions?


So is it time for us big Newt owners to relax about getting completely accurate collimation  and throw those lovely Howie Glatters out?  Seems like sacrilege!

btw I have plenty of data and side by side comparisons at different image scales etc. should anyone want to see them..

Tim
Paulinho 3.01
Hi, Tim.
Whilst not as precise a test as your own, I can attest to your third conclusion and the benefits of BlurXterminator on images from a poorly collimated OTA.
Until recently I could not collimate my Edge HD 8" with the normal techniques - a tri-Bahtinov mask did the trick very nicely.  (As an aside, with a 'perfect' tri-Bahtinov collimation, my defocused star donuts are way off centre, so there is some other misalignment going on.  But given I get round stars now, I have the outcome I want).
In any case, my previously distorted stars were 'corrected' by BX.  It does a very nice job of cleaning those up; adjustments to the amount of correction and halo can be tweaked too of course.
Cheers.
Paul
RonaldNC 2.71
My 8" EdgeHD is pretty well collimated when using it native (F/10) and with a reducer (F/7), but I have a devil of a time getting it properly collimated with my HyperStar attached (F/1.9).  I used SkyWave and have gotten the center stars almost perfectly round, but still have some issues on the boundaries (probably sensor tilt?).

After some experimentation and asking some questions on Adam Block's forum, I started using BlurX using the "correct only" as the very first step after WBPP produces the calibrated images.  It's amazing!  It seems to completely fix the deformed stars with no negative effects.  After running the "correct only", I continue with my normal processing chain... DBE, SPCC, BlurX, NoiseX, stretch, StarX, curves, etc.

I'm beginning to wonder if there is any value in continuing my quest for better physical collimation/tilt alignment.

Ron
Doug_Crowe 0.00
Ron Clanton:
My 8" EdgeHD is pretty well collimated when using it native (F/10) and with a reducer (F/7), but I have a devil of a time getting it properly collimated with my HyperStar attached (F/1.9).  I used SkyWave and have gotten the center stars almost perfectly round, but still have some issues on the boundaries (probably sensor tilt?).

After some experimentation and asking some questions on Adam Block's forum, I started using BlurX using the "correct only" as the very first step after WBPP produces the calibrated images.  It's amazing!  It seems to completely fix the deformed stars with no negative effects.  After running the "correct only", I continue with my normal processing chain... DBE, SPCC, BlurX, NoiseX, stretch, StarX, curves, etc.

I'm beginning to wonder if there is any value in continuing my quest for better physical collimation/tilt alignment.

Ron

I agree. I recently purchased a C6 that I use with a 0.63x reducer. I was about to make an attempt at my first collimation. I too use "correct only" as a first step and was amazed at how it improved the stars. I decided to put off any collimation attempts until it gets worse.
TimH
Ron Clanton:
My 8" EdgeHD is pretty well collimated when using it native (F/10) and with a reducer (F/7), but I have a devil of a time getting it properly collimated with my HyperStar attached (F/1.9).  I used SkyWave and have gotten the center stars almost perfectly round, but still have some issues on the boundaries (probably sensor tilt?).

After some experimentation and asking some questions on Adam Block's forum, I started using BlurX using the "correct only" as the very first step after WBPP produces the calibrated images.  It's amazing!  It seems to completely fix the deformed stars with no negative effects.  After running the "correct only", I continue with my normal processing chain... DBE, SPCC, BlurX, NoiseX, stretch, StarX, curves, etc.

I'm beginning to wonder if there is any value in continuing my quest for better physical collimation/tilt alignment.

Ron

Hi Ron,  Thanks.  I am sort of in the same boat, with a persistent eccentricity whose long axis is aligned in the RA direction.  I know that is due to a slight imperfection in the mount but - to be fair - it only manifests when I am pushing things to the limit, i.e. imaging at about 0.4 arcsec/pixel with a large telescope, close to overloading the mount, and achieving high resolution (sky not too bad).  Eccentricity can be minimised to about 0.45 - maybe less - by running more east-heavy; still checking that.  The expensive solution would of course be to invest in a better mount - but then the question arises of how much additional quality I would really stand to gain, and whether that would be worth it for an improvement that may well be real but is probably slight over what I can already achieve with an imperfect setup and BlurXT.  The variables are obviously impossible to control and no side-by-side experiments are possible, but the impression I get at the moment is that I can't myself see a quality difference between final images starting from E = 0.7 average input and those from E = 0.45 average input images.  But I don't have any perfectly focussed, high-SNR images at E = 0.35 to try with.

Maybe it's just a philosophical thing -- being an 80/20  person or a perfectionist?
TimH
Paul Larkin:
Hi, Tim.
Whilst not as precise a test as your own, I can attest to your third conclusion and the benefits of BlurXterminator on images from a poorly collimated OTA.
Until recently I could not collimate my Edge HD 8" with the normal techniques - a tri-Bahtinov mask did the trick very nicely.  (As an aside, with a 'perfect' tri-bahtinov collimation, my defocused star donuts are way off centre, so there is some other misalignment going on.  But given I get round stars now, I have the outcome I want).
In any case, my previously distorted stars were 'corrected' by BX.  It does a very nice job of cleaning those up; adjustements to the amount of correction and halo can be tweaked too of course.
Cheers.
Paul

Hi Paul,

Thanks for your take on this.  Just interested to know how others are using BlurXTerminator correction.  As per my answer to Ron - it's almost a philosophical thing.  How long do you labour after optical perfection when star correction seems to work so well?  I must say though that I have also found that it doesn't work in all cases - others have told me that it doesn't fix coma, for example - but, like you, I have found that it does seem to (apparently) perfectly fix some level of miscollimation, as well as the slight RA drift problem that I have with my mount.
TimH
Doug Crowe:
*** I agree. I recently purchased a C6 that I use with .63x reducer. I was about to make an attempt at my first collimation. I too use "correct only" as a first step and was amazed at how it improved the stars. I decided to put off any collimation attempts until it gets worse.

Thanks for your take on this also Doug.  I am sure that what we end up with is in some way less than would be possible from a perfect starting image.  But nevertheless, at some point maybe it is wise to stop, say 'good enough' and image, rather than spend too much time trying to fix problems?
TimH
Just thought that it might be illustrative to put up a few examples of images that were each derived from two input images, wherein each input had been corrected in BlurXT from starting eccentricity values between 0.45 and 0.7 before non-stellar sharpening, and the two images were then combined in PixelMath.  Starting FWHMs of the images varied from 1.8 up to 2.4.  After sharpening, the apparent resolution came down to about the 1.1-1.3 arcsec level, with the eccentricity of stars usually corrected to below 0.4.

image.png


The Whirlpool image combined 351 x 10s subs at FWHM 1.85, eccentricity 0.65 with a second image comprising 313 x 10s subs at FWHM 2.2, eccentricity 0.62.  Both images were BlurXT processed prior to addition in PixelMath, weighted as per their relative PSF weights.  The eccentricity of the final corrected image is about 0.34 on average.
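(For anyone wondering what 'addition in PixelMath weighted as per their relative PSF weights' boils down to, here is a minimal NumPy equivalent of that PixelMath expression - a sketch only: the file names and weight values are placeholders, and the weights are assumed to come from PixInsight's PSF-based weighting of each stack.)

```python
import numpy as np
from astropy.io import fits  # assumes the two BXT-corrected integrations are saved as FITS

# Placeholder file names for the two BlurXT-corrected integrations
img_a = fits.getdata("whirlpool_351x10s_bxt.fits").astype(np.float64)
img_b = fits.getdata("whirlpool_313x10s_bxt.fits").astype(np.float64)

# Placeholder relative PSF-based weights for the two stacks
w_a, w_b = 0.55, 0.45

# Weighted mean - the NumPy equivalent of the PixelMath expression
# (w_a*imgA + w_b*imgB) / (w_a + w_b)
combined = (w_a * img_a + w_b * img_b) / (w_a + w_b)

fits.writeto("whirlpool_combined.fits", combined.astype(np.float32), overwrite=True)
```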


image.png

The Pinwheel image combined 507 x 10s subs at FWHM 2.4, eccentricity 0.7 with a second image comprising 360 x 10s subs at FWHM 2.2, eccentricity 0.45.  Both images were BlurXT processed prior to addition in PixelMath, weighted as per their relative PSF weights.  The eccentricity of the final corrected image is about 0.45 on average - so less perfect shape correction in this instance.

image.png

The M106 image combined 686 x 10s subs at FWHM 2.0, eccentricity 0.62 with a second image comprising 355 x 10s subs at FWHM 1.8, eccentricity 0.66.  Both images were BlurXT processed prior to addition in PixelMath, weighted as per their relative PSF weights.  The eccentricity of the final corrected image is about 0.37 on average.

The best of the starting images - prior to BlurXT processing - appeared much less sharp, e.g. as below.  So while by no means perfect, I think it fair to conclude that BlurXT has yielded tremendous correction and improvement from modest (at least in terms of eccentricity) starting points - without adding obvious artifacts in comparison with NASA/ESA Hubble images - to the point that the detail might be considered 'enough'?

image.png
image.png

image.png
Paulinho 3.01
Tim Hawkes:
As per my answer to Ron  --it's almost a philosophical thing.  How long do you labour after optical perfection  when star correction seems to work so well ?

Hi, Tim and Ron.
Personal view.  As per general photography, get the best image you can when you take it.  Whilst much can be done in processing, nothing beats the best possible starting point.  I don't think that necessarily means a better mount etc. (although of course it can); it does point towards getting the best possible collimation, tracking/guiding, focus etc. with what we have.
Cheers.
Paul
TimH
Paul Larkin:
Tim Hawkes:
As per my answer to Ron  --it's almost a philosophical thing.  How long do you labour after optical perfection  when star correction seems to work so well ?

Hi, Tim and Ron.
Personal view.  As per general photography, get the best image you can when you take it.  Whilst much can be done in processing, nothing beats the best possible starting point.  I don't think that necessarily means a better mount etc. (although of course it can); it does point towards getting the best possible collimation, tracking/guiding, focus etc. with what we have.
Cheers.
Paul

Thanks Paul,

Yes I am sure that is right.  All that I really wanted to do here was to ask the question, to understand better, and to find out what the general view and experience is.  While I am getting very good results with BlurXT correction, that could well be peculiar to my own case and to the particular type of eccentricity generated in my setup, which is predominantly a kind of motion blur along the RA axis.  In other threads some folk have commented that correction doesn't work as well with coma, for example.

In addition I was hoping to see some examples of what BlurXT can do with input data that - unlike my own - is near as damn it perfect.  That would be useful to put my own E > 0.45 stuff in context.  Everyone says - including Russell Croman - that the better the data that goes in, the better the quality of the data that comes out.  Intuitively that just has to be correct - it is fundamental.  But I'd also love to be able to quantify in some way the level of improvement to expect versus input improvement - and of course with this hobby it's nigh on impossible to do controlled experiments.

So in practice - like you - I always religiously try to optimise collimation etc. before I start, and keep working for better input data, while being amazed at what the software can do with what it has been fed so far.

Tim
ScottBadger 7.61
I feel like I’m in the BX sweet spot. My home site is bortle 3 pushing 2, so I can get lots of signal, but seeing sucks. 2.25” is the very best, and very rare, with 3-3.5” being the average. So, of course, the better the data going in, the better what comes out, but also the better the SNR going in, the bigger the resolution difference coming out.

Cheers,
Scott
TimH
Scott Badger:
I feel like I’m in the BX sweet spot. My home site is bortle 3 pushing 2, so I can get lots of signal, but seeing sucks. 2.25” is the very best, and very rare, with 3-3.5” being the average. So, of course, the better the data going in, the better what comes out, but also the better the SNR going in, the bigger the resolution difference coming out.

Cheers,
Scott

SNR must be truly outstanding there.  Only occasionally do I get out to anything like a Bortle 3 or 4 site - where, if lucky enough to have clear skies for the visit, each precious sub is worth 20 or so of those at home.  Home does sometimes have the merit of better steadiness though - very occasionally I have had FWHM 1.8 - so I have made a merit of that and focussed more on getting good resolution in the brighter objects. tx Tim
dkamen 6.89
In my experience BlurX works better the more stars you have. Otherwise it really messes up small stellar-like features such as tiny galaxies, which it "corrects" in the wrong way, mistaking them for deformed stars or even doubles. A particularly difficult case for me has been NGC 4555, because it looks like an elongated star and has very few actual stars in its vicinity. When it was at the edge of the field, BlurX would turn it into a perfectly round star. Expanding the field somewhat (but with low SNR) turns it into a double. Finally, increasing SNR makes it look almost right, but too much like Saturn at low power - a circle with "ears" instead of an ellipse.
TimH
dkamen:
In my experience BlurX works better the more stars you have. Otherwise it really messes up small stellar-like features such as tiny galaxies, which it "corrects" in wrong way, mistaking them for deformed stars or even doubles. A particularly difficult case for me has been NGC 4555 because it looks like an elongated star and has very few actual stars in its viccinitty. When it was at the edge of the field, BlurX would turn it into a perfectly round star. Expanding the field somewhat (but with low SNR) turns it into a double. Finally, increasing SNR makes it look almost right but too much like Saturn in low power, a circle with "ears" instead of an ellipse.

That is really interesting.  You picked a particularly difficult example there - probably outside of the NN training set?  Another one is (at least was - maybe fixed now?) the Cat's Eye Nebula.  There, conventional deconvolution worked a lot better than BlurXT did.  In general though I am not seeing problems with it misidentifying faint fuzzies as stars, and just based on checking features off against HST images it seems to be doing a faithful job of rendering detail in galaxy cores - I am still amazed by how much it gets right.  cf the comparison below of an HST image of part of the M108 core and my effort (derived from ca 3h of less than perfect stacked 10s frames - FWHM 1.8, eccentricity 0.55 due to RA mount wobble - after correction and BXT sharpening)...

image.png
Alan_Brunelle
I too use BXT in correct-only mode as an early step.  Yes, it does wonders on stars.

What seems missing in the comments, however, is that I find this "stars only" correction appears to act as a general deconvolution as well.  At least for me, I have noticed considerable sharpening of finer galaxy structures.  And one would hope that this occurs, if it is doing deconvolution.  However, it's so much better and more accurate than the old PI deconvolution ever was.

Recently I had a tilt/backfocus issue pop up when at a remote dark sky site.  Correct-only had a complete understanding of, and resolution for, that.  It also means that I have very much reduced the amount of non-stellar sharpening of late.  A big benefit!  I still see too many images with overuse of BXT non-stellar sharpening.  You never see those little snake-like structures in Hubble images - why do so many feel they look good in their own?
TimH
Alan Brunelle:
I too use BXT in correct only mode as an early step.  Yes it does wonders on stars 

What seems missing in the comments, however, is I find that this "stars only" correction appears to act as a general deconvolution activity as well.  At least for me, I have noticed considerable sharpening of finer galaxy structures.  And one would hope that this occur, if doing deconvolution.  However, its so much better and more accurate than the old PI deconvolution ever was.

Recently I had a tilt/backfocus issue pop up when at a remote dark sky site.  Correct only had a complete understanding and resolution for that. It also means that I very much have reduced the amount of non stellar sharpening of late.  A big benefit!  I still see too many images with overuse of BXT non stellar sharpening.  Never see those little snake-like structures in a Hubble images. Why do so many feel they look good in their images?

That is true Alan. I have noticed exactly the same. In my case it is RA motion blur that "correct" deals with - and, as you say, "correction only" does in fact seem to also cause some non-stellar sharpening.  I don't really know what BlurXT is doing - John Hayes suggested that there may be little connection with conventional deconvolution and that it may be operating more on the basis of replacing micro-elements of structure from its neural net library, according to probable matches after accounting for blurring?

Tim
Alan_Brunelle
Tim Hawkes:
That is true Alan. I have noticed exactly the same. In my case it is RA motion blur that "correct"  deals with -- and as you say - "correction only"  does in fact seem to also cause some non-stellar sharpening .  I don't really know what BlurXt is doing -- John Hayes suggested that there may be little connection with conventional deconvolution and that  it maybe operating more on the basis of replacing in micro elements of structure from its neural net library - according to probable matches after accounting for blurring?

Tim

Not sure what it is doing either, but when it first came out, I was under the impression that it was not doing a traditional deconvolution.  I am no expert on these things.  However, the word deconvolution is just a word, and whoever co-opts it for whatever purpose, it's fine with me.

The old deconv required that the user put into the function a star shape model.  I assume that was a way for the function to decide by how much image detail shifts were required to bring the stars to a normal round shape.  I always found that to be a poor function, since the star model was just a mashup from all the stars across the field that the user selected.  Well, that only really works when star defects are consistent across the field.  Who has that!?  At least BXT uses tiles to reduce the local areas in size, and therefore right from the start is a better approximation of star defects from point to point across the image.  What is clearly going on with BXT star-only correction is that the corrections are applied tile-to-tile across all elements of the tile.  And that really is what deconvolution was intended to do.  If the old one did that, it did it poorly.

I have never seen the old tool actually cause any real corrections to the stars themselves.  And the enhancements that it did to nebulosity seemed only to happen in regions of high signal gradients.  The effects were strongest where the gradients were the steepest, and that made it hard to control, because in trying to enhance detail in less steep areas you ended up over-deconvolving in the steep areas.  Hence masking, etc. and on and on and on....  Also, the enhancements all seemed irrelevant to the star model and never really matched known structures well when compared to high quality data.  It seemed to just be increasing random contrast for the sake of the appearance of structure.  And those snakey thingys!
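For anyone who never used the old tool, that 'star shape model' style of deconvolution is essentially Richardson-Lucy iteration against a single PSF.  A toy sketch below (Python/SciPy) shows the idea - the elliptical Gaussian is only a stand-in for a measured star model and the iteration count is arbitrary; this is not a description of what either the old PI tool or BXT actually does internally:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size: int, sigma_x: float, sigma_y: float) -> np.ndarray:
    """Toy stand-in for the 'star shape model': an elliptical Gaussian PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-0.5 * ((xx / sigma_x) ** 2 + (yy / sigma_y) ** 2))
    return psf / psf.sum()

def richardson_lucy(image: np.ndarray, psf: np.ndarray, n_iter: int = 30) -> np.ndarray:
    """Classic Richardson-Lucy: iteratively adjust an estimate so that,
    when re-blurred by the PSF, it matches the observed image."""
    eps = 1e-12
    estimate = np.full(image.shape, float(image.mean()))
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# One global PSF for the whole frame - exactly the weakness described above,
# since real star defects vary from corner to corner.
# deconvolved = richardson_lucy(luminance, gaussian_psf(25, 2.0, 1.4), n_iter=30)
```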
HegAstro 12.28
Tim Hawkes:
I don't really know what BlurXt is doing -- John Hayes suggested that there may be little connection with conventional deconvolution and that  it maybe operating more on the basis of replacing in micro elements of structure from its neural net library - according to probable matches after accounting for blurring?

Tim


I read John's explanation, and John is obviously very knowledgeable. But something about that explanation didn't make sense to me. If BlurX was actually replacing micro-elements, wouldn't the sharpened image be limited only by the quality of the training data? That is, shouldn't it be possible, at least on occasion, to get sampling-limited resolution even from relatively poor images? Russ Croman actually gave an AIC talk on the technology behind StarX. My understanding in that case is it isn't so much a replacement of structure as a series of weighted mathematical operations. BlurX could be similar. It would certainly explain why you might be limited by the quality of input data that you have.
TimH
Arun H:
Tim Hawkes:
I don't really know what BlurXt is doing -- John Hayes suggested that there may be little connection with conventional deconvolution and that  it maybe operating more on the basis of replacing in micro elements of structure from its neural net library - according to probable matches after accounting for blurring?

Tim


I read John's explanation, and John is obviously very knowledgeable. But something about that explanation didn't make sense to me. If Blur X was actually replacing micro elements, wouldn't the sharpened image be only limited by the quality of the training data? That is, shouldn't it be possible, at least on occasion, to set sampling limited resolution even from relatively poor images? Russ Croman actually gave an AIC talk on the technology behind Star X. My understanding in that case is it isn't so much a replacement of structure as a series of weighted mathematical operations. BlurX could be similar. It would certainly explain why you might be limited by the quality of input data that you have.

Hi Arun,  Yes I posted a somewhat similar question back to John yesterday - but it took me a week or so to think of it :-).  If BXT worked entirely on the basis of replacing elements from the library, then the quality of the final image would be inherently 'quantized' - and, above a certain threshold, no longer dependent on the quality of the input image that gets fed into it.  That would mean (for example) that there might be no point in me getting a better quality mount that doesn't suffer from RA wobble, because BlurXT can already deal with that issue so well that the motion blur no longer matters in terms of the quality of the final image (i.e. the same elements get selected anyway after the blur is accounted for).

Of course I have absolutely no idea if that is really true, because I don't know at what point and how the NN libraries impact the final result.  But when I compare the BlurXT-sharpened images derived from my humble efforts with real HST images, I must say that I am surprised by the depth of detail my images have been lifted to - to the point that I wonder how much headroom for yet further improvement in resolution there can really be?

Maybe, as John's analysis suggests, the final images really are limited only by the quality of the training set - but because the NN library is so large and fine-grained it can never be a limitation in practice?

Certainly though I think that you are right that sampling is very important - as it must be - and that BXT sets the resolution limit on that basis - i.e. sampling at 0.4 arcsec/pixel BlurXT seems able to sharpen to a level equivalent to about an arcsec (in the best cases), whereas at 0.81 arcsec/pixel it doesn't go lower than about 1.6 (just based on pixel peeping and the FWHM of unsharpened stars).

Would love to know

Tim
HegAstro 12.28
Tim Hawkes:
Certainly though I think that you are right that sampling is very important - as it must be - and that BXT sets the resolution limit on that basis  --  i.e. sampling at 0.4arcsec/ pixel  BlurXt seem able to sharpen to a level equivalent to about an arcsec (in the best cases)  whereas at 0.81 it doesn't go lower than about 1.6   (just based on pixel peeping and the FWHM of unsharpened stars).


This line of thinking gets to be worrisome, because it almost suggests that BlurX is artificially limiting the sharpening based on the resolution allowed by your acquiring system. Of course, there is no reason to impose that limitation. Someone could easily make a version of BlurX that outputs a larger file with better "resolved" features. Traditional deconvolution is limited by Nyquist since it is a sequence of matrix operations on your data, the PSF is what is resolvable using your pixel size, etc... but there need not be such a limitation on NN-based algorithms. The primary limitation may just be the SNR of the feature in your dataset; otherwise you are trying to "replace" noise, which does not work too well.
TimH
Alan Brunelle:
Not sure what it is doing either, but when it first came out, I was under the impression that it was not doing a traditional deconvolution.  I am no expert on these things.  However, the word deconvolution is just a word, and whoever co-ops it for whatever purpose, its fine with me.  The old deconv, required that the user put into the function a star shape model.  I assume that was a way for the function to decide by how much image detail shifts were required to bring the stars to a normal round shape.  I always found that to be a poor function, since the star model was just a mashup from all the stars across the field that the user selected.  Well, that only really works when star defects are consistent across the field.  Who has that!?  At least BXT uses tiles to reduce the local areas in size, and therefore right from the start is a better approximation of star defects from point to point across the image.  What is clearly going on with BXT, star only correction, is that the corrections are applied tile-to-tile across all elements of the tile.  And that really is what deconvolution was intended to do.  If the old one did that, it did it poorly.  I have never seen it actually really cause any corrections to the stars themselves.  And the enhancements that it did to nebulosity seemed only to happen in regions of high signal gradients.  The effects were strongest where the gradients were the steepest and that made it hard to control because to try to enhance detail in area less steep, you ended up over deconvoluting in steep areas.  Hence masking, etc. and on and on and on....  Also the enhancements all seemed irrelevant to the star model and never really matched known structures well, when compared to high quality data.  It seemed to just be increasing random contrast for the sake of appearance of structure.  And those snakey thingys!

Hi Alan,  I used to use deconvolution a fair bit - and had even got to the point of breaking images into smaller regions so the stars used to define the PSF were more locally relevant.  So initially I also assumed that BlurXT was mainly more of the same, but better in terms of localising the correction and somehow preventing it from running too far towards the wiggly-snake level.  I think the BXT stars thing is different? - with normal deconvolution I would use masks to a) make sure that the process was only applied to regions of high SNR (i.e. continuous features rather than dots of noise) and b) mask out the stars themselves (otherwise they just get corrected to Lorentzian sharp points, which looks awful).  I did have some real success with it though - and got down to details in the Cat's Eye Nebula, for example, which turned out to really be there when compared with the professional pictures.

I wonder whether BXT does partly what you suggest - runs local star-shape-based deconvolution to quantify the 'blur' function for local tile regions, but then, rather than replace stars with the deconvolved Lorentzian, just works out from the blur function what their initial eccentricity was and renders them round, with their 'FWHM' diameter determined more or less by the sampling limit.  I think that one of the problems with deconvolution is that, beyond trying to straighten out the blur on stars - except on points - the process has no larger, more realistic target to minimise towards.  The neural net has to come in somewhere, and maybe there is some sort of process where initial local deconvolution is applied (based on the local star-based PSFs) - not too far - and then the neural net library comes into play, either via direct replacement based on probability (John Hayes' suggestion) or perhaps more subtly by somehow setting different goals for further deconvolution?  I must say though that the Hayes model does have the virtue of conceptual simplicity!

Tim
TimH
Arun H:
Tim Hawkes:
Certainly though I think that you are right that sampling is very important - as it must be - and that BXT sets the resolution limit on that basis  --  i.e. sampling at 0.4arcsec/ pixel  BlurXt seem able to sharpen to a level equivalent to about an arcsec (in the best cases)  whereas at 0.81 it doesn't go lower than about 1.6   (just based on pixel peeping and the FWHM of unsharpened stars).


This line of thinking gets to be worrisome. Because it almost suggests that BlurX is artificially limiting the sharpening based on the resolution allowed by your acquiring system. Of course, there is no reason to do impose that limitation. Someone could easily make a version of Blur X that outputs a larger file with better "resolved" features. Traditional deconvolution is limited by Nyquist since it is a sequence of matrix operations on your data, the PSF is what is resolvable using your pixel size etc... but there need not be such a limitation on NN based algorithms. The primary limitation may just be the SNR of the feature in your dataset, otherwise you are trying to "replace" noise which does not work too well.

Yes, I really shouldn't have used the word 'sets' - it is expected that those limits should fall out naturally depending on the quality of the data - and they indeed do seem to, with poorer input data not going as far down towards the Nyquist limit in terms of resolution as better data (with a better initial FWHM).  And yes indeed - I take the point that SNR must also be critical for any probabilistic recognition and replacement process - maybe that is part of the granularity that would ensure that image quality out remains commensurate with image quality in?
TimH
Many thanks for everyone's contributions on this thread.  There was also a discussion about BlurXTerminator and how it works on pages 3 and 4 of the thread Short vs. Long Exposures - AstroBin, which, for me at least, provided useful thoughts and links on what BXT is doing and how it is working.

Tim
TimH
Just to provide a short update and a correction to this thread, concerning my attempt to quantify which parameters of the images input into BXT have the most impact on the quality of the final output

--
At the top of the thread I reported a preliminary observation that, while the arcsec sampling rate, FWHM and SNR of images were all - as you would expect - decisive factors in determining the quality of the final image that BXT could derive from them, it seemed surprising that the eccentricity (star shape) appeared less important.  I speculated that BXT 'correction' was so effective that the eccentricity of input images could be less of an issue.

However, the problem with my original data set was that none of it was really that good in terms of eccentricity - so I was only comparing data sets with eccentricities that were high against data sets that were moderately bad.

Finally - after rebalancing the setup, and one night's observation with better seeing resulting in images with good round stars with eccentricities at 0.4 or less - it is clear that everything is in fact just as expected.  Images with low FWHM, sampled adequately, with enough SNR and starting out with the nearest-to-round stars deliver the most detail - and the most accurate detail - when compared with HST images of the same.
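(For anyone wanting to reproduce these FWHM / eccentricity numbers outside PixInsight, here is a rough sketch of how they can be estimated from detected stars.  It uses the sep package purely as a stand-in for PI's FWHMEccentricity script; the Gaussian-equivalent FWHM conversion, the 5-sigma detection threshold and the crude brightness cut are assumptions, and the file name is a placeholder.)

```python
import numpy as np
import sep                      # SEP: Source Extractor as a Python library
from astropy.io import fits

data = fits.getdata("m51_integration.fits").astype(np.float32)  # placeholder file name

# Subtract a smooth background, then detect sources above 5 sigma
bkg = sep.Background(data)
data_sub = data - bkg
sources = sep.extract(data_sub, 5.0, err=bkg.globalrms)

# Crude star filter: keep the brighter half of the detections
stars = sources[sources["flux"] > np.median(sources["flux"])]

a, b = stars["a"], stars["b"]                # semi-major / semi-minor axes (pixels)
ecc = np.sqrt(1.0 - (b / a) ** 2)            # same eccentricity definition PI reports
fwhm_px = 2.3548 * np.sqrt(a * b)            # Gaussian-equivalent FWHM (an approximation)

pixel_scale = 0.406                          # arcsec/pixel for this setup
print(f"Median FWHM: {np.median(fwhm_px) * pixel_scale:.2f} arcsec")
print(f"Median eccentricity: {np.median(ecc):.2f}")
```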

Here I extracted luminance from very tiny parts of published images of M51 from the NASA/ ESA  HST  and compared the fine detail with what I could detect from  two images.  

The first is 221 x 10s frames integrated, yielding an image measured at average FWHM 1.62 and eccentricity 0.57.  Image scale is 0.406 arcsec/pixel.  This was processed in BXT with correction followed by non-stellar sharpening 0.5X and stellar sharpening 0.33X.  Some of the same detail seen in the HST image at the core of M51 on the left can also be detected reasonably faithfully in the BXT-processed image.  Resolution looks to be at ~1.1-1.2 arcsec based on pixels.


image.png

Below is the same at 664 x 10s - higher SNR - FWHM 1.83, eccentricity 0.61.
image.png

Below is 219 x 10s frames integrated, yielding an image at average FWHM 1.6 and eccentricity 0.302.  This was processed in BXT identically to the above.



image.png



Below is 601 x 10s frames and higher SNR, at FWHM 1.61 and eccentricity 0.39.


image.png


Overall, at comparable FWHM and SNR, the lower-eccentricity images seem to deliver the finer and more faithful level of detail relative to the HST image.
TimH
Here - in case it is useful to anyone - is a slightly more systematic investigation of the separate effects of sampling, FWHM and eccentricity of the input image upon the quality of image finally deliverable by RC BlurXTerminator processing.

The approach was to take a single good image of M51 (FWHM 1.60, eccentricity 0.39, at 0.406 arcsec/pixel - as above) and then look at the effect of integer downsampling, and of convolution to increase either the eccentricity or the FWHM of the starting image, prior to BXT processing via correction, non-stellar (0.5X) and stellar (0.33X) sharpening.
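For anyone who wants to try the same experiment, the degradation step can be sketched roughly as below.  This is only my assumed reading of 'integer downsampling and convolution' (block-average binning plus Gaussian blurs, with unequal sigmas along the two axes to raise eccentricity); the sigma values and the file name are illustrative only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from astropy.io import fits

img = fits.getdata("m51_fwhm1.6_e0.39.fits").astype(np.float64)  # placeholder file name

def downsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Integer downsample by block-averaging factor x factor pixels."""
    h = (image.shape[0] // factor) * factor
    w = (image.shape[1] // factor) * factor
    return image[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# 2x binning: 0.406 arcsec/px -> 0.81 arcsec/px
binned_2x = downsample(img, 2)

# Symmetric Gaussian blur: pushes the FWHM up (sigma chosen by eye, not calibrated)
blurred_sym = gaussian_filter(img, sigma=1.2)

# Asymmetric blur: small change in FWHM, but stars stretched along one axis,
# mimicking RA motion blur, so the measured eccentricity climbs
blurred_asym = gaussian_filter(img, sigma=(0.3, 1.5))

for name, arr in [("binned_2x", binned_2x), ("blur_sym", blurred_sym), ("blur_asym", blurred_asym)]:
    fits.writeto(f"m51_{name}.fits", arr.astype(np.float32), overwrite=True)
```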

The broad conclusions that I drew from the data of the experiments below  (others may disagree) were --

1) The BXT-improved image of the already quite sharp 601 x 10s input image (FWHM 1.6, eccentricity 0.395) was remarkably sharp and looks to have delivered a surprising amount of the detail of the HST image (the HST image was obviously far better - but it did slightly exceed my budget at $16B thus far!).  From pixel peeping I would estimate the resolution of the BXT-processed image at less than an arcsec (i.e. starting to approach the 2X Nyquist sampling-imposed limit for 0.406 arcsec/pixel).

2) As would be expected, BXT delivered great improvement to the image sampled at 0.406 arcsec/pixel, a good deal less improvement to the image after it was first downsampled to 0.81 arcsec/pixel, and (not shown) no improvement at all after further downsampling to 1.6 arcsec/pixel (the starting image already exhibited an average FWHM value of 1.6, so it is expected that no improvement should be possible).

3) As expected, the sharpness of the starting image is also a critical determinant of what BlurXTerminator can deliver.  Symmetrical Gaussian blurring of the input image from an FWHM of 1.6 up to 1.92 (with an associated slight improvement in eccentricity) led to really quite a dramatic decrease in the level of detail of the final image delivered by BXT.  The image actually looked smoother, with better SNR - but the finer detail had gone.

4) Finally, asymmetrical blurring, leading to only a slight increase in FWHM and a relatively large increase in eccentricity from 0.39 to 0.52, was a bit more surprising.  This distortion did lead to a loss of detail in the final image delivered by BXT, but the decrease was actually quite slight and subtle.

So the BXT correction function does seem pretty effective at compensating for at least some forms of image distortion - at least up to an eccentricity of ~0.52 - without too much detriment to the quality of the image that it finally delivers.



image.png

image.png


image.png


image.png

image.png


Tim