Blurxterminator… a game changer? [Deep Sky] Processing techniques · Steeve Body

Bobinius 9.90
·  1 like
Roman Pearah:
John Hayes:
In order to train the neural-net (NN), you need to start with "ground truth" of what a sharp image looks like and that's why it makes sense to start with Hubble or JWST data. That's not "perfect" data but as far as small, ground-based imaging systems are concerned it's close enough. The blurring that I referred to is required to train the NN so that it can find the nearest match in your input image to a blurred image in the training data. Once it finds that match, it can then replace the original data with a normalized version of the sharp (ground truth) data. To be clear, the algorithm does not actually contain Hubble data but it's set up to analyze the small variations in input data so that it can provide a "best guess" about the sharp features that would create the same blurry pattern in your data--all based on the training data set.


I think Bogdan's issue isn't that Hubble/JWST data is used as the ground truth for training. Rather it's that the blurred input data isn't blurred Hubble/JWST data but rather ground-based data. So the concern expressed is that it's not just learning to guess what the blurring is hiding but also learning how to guess how the hidden stuff itself would look if you had an optical system with a better diffraction limit.

Exactly. Russell had a brilliant idea of training the NN on images that are by definition not convolved/blurred by the atmosphere. The perfect outcome of an atmospherically blurred system S is system S without the atmosphere. I don't think the goal of this NN training was to obtain super-resolution for our amateur telescopes, beyond the theoretical diffraction limit (as is done for microscopy). We are limited by seeing, and that's what we are deconvolving and sharpening: the atmospheric blurring. No one is going to criticize Russell for not taking a ground-based set of images with a C11 and then a similar set with the C11 from the ISS or some high-altitude plane, and training the NN on the latter set. But the Hubble set (not to mention JWST) contains information and resolving power far beyond what any amateur system can produce even without atmospheric blurring. So there is indeed a question about the impact of this deep information on the NN model that infers the unblurred structures from our noisy data.
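To make the training idea concrete, here is a minimal, generic sketch of how (blurred, sharp) training pairs for a deblurring network could be generated from sharp reference patches. It is purely an illustration of the principle being discussed - not RC-Astro's actual pipeline - and the PSF model, noise levels and patch sizes are made-up assumptions:

```python
# Illustrative only: build one (blurred, sharp) supervised-training pair from a
# sharp "ground truth" patch, using a randomly drawn seeing-like PSF plus noise.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def gaussian_psf(size: int, fwhm_px: float) -> np.ndarray:
    """Circular Gaussian PSF (a stand-in; a real pipeline would also model
    optics, diffraction spikes, guiding errors, etc.)."""
    sigma = fwhm_px / 2.355
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def make_training_pair(sharp_patch, fwhm_range=(1.5, 8.0),
                       photon_scale=1e4, read_noise=0.01):
    """Blur the ground-truth patch with a PSF drawn from a wide range of
    'seeing' values, then add photon and read noise."""
    psf = gaussian_psf(25, rng.uniform(*fwhm_range))
    blurred = fftconvolve(sharp_patch, psf, mode="same")
    noisy = rng.poisson(np.clip(blurred, 0, None) * photon_scale) / photon_scale
    noisy = noisy + rng.normal(0.0, read_noise, noisy.shape)
    return noisy.astype(np.float32), sharp_patch.astype(np.float32)

# Fake 64x64 "sharp" patch standing in for a space-telescope crop.
sharp = rng.random((64, 64)) ** 4
net_input, net_target = make_training_pair(sharp)
```

A network trained on many such pairs, spanning a wide range of PSF widths and noise levels, is then asked at run time to infer the sharp structure most likely to have produced the blur it sees in your image, which is the behaviour John describes above.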
Die_Launische_Diva 11.14
Arun H:
Die Launische Diva:
Shouldn't the diffraction limit set a lower limit in the resolution uncertainty of the recovered image?


Nope... read the paper attached. The fundamental factor limiting superresolution is not diffraction but noise. In our case, for good images, the limiting noise is photon statistics. Therefore, well-taken images with large-aperture instruments or long integration times will be capable of higher resolution increases.

I will, and yes, I think I now understand why. Thank you!
Bobinius 9.90
·  1 like
Arun H:
Die Launische Diva:
Shouldn't the diffraction limit set a lower limit in the resolution uncertainty of the recovered image?


Nope... read the paper attached. The fundamental factor limiting superresolution is not diffraction but noise. In our case, for good images, the limiting noise is photon statistics. Therefore, well-taken images with large-aperture instruments or long integration times will be capable of higher resolution increases.

I just read it (skipping the detailed physical math, though :) ). In our case the limiting factor is atmospheric blurring. I really don't think this AI was produced for sharpening beyond the diffraction limit and for superresolution limited by photon noise. The paper is a simulation under ideal conditions and the control experiments were pretty limited (page 709): a 14/18 success rate, which convinced the authors that "the duplicity was indeed due to the duplicity of the object imaged and cannot generally be ascribed either to the deconvolution algorithm or to the sampling procedure". 14/18 is enough for that! A 78% agreement rate (with no statistical significance test...). And now we are sure that it is reality, not an erroneous model! Plus they are illustrating the Rayleigh separation with spectroscopy, not what we are talking about. We are far from the 5-sigma significance you hear about in present-day physics.

We can deconvolve Hubble images too: https://adsabs.harvard.edu/full/1992ASPC...25..226K.

But honestly I don't think this is the subject of our debate. The paper is really interesting and I learned something again in this thread, thanks for sharing it. And hopefully anyone can see that Lucy et al. and other physicists asked themselves the same questions we are asking ourselves: is it due to deconvolution or to the real object? And Lucy was not dealing with faith in the results produced by an opaque AI, without the possibility of verifying the object directly or via a gold-standard method.
Bobinius 9.90
Bogdan,

As far as the face goes I used it as an analogy - wondering how the output compared to "reality" rather than suggesting using BlurXTerminator for faces.

As to your first part, I can see that it is a complex implementation. There is still a nagging query for me: is it "creating" some detail rather than simply recovering it?

I'll likely still use it as the results so far are impressive.

I think there may be a temptation to see it overused though. If you remember when PixInsight was starting to gain traction, there were photos coming out that looked cartoonish, with colours that were too intrusive and "overprocessed".

To me, I suspect it'll end up being a tool to tweak things, but I do think we'll see it overused in images where the basic data is probably less than ideal to start with. A lot of the images here are from experienced, high-level astrophotographers whose images were good to start with. For them I think this will be a fantastic tool to take their images up a notch, to where we amateurs could never have reached before.

Paul

Ah ok, it's a bit clearer now. I found it curious as an application... And I fully agree. A face is something we recognize and can easily and correctly identify visually. That's why I made the parallel with NN outcomes trained for medical purposes. If the outcome is a label (the NN needs to produce the correct diagnosis), then saying whether it is true or false is simple for us. It's much more complicated when judging whether a sharpened outcome is true or false. What does that mean exactly? A few extra filaments here, one shorter there?

Even the facial image is tricky. If you have a blurred photo of Messi and the NN produces a false ("fake") result or outcome, whatever that means, it is not going to show you Ronaldo! It's going to show you a sharp photo of Messi. But the hairs will be slightly to the left instead of straight, the skin pores will not all be in their actual places, and maybe some light reflection on his pupil will be double instead of single. Let's say the NN really messed up the outcome. But that's still Messi! And you could not distinguish it from a real photo of him. That's exactly what worries me when I apply the AI to sharpen my photo. The differences are very hard to examine. I looked for around 5 minutes at my BlurXT-sharpened galaxy and the Hubble one. There are differences, but you could interpret them as "being close" yet wrong in certain areas, and at moments I tend to say it's correct. I guess our brains are just oscillating between possible interpretations of the blurred home image vs the Hubble one.

Bogdan
HegAstro 11.99
·  1 like
Bogdan Borz:
I really don't think this AI was produced for sharpening beyond the diffraction limit and for superresolution limited by photon noise. The paper is a simulation under ideal conditions and the control experiments were pretty limited (page 709): a 14/18 success rate, which convinced the authors that "the duplicity was indeed due to the duplicity of the object imaged and cannot generally be ascribed either to the deconvolution algorithm or to the sampling procedure". 14/18 is enough for that! A 78% agreement rate (with no statistical significance test...). And now we are sure that it is reality, not an erroneous model! Plus they are illustrating the Rayleigh separation with spectroscopy, not what we are talking about. We are far from the 5-sigma significance you hear about in present-day physics.


The purpose of supplying the link was not to get into a pointless debate in an Astrobin forum about a specific paper, but rather to make it clear to everyone reading that resolution increases achievable by deconvolution are not limited by diffraction, which is what was being suggested. That is settled science that goes beyond one single paper written, incidentally, by no less an authority than one of the two originators of deconvolution. Someone with more time than either of us can probably provide a more recent review, but somehow I doubt even that will convince those who don't want to be convinced! Personally, having learned how deconvolution works, it makes complete sense to me that the author would use convolved/deconvolved Hubble images as a training set.

Perhaps a valid complaint is the opaqueness of the algorithm, since this is not open source code. I anticipate that as people like John Hayes and others compare their images against those taken with larger instruments, it will become clearer whether the resolution gains introduced here do or do not match reality. By the way, I am curious: is someone taking the time to compare IOTDs, TPs, or TPNs that use deconvolution against images taken with larger instruments to verify that their implementation of deconvolution didn't introduce false features?

Incidentally, I don't see why either seeing or diffraction should be set as some kind of artificial limit on what resolution improvement, using valid techniques, can be achieved from an image. That certainly does not seem to be the case for the established technique of deconvolution, so I am not sure why Croman should subject himself to that constraint.
Alan_Brunelle
Arun H:
Incidentally, is someone taking the time to compare IOTDs, TPs, or TPNs that use deconvolution against images taken with larger instruments to verify that their implementation of deconvolution didn't introduce false features?


I do this often for my images.  It is my choice to try not to post images that are full of artifacts.  I sometimes check what my raw, unprocessed images look like compared to those from better scopes, as a way to see how my equipment is performing and what it is practically capable of.  I do the same for processed images, to be sure that structures which can be "seen at the intended scale" are plausibly there, not just that they look like they could be there.  And I sometimes check during processing steps so that I can catch an obvious mistake before I spend a lot of time after it.

One limitation of this approach is finding reliable and relevant data to use for comparison.  Hubble has imaged a remarkably small fraction of the objects we tend to image.

This is a personal choice.  If someone here wants to use a luminance image as a base layer for a watercolor, I have no problem with that either!

Edit:  And your question (and others') is the reason I posted just such a comparison in this thread earlier today and also to the sister BTX forum post yesterday!
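For anyone wanting to make this kind of check numerical rather than purely visual: assuming you have already registered and resampled your processed image and the reference (say a Hubble crop) onto the same pixel grid, a crude difference map will highlight where the two disagree. This is just a sketch, and the file names are hypothetical:

```python
# Crude structural comparison of a processed image against a higher-resolution
# reference, assuming both are already registered onto the same pixel grid.
import numpy as np
from astropy.io import fits

mine = fits.getdata("my_processed_registered.fits").astype(np.float64)
ref  = fits.getdata("reference_crop_registered.fits").astype(np.float64)

def normalize(img):
    """Zero-median, unit-interquartile-range scaling, so the comparison is
    about structure rather than stretch."""
    q1, q3 = np.percentile(img, [25, 75])
    return (img - np.median(img)) / (q3 - q1 + 1e-12)

diff = np.abs(normalize(mine) - normalize(ref))
fits.writeto("difference_map.fits", diff.astype(np.float32), overwrite=True)
print("90th-percentile structural difference:", np.percentile(diff, 90))
```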
jhayes_tucson 22.64
·  1 like
Roman Pearah:
John Hayes:
In order to train the neural-net (NN), you need to start with "ground truth" of what a sharp image looks like and that's why it makes sense to start with Hubble or JWST data. That's not "perfect" data but as far as small, ground-based imaging systems are concerned it's close enough. The blurring that I referred to is required to train the NN so that it can find the nearest match in your input image to a blurred image in the training data. Once it finds that match, it can then replace the original data with a normalized version of the sharp (ground truth) data. To be clear, the algorithm does not actually contain Hubble data but it's set up to analyze the small variations in input data so that it can provide a "best guess" about the sharp features that would create the same blurry pattern in your data--all based on the training data set.


I think Bogdan's issue isn't that Hubble/JWST data is used as the ground truth for training. Rather it's that the blurred input data isn't blurred Hubble/JWST data but rather ground-based data. So the concern expressed is that it's not just learning to guess what the blurring is hiding but also learning how to guess how the hidden stuff itself would look if you had an optical system with a better diffraction limit.

I don't know what Bogdan is questioning, but it sounds like you are misunderstanding my explanation of how a neural net is trained and what it is doing.  A neural net is a mathematical method of minimizing a merit function across a large number of variables.  That's what allows it to estimate a match between the input data and an accurate model of blurred input data created from an arbitrarily sharp image set.  This isn't a process that simply fills in patches from the sharpened training data.  The NN is finding features from the training data that will "most likely" produce the blurred image that it has been presented with.  The limit of what it can produce comes from both the noise level and the sharpness of the original data.  Less sharp input data will produce a result that is also less sharp.  In addition, this kind of NN can do a lot more than simply sharpen the image.  With the right training data it could also estimate the quality of the seeing and the size of the telescope that took the original data.
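In symbols, one generic way to write the kind of merit function such a network minimizes during training is shown below. This is only an illustration; the actual BlurXTerminator objective has not been published.

```latex
% x_i: sharp ground-truth patches; h_i: randomly drawn blur kernels (seeing + optics);
% n_i: noise realizations; f_theta: the network with weights theta.
\theta^{\ast} = \arg\min_{\theta} \sum_{i}
  \bigl\lVert f_{\theta}\!\left(h_i \ast x_i + n_i\right) - x_i \bigr\rVert^{2}
```

Minimizing this over the weights is what "minimizing a merit function across a large number of variables" means in practice; nothing in it pastes training pixels into your image, which is consistent with the point above that the algorithm does not actually contain Hubble data.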

John
jhayes_tucson 22.64
Bogdan Borz:
But the Hubble set (not to mention JWST) contains information and resolving power far beyond what any amateur system can produce even without atmospheric blurring. So there is indeed a question about the impact of this deep information on the NN model that infers the unblurred structures from our noisy data.

As long as the NN is trained properly, that's not a problem.  The training data should contain data modeled over a very wide range of blurring functions.  A NN is not an alignment tool that simply replaces the original data with the best fit data from Hubble!  The NN is configured to compute the features that are a best estimate of the irradiance distribution that produced the blurred pattern that it is given.  The sharpness of the output will depend on the noise level and sharpness of the input  data set.

John
TimH
Arun H:
Die Launische Diva:
Shouldn't the diffraction limit set a lower limit in the resolution uncertainty of the recovered image?


Nope... read the paper attached. The fundamental factor limiting superresolution is not diffraction but noise. In our case, for good images, the limiting noise is photon statistics. Therefore, well-taken images with large-aperture instruments or long integration times will be capable of higher resolution increases.

Presumably for most setups it will also be sampling? Pictures sampled at 1.5 arcsec/pixel aren't going to find detail below 2.4 arcsec or so, etc.
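As a rough back-of-the-envelope check of this point (the samples-per-resolution-element factor below is the usual Nyquist convention of 2; people quote slightly different factors, which is why figures like 2.4" and 3" both get mentioned for 1.5"/pixel):

```python
# Rough sampling arithmetic. The constant 206.265 converts (pixel size in
# microns) / (focal length in mm) into arcseconds per pixel.
def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

def sampling_limited_detail(scale_arcsec_per_px: float,
                            samples_per_element: float = 2.0) -> float:
    """Smallest detail a given pixel scale can plausibly represent."""
    return scale_arcsec_per_px * samples_per_element

print(pixel_scale_arcsec(3.76, 800.0))   # hypothetical setup: ~0.97"/pixel
print(sampling_limited_detail(1.5))      # 1.5"/pixel -> ~3" with a strict factor of 2
```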
HegAstro 11.99
Tim Hawkes:
Presumably for most setups it will also be sampling? Pictures sampled at 1.5 arcsec/pixel aren't going to find detail below 2.4 arcsec or so, etc.


Yes, Zeiss has a nice writeup on deconvolution, which I have attached. See the section on p. 19, which also covers how standard deconvolution can exceed the optical resolution of the system. From this, it is actually not hard to intuit what BlurX is doing.
"If it is possible to make assumptions about the structures of the object that gave rise to the image, it can be possible to set certain constraints for obtaining the most likely estimate. For example, knowing that a structure is smooth results in discarding an image with rough edges. "


Knowing this makes it clear why a good set of training images is needed and why it makes sense to use Hubble images.

EN_wp_LSM-Plus_Practical-Guide-of-Deconvolution.pdf
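To see the "exceeding the optical resolution" point in a toy example, here is a small, self-contained classical Richardson-Lucy sketch (synthetic data, no AI involved): two point sources separated by about one PSF FWHM blend into a single elongated blob in the blurred image, and with low noise the iterations restore the dip between them. The same iterations on noisy data would mostly amplify noise instead, which is the noise-limited behaviour discussed earlier in the thread.

```python
# Toy illustration: classical (non-AI) Richardson-Lucy deconvolution recovering
# a dip between two point sources that the blurred image does not show.
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

size = 64
truth = np.zeros((size, size))
truth[32, 28] = truth[32, 36] = 1.0           # two point sources, 8 px apart

sigma = 8.0 / 2.355                           # Gaussian PSF, FWHM ~ 8 px
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
psf /= psf.sum()

noise = np.random.default_rng(1).normal(0.0, 1e-6, (size, size))
blurred = np.clip(fftconvolve(truth, psf, mode="same") + noise, 0, None)

deconvolved = restoration.richardson_lucy(blurred, psf, 300)

def dip_ratio(img):
    """Brightness midway between the sources relative to the source pixels."""
    return img[32, 32] / img[32, [28, 36]].mean()

print(f"midpoint/peak, blurred:     {dip_ratio(blurred):.2f}")      # ~0.94: one blob
print(f"midpoint/peak, deconvolved: {dip_ratio(deconvolved):.2f}")  # substantially lower
```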
neverfox 2.97
John Hayes:
Roman Pearah:
John Hayes:
In order to train the neural-net (NN), you need to start with "ground truth" of what a sharp image looks like and that's why it makes sense to start with Hubble or JWST data. That's not "perfect" data but as far as small, ground-based imaging systems are concerned it's close enough. The blurring that I referred to is required to train the NN so that it can find the nearest match in your input image to a blurred image in the training data. Once it finds that match, it can then replace the original data with a normalized version of the sharp (ground truth) data. To be clear, the algorithm does not actually contain Hubble data but it's set up to analyze the small variations in input data so that it can provide a "best guess" about the sharp features that would create the same blurry pattern in your data--all based on the training data set.


I think Bogdan's issue isn't that Hubble/JWST data is used as the ground truth for training. Rather it's that the blurred input data isn't blurred Hubble/JWST data but rather ground-based data. So the concern expressed is that it's not just learning to guess what the blurring is hiding but also learning how to guess how the hidden stuff itself would look if you had an optical system with a better diffraction limit.

I don't know what Bogdan is questioning, but it sounds like you are misunderstanding my explanation of how a neural net is trained and what it is doing.  A neural net is a mathematical method of minimizing a merit function across a large number of variables.  That's what allows it to estimate a match between the input data and an accurate model of blurred input data created from an arbitrarily sharp image set.  This isn't a process that simply fills in patches from the sharpened training data.  The NN is finding features from the training data that will "most likely" produce the blurred image that it has been presented with.  The limit of what it can produce comes from both the noise level and the sharpness of the original data.  Less sharp input data will produce a result that is also less sharp.  In addition, this kind of NN can do a lot more than simply sharpen the image.  With the right training data it could also estimate the quality of the seeing and the size of the telescope that took the original data.

John

Ahem, I was just trying to clarify what I take Bogdan's claim to be, not expressing my opinion or saying anything about your explanation of NN training per se, much less misunderstanding it. I'm still on your side here.
Bobinius 9.90
Arun H:
Bogdan Borz:
I really don't think this AI was produced for sharpening beyond the diffraction limit and for superresolution limited by photon noise. The paper is a simulation under ideal conditions and the control experiments were pretty limited (page 709): a 14/18 success rate, which convinced the authors that "the duplicity was indeed due to the duplicity of the object imaged and cannot generally be ascribed either to the deconvolution algorithm or to the sampling procedure". 14/18 is enough for that! A 78% agreement rate (with no statistical significance test...). And now we are sure that it is reality, not an erroneous model! Plus they are illustrating the Rayleigh separation with spectroscopy, not what we are talking about. We are far from the 5-sigma significance you hear about in present-day physics.


The purpose of supplying the link was not to get into a pointless debate in an Astrobin forum about a specific paper, but rather to make it clear to everyone reading that resolution increases achievable by deconvolution are not limited by diffraction, which is what was being suggested. That is settled science that goes beyond one single paper written, incidentally, by no less an authority than one of the two originators of deconvolution. Someone with more time than either of us can probably provide a more recent review, but somehow I doubt even that will convince those who don't want to be convinced! Personally, having learned how deconvolution works, it makes complete sense to me that the author would use convolved/deconvolved Hubble images as a training set.

Perhaps a valid complaint is the opaqueness of the algorithm, since this is not open source code. I anticipate that as people like John Hayes and others compare their images against those taken with larger instruments, it will become clearer whether the resolution gains introduced here do or do not match reality. By the way, I am curious: is someone taking the time to compare IOTDs, TPs, or TPNs that use deconvolution against images taken with larger instruments to verify that their implementation of deconvolution didn't introduce false features?

Incidentally, I don't see why either seeing or diffraction should be set as some kind of artificial limit on what resolution improvement, using valid techniques, can be achieved from an image. That certainly does not seem to be the case for the established technique of deconvolution, so I am not sure why Croman should subject himself to that constraint.

It's not pointless, I think, but you're free to consider it as you wish. And if you want to go beyond the diffraction limit, so far the standard has not been what we are being served (neural-network deconvolution), but the classical deconvolution methods.

What's pretty clear is that ground telescopes are mainly limited by seeing and space telescopes are limited by diffraction. The disagreement is not whether deconvolution can go beyond the diffraction limit, but that for our ground-based images what we are deconvolving is the atmospheric blurring; we are not looking for super-resolution limited by photon noise. The goal always seemed to be attaining diffraction-limited images, either under pristine skies or using decon.

When we apply deconvolution in PI we are not looking to: 1) fully compensate for atmospheric convolution and then 2) go beyond the diffraction limit of the telescope. I could not find examples of scientific applications of deconvolution to ground-based telescopes for increasing their resolution beyond the diffraction limit (Keck included; I found an article about Io's activity where they applied a specific Mistral deconvolution, but with no claim of a resolution higher than the diffraction limit), whereas it was used for various space telescopes. If someone is aware of an example, it would be helpful.

People who are pleased with what the AI is doing to their images will be convinced anyway, irrespective of any debate, real or unreal, and will be using it with or without moderation, on linear data or not. The algorithm is opaque because that's a general problem of neural networks - they are black boxes - not because it is not open source.

Comparisons with high-resolution data are subjective and difficult. PSNR is used as a metric to compare the differences. This is an interesting presentation; try to look and say which of the model galaxies is visually the "true" one: https://lagrange.oca.eu/images/LAGRANGE/seminaires/2019/2019-04-30_Flamary.pdf
The difference with classic decon is that it produces obvious artifacts and much less sharpening, so it is less likely to fool you. BXT does not produce the same artifacts; it is really impressive in what it achieves (better detail than manual processing). Why do you think even excellent PI users have the impression that they need to reprocess their datasets? Because the AI seems to outperform our lengthy processing in 10 seconds.
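For reference, the PSNR metric mentioned above is simple to compute once two images are registered and on the same scale (this is the generic definition, not tied to BXT or any particular tool):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the test image is closer,
    pixel for pixel, to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```

Of course, a single global number like this says nothing about whether any individual recovered feature is real, which is exactly the concern being discussed.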
Bobinius 9.90
Roman Pearah:
Ahem, I was just trying to clarify what I take Bogdan's claim to be, not expressing my opinion or saying anything about your explanation of NN training per se, much less misunderstanding it. I'm still on your side here.


I am starting to feel like John Connor, under attack by Skynet and its allies (my side / your side). Hopefully BlurXT won't become conscious too, lol.
Bobinius 9.90
John Hayes:
Bogdan Borz:
But the Hubble set (not to mention JWST) contains information and resolving power far beyond what any amateur system can produce even without atmospheric blurring. So there is indeed a question about the impact of this deep information on the NN model that infers the unblurred structures from our noisy data.

As long as the NN is trained properly, that's not a problem.  The training data should contain data modeled over a very wide range of blurring functions.  A NN is not an alignment tool that simply replaces the original data with the best fit data from Hubble!  The NN is configured to compute the features that are a best estimate of the irradiance distribution that produced the blurred pattern that it is given.  The sharpness of the output will depend on the noise level and sharpness of the input  data set.

John

The first sentence would mean that the level of information or detail in the ground-truth image is irrelevant, which surprises me. The few NN articles or slides I have seen use exactly the same image as the ground truth or true reference, to be recovered after being blurred or having noise injected. They either artificially simulated galaxies or took some DSS data, but the reference had exactly the same resolution and potential information content as the blurred set.

Bogdan
FloridaObserver 1.43
·  3 likes
andrea tasselli:
Let's cut to the chase and make everything artificial and AI-generated. At least we'd save on the glass expenses...

*** The personal touch will never become obsolete.  Give me the same data as Russ or Adam Block, and I can guarantee that their images would win any poll in a side by side comparison.  These are just tools that give us inspiration to keep improving.  ***
TimH
·  1 like
I do agree... countless variations are possible during processing - some just down to individual taste, some 'good taste'? - BUT there surely is a serious case to be made that AI-based software will begin to limit the scope for objective improvement?

The near-perfection of imaging that is now possible using some of the RC-Astro tools has, I think, some profound consequences for the way that at least I will operate in future astro imaging - i.e. it really is a game changer.

To illustrate with a specific example: a couple of months ago I imaged the Soul Nebula (IC1848) and was really quite happy with the final images. For once all the technical aspects seemed right - accurate collimation, roughly round stars across the field, etc. - and the PI processing tools had delivered a fine result.

So the resulting IC1848 image was good, but not so good as to deter me from ever wanting to return to the same object in future to seek improvement in sharpness and SNR.

However, I then took exactly the same data set and processed it using the RC-Astro tool workflow: the three integrated NB images were first each deconvolved and star-reduced using BlurXTerminator, background-corrected using DBE, and the SNR then improved using the AI-based NoiseXTerminator. With the three images still linear, the stars were then carefully removed using StarXTerminator (checking that there were no artifacts in the corresponding star masks), and the starless images stretched up and combined using PixelMath in the usual way. Processing and addition of RGB stars (extracted from an OSC image) was then exactly as for the original image.
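For what it's worth, the final combination step can be approximated outside PI as a simple channel assignment. Here is a rough numpy/astropy stand-in, for illustration only: the file names are hypothetical and the actual PixelMath expressions used may well have been weighted blends rather than a straight SHO mapping.

```python
# Rough stand-in for the final combination: assign the three stretched,
# starless narrowband masters to RGB channels (plain SHO mapping shown).
import numpy as np
from astropy.io import fits

sii  = fits.getdata("SII_starless_stretched.fits").astype(np.float32)
ha   = fits.getdata("Ha_starless_stretched.fits").astype(np.float32)
oiii = fits.getdata("OIII_starless_stretched.fits").astype(np.float32)

rgb = np.stack([sii, ha, oiii], axis=0)   # R = SII, G = Ha, B = OIII
fits.writeto("SHO_starless_combined.fits", rgb, overwrite=True)
```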

The two images from the same dataset are here - https://www.astrobin.com/bpa8vm/F/ - the original, and then the RC-Astro-processed image as the final image.

Doubtless others may see faults in the final image, and certainly there are as many ways to process the data according to taste as there are people. But for me - as long as I am using an ASI294MM camera, a PDS200 telescope, and am under Bortle 6-7 skies in the UK - there just isn't anywhere to go with further IC1848 imaging. There is no point in returning to it, because the image is already way beyond the 80:20 rule with respect to any further improvement in sharpness or SNR at the same location and with the same equipment. I could pick a better field or mosaic it, but that is the limit.

So getting to near perfection does have a practical consequence, especially when combined with the profound point made by Alan Brunelle (https://www.astrobin.com/search/?q=Alan+Brunelle) above in this thread about our having a huge sky to image but nevertheless only a single perspective on it (and for some of us a perspective further limited by latitude, trees, houses, etc.).

So personally I think that the new AI-based software is absolutely great - but also that it really does come with some consequences for how we all might go about imaging in future?

I think that the consequence for me (New Year's resolution?) is that I am going to seek to travel more to access different objects and parts of the sky, plus use different instruments to change the scale.

Tim
rockstarbill 11.02
·  1 like
BXT is definitely a power tool that can run your data ragged if you aren't careful with it. I downloaded some Telescope Live data to try it out on some data from large scopes and it's far more sensitive on those images than something from a wide field scope. This notion floating around that everyone's data will look the same just doesn't seem feasible to me.
rockstarbill 11.02
Here is a comparison of some data and usage of BXT.

Image 1, 0.90 default BXT application: Tarantula Nebula (LRGB) ( Bill Long ) - AstroBin

Image 2, 0.45 BXT application (so exactly 1/2): Tarantula Nebula (LRGB) (rockstarbill) - Full resolution | AstroBin

Curious what folks think.
dmsummers 6.80
Hi Bill, FWIW, I favor the stellar processing of the first image over the second, but favor the non-stellar processing of the second image over the first. Maybe it's just preference, but the first image's saturation seems excessive compared to the second. The second image's non-stellar regions also seem more "natural" to my expectations than the harder edges of the first. Possibly just a personal preference... Cheers, Doug S.
rockstarbill 11.02
Doug Summers:
Hi Bill, FWIW, I favor the stellar processing of the first image over the second, but favor the non-stellar processing of the second image over the first. Maybe it's just preference, but the first image's saturation seems excessive compared to the second. The second image's non-stellar regions also seem more "natural" to my expectations than the harder edges of the first. Possibly just a personal preference... Cheers, Doug S.

I agree with you on that point. The most recent revision was what I thought to be a good middle ground. Without getting too deep into the image itself, the data was way over-exposed. The bundle of data I downloaded from Telescope Live was the 600-second sub data, and they have another bundle of the same target taken with 300-second subs instead. I did not know this beforehand, so there was a bit of reclamation of the data needed. Very similar to what @Adam Block talked about in his BXT video with the Omega Centauri image he had.

I may download the other set of data, because I feel really bad for the globs in that field of view that were dead on arrival in the subs and were not very recoverable. Compressing down the core took some work, but you can make out the core well, and make out stars in the central portion of the image that form a very dense stellar collection (likely a future glob). The interesting thing in terms of this discussion is that I really do not see either version "cooked" by the BXT process. The saturation you can blame on me, but the detail recovered by BXT's deconvolution process is not much different between the 0.9 and 0.45 images.
tly001 1.20
Subframe selector requires Xterminator!
StuartT 4.69
So I have used BlurXT on a few images now, just with the default settings. All it seems to do is reduce the stars and tighten them up. That's nice, for sure. But it doesn't seem to do anything about sharpening up the nebula detail. I thought that was the main point?
If anything, it seems to reduce contrast and definition in the nebulosity!
Capture.JPG
TimH
·  1 like
Stuart Taylor:
All it seems to do is reduce the stars and tighten them up.


Just wondered if you measured the FWHM of the image and tried putting that in manually rather than relying on auto?

I've tried it on quite a few HII regions and the like, and it really has worked pretty well thus far - but they were all cases where Lucy-Richardson PI deconvolution also provided improvements; it is just that BlurXT did it better and without as much time and effort.

As expected, though, it didn't do much on images where the sampling was poor and the SNR was poor - and where LR deconvolution wouldn't have worked either. That may be the issue here?
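If you want a number to type in manually, one quick way to estimate the stellar FWHM in pixels is to fit a Gaussian to a small cutout around an isolated, unsaturated star; multiply by your image scale if you need arcseconds. This is a generic scipy fit, not a PixInsight or BXT feature:

```python
# Estimate stellar FWHM (pixels) from a small cutout centred on one isolated,
# unsaturated star, using a circular Gaussian + constant background fit.
import numpy as np
from scipy.optimize import curve_fit

def star_fwhm(cutout: np.ndarray) -> float:
    ny, nx = cutout.shape
    yy, xx = np.mgrid[:ny, :nx]

    def model(coords, amp, x0, y0, sigma, bkg):
        x, y = coords
        g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2)) + bkg
        return g.ravel()

    guess = [float(cutout.max() - np.median(cutout)), nx / 2.0, ny / 2.0,
             2.0, float(np.median(cutout))]
    popt, _ = curve_fit(model, (xx, yy), cutout.ravel(), p0=guess)
    return 2.355 * abs(popt[3])   # FWHM = 2.355 * sigma
```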

Tim
AccidentalAstronomers 11.41
I've spent the last month reprocessing just about all my images since we've had nothing but clouds here in Dallas. My experience is that the longer the focal length, and the more oversampled the image scale, the more sharpening I see. That's the same phenomenon I would see with traditional deconvolution, so it makes sense. Stuart, it looks like you have a pretty good image scale going there and have done a really good job capturing your data, so it doesn't surprise me that you don't see much difference. That's not a bad thing.
StuartT 4.69
Tim Hawkes:
Stuart Taylor:
All it seems to do is reduce the stars and tighten them up.


Just wondered if you measured the FWHM of the image and tried putting that in manually rather than relying on auto?

I've tried it on quite a few HII regions and the like, and it really has worked pretty well thus far - but they were all cases where Lucy-Richardson PI deconvolution also provided improvements; it is just that BlurXT did it better and without as much time and effort.

As expected, though, it didn't do much on images where the sampling was poor and the SNR was poor - and where LR deconvolution wouldn't have worked either. That may be the issue here?

Tim

umm.. you've lost me there. I only see these four settings. Which one is FWHM? Do you mean the average FWHM of all the stars in the image? (as it's not the stars I am concerned about, it's the nebulosity). Or maybe I have the wrong end of the stick?

I don't think the sampling is poor. This is 0.96"/px on a night of pretty good seeing.
image.png