RCC: M82 in LRGB+Ha · Requests for constructive critique · Linwood Ferguson

Linwood 5.76
I collected a lot of data for this and found that the detail was not improving much. I think that is a sign of bad seeing during this period; the jet stream was frequently over Florida and guiding was poor.

It was stacked in PI using NormalizeScaleGradient for weighting, with drizzle integration (1:2).  RGB was combined with Ha while linear, and L and Ha were combined into a synthetic luminance.  Ha got EZ Deconvolution to bring out some detail; I skipped that for luminance as it seemed to do more harm than good.  The color was arcsinh stretched and the luminance masked-stretched before the L was combined into the RGB.  LocalHistogramEqualization was lightly applied, Photoshop finished off the colors a bit, and I downsampled 2:1.

My back yard is Bortle 6 pushing 7, so a lot of data was needed to bring out any detail, especially with the poor seeing. 

I'd welcome criticism and suggestions.

Image: https://www.astrobin.com/6z8nbm/

andreatax 7.22
If there is one thing I'd highlight more than anything else, it is that your image scale is waaaay too high. You're essentially wasting photons that way. The rest is a consequence of this.

Linwood 5.76
Thank you, Andrea.  Could you elaborate on how it hurts?  Or more precisely, what do you suggest?

I have the camera I have.  While I have a focal reducer for the C11 HD, I have found it introduces distortion, and despite trying all sorts of backfocus tweaks I cannot get rid of it, so I image at 2800mm.  Binning a CMOS camera is pointless, so I have the image scale I have.

Or are you simply saying I should have downsampled this more before presenting it?

andreatax 7.22
Well, if you were to bin, it would still deliver 4 times the signal and half the photon noise (and also better dynamic range). It is not a pointless exercise, but fair enough, you could do that in software (although I can't remember which function in PI does it). I think you'd have been better served by a much larger pixel pitch, or at least the ASI2400. Be that as it may, in your conditions (which are pretty much the same as mine) you're hobbling the whole system with too high a sampling. 58 hours on a subject as bright as M82, and with such a disappointing result? The least I would do (if not binning, and then maybe binning again) is downsampling everything once image calibration is done. Frankly, for the hassle of doing that, I'd just bin the camera as a default condition and get on with it. It would still be a very high image scale, but better than the present situation. I do occasionally bin my CMOS camera and still perceive a marginal gain with faint subjects.

Linwood 5.76
My understanding of the math is that the noise is exactly the same whether downsampled in the camera (binning, which for CMOS is done in software anyway) or after the fact in PixInsight; the electronics in CMOS don't permit the SNR gain that you got with CCD hardware binning.

Now, if I had bigger sensor sites for a lower pixel density, I get that would be different, but I wanted a full frame camera (and it has a much more appropriate scale on my 4" refractor, at 1.434"/px).

So I am struggling to understand how, without changing cameras, it actually makes a difference if I bin in camera.  It would speed up processing later, of course, but that's a relatively minor detail.  And it means building separate dark libraries (not a big deal either, of course).

I could certainly downsample more, even the final image again; is that the net of your suggestion to make it less disappointing?

Or are you really saying get a different camera?

andreatax 7.22
The way I understand it is that the "sky" noise (i.e., photon noise) is halved and the read-out noise is doubled, but you gain an extra bit of dynamic range. Regardless of how and when you do it (before or after acquisition), I would certainly downsample and process the downsampled image as if it had actually been binned in the first place. That is what I am suggesting given your current set-up (vis-à-vis the C11).

Yes, in an ideal world you wouldn't couple 2800 mm of focal length with a pixel 3.8 µm wide under significantly light polluted skies with marginal seeing. So, yeah, I'd buy another camera.

To put it in context: I had the pleasure of using a 1 m (aperture) scope under one of the best skies in the world, with average seeing I can only dream of at my place, and even then, at the same image scale as yours, I felt I was overplaying my hand.

Linwood 5.76
The DR issue is interesting.  On the one hand, the specs say bin 1 has a full well of 51 ke and bin 2 (i.e., x4) is 200 ke, which is slightly under 4:1.  Yet further down they list the dynamic range as 80 dB vs 86 dB.

But I thought all that came out in the wash if you downsample after download as well: whatever data was available per sensor site is still combined in software, whether inside the camera or outside.

Is that not true?  Can you share the math that would show how it ends up being different, or point me to something that explains it?  I confess I am trusting others' statements of that fact; it is not something I derived (nor do I know how).

And by the way, ARE there full frame cameras that have materially lower pixel densities?  I just looked briefly at QHY, Altair and ASI and did not see one; all seem to use the same Sony sensor.

barnold84
Hello,

The process in PI to "bin" is called IntegerResample. As you have stated, for most CMOS sensors there is no difference between binning on the camera and binning later on the PC.
A 2x2 binning should improve DR by one bit, as Andrea has already said and as you can see from what you wrote: 86 dB vs. 80 dB is a factor of two (if they use the amplitude definition of the decibel, i.e. 20·log10).
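
For concreteness, here is a minimal numpy sketch (mine, not PI's actual implementation) of what an average-mode IntegerResample amounts to, together with the decibel arithmetic; the function name and numbers are illustrative only:

import numpy as np

def bin2x2_average(img):
    """Average-bin a 2D image 2x2 (odd edge rows/columns are trimmed)."""
    h, w = img.shape
    h -= h % 2
    w -= w % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# DR arithmetic: a 2x2 bin collects 4x the full well while uncorrelated
# read noise grows only 2x, so DR gains 20*log10(4/2) ~ 6 dB, i.e. one bit,
# consistent with the quoted 80 dB vs 86 dB.
print(20 * np.log10(4) - 20 * np.log10(2))  # ~6.02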

You should also consider that full well is measured and subject to tolerances and measurement errors. Unfortunately, most data sheets (not just for astro cameras) sometimes provide values but almost never in combination with tolerances; good engineering would include a statement about tolerances.
If they provided these figures, I'd bet that you wouldn't wonder about 51k vs. 200k electrons, as the difference would likely be swallowed by the tolerances.

Björn

PS: there do exist CMOS sensors with large pixels, for example Canon's full frame sensor with 19 µm pixels; search the web for product number 35MMFHDXSMA. Whether it's reasonably suited to our application is a different question.
For most of us, we'll have to deal with small pixels, I fear.

andreatax 7.22
You could well measure these parameters on your own camera; I bet they did the same to come up with these numbers. Besides, a lot depends on what wizardry they decide to implement in the camera firmware, so who knows? But I'll think about it. I guess it all descends from the effective 4x FWC. Whatever the final outcome, though, I don't see what disadvantage it would bring compared to CCD hardware binning. I mean, given that the advantage for a CCD is 1/4x the read noise of the four separate pixels vs 2x for a CMOS (so effectively a 1:2 ratio for the binned pixel), but CMOS read noise can be 1/10x that of a good CCD (never mind the crap that even Class 0 full frame CCD sensors can come up with, they're terrible!), you would be no worse off and possibly well better off.

BTW, I'd use 100 gain with your sensor ALL the time.

As for other cameras more suited to your needs:

https://starizona.com/products/fli-proline-pl16803-monochrome

https://www.gxccd.com/art?id=607&lang=409

kuechlew 7.75
I got the image scale issue but can someone please explain to an AP beginner what's "disappointing" about this image? I realise there is some blotchiness in the background (most likely a processing artefact) and the size of the stars is a matter of taste. Other than that, and given my own experience of how difficult long focal lengths are to handle, I find this image quite impressive.

Clear skies 
Wolfgang

andreatax 7.22
kuechlew:
I got the image scale issue but can someone please explain to an AP beginner what's "disappointing" about this image? […]

57 hours of imaging time for that with an 11"? I'd want something with better resolution and way more depth, regardless of anything else. This is one of the brightest galaxies in the whole sky!

SemiPro 7.38
kuechlew:
I got the image scale issue but can someone please explain to an AP beginner what's "disappointing" about this image? […]

[screenshot: Astrobin search results for M82 images with at least 50 hours of integration]

Just so everyone can kind of compare, here is a screenshot of all the pictures of, and around, M82 with at least 50 hours of integration on Astrobin.

So, about the ASI6200MM. You are blessed with the cream of the crop in terms of full-frame CMOS cameras. Unlike the previous generation, this camera has High Gain Mode, which is a fancy way of saying you are never going to use anything other than 100 gain. Let's check it out:
[image: ASI6200MM gain chart (read noise, full well and dynamic range vs gain)]
We can see that at zero gain you get lots of noise but a high FW and max DR. At 100 gain, the noise drops off a cliff, and the FW and DR are still respectable. Totally worth the trade-off; 99% of your imaging should be done at 100 gain. If you are maxing out pixels, with these new cameras and their low noise you mess with exposure times, not gain.
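
To put rough numbers on "still respectable" (the read-noise and full-well figures here are my approximations of the published ASI6200MM curves, so treat them as assumptions):

import math

# (read noise e-, full well e-), approximate values from the published charts
modes = {"gain 0": (3.5, 51_000), "gain 100": (1.5, 20_000)}
for name, (rn, fw) in modes.items():
    print(f"{name}: ~{20 * math.log10(fw / rn):.0f} dB, ~{math.log2(fw / rn):.1f} stops")
# gain 0:   ~83 dB, ~13.8 stops
# gain 100: ~82 dB, ~13.7 stops -- much lower noise for almost no DR loss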

@andrea tasselli is right: this image is being killed by the pixel scale. I'd give the reducer another try. Now, I understand it's tough trying to get a flat field with a full-frame camera; I have trouble with an APS-C sized chip. It's a battle you are just going to have to accept if you want to use a full-frame camera, and despite manufacturer claims it will sometimes just be impossible to get that perfectly flat field.

As for binning, you can just do that on the processing side of things by down-scaling your picture by 50% or so.  See https://astronomy-imaging-camera.com/tutorials/everything-you-need-to-know-about-astrophotography-pixel-binning-the-fundamentals.html for more info on that.

In the long term I would encourage you to look for an imaging platform that gets you to at least 0.60 arcsec/pixel for galaxies and whatnot. Even if you have to downsize your telescope, I think anything is preferable to dealing with f/10 and a 0.20 arcsec/pixel scale.

So in a nutshell:
Short term: kick the gain up to 100 and bin 2x2. That alone will go a looooong way toward improving your EdgeHD 11 shots.
Long term: explore alternate imaging platforms. Your life will be much easier when you are not imaging at 0.2 arcsec/pixel at f/10.

Linwood 5.76
Sorry, been away for a bit. 

OK, I get the 100 gain; frankly, I have no good reason for using zero except that I started that way on someone's advice and never changed.  I should build a revised dark library and go to 100.  And will.

And I get that it's a disappointing result.  I'm learning.  Thank you for helping.

But I'm stuck on the scale issue.  Can someone tell me the MATH of why it matters? 

Let's start with the assumptions that I am using this camera, not some new one, and that I am staying at 2800mm.  Maybe I should use a FR, but help me understand without.

This is not a CCD.  Binning in camera gives exactly the same noise result as downsampling in software (my understanding anyway). 

And I get that I'm manipulating more data and so my computer works harder -- tough, let it sweat. 

How would binning 2x2 in camera have made this a better image?   How does that math work?   

Or are you saying that binning would be identical, but using a focal reducer would have made an actual difference?  I get that... certainly I would need a lot less time for the same SNR, and I do not fully understand how it would change, but I clearly understand THAT it would change.

Or are you just simply saying "you didn't downsample far enough before posting"?

Please... I'm trying to learn, so I am not trying to sound argumentative, but just saying "killed by image scale" is not helping me learn, any more than telling me it's disappointing.

Would binning in camera change anything (and if so, please share the math or a reference)?

If you are simply saying "downsample x2 again before posting" fine, please say that (or 4x or whatever). 

Help?

SemiPro 7.38
From the Cloudy Nights forums:
[quoted image: a Cloudy Nights post on how pixel scale affects SNR]

I mean there it is. In a nutshell, a smaller pixel scale means less SNR.

Now, yes, we are talking CMOS cameras. When it comes to binning or down-sampling or whatever you want to call it, that happens on the software side. If you go to your image, it looks nice, right? I think it does. Then you zoom in to the full-sized view and suddenly it's blurry and noisy. By leaving it at, say, 50% or 40% of the original size, you are combining many pixels into one, leaving a better looking image. So yes, we are telling you that you should reduce the image scale in post-processing for a better end result. Now, that can be seen as subjective, but this is the critique forum, so there ya go.

On the imaging side, hopefully it shows how a larger pixel size helps increase SNR. The lower the pixel scale, the less light hits each pixel, and therefore the time you need to get a good SNR increases. Getting back to how it "killed" the image: it's not so much that it killed the image itself, but it killed the potential of the integration time. Supposing you changed nothing but the camera and used one that increased your pixel scale, you would have collected more light as a whole and would have had better SNR.
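
A toy calculation of that last paragraph (my own numbers; it assumes an extended source, fixed aperture and exposure, and photon noise only):

import math

flux = 5.0   # photons per arcsec^2 per second, hypothetical surface brightness
t = 300.0    # exposure in seconds

for scale in (0.28, 0.55, 1.10):           # arcsec/px: roughly native, bin 2, bin 4
    signal = flux * scale**2 * t           # photons landing on one pixel
    print(f'{scale:.2f}"/px: SNR ~ {math.sqrt(signal):.0f}')
# 0.28"/px: SNR ~ 11;  0.55"/px: SNR ~ 21;  1.10"/px: SNR ~ 43
# Doubling the pixel scale quadruples the photons per pixel and doubles SNR.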

Essentially we are telling you, in our subjective opinion, that you are not getting the bang for your buck from 50 hours of imaging time. If you consider that all the ways to increase what you get out of that time (a camera with bigger pixels, a reducer to lower the f-ratio, a new telescope with a lower f-ratio) also increase the pixel scale, it might explain the hyper-focus we have on it.

Linwood 5.76
Thanks, @SemiPro, that helps and fits with what I understood.  Downsampling further prior to display is sensible.  

I get the idea that larger sensor sites can capture light with less noise (it is not obvious they capture more light, as the surface area of the sensor is unchanged; it depends on the physical construction and how much light falls on non-sensitive areas).  But a new camera is not high on my list right now. I am about to buy a 6" refractor and would rather spend the money on that (and at somewhere near 1000mm the scale is much more appropriate as well).

I drove myself nuts for several nights adjusting backfocus on the focal reducer with the C11.  I do not know if I have a bad one, or just did it badly, but never got star shapes I liked.  It's usable, but requires a lot of cleanup.   An example here: https://www.astrobin.com/32xnwf/

But isn't that also the same impact as downsampling?  For very small objects, again ignoring processing time, doesn't the math come out the same if imaged at 2800mm and downsampled as if imaged at 2000mm and left alone?  Or is that actually different?

FWIW I like that example better than my results on M82 also. 

But to be clear -- binning in camera (in CMOS) would have no effect different from a subsequent downsample of the same amount.  Right?

......  change of subject .... 

Given it is what it is at the moment in terms of data acquired, any recommendations on processing?

SemiPro 7.38
Linwood Ferguson:
I drove myself nuts for several nights adjusting backfocus on the focal reducer with the C11.  I do not know if I have a bad one, or just did it badly, but never got star shapes I liked.  It's usable, but requires a lot of cleanup.   An example here: https://www.astrobin.com/32xnwf/ […] Given it is what it is at the moment in terms of data acquired, any recommendations on processing?

Hmmm, yeah, I see what you are getting at with the reducer. Consider checking collimation, and perhaps tilt, with the reducer installed, if you haven't done so already.  If it were the back-focus alone, the star shapes would be warped evenly in all the corners.

I wouldn't say down-sampling is the same as chucking in a focal reducer or another mechanical solution. You still captured all that data at f/10; the gain you get from reducing the telescope down to f/7 is way more than just down-sampling in post-processing.  I don't THINK binning a CMOS would be better than just down-sampling in post, because of the design of CMOS cameras, but maybe someone else is better equipped for that question.

I'll leave any critiques about processing to someone else since I feel like I have said enough haha

Linwood 5.76
I certainly appreciate the feedback, thank you.

barnold84
From a distance, the reducer would lead to the same effect as binning (both to the same image scale). Now the closer look: the reducer increases the field of view, which binning does not, and a reducer may introduce additional optical aberrations (I don't recall the residual aberrations of the EdgeHD), especially if the spacing isn't correct.

Hence, if your field of view is acceptable for you, I'd bin the sensor instead of using a reducer.
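
For concreteness, the plate-scale arithmetic behind "both to the same image scale" (a sketch; the 3.76 µm pixel size and the EdgeHD's nominal 0.7x reducer factor are assumptions on my part):

def image_scale(pixel_um, focal_mm):
    """Plate scale in arcsec per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_mm

print(image_scale(3.76, 2800))        # native C11 EdgeHD:      ~0.28"/px
print(image_scale(3.76, 0.7 * 2800))  # with the 0.7x reducer:  ~0.40"/px
print(image_scale(2 * 3.76, 2800))    # binned 2x2, no reducer: ~0.55"/px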

Björn

andreatax 7.22
On the subject of CMOS binning:

It is crystal clear that binning in a CMOS device does not add but averages the signal, and that after it has been digitized (i.e., passed through the ADC). That said, we have to recognize that when we talk about digital information we are really talking about the modulation of some physical signal, in our case voltage. The longer the signal travels through a physical medium (copper wires in our case) and is processed by a digital device, the greater the chance that a small amount of noise is added along the way. Binning therefore would reduce this noise by a factor of 4 before it reaches you in the form of an image on your computer. That is my understanding.

So binning in a CMOS gives you 3 advantages: a smaller digital footprint to process, higher DR and (possibly quite small) lower noise. Since you're oversampling your seeing by a huge amount, just cut to the chase and do everything in bin 2x2.

That doesn't solve your problem with the 57-58 hours of M82 imaging time already in hand. I repeat my suggestion that you carry out your whole processing (besides the preliminary ImageCalibration step) in downsampled mode (2x2 or even 3x3). Or put it in the public domain and ask people how THEY would do it.

Cheers

barnold84
andrea tasselli:
It is crystal clear that binning in a CMOS device does not add but averages the signal, and that after it has been digitized (i.e., passed through the ADC). […] Binning therefore would reduce this noise by a factor of 4 before it reaches you in the form of an image on your computer.


As you say, CMOS binning is done after the ADC. Assuming that the rest of the processing pipeline doesn't randomly flip bits (which has negligible probability here on Earth), no additional noise will be introduced.
CMOS binning isn’t necessarily an average. For example, Altair doesn’t scale its 12 or 14 bit signal to 16 bit (like ZWO does). So, if you image unbinned with a 12bit Altair, the maximum digital number is 4095. If you bin 2x2, this number is 16383. Also averaging is nothing else than adding and then division by a constant. However, if one does average, one should use floating point data as one would indeed lose accuracy again and in fact introduce (totally unnecessary) „noise“.

For CMOS, a 2x2 binning reduces (read) noise by factor 2 while a CCD would reduce by 4.
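
The read-noise bookkeeping behind those factors, as I understand it (a sketch assuming equal, uncorrelated read noise per pixel and a 4x signal in both cases):

import math

rn = 1.5                    # read noise per pixel read, e-, hypothetical
ccd = rn                    # CCD: charge is binned first, then read once
cmos = rn * math.sqrt(4)    # CMOS: four reads add in quadrature -> 2x

# Against the 4x signal, the read-noise-limited SNR improves 4x for the CCD
# and 2x for the CMOS -- the "factor 4" vs "factor 2" above.
print(ccd, cmos)            # 1.5 vs 3.0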

Björn

RichardHall 2.71
Hi Linwood,

Kudos to you for having the patience to invest 50+ hours in any target; that's a heck of a lot of time, but worthwhile to do once to see how your target fares after such a long time. I'm also currently processing M81/82, and the brightness of it is killing me: I'm shooting at f/6.9 and my luminance is crazily bright with 120s exposure time.

I'm shooting at only 910mm with a 2600MM, which I'm happy with, but as our fellow Astrobinners say, I can confirm my images seem to take a leap in a better direction when I'm shooting with a reducer and therefore a lower f-ratio, not to mention the time reduction (technically I spend the same amount of time, usually 2 to 3 nights, on a single target), but the return is far superior to shooting at f/7, and the guiding is more forgiving. The only thing I don't like about it is, of course, that I'm not as deep as I would like to be.

The image I'm processing greatly benefitted from the following, which you may wish to try if you haven't already, especially on those stars.

1) Remove the stars from your picture with StarNet v2/StarXTerminator and work on them separately. I combine RGB first non-linear, remove the stars, then remove the stars from the luminance and do the final touches to get the luminance/Ha data as good as I can without having to compromise on star size. Then, when you are happy, combine the L with the RGB. I don't use the L stars after that, just the RGB ones, and when I'm happy I PixelMath the stars back into the LRGB image. Fellow Astrobinners might completely disagree with my approach, and I'm also still learning, but that gives me, for my taste and where I am in this journey, a more appealing picture.
2) My M81/M82 picture, which I haven't yet published, is greatly improved by HDRMultiscaleTransform, less so on M82 than M81, but I do see a small improvement.
3) I'm a bigger fan of more blueish galaxies and redder Ha, but that's a taste thing, and believe me, at the moment I am not happy with how the colours are coming out in my own picture.

Granted, I'm no expert, probably still a novice who just invested way too much to get the "best of the best", but there is also only so much you can do with post-processing. If the right compromise isn't struck in the combination of telescope and camera, then you'll always be disappointed rather than pleasantly surprised.

Next suggestion: make your TIFF files available for others to see how their results fare against yours. If you feel you did a better job for your tastes, then you are at least maximizing your post-processing skills. But you may also learn something new.

Hope this helps!

andreatax 7.22
barnold84:
For CMOS, a 2x2 binning reduces (read) noise by factor 2 while a CCD would reduce by 4.


AFAIK, in a CMOS read noise doubles with binning while photon noise halves.

barnold84
andrea tasselli:
AFAIK, in a CMOS read noise doubles with binning while photon noise halves.

It depends on how one calculates. The correct (invariant) statement is that the SNR improves by a factor of 2.
If one adds the pixels, the noise doubles; if one averages, i.e. divides by 4 (for 2x2), the noise is halved (compared to the noise value of an unbinned pixel).

IMHO, noise only becomes meaningful when put in context with the desired signal; hence, looking at the SNR is probably the right way.
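
A small Monte Carlo of that invariance (my own sketch): summing and averaging a 2x2 group scale signal and noise by the same constant, so the SNR is identical either way.

import numpy as np

rng = np.random.default_rng(1)
px = rng.normal(loc=20.0, scale=2.0, size=(100_000, 4))  # per-pixel SNR = 10

summed = px.sum(axis=1)      # signal 80, noise ~4
averaged = px.mean(axis=1)   # signal 20, noise ~1

print(summed.mean() / summed.std())      # ~20
print(averaged.mean() / averaged.std())  # ~20: SNR doubled either way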

Linwood 5.76
@Richard, thank you for the processing comments.  I have on other images worked directly on stars by either separation or a de-emphasis process, but did not here.  There's a Dolphin in my gallery I spent waaaayyyy too many hours on doing that, and I probably should have here.  I may do another processing run later (though I spent enough time processing this, working out the benefits or lack thereof of NSG vs PSF weighting, etc., that I need a break from it).  I have also struggled with whether making Ha brighter/redder is good or bad; there's a point at which it looks a bit fake.  I'll go to bed happy with a color rendition, look at it in the morning, and hate it and redo it.  Anyway, thanks for the processing suggestions.

It's too bad NASA can't put up a MacBeth chart beside our targets so we can get the "right" colors.  Though we might find them boring.

Andrea and Björn: I realize I may be tilting at windmills, and with the possible exception of DR (which I am still trying to understand, relative to a 14 bit DR in a 16 bit ADC), it still seems to me that the biggest argument for binning in camera vs afterwards lies in smaller files.

One thing I have also seen argued is that downsampling after most processing (in particular weighting and deconvolution) is better, as the additional resolution provides for more accurate PSF calculations, important for current PI weighting and critical for deconvolution.  Admittedly that was in the context of "slight" oversampling, but here is a recent posting by Juan (the author of PixInsight):

https://pixinsight.com/forum/index.php?threads/where-when-to-downsample-a-drizzle-inetgration.18132/

Here is what it comes down to, in some ways (and again, with DR perhaps being the exception): if the noise ends up the same when downsampled later, and if binning in camera results in ANY potential loss of detail, it seems safest to me to downsample downstream in the process rather than in camera.  If I capture at bin 1x1 I can always reduce it; you can never put data back into a bin 2x2 capture.

But... I am FAR from sure I understand the math.  And that's why conflicting info, like what I am inferring (perhaps erroneously) from comments like Juan's above, has me still reluctant to throw away data inside the camera.

I am NOT arguing that I posted an image appropriately downsampled; please do not misunderstand, and thank you for that feedback.

I am striving to understand the math of why it would be a better image binned 2x2 in camera than downsampled later in post processing, specifically near the end, after integration and deconvolution.

barnold84
Linwood Ferguson:
I am striving to understand the math of why it would be a better image binned 2x2 in camera than downsampled later in post processing, specifically near the end, after integration and deconvolution.

Linwood,

Post-processing involves amplifying weak signal. However, it also amplifies weak differences. Therefore, if you stretch a highly oversampled image, you will have a lot of noise variation. If you bin the data beforehand, you will be able to (better) determine the true (mean) values of signal and noise.

I made a numerical example to illustrate it. The first graph shows 20 pixels; the read noise (yellow) is 2 on average and the signal is 20 (on average). Due to the nature of statistics, you can see how strongly the pixels (columns in the graph) vary.
The second graph shows a 4-binning, i.e. the 20 pixels partitioned into 5 super-pixels, where each super-pixel consists of 4 pixels. As you can see, the read noise and signal start to level out.

Unbinned:
[graph: 20 unbinned pixels, signal and read noise varying strongly]

Binned:
[graph: 5 binned super-pixels, signal and read noise leveling out]
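
A rough numpy reconstruction of this toy example (my own random numbers, not the exact ones behind the graphs):

import numpy as np

rng = np.random.default_rng(42)
pixels = rng.poisson(20, size=20) + rng.normal(0, 2, size=20)  # signal ~20, read noise ~2

super_pixels = pixels.reshape(5, 4).mean(axis=1)  # five 4-pixel super-pixels

print(np.round(pixels, 1))        # strong pixel-to-pixel variation
print(np.round(super_pixels, 1))  # much flatter: values level out toward 20
print(pixels.std(), super_pixels.std())  # the scatter roughly halves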

Imagine you had stretched the data first. You wouldn't be able to average the data and recover the signal appropriately, as the statistical distribution that drives the noise becomes totally skewed.

Therefore: downsample the lights after calibration. Whether in the camera or on disk is up to you. Doing it on disk may be the choice if you want to keep the option of higher resolution on nights with exceptional seeing (but still with a rather reasonable image scale to begin with).

Hope that explains a bit.

Björn