Dual-Narrowband filter+OSC vs. Ha/O3 filters+Mono Comparison [Deep Sky] Acquisition techniques · Andre Vilhena

jml79 3.87
I would be terribly interested in a proper paper on this and I may look for one. Unfortunately I don't have the money for identical mono and OSC cameras. My thinking is that the real-time difference shooting only Ha and OIII at a 1:2 ratio (as close to apples-to-apples as possible) would be about 35% in favour of mono, but that's just theory and doesn't take into account the flexibility of mono or other advantages like shooting a luminance channel.
neverfox 2.97
Arun H:
This is a misunderstanding of how noise and SNR work. Noise = uncertainty. Repeat the same acquisition again, and the same pixel will have a slightly different value on account of noise. Saying that demosaicing will fill in the remaining pixels with a similar statistical profile is like saying that one can fill in the value of the next coin toss based on the current coin toss result. Demosaicing can certainly interpolate the value of an unsampled pixel, but the uncertainty caused by acquiring less light will not be eliminated. Were this not so, one could simply take the result of a mono camera, break it into four separate channels, and get four times the frames.


I think there's some fundamental misunderstanding of what I mean here. Rather than try to align on language or hash it out with analogies, let me just present the results of a controlled experiment that shows clearly what I'm claiming (and what I'm not) and that it's supported by actual data.

In PixInsight, I created a test image with a known uniform signal of 0.5. I simulated a mono image by adding a known amount of noise (Poisson noise and Gaussian noise with SD = 0.1). We can say this was any amount of sky time, so let's say it was 1 hour per channel (per filter). According to Juan, the way SNR is measured in the software is the ratio of the powers of signal and noise:

SNR = E(s^2) / E(n^2)

We normally have to estimate these, but in this experiment, we know them. So the SNR of the mono image is 0.25 / 0.01 or 25. PI agrees:

Screen Shot 2023-01-22 at 5.47.26 PM.png

Now, I use PixelMath to turn the mono image into an OSC image by blacking out the appropriate pixels on the appropriate channels to achieve an RGGB CFA pattern, and then I debayer it (VNG). Note that it would represent 1/3 of the sky time to obtain data in this form, because an OSC camera obtains it all at once rather than having to cycle through each filter. Because our signal was uniform, any deviation is noise. And guess what? The scaled noise evaluation is similar to (and even slightly better than) the mono image:

Screen Shot 2023-01-22 at 5.46.54 PM.png

Screen Shot 2023-01-22 at 5.47.07 PM.png

The evaluated noise actually improved. But this is an estimate ignorant of the true signal. However, we can also confirm E(n^2) directly, since we know the true signal, by using PixelMath to take the mean of an image produced with ($T - Signal)^2. This leads us to an SNR in the neighborhood of 30...in one hour.
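For anyone who wants to poke at this outside PixInsight, here is a rough numpy sketch of the same experiment. It is not the PI project (Gaussian noise only, and a simple four-neighbour average standing in for VNG), so the exact numbers differ from the screenshots, but the qualitative result is the same: filling the dead CFA sites by interpolation does not raise E(n^2).

```python
import numpy as np

rng = np.random.default_rng(1)
h = w = 512
signal, sd = 0.5, 0.1

# "Mono" frame: uniform 0.5 signal plus Gaussian noise, SD = 0.1
# (Gaussian-only for brevity; the shot-noise term is small at this SD).
mono = signal + rng.normal(0.0, sd, (h, w))

def snr(img, true):
    # SNR = E(s^2) / E(n^2); the true signal is known by construction.
    return true**2 / np.mean((img - true) ** 2)

snr_mono = snr(mono, signal)  # ~ 0.25 / 0.01 = 25

# Green plane of an RGGB mosaic: the "live" sites form a checkerboard.
yy, xx = np.mgrid[0:h, 0:w]
live = (yy + xx) % 2 == 0
green = np.where(live, mono, 0.0)

# Fill dead sites with the mean of the four live neighbours
# (a crude stand-in for VNG interpolation).
pad = np.pad(green, 1, mode="reflect")
cross = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
filled = np.where(live, green, cross)

snr_green = snr(filled, signal)  # interpolation lowers E(n^2), so SNR goes up
```

The interpolated sites are averages of independent samples, so their variance is lower than that of the live sites; that lower-noise, lower-detail character is exactly the low-pass effect discussed later in the thread.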

Not only did this confirm what I suggested, it went beyond: the spatial sampling of color by the Bayer matrix, once debayered, does not, in and of itself, cause SNR to be reduced. Quite the contrary, the parallel nature of the capture of channels saves time. Interpolation was able to create a full image with a "similar statistical profile" to the image before debayering (which was similar to the mono image when only counting the "live" pixels in each channel).

And that's all I meant by that. This doesn't mean that mono images aren't better on other measures of quality, or that there aren't real differences in QE or filter transmission that move things back in the other direction in practice (though of course it works the other way too, because RGB filter sets often have gaps that Bayer filters don't, etc). But all of that is unrelated to the claim people make that "total photons are greater because you aren't skipping pixels, so SNR is greater in less time." The values of pixels aren't improved by capturing light in other pixels. Nothing in the OP of the link you posted contradicts that either (in fact it seems to agree). But if you think I missed an important part relevant to that matter, let me know. Also let me know if you want the PixInsight project of this experiment.
Staring 4.40
Roman Pearah:
In PixInsight, I created a test image with a known uniform signal of 0.5. [...] Not only did this confirm what I suggested, it went beyond: the spatial sampling of color by the Bayer matrix, once debayered, does not, in and of itself, cause SNR to be reduced. Quite the contrary, the parallel nature of the capture of channels saves time.

You have a fallacy in your experiment: your "OSC" image has decreased noise because what you did acts as a low-pass filter on your data. To simulate a comparable effect on the mono data you could just down- and upsample it (by approximately a factor of 2). This is the strongest argument against OSC being more efficient than mono: you can simply bin the mono data so it's as blurry as the OSC one and regain any SNR advantage of the "simultaneous channels". The experiment also totally ignores the much reduced efficiency of double-filtering the signal (with the Bayer matrix and the dual-narrowband filter).
HegAstro 11.99
Roman Pearah:
In PixInsight, I created a test image with a known uniform signal of 0.5. I simulated a mono image by adding a known amount of noise (Poisson noise and Gaussian noise with SD = 0.1). We can say this was any amount of sky time, so let's say it was 1 hour per channel (per filter). According to Juan, the way SNR is measured in the software is the ratio of the powers of signal and noise:


First, there is a fundamental error here in how you have designed your experiment and it is this - the true signal coming from the sky is unknown and has to be estimated from the readings on the pixel. In other words, signal from the sky is not some known value with some Gaussian noise on top of it (as you have done), but a Poisson distribution whose standard deviation is the square root of the signal itself.
Roman Pearah:
The values of pixels aren't improved by capturing light in other pixels. Nothing in the OP of the link you posted contradicts that either (in fact it seems to agree).

The value of the sampled pixel follows the same Poisson distribution as every other sampled pixel. But a quad of pixels where each one is statistically independently sampled adds information in the case of the mono that does not exist in the case of the RGB sensor.

But as I pointed out before - if it were possible to interpolate unknown points and achieve the same overall SNR - what stops us from doing that for mono? Every pixel in a mono is a statistically independent sampling of the signal. So for every mono frame, I can break the frame into four statistically independent samplings (by taking every fourth pixel in a frame), apply interpolation to those, and get the same overall SNR in one fourth of the time! Which is where your experiment falls apart - because you seem to have proven an obvious fallacy: that even mono frames can achieve the same SNR in a fraction of the time actually needed!

Edit: I see that Torben made the exact same point!
HegAstro 11.99
Arun H:
I can break the frame into four statistically independent samplings (by taking every fourth pixel in a frame), apply interpolation to those, and get the same overall SNR in one fourth of the time!  Which is where your experiment falls apart - because you seem to have proven an obvious fallacy - which is that even mono frames can achieve the same SNR in a fraction of time actually needed!


In fact, why should I stop at 4? There is nothing magical about 4. I can divide by 8 and get the same SNR in 1/8th of the time, or by 16 and get it in one 16th of the time!
Staring 4.40
Arun H:
Arun H:
I can break the frame into four statistically independent samplings (by taking every fourth pixel in a frame), apply interpolation to those, and get the same overall SNR in one fourth of the time!  Which is where your experiment falls apart - because you seem to have proven an obvious fallacy - which is that even mono frames can achieve the same SNR in a fraction of time actually needed!


In fact, why should I stop at 4? There is nothing magical about 4. I can divide by 8 and get the same SNR in 1/8th of the time, or by 16 and get it in one 16th of the time!

Yes. Bin to increase SNR. Then use drizzle interpolation to regain detail and trade for SNR… no free lunch.
andreatax 7.90
Arun H:
But as I pointed out before - if it were possible to interpolate unknown points and achieve the same overall SNR - what stops us from doing that for mono? Every pixel in a mono is a statistically independent sampling of the signal. So for every mono frame, I can break the frame into four statistically independent samplings (by taking every fourth pixel in a frame), apply interpolation to those, and get the same overall SNR in one fourth of the time! Which is where your experiment falls apart - because you seem to have proven an obvious fallacy - which is that even mono frames can achieve the same SNR in a fraction of time actually needed!

This cannot be done and makes no sense whatsoever. The only real proof of the pudding is in eating it, so maybe someone with monochrome and OSC versions of the same sensor can do the testing using a dual-band filter and actually measure things to prove or disprove the point. Or, flipping the thing on its head, take a shot through a mono, apply an OSC mask, interpolate the blanks, and compare the SNRs.
HegAstro 11.99
andrea tasselli:
This cannot be done and makes no sense whatsoever.


Well, duh, of course it makes no sense. Which is the point Torben and I are making.

It is silliness and flies in the face of basic Poisson statistics and the central limit theorem, which dictates that the uncertainty in the mean depends on the number of statistically independent samplings. It is the very basis of statistics. These kinds of arguments are like people trying to prove the existence of perpetual motion machines.
neverfox 2.97
Arun H:
First, there is a fundamental error here in how you have designed your experiment and it is this - the true signal coming from the sky is unknown and has to be estimated from the readings on the pixel. In other words, signal from the sky is not some known value with some Gaussian noise on top of it (as you have done), but a Poisson distribution whose standard deviation is the square root of the signal itself.


That's not a flaw, that's a feature. Our lack of knowledge about the true signal in practice means we have to rely on estimates, but knowing the real signal allows us to measure the impact of downstream operations precisely and without any room for skepticism. And I didn't just put Gaussian noise on top; I literally said I used a Poisson transformation first.
Arun H:
The value of the sampled pixel follows the same Poisson distribution as every other sampled pixel. But a quad of pixels where each one is statistically independently sampled adds information in the case of the mono that does not exist in the case of the RGB sensor.


I never denied this. I said this many times. It's the whole point of my last paragraph. You have less information. You have less detail. You do not have lower SNR. That's all I ever asserted.
Torben van Hees:
The experiment also totally ignores the much reduced efficiency of double filtering the signal (with the Bayer matrix and the dual-narrowband filter).


It ignores it because it's not about dual-narrowband. I never once made any claims about dual-narrowband imaging.
Torben van Hees:
You have a fallacy in your experiment: Your „OSC“ image has a decreased noise because what you did acts as a low-pass filter on your data. To simulate a comparable effect on the mono data you can as well down- and upsample it (by approx. a factor of 2). This is the strongest argument against OSC being more efficient than mono: You can just bin the mono data so it‘s as blurry as the OSC one and regain any SNR advantage of the „simultaneous channels“.


It's not a fallacy, because of course there are reasons for the SNR efficiency. I wasn't trying to neutralize them. I wanted the OSC to benefit from whatever low-pass filtering etc. occurs, to prove something about SNR in isolation. I've consistently said that the tradeoff is that you're getting a blurred image due to interpolation. I never once said you'd get a similar SNR at the same level of detail; I've only said you'd get a similar SNR in less time. Yes, it matters that it's blurred (that's one reason why I shoot mono myself), but this was only to counter the mistaken claim that the SNR is lower. People do make that claim and it's wrong. At the end of the day, you are both agreeing with me (properly understood) by the explanations you're offering for the result ("yes, it's similar/better in less time, but..."). It's better SNR in less time, period. All the other things being traded off are a change of subject. We can talk about those, but it's still changing the subject.
HegAstro 11.99
Roman Pearah:
It's not a fallacy because the claim was never that you get the SNR at the same level of detail. I've consistently said that the tradeoff is that you're getting a blurred image due to interpolation. I never once said you'd get the similar SNR at the same level of detail. I've only said you'd get the similar SNR in less time. Yes, it matters that it's blurred (that's one reason why I shoot mono myself), but this was only to counter claims that people mistakenly make that the SNR is lower. People do make that claim and it's wrong.


If the argument is a tradeoff of detail for SNR, as I repeatedly pointed out, I can do the exact same thing with mono: get four equally blurred images in the same time as OSC, or one blurred image in one fourth the time. This in fact - the trade-off of resolution for SNR - is what happens in binning. You are making something very basic, which holds for both mono and OSC, seem like some advantage of OSC, when it isn't.
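That resolution-for-SNR trade can be checked numerically. A sketch (toy uniform frame plus Gaussian noise, not real data) of 2x2 software binning on a mono frame:

```python
import numpy as np

rng = np.random.default_rng(7)
signal, sd = 0.5, 0.1
mono = signal + rng.normal(0.0, sd, (512, 512))  # synthetic flat mono frame

def snr(img):
    # power SNR against the known uniform signal
    return signal**2 / np.mean((img - signal) ** 2)

# 2x2 software bin: average each 2x2 block, halving resolution.
binned = mono.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Averaging 4 independent pixels divides the noise variance by 4,
# so the power SNR improves ~4x at the cost of spatial detail.
gain = snr(binned) / snr(mono)
```

The gain has nothing to do with a CFA; it is the same trade the interpolated OSC image makes implicitly.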
neverfox 2.97
Arun H:
In fact, why should I stop at 4? There is nothing magical about 4. I can divide by 8 and get the same SNR in 1/8th of the time, or by 16 and get it in one 16th of the time!


Why indeed? Like I've said many times, the SNR efficiency comes at a cost (it always does). But I'm only saying that the SNR efficiency is there. That the tradeoff is made. There are people (perhaps not you) who deny that, i.e. they think you have less detail and less SNR. So you're not one of those people. You get it. Great.
neverfox 2.97
Arun H:
If the argument is a tradeoff of detail for SNR, as I repeatedly pointed out, I can do the exact same thing with mono. Get four equally blurred images in the same time as OSC or one blurred image in one fourth the time. This in fact - the trade off of resolution for SNR - is what happens in binning. You are making something very basic, which holds for both mono and OSC, seem as some advantage of OSC, when it isn't.


Only because there is a contingent of people who vocally come out and say things like "the SNR of OSC is less because there are only 1/4 red pixels etc." They literally assume that means that the numerator of the SNR calculation has to be adjusted because they don't realize that interpolation can maintain the ratio (at a cost). I understand that you're not one of those people but they exist. In fact, my first response was to someone seemingly doing just that. In general, I think we're largely, if not entirely, on the same page.
HegAstro 11.99
Any discussion of SNR has to include scale.

At common scale and for fixed time, one and only one thing determines SNR, and that is the number of photons captured. This is Poisson statistics 101.

As mentioned, there are situations in which an OSC will give better SNR than mono. One example is pure RGB imaging, where the OSC has the advantage that the color filters of the Bayer array have larger bandwidths than the mono filters we use (neglecting any detrimental effect of light pollution from that extra bandwidth). When we add the effect of luminance imaging, the mono usually wins (depending on the ratio of lum to RGB).

For the case of dual narrowband, it will depend on filter efficiencies. Remember we are not interested here in the overall bandwidths; in fact, too wide a bandwidth is detrimental because it lets in light pollution. Rather, we are simply interested in peak transmissions at the common emission lines. And here, I do think mono has an advantage, because the Bayer array filters of an RGB sensor have peaks (other than perhaps for OIII) that do not coincide with the emission lines we are interested in. So that adds a penalty.
HegAstro 11.99
Here is an example of what I am talking about - the combined transmission of the L-eXtreme + Sony Red for an OSC:

image.png

You can see the peak transmission is 78% at H-alpha - purely due to the fact that the Bayer array of the Sony sensor does not peak in the red at the H-alpha wavelength. A comparable Astrodon or Chroma filter has 99% transmission at H-alpha:

image.png

That's a 21-point (roughly 27% relative) advantage for the mono in H-alpha capture. OIII numbers will be a little less biased in favour of the mono, but you get the idea.
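A back-of-envelope sketch of what that transmission gap costs in integration time, assuming the shot-noise-limited case where SNR scales as sqrt(T * t):

```python
# SNR ∝ sqrt(T * t) in the shot-noise limit, so matching SNR requires
# t_osc / t_mono = T_mono / T_osc at the Ha line.
T_mono, T_osc = 0.99, 0.78       # peak Ha transmissions quoted above
time_penalty = T_mono / T_osc    # ~1.27: roughly 27% more integration for the OSC
```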
jml79 3.87
Wow, those Astrodon and Chroma filters are expensive: between $1250 and $1400 each for the emission-line filters, and $1300 for the LRGB set in 2".
TimH
It seems like there is general agreement that under certain circumstances - simple RGB imaging - an OSC does as well as or better than a mono camera, but that under quite a few other circumstances - e.g. when using LRGB rather than RGB imaging, or when using NB filters (the original topic of this thread) - mono cameras will edge it, or indeed offer a substantial advantage. I found the thread on CN that Arun linked above a really useful starting point for thinking about it all quantitatively and taking into account, for example, the different characteristics of the Bayer matrix and mono camera filters.

I'd just like to add a couple of slightly different points 

1) Somewhere above, Andrea https://www.astrobin.com/users/andreatax/ slipped in the point that, actually, most users of OSC cameras are probably not using VNG debayering at all but are using Bayer CFA 1x drizzle.

I think this is true, especially for users like myself who are coping with Bortle 7 and above. Under these circumstances (necessarily short frames), VNG debayering delivers poor OSC colour and CFA 1x drizzle is really significantly better.

I was interested in quantifying the price paid for the improved colour of the drizzle method, in which the need for a calculated estimate of electrons in adjacent wells is circumvented.

Below are comparative data on images from the same set of 400 frames, either debayered and integrated or drizzle integrated. Both linear images were treated minimally and identically (ABE and PCC colour calibration) before statistical measurement. The colour of the drizzled image is visibly better. There appears to be no significant difference in the spatial resolution of the two images (FWHM estimates the same within error), but for faint features (I picked a faint background galaxy) the SNR of the Bayer-integrated image appears to be significantly better than that of the drizzled image (I suppose the latter loses the benefit of the calculated virtual electrons of the Bayer matrix?). For the blue and red channels the difference is about 1.5-1.7x; for the green channel it is about 1.3x. This seems to make sense given that there are more green pixels.

So - in practice - it seems to me that under high light pollution, when using CFA 1x drizzle rather than debayering, it will be necessary to accumulate a longer total exposure time to reach the same SNR and detect the same faint detail than otherwise (i.e. debayering, or using a mono camera). On the face of it, even a 1.3-1.5x SNR difference is equivalent to needing the total exposure to be about 2x as long in order to catch up.
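The step from a 1.3-1.5x SNR gap to roughly 2x the exposure follows from SNR scaling as sqrt(t) in the shot-noise limit; as a quick sketch:

```python
def time_factor(snr_ratio):
    # SNR ∝ sqrt(t)  =>  recovering an SNR deficit of r costs r**2 more time.
    return snr_ratio ** 2

factors = {r: time_factor(r) for r in (1.3, 1.4, 1.5)}
# 1.3 -> ~1.7x, 1.4 -> ~2x, 1.5 -> ~2.25x total exposure to catch up
```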

So I'd add a caveat: OSC cameras are as good as or better than mono for RGB colour imaging, unless you are under highly light-polluted skies and using CFA 1x drizzle.


2)  My second point comes back to the advantages/ disadvantages of dual narrow band filters plus OSC versus mono cameras plus single NB filters.

I think it is also important to consider that many deep-sky objects are really RGB plus Ha only - i.e. there is no real point in the OIII. For example, starburst galaxies like M82, galaxies with Ha arms like M106, or the many HII regions virtually devoid of OIII. In these cases the valid comparison is really just between an OSC with, say, a 7 nm Ha filter in front of it and a mono camera with the same. Here there can surely be no contest. Aside from the inherent ~20% loss from putting a filter over a filter, the sparse red pixels will also matter greatly for the SNR (although not the spatial resolution). The OSC will detect fewer Ha photons and would be expected (I think) to yield an SNR less than half that of the mono camera in the same imaging time.

Anyway, having a matched pair of mono and OSC cameras (ASI 294), I am in the ideal position to do a controlled experiment (I can quickly swap cameras in the same imaging session) to compare SNR in Ha on the same night for the same object and imaging time, and I will report back on this if an opportunity arises.

image.png



image.png

image.png
andreatax 7.90
Tim Hawkes:
Below are comparative data on images of the same set of 400 frames either debayered and integrated or drizzle integrated. [...] For the blue and red channels the difference is about 1.5-1.7X - for the green channel it is about 1.3X.

Tim,

I don't understand how you arrived at those numbers (the ones in colour above). AFAIK, PI doesn't provide absolute values of SNR (since the original signal is unknown), only the relative improvement of SNR (since you can estimate the noise if you know its distribution function). I looked around and I fail to see a script that calculates the SNR of an image.
neverfox 2.97
andrea tasselli:
I looked around and I fail to see a script that calculates the SNR of an image.


From Juan at PI:
The noise evaluation scripts provide estimates of the standard deviation of the noise in the image, assuming a Gaussian noise distribution (which is a simplification, but a reasonably good approximation especially for integrated images). There are several interpretations of SNR; the one that we use in PixInsight is the ratio of the powers of signal and noise:

SNR = E(s^2)/E(n^2)

where s is the average signal and n is the average random noise. E() represents the expected (or mean) value. Assuming that the random noise has zero mean, the denominator can be replaced by the variance of the noise, or the square of the noise estimate that you get from the NoiseEvaluation script. The numerator poses a much more difficult problem, since a significant and robust estimate of the average signal is quite difficult to obtain. You can use the mean of squares (the Statistics tool provides this value) as a very rough approximation.
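Juan's rough recipe can be sketched numerically (synthetic flat frame; `np.std` of the frame standing in for the NoiseEvaluation estimate, which works here because the simulated signal is uniform):

```python
import numpy as np

rng = np.random.default_rng(5)
img = 0.5 + rng.normal(0.0, 0.1, (512, 512))  # synthetic flat frame

sigma_hat = np.std(img)  # noise SD estimate (signal is uniform, so std ~ noise)
snr_est = np.mean(img**2) / sigma_hat**2
# mean of squares (~ signal power + noise power) over the noise variance:
# a rough, slightly upward-biased estimate of E(s^2)/E(n^2)
```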
neverfox 2.97
andrea tasselli:
I don't understand how you arrived to those numbers


It looks like he's taking the difference in the means of the right image and the "noise" image on the left as signal and the standard deviation of the noise image as the noise.
TimH
Roman Pearah:
It looks like he's taking the difference in the means of the right image and the "noise" image on the left as signal and the standard deviation of the noise image as the noise.

Yes, that's right Roman - not trying to measure the SNR of the entire image, which is more complicated (could perhaps use SubframeSelector numbers?), just the SNR of a small defined feature - the galaxy - for which one can define which pixels contain signal and which nearby pixels do not and can be taken to represent the background.

i.e. my thinking is that it is exactly fainter features like this that you want to reach by going to longer total exposures.
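A minimal sketch of this feature-based measurement (synthetic numbers of my own choosing, not Tim's data): signal is the difference between the mean of the feature region and the mean of a nearby background region, and noise is the SD of the background region.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.05

# Hypothetical frame: a faint "galaxy" patch adding 0.2 of signal on
# top of a 0.1 background, both regions with the same Gaussian noise.
background = 0.1 + rng.normal(0.0, sigma, size=(64, 64))
feature = 0.3 + rng.normal(0.0, sigma, size=(64, 64))

# Feature SNR: (mean of feature pixels - mean of background pixels)
# divided by the SD of the background pixels.
snr = (feature.mean() - background.mean()) / background.std()
print(f"feature SNR ~ {snr:.1f}")  # expect ~ 0.2 / 0.05 = 4
```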


BTW, here is the more global comparison in terms of the Subframe Selector measurements. The relative PSF signal weights and PSF SNR come out similarly - a factor of ~1.4 or so in favour of the VNG debayered integration. Confusingly, however, the old-method SNR figure points the other way.

PSF signal weight is the most current and comprehensive measure of signal quality, although this link is already somewhat out of date: https://pixinsight.com/forum/index.php?threads/how-to-use-psf-signal-weight.17600/#:~:text=PixInsight%20Staff,-Jan%2015%2C%202022&text=PSF%20Signal%20Weight%20(PSFSW)%20is,probably%20be%20excluded%20for%20integration.




[attached image: Subframe Selector measurements]


Tim
Edited ...
Like
TimH
...
· 
·  4 likes
Just following on from my last post (which compared a Bayer CFA 1x drizzle integrated OSC image with a VNG debayered integrated image), here is a further experiment comparing mono and OSC cameras for Ha imaging that - upon reflection - rather proves the obvious.

Here are some actual data comparing the Ha images produced by matched (same gains and pixel sizes) OSC and mono cameras (an ASI 294 MM and an ASI 294 MC), both using the same 7 nm Ha filter, of the same object (IC 1805), produced on the same night with the same telescope etc., all within a period of 1.5 h over which the Bortle 7 conditions (moon just set) remained roughly constant.

From the ZWO data, both cameras would be expected to have similar efficiency at 656 nm, with an effective QE of ~70%.

For each camera, 11 x 3 min of data at gain 151 were captured in SharpCap (11 Mb bin1 images), then pre-processed and integrated in PixInsight using the appropriate matching flats and darks.

The main conclusions were ..

1) In 33 min the mono camera produced an HA image that was obviously of much better quality than that from the OSC camera

2) The SNR  of a selected small feature of  the 1X drizzle MONO integrated image was almost exactly 2x  the SNR of the same feature in the 33 min Bayer CFA 1 x drizzle OSC image.   The spatial (FWHM) resolution of the two images was similar.

3) The SNR of the same small feature in the VNG debayered integrated OSC image was actually about the same as from the mono image. However, the image was significantly more blurred; it was clear that this blurring arose from the way VNG debayering fills in the intervening empty pixels with virtual data based on a statistically more limited set of real data. So the apparent increase in SNR is paid for by blurring.

In conclusion, then, for higher-resolution Ha imaging with an OSC camera it is essential to use the Bayer CFA 1x drizzle method of integration rather than the VNG debayering route.

For the same total exposure time, the Bayer CFA 1x drizzle OSC result will be as well spatially resolved as that from an equivalent mono camera but will achieve only about half the SNR. This is what would be expected, since only 1/4 of the pixels are red; to achieve the same SNR and Ha image quality as the mono camera, the OSC would need to image for roughly 4x as long.
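A quick Monte Carlo sketch of why the factor of ~2 in SNR is expected (idealised, shot-noise-limited, with made-up flux numbers): the mono camera records Ha at every pixel in every sub, while a given output pixel of a CFA-drizzled OSC stack only receives data from the subs in which a red photosite covers it, i.e. roughly 1/4 of them.

```python
import numpy as np

rng = np.random.default_rng(2)

flux = 100.0   # assumed mean photons per pixel per sub at 656 nm
n_subs = 12    # subs in the session

# Mono: every output pixel averages all 12 subs.
mono_stack = rng.poisson(flux, size=(n_subs, 10000)).mean(axis=0)

# OSC with Bayer CFA drizzle: each output pixel effectively
# averages only ~1/4 of the subs (the red-photosite hits).
osc_stack = rng.poisson(flux, size=(n_subs // 4, 10000)).mean(axis=0)

def snr(stack):
    # Per-pixel SNR of the stacked result: mean signal over noise SD.
    return stack.mean() / stack.std()

# Shot-noise-limited SNR scales with sqrt(effective exposures), so
# the ratio should come out near sqrt(4) = 2, matching Tim's result.
print(f"mono/OSC SNR ratio ~ {snr(mono_stack) / snr(osc_stack):.2f}")
```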

So for objects where Ha is the chief interest, an OSC plus (single- or dual-band) NB filter combination really struggles to compare with a mono camera.

Of course, the equation changes for the many objects where the OIII is relatively weak but also of interest, and where a dual-band filter plus OSC works relatively better on the OIII than on the Ha. But for better resolution (and so that BlurXTerminator can later be that much more effective) the Bayer CFA 1x drizzle route should still outperform debayering, and it is under the assumption that Bayer CFA 1x drizzling is employed that the calculations comparing mono and OSC cameras are most fairly made.


[attached images: mono vs. OSC comparison screenshots]
Edited ...
Like
HegAstro 11.99
...
· 
·  1 like
Tim Hawkes:
In 33 min the mono camera produced an HA image that was obviously of much better quality than that from the OSC camera


Thank you for taking the time to do this. These results follow directly from Poisson statistics, but there remains (still) a lot of misconception here. Hopefully, seeing actual results in practice will lead people to put more trust in the math.

Once again - capturing photon signal  is key. Any method that improves on this - whether it is mono in the case of luminance or H-alpha, or OSC in the case of RGB imaging from a dark site, or increased aperture - should show an improvement. Higher transmission through filters should help also.
So the apparent increase in SNR is paid for by blurring.


This is exactly the problem in the simulation shared earlier by Roman. Besides, the same approach could be taken for mono by creating multiple independent images and interpolating, which would increase SNR at the cost of detail. That, again, is what happens with techniques like binning.
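For illustration, a toy example of the binning trade-off mentioned above (my own synthetic numbers): averaging 2x2 blocks doubles the per-pixel SNR while halving the linear resolution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy mono frame: uniform signal 0.5, Gaussian noise SD 0.1.
img = 0.5 + rng.normal(0.0, 0.1, size=(512, 512))

# 2x2 software binning: average each 2x2 block. Averaging 4
# independent samples cuts the noise SD by sqrt(4) = 2, so the
# per-pixel SNR doubles -- but the image is now half the resolution.
binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(f"full-res SNR: {img.mean() / img.std():.1f}")     # ~5
print(f"binned SNR:   {binned.mean() / binned.std():.1f}")  # ~10
```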
Edited ...
Like
neverfox 2.97
...
· 
Arun H:
This is exactly the problem in the simulation shared earlier by Roman.


I don't see it as a "problem in the simulation", in the sense that I believe I made clear multiple times what trade-offs to expect. I only ever set out to measure SNR, however achieved, not SNR at the same clarity, *because people regularly deny this* and claim you lose both SNR *and* clarity (or confuse the two). I demonstrated exactly what I set out to demonstrate. Is it obvious and trivial to those like you who have a deep understanding of these things? Sure. It was never my intention to waste your time, but I wasn't sure right away what perspective you were operating from. In any case it was fun and fruitful for me, as are Tim's demonstrations.
Edited ...
Like
jml79 3.87
...
· 
That is fantastic work, Tim. The only issue I have is that it doesn't quite show the whole picture. While I agree you have shown OSC would never catch mono when imaging Ha, your work paints a worst-case picture. Unless you only want a mono Ha image, you need at least two channels, and that would mean 66 minutes of integration for the OSC vs 33 minutes for mono, which reduces (but does not eliminate) the lead mono has over OSC for this specific case. In the case of RGB imaging the gap can be even smaller. I think it's important to show how best to use each system and not overstate the difference, considering the huge cost difference. Please note I am only trying to be constructive.
Like
HegAstro 11.99
...
· 
·  1 like
Joe Linington:
Other than a mono image, you need at least 2 channels and that would mean 66 minutes of integration for the OSC vs 33 minutes for mono and that would reduce (but not eliminate) the lead mono has over OSC for this specific case.


Just to be clear: SNR scales with the square root of the collected signal (and hence of integration time). If the OSC has half the SNR of the mono in the same integration time for H-alpha, it will require 4 times the integration time to catch up.

A simple calculation would show:

1 hr with mono  = 4 hours with OSC for H-alpha
2 hr with mono = 4 hours with OSC for OIII (assuming the OSC captures OIII on a per-pixel basis with similar efficiency to the mono, through its green pixels).

So, effectively, I have the same combined signal in 3 hours with mono as 4 hours with OSC.

And the flexibility to divide my time in any manner I wish depending on target (versus forced to only dedicate 25% of my pixels at a time for H-alpha).
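The arithmetic above can be written as a one-liner, assuming the shot-noise-limited scaling SNR ∝ sqrt(t × pixel fraction), where the pixel fraction is 1 for mono, 1/4 (red) for Ha on a Bayer OSC, and 2/4 (green) for OIII:

```python
def osc_time_to_match(t_mono_hours: float, pixel_fraction: float) -> float:
    """Hours of OSC integration needed to match the mono SNR, given the
    fraction of OSC pixels sensitive to the emission line in question."""
    return t_mono_hours / pixel_fraction

print(osc_time_to_match(1, 1 / 4))  # Ha:   1 h mono -> 4.0 h OSC
print(osc_time_to_match(2, 2 / 4))  # OIII: 2 h mono -> 4.0 h OSC
```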
Edited ...
Like
 