Short vs. Long Exposures · [Deep Sky] Acquisition techniques · andrea tasselli

TimH
John Hayes:
Tim Hawkes:
Thanks for your reply and the learning.  I certainly didn't know most of that.  One further question though.  Surely part of the point of BlurXT deconvolution is that it can beat the seeing, always provided that your sampling rate supports a higher resolution.  So in a short frame you get atmosphere-distorted star PSFs, but distortions that are at least reasonably consistent within any given region of the frame and over a short time.  BlurXT (I presume) iteratively calculates the correct local compensatory correction and then applies it.  So the question is: while it is clearly always better to start from a near-perfect image and then apply deconvolution to that, in my experience at least BlurXT takes you a long way even when the star shapes are not perfect.  In my M51 picture above, average eccentricity was up at maybe 0.55 prior to correction and deconvolution.  Maybe consistency of blur is more important to the end product than lack of blur as a starting point for deconvolution?  Tim

Russ had a genius idea for BXT and he had to solve a lot of the details to make it work as well as it does.  At a high level, the concept is actually pretty straightforward.  I should add here that Russ hasn't given me any inside information, but here's my guess about how he might have implemented it.  It is simply a neural network that is loaded with NxN patches of Hubble images that have been mathematically blurred (probably with just a Gaussian blur function).  N might be a value that ranges from 32 to maybe 512, depending on how Russ chose to set it up.  There might be anywhere from 300,000 to 1,000,000 samples loaded into the training set, which is then trained using the original blurred data to find the best match out of all of the samples.  The training can include a lot of different parameters, including the amount of blurring, asymmetry in the blurring (smear), and noise levels.  When you sharpen your own image, the data is subdivided into NxN patches so that each patch in your data can be identified with the "most likely" fit to a solution.  Once identified, the information in that patch is replaced with the original image data that created the best-fit blurred data.  Note that this is not the same as simply inserting Hubble images directly into your image.  The image patches are small enough that the Hubble data serves mostly as a way of supplying a nearly limitless source of "sharpened patterns" that can be used to show what your more blurry data might look like without the blurring mechanism.  I believe that the process for de-blurring the stars is similar, but it may be different enough that it runs as a separate process from the structure sharpening.  That's something that Russ would have to address.  I could imagine that the star correction NN could be loaded with mathematically computed Moffat data that has been filtered through a range of aberrations as well as image translations.  One of the tricky parts to all of this is to get everything normalized properly so that the results all fit together seamlessly.

So nothing in BXT is like the traditional deconvolution process that requires a PSF kernel.  BXT has the ability to solve for the seeing conditions, but Russ didn't choose to work that into the solution.  Regardless, BXT doesn't have to know anything about the seeing to work well.  It just uses mathematically blurred data, and since the process is applied patch-wise across the field, it can effectively correct for field aberrations (which vary with position) as well as for motion blur.
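A toy version of the training scheme I'm guessing at might look like the sketch below. To be clear, this is pure speculation, not Russ's actual code; the patch size, network shape, and blur range are all invented for illustration.

import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

class PatchDeblur(nn.Module):
    """Tiny CNN that maps a blurred NxN patch back to a sharp one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, opt, sharp):
    # Synthesize the training pair on the fly: blur the sharp reference
    # patches with a random Gaussian PSF width plus a little noise, then
    # ask the network to recover the originals.
    sigma = float(torch.empty(1).uniform_(0.5, 3.0))
    blurred = TF.gaussian_blur(sharp, kernel_size=9, sigma=sigma)
    blurred = blurred + 0.01 * torch.randn_like(blurred)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(blurred), sharp)
    loss.backward()
    opt.step()
    return loss.item()

model = PatchDeblur()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Stand-in for a real training set of sharp reference patches
# (e.g. 64x64 cutouts from Hubble frames), scaled 0..1.
sharp_patches = torch.rand(16, 1, 64, 64)
for step in range(100):
    train_step(model, opt, sharp_patches)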

John

Thanks so much for this likely explanation, John. I had just assumed that it started with normal local deconvolution but used the NN to better define what the iteration should be minimised to. Your explanation is simpler and makes more sense.  I am still interested to know how factors such as sampling rate feed into the level of detail that BlurXT can get down to, so I am just doing experiments, varying parameters out of interest to see what happens and what is critical for the final result.  BXT almost always does better than normal deconvolution, except on one object: the Cat's Eye Nebula, which I guess was too far out of Russ's training set?  Tim
Gurney 0.90
Very interesting thread to read. Thank you OP.

For those who want to experiment with the different parameters influencing depth and SNR, @Steven Bellavia has created a bunch of spreadsheets for CMOS cameras that I find super useful: https://drive.google...U9CKj5GilnZW8VJ They show that A LOT of parameters technically influence the optimal exposure length (at constant total integration time): read/thermal/shot noise, gain, well depth, scope aperture, light pollution, filter bandwidth, etc.
bellavia 0.90
Hi,
Just as a note, I have to "fix" the SNR calculation.  It was initially meant to be relative to a SINGLE FILTER.
That is, if you switch from a broadband filter to Ha, etc., it doesn't "work".
This is not a simple task.  And to make it immensely more complicated, there are duo, trio, and quad narrowband filters on OSC cameras.
I do hope to get to this soon.
Steve Bellavia
ReadyForTheJetty 1.81
I notice pattern noise in the images, indicating that there's more at play than merely shortening the exposure times. Thank you for conducting the test, but it seems something is not quite right. I often take short exposures and have never encountered this issue. 

The principles of math and science behind image capture are straightforward: Short exposures are detrimental only if the read noise significantly contributes compared to the contribution from sky background noise.

Provided the sky background is sufficiently bright, the F-ratio is low, the pixel size is large, and the camera is in high gain mode, very short exposures can be used without read noise being a significant factor, so insignificant that any difference should be nearly imperceptible.

Thus, the impact depends on your read noise, sky background, F-ratio, and pixel size. Moreover, the pattern noise strongly suggests that factors beyond exposure time are being compared.

I typically shoot short exposures (4-8 seconds on an unguided system) and have dedicated hours to understanding the science and math of image capture, examining various exposure times. The math is unequivocal: Data analysis reveals that read noise is negligible in most scenarios I encounter. By maintaining a very low read noise level, short exposures are feasible without drawbacks, aside from managing the substantial data volume.

In my situation, with an F3.65 scope, a large pixel ASI2400MC camera in high gain mode, under Bortle 7 skies with broadband, I can employ a 2-second exposure while keeping the read noise contribution under 5%, with sky noise accounting for over 95%. This scenario, featuring a fast scope, a large pixel camera, and considerable light pollution, is extreme, but it illustrates that the specific conditions are critically important. The exposure time needed to keep read noise insignificant can range from less than a second to over 10,000 seconds, depending on the setup: B1 to B8 skies, 3nm filters to no filter, small to large pixel cameras, F2 to F10 scopes, high gain versus gain 0. All these factors are in play across the range used by modern astrophotographers, so you have to do the math on a case-by-case basis to determine when it may matter.
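The arithmetic behind that 2-second figure is easy to script. Here's a minimal sketch, assuming noise adds in quadrature; the read noise and sky rate below are made-up placeholders, so plug in your own measured values:

import math

def min_sub_seconds(read_noise_e, sky_e_per_sec, extra_noise_pct=5.0):
    # Shortest sub where adding read noise raises total noise by at most
    # extra_noise_pct over sky shot noise alone (quadrature sum):
    #   sqrt(RN^2 + S) <= f * sqrt(S)  =>  S >= RN^2 / (f^2 - 1)
    f = 1.0 + extra_noise_pct / 100.0
    sky_electrons_needed = read_noise_e ** 2 / (f ** 2 - 1.0)
    return sky_electrons_needed / sky_e_per_sec

# ~1.0 e- read noise at high gain, ~5 e-/s/pixel sky flux (invented numbers):
print(min_sub_seconds(1.0, 5.0))  # -> roughly 2 seconds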

The only way you could generate such a large difference in noise is if either read noise was a huge contributor in your case, or there is something else going on.  Perhaps you were shooting in very dark skies, or with a very slow scope, etc.  You can certainly create cases where a short exposure time will hurt you immensely, but there is simply no one answer to "short vs. long".  Shoot a 3nm filter from B3 skies with an F10 scope and a camera in low gain mode with 10 s exposures and read noise will destroy you.  Shoot broadband in B6 with an F2 scope with any modern CMOS camera in any gain mode with 10 s exposures and you will be just fine; read noise is inconsequential.

Dr. Robin Glover's video is an excellent resource on the subject. Simply search for "Dr. Robin Glover optimum exposure time" to find it. I have developed my own read noise models and found that they align with Glover's. Additionally, my models are consistent with an online model I've evaluated, and they all concur on one point: With a fast telescope, moderate light pollution, and a low read noise camera, you can take significantly short exposures without adding extra noise to the final stack, compared to longer exposures with the same total integration time.

In both scenarios, the same number of photons is captured, provided the total integration time remains constant, regardless of whether the target is dim or bright. Once the read noise is minimized, these devices essentially become photon counting machines, and it doesn't matter whether you collect photons in many small buckets or a few large ones; when combined (stacked), the photon count remains the same, and the camera's read noise impact on the final result is minimal in both cases.
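If you want to convince yourself of the bucket argument, a quick simulation does it (the flux value here is invented):

import numpy as np

rng = np.random.default_rng(0)
flux, total_t, n_subs = 5.0, 600.0, 300  # e-/s (invented), seconds, sub count

one_long = rng.poisson(flux * total_t)
many_short = rng.poisson(flux * total_t / n_subs, size=n_subs).sum()
print(one_long, many_short)  # same expected count and shot noise either way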

The primary difference, other than stacking time, is that short exposures can offer a greater dynamic range in the stack if the gain and exposure time are balanced correctly—for example, resulting in fewer overexposed stars. However, if the exposure is too long, the brighter stars begin to saturate... but you all know that.

Another distinction is that, given the appropriate conditions and techniques, along with a large telescope, short exposures can yield slightly higher spatial resolution on bright features if you cull the worst exposures, although this is a big topic in itself.
andreatax 7.90
I disagree.
ReadyForTheJetty 1.81
Gael Gibert:
Very interesting thread to read. Thank you OP.

For those who want to experiment with the different parameters influencing depth and SNR, @Steven Bellavia has created a bunch of spreadsheets for CMOS cameras that I find super useful: https://drive.google...U9CKj5GilnZW8VJ They show that A LOT of parameters technically influence the optimal exposure length (at constant total integration time): read/thermal/shot noise, gain, well depth, scope aperture, light pollution, filter bandwidth, etc.

Nice spreadsheets, Steven (and thanks for pointing us to them, Gael).  I just put in the parameters for one of my configurations (ASI2600MC, F3.65 scope, Bortle 7 skies, shooting broadband) and got the same results as my spreadsheets, which was a great reality check.  For the record, here were the exposure times for a read noise swamp factor of 10:

[table image: sub-exposure times for a read noise swamp factor of 10]
I took the liberty of adding a sub-exposure in seconds on the right.

In broadband I usually shoot with gain 100, so sub-exposure times need to be more than 2.5 seconds to swamp the read noise.    LOL... these low read noise cameras are a kick in the pants!   Back in the old CCD days with 8-15e read noise when you absolutely needed long exposures, who would have ever thought we would be where we are today with such low read noise....

For narrowband, my first filter was the IDAS NBZ, which is similar to the Optolong L-eNhance.  But I use my large pixel ASI2400MC, usually at gain 300, and for that I get this table:

[table image: sub-exposure times for the ASI2400MC with the NBZ filter]

So I'm up to 9-second exposure times (at 300 gain), and 14 seconds at 140 gain, even with a moderately narrow filter.

I noticed that for the L-Ultimate you list an effective bandpass of 2.3nm, with an interesting formula for how you calculated that.  I have and use the L-Ultimate sometimes.  Could you explain how you derived this?  My read is that it appears to have a ~3.5nm bandpass, but you may be accounting for some of the signal attenuation on the various color channels and for the filter's peak transmission.
TimH
Steven Bellavia:
Hi,
Just as a note, I have to "fix" the SNR calculation.  It was initially meant to be relative to a SINGLE FILTER.
That is, if you switch from a broadband filter to Ha, etc., it doesn't "work".
This is not a simple task.  And to make it immensely more complicated, there are duo, trio, and quad narrowband filters on OSC cameras.
I do hope to get to this soon.
Steve Bellavia

Thank you Steven for the spreadsheet.  It is very useful even given some minor imperfections. Tim
MaksPower 0.00
Very interesting, Steven, thank you. It confirms the exposures I've found experimentally using the ASI2600MC DUO unfiltered.
I'm using an L-Pro filter as well.

I have to confess I am still amazed by what this camera can do with 300 - 600s subs at f/12.
jrista 8.68
Steven Miller:
I notice pattern noise in the images, indicating that there's more at play than merely shortening the exposure times. Thank you for conducting the test, but it seems something is not quite right. I often take short exposures and have never encountered this issue. 

The principles of math and science behind image capture are straightforward: Short exposures are detrimental only if the read noise significantly contributes compared to the contribution from sky background noise.

Provided the sky background is sufficiently bright, the F-ratio is low, the pixel size is large, and the camera is in high gain mode, very short exposures can be used without read noise being a significant factor, so insignificant that any difference should be nearly imperceptible.

Thus, the impact depends on your read noise, sky background, F-ratio, and pixel size. Moreover, the pattern noise strongly suggests that factors beyond exposure time are being compared.

I typically shoot short exposures (4-8 seconds on an unguided system) and have dedicated hours to understanding the science and math of image capture, examining various exposure times. The math is unequivocal: Data analysis reveals that read noise is negligible in most scenarios I encounter. By maintaining a very low read noise level, short exposures are feasible without drawbacks, aside from managing the substantial data volume.

I would be careful about stating read noise would have no effect (i.e. be imperceptible, or very nearly so). Most cameras bottom out at around 1e- or so read noise, even at the highest gains. A high gain doesn't eliminate read noise, and usually you reach the minimum limits of read noise at some medium gain, and the curve flattens out and may only marginally improve at the highest gain. 

If you are imaging signals that cannot be revealed (i.e. overcome system noise) in a single exposure, at a given gain, exposure time and flux, then read noise definitely has an impact. Short exposures at high gain may resolve some signals, but plenty others will require many exposures to become even barely perceptible, and many more to become usefully perceptible. Read noise is the limiting factor there. If you are not calibrating or dithering, then FPN is also a factor and may well intrinsically limit your ability to improve SNR on faint signals after enough stacking, but FPN is at least correctable with some minor effort. 

Just because short exposures can resolve a bright signal and with sufficient stacking that signal can become useful, doesn't mean that read noise has no impact. Whether read noise has an impact or not requires knowing the signal of interest, or at least the faintest signal you wish to resolve. There is ALWAYS a fainter signal. Using short exposures to produce an image of the bright core structures of a PN is one thing...using short exposures to resolve the fainter structures, which are sometimes many stellar orders of magnitude fainter than the core, is a significantly more challenging task, and read noise is definitely a limiting factor and will definitely have a perceptible impact (when compared to longer exposures at a lower gain...and lower, not necessarily because of read noise, but because of dynamic range and the ABILITY to use longer exposures!)

For short exposures, even at maximum gain, even with read noise of say 0.75e-, read noise WILL be a factor that limits what you can do. Perhaps the limits don't matter to one individual or another, given their specific goals (i.e. signals of interest), but when it comes to resolving the faintest details possible, read noise at high gain (which has limited DR and thus implicitly limits exposure lengths, too!) will have a perceptible impact. You plain and simply end up with MORE TOTAL read noise in the long run with short exposures. Read noise compounds with sub count. More subs, more noise. This is the limitation of short exposures. In fact, FPN can become a more insidious limiting factor if you actually try to stack enough frames. Even with dark calibration, if the master dark is not IMPECCABLE and crafted from a significant number of frames itself, the pattern of pixels in the master dark itself is a form of FPN and will eventually become a fundamental limiting factor on SNR. Short exposure/lucky imaging is always facing these challenges, and maximum gain isn't going to change that (at least, until read noise reaches zero...)
Gurney 0.90
Jon Rista:
There is ALWAYS a fainter signal.

I really like this sentence, on so many levels 
- Super relevant for our noise discussion
- Should be the tagline for large scopes, RASA/Hyperstars, remote observatories, and low read noise cameras
- A simple way to explain to your significant other WHY
ReadyForTheJetty 1.81
Jon Rista:
Steven Miller:
Provided the sky background is sufficiently bright, the F-ratio is low, the pixel size is large, and the camera is in high gain mode, very short exposures can be used without read noise being a significant factor, so insignificant that any difference should be nearly imperceptible.



For short exposures, even at maximum gain, even with read noise of say 0.75e-, read noise WILL be a factor that limits what you can do.

Well gosh, I don't think I ever said read noise will be no factor, but that it becomes very minor relative to sky noise.  The math indicates that if sky background noise still dominates, say it's 95% of your final stack noise and read noise is only 5% of your final stack noise, then it doesn't matter how bright or dim your target is; read noise is still swamped by the sky noise.  These two sources of noise are invariant to the target brightness.  The only thing target brightness does is change the amount of shot noise relative to sky background and read noise; the read noise is still swamped by the sky background, which remains 95% of the unwanted noise.  Now if that last 5% of noise is critical, then you can get rid of it with about 10% of additional total integration time, since final stack noise is proportional to Sqrt(integration time).

Is that what we are talking about, getting rid of that last 5% and saving ourselves about 10% integration time?

Beyond that, I'm not sure what you are saying:

Are you saying a dimmer target makes the sky noise lower than the read noise or the read noise somehow grows to become a larger percentage of the total unwanted noise?   

If so, I just really want to see an analysis on this.  Perhaps you know of something you can point to.

Or if you are saying something else, like more integration time doesn't matter with shorter exposures, could you explain the fundamentals behind what you are saying, cite a resource that shows the analysis and math?

Before starting my journey doing AP with shorter exposures, I immersed myself in the physics and math of noise and all this is quantifiable.  "Just do the math" as a math professor I know likes to say.  Here's a guy that does the math:

https://www.youtube.com/watch?v=3RH93UvP358&t=2742s

And he makes it clear how to calculate when it's a large factor and when it's trivial.  If Dr. Robin Glover is wrong or is missing some big factor, could you explain why and how?

If there is something missing in the understanding of how read noise contributes, it would be very useful to me and others to be able to calculate it, if the current accepted practice is lacking.  Perhaps review the spreadsheets that Steve Bellavia posted above... are they wrong or missing something?  If so, what is the right spreadsheet?

All these people are putting this down on paper, which allows it to be peer reviewed.  I guess what I'm asking is whether there is an equivalent paper, presentation, or spreadsheet that demonstrates the concepts you are talking about and can be peer reviewed.  Then we can all learn what you know, or at least review it.
TimH
Jon Rista:
Short exposures at high gain may resolve some signals, but plenty others will require many exposures to become even barely perceptible, and many more to become usefully perceptible. Read noise is the limiting factor there


Is that really true?  Why pick on the read noise?  The limiting factor is surely the total noise, not just the 5 to 10% element that is read noise.  Certainly there are always fainter objects to find, but you are not going to get to image them under Bortle 7 skies anyway.  I like a hybrid approach: short subs work fine under bright skies at home, but longer subs are worth using under dark skies, where it is really justified and those fainter objects become possible.
jrista 8.68
Tim Hawkes:
Jon Rista:
Short exposures at high gain may resolve some signals, but plenty others will require many exposures to become even barely perceptible, and many more to become usefully perceptible. Read noise is the limiting factor there


Is that really true?  Why pick on the read noise?  The limiting factor is surely the total noise, not just the 5 to 10% element that is read noise.  Certainly there are always fainter objects to find, but you are not going to get to image them under Bortle 7 skies anyway.  I like a hybrid approach: short subs work fine under bright skies at home, but longer subs are worth using under dark skies, where it is really justified and those fainter objects become possible.

IF you are actually exposing enough to render read noise to just 5%... That is really hard to do at a high gain. Remember, at very high gains you have very limited dynamic range. Eight stops isn't going to hold the same range of signal that 13 or 14 stops will. Imaging at very high gain is, IMHO, a handicap unless you have very specific goals in mind. With certain goals, then a very high gain could be useful, but as a general imaging technique...the simple fact of the matter is you get more total read noise at high gains with very short exposures.

Even IF you are swamping read noise such that it represents 5% of the noise in EACH FRAME...you STILL have LOTS MORE FRAMES (at least you would have to, if you were aiming to achieve similar SNR as with longer exposure imaging). Read noise compounds by sub count. No matter how you slice it, stacking a thousand 5 second subs is going to result in more read noise than stacking 50 longer subs. You get a unit of read noise per read, per sub. So, assuming you have say 1.2e- read noise, stack 1000 such frames, and you have SQRT(1000 * 1.2^2), or 38e-. Assuming you have 1.8e- read noise and stack just 50 such frames, you have SQRT(50 * 1.8^2), or 13e-. That is far greater than an imperceptible difference. There are still the FPN factors to consider with stacking lots of short exposure frames as well. 
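Here are those two quadrature sums in runnable form, same numbers as above:

import math

def total_read_noise(n_subs, read_noise_e):
    # Read noise adds in quadrature across subs: sqrt(N * RN^2)
    return math.sqrt(n_subs * read_noise_e ** 2)

print(total_read_noise(1000, 1.2))  # ~37.9 e- for the short-sub stack
print(total_read_noise(50, 1.8))    # ~12.7 e- for the long-sub stack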

Don't get me wrong...I'm not advocating for exposures tens of minutes long... Just saying that high gain imaging with very short exposures is not necessarily all that good. The read noise is really not that much lower than at lower gains, say the commonly used unity gain, or the HCG switchover gains for cameras that have an HCG mode. High gain imaging is, considering just how much amplification is being done, not really all that "low" of read noise, very restricted on dynamic range, and those two things together will hamper your ability to get optimal exposures, not enhance it.
jrista 8.68
Steven Miller:
Jon Rista:
Steven Miller:
Provided the sky background is sufficiently bright, the F-ratio is low, the pixel size is large, and the camera is in high gain mode, very short exposures can be used without read noise being a significant factor, so insignificant that any difference should be nearly imperceptible.



For short exposures, even at maximum gain, even with read noise of say 0.75e-, read noise WILL be a factor that limits what you can do.

Well gosh, I don't think I ever said read noise will be no factor, but that it becomes very minor relative to sky noise.  The math indicates that if sky background noise still dominates, say it's 95% of your final stack noise and read noise is only 5% of your final stack noise, then it doesn't matter how bright or dim your target is; read noise is still swamped by the sky noise.  These two sources of noise are invariant to the target brightness.  The only thing target brightness does is change the amount of shot noise relative to sky background and read noise; the read noise is still swamped by the sky background, which remains 95% of the unwanted noise.  Now if that last 5% of noise is critical, then you can get rid of it with about 10% of additional total integration time, since final stack noise is proportional to Sqrt(integration time).

Is that what we are talking about, getting rid of that last 5% and saving ourselves about 10% integration time?

Beyond that, I'm not sure what you are saying:

Are you saying a dimmer target makes the sky noise lower than the read noise or the read noise somehow grows to become a larger percentage of the total unwanted noise?   

If so, I just really want to see an analysis on this.  Perhaps you know of something you can point to.

Or if you are saying something else, like more integration time doesn't matter with shorter exposures, could you explain the fundamentals behind what you are saying, cite a resource that shows the analysis and math?

Before starting my journey doing AP with shorter exposures, I immersed myself in the physics and math of noise and all this is quantifiable.  "Just do the math" as a math professor I know likes to say.  Here's a guy that does the math:

https://www.youtube.com/watch?v=3RH93UvP358&t=2742s

And he makes it clear how to calculate when it's a large factor and when it's trivial.  If Dr. Robin Glover is wrong or is missing some big factor, could you explain why and how?

If there is something missing in the understanding of how read noise contributes, it would be very useful to me and others to be able to calculate it, if the current accepted practice is lacking.  Perhaps review the spreadsheets that Steve Bellavia posted above... are they wrong or missing something?  If so, what is the right spreadsheet?

All these people are putting this down on paper, which allows it to be peer reviewed.  I guess what I'm asking is whether there is an equivalent paper, presentation, or spreadsheet that demonstrates the concepts you are talking about and can be peer reviewed.  Then we can all learn what you know, or at least review it.

It's more complex than just achieving the 95% criterion. You are stacking more subs...or at least, I assume you are (if you aren't, then you are going to be severely lacking in signal strength relative to using longer subs.)

Just to make sure it's very clear: read noise compounds with SUB COUNT. More subs, more read noise. If you are indeed using just 4 second subs...and let's just for the moment say you are indeed swamping the read noise by 10x (10xRN^2, which would get you to 95%)... One, I suspect you have very high light pollution... Two, at maximum gain, you are going to be severely dynamic range limited. So, to actually swamp the read noise with background sky by 10x, you run a much higher risk of clipping brighter signals. Matter of physics here...the maximum voltage supported by the pixel circuitry is fixed, so when you amplify the pixel signal a lot more, you simply can't have as strong a source signal. 

But even IF you ARE indeed reaching the 95% criterion...you are going to have to stack a heck of a lot more subs to build up enough signal to produce a comparable image to using a lower gain with longer subs. I'm not saying 10 minutes here; a few minutes, even, at a lower gain. Read noise compounds with sub count. So, if you stack 1000 subs at say 1.2e- read noise, then you are going to end up with a lot more read noise in total than if you stack 50 subs with 1.8e- read noise. The difference between read noise at a middling gain and at the maximum gain is usually not all that much these days. Heck, let's say we had 1e- at max, and 2e- at an HCG mode. Max gain is probably going to be around 8 stops, if even, while the HCG mode will be something more like 13.5 stops or so. The HCG mode is going to allow for much better exposures...but they aren't exactly going to be "long"...60-120 seconds maybe. I've exaggerated the differences in read noise here. The exposure difference is going to be something like 30x. So, let's say we acquire 50 subs at the HCG mode gain; then we would need 30x that at the max gain, or 1500. 

So we have 1500 subs with 1e- read noise, and 50 subs with 2e- read noise. Total read noise at max gain is 39e-, while total read noise at the HCG mode is 14e-. This is assuming 95% with both gains (which IMO is a lot harder to do well when your dynamic range is as limited as it is at max gain.) No matter how you slice it, stacking lots of short subs is going to give you more total read noise. And it's even worse with actual cameras, such as say the QHY600. In its high gain mode, it has 1.57e- read noise at the HCG cutover gain, and about 1.2e- at the maximum gain. It has over 13.6 stops of dynamic range at the HCG mode, and 8.1 stops at max gain. It's no contest, IMO...the high gain imaging with "lowest read noise" is going to be worse than...well heck, any kind of imaging at the HCG mode. Even if you wanted to use very short exposures, the dynamic range advantage of HCG mode over the max gain mode is HUGE. You could increase exposure by a few seconds, stack a few less subs, and I suspect you would have better results. However, stacking LOTS of subs is where read noise shows its nasty side...READ NOISE COMPOUNDS WITH SUB COUNT, not time. 

Very short exposure imaging has its application, for sure. I'm not saying it doesn't have its uses. But, since read noise compounds with sub count, if you are using such short exposures that you need to stack TONS of subs, then you will plain and simply have more read noise in the end, even if it's at the minimum level for any given camera, because you have to stack so many more. Now, comparing minimum gain to maximum gain, minimum gain may not have much if any advantage over high gain imaging...it would probably depend on the camera, but the benefits of min vs. max are going to be different than "optimal gain" vs. max gain. There is usually going to be a gain where dynamic range vs. read noise reaches an optimal level, and that is really the gain that will deliver the best results most of the time, for general purpose imaging at least. For cameras that have an HCG mode, that is usually the sweet spot. Other cameras, though, especially those with 12 and 14 bit ADCs, will often have an optimal gain somewhere above minimum and well below maximum. The ASI1600, for example, was optimal at a gain setting of 78, which was lower than unity but still higher than minimum...this offered maximum DR (as limited by the ADC) along with low read noise (not minimum, but still optimal.) 

Anyway... It's a matter of sub count and limited dynamic range, when you start talking about imaging at really high gain settings. Which I guess is more what I was getting at...it's not just the read noise and whether it's swamped...it's the limited DR, which will also "force" a shorter exposure, the use of very short exposures, and the stacking of LOTS of subs. More noise is more noise; 39e- total read noise is a lot more than 14e- total read noise (or whatever it ends up coming out to, you would need to calculate it for your given system or systems.) Just because read noise is lowest doesn't mean the gain setting is optimal, or even particularly viable. Even reducing gain a little below "max gain" on most cameras will usually deliver a good improvement in dynamic range. For almost any camera with an HCG mode, read noise changes so little between the HCG cutover gain and max gain that the max gain and most higher gain settings offer little to no benefit due to loss of dynamic range. 1.5e- vs. 1.2e-...that difference is FAR less meaningful than 13.6 stops vs. 8.1 stops. The loss of dynamic range here is massive, while the change in read noise is almost meaningless. I think far more can be done with any camera at the most optimal gain setting (and perhaps some optimal exposure times) than simply shoving it out to max gain (or even close to it) in order to achieve minimum read noise. There is a lot more to it than just read noise, and the consequences to hardware DR at very high gain settings can be highly detrimental.
jrista 8.68
Gael Gibert:
Jon Rista:
There is ALWAYS a fainter signal.

I really like this sentence, on so many levels 
- Super relevant for our noise discussion
- Should be the tagline for large scopes, RASA/Hyperstars, remote observatories, and low read noise cameras
- A simple way to explain to your significant other WHY

Yup! The last one there is gold.
ReadyForTheJetty 1.81
Jon Rista:
Tim Hawkes:
Jon Rista:
Short exposures at high gain may resolve some signals, but plenty others will require many exposures to become even barely perceptible, and many more to become usefully perceptible. Read noise is the limiting factor there


Is that really true?  Why pick on the read noise?  The limiting factor is surely the total noise, not just the 5 to 10% element that is read noise.  Certainly there are always fainter objects to find, but you are not going to get to image them under Bortle 7 skies anyway.  I like a hybrid approach: short subs work fine under bright skies at home, but longer subs are worth using under dark skies, where it is really justified and those fainter objects become possible.

IF you are actually exposing enough to render read noise to just 5%... That is really hard to do at a high gain. Remember, at very high gains you have very limited dynamic range. Eight stops isn't going to hold the same range of signal that 13 or 14 stops will. Imaging at very high gain is, IMHO, a handicap unless you have very specific goals in mind. With certain goals, then a very high gain could be useful, but as a general imaging technique...the simple fact of the matter is you get more total read noise at high gains with very short exposures.

Even IF you are swamping read noise such that it represents 5% of the noise in EACH FRAME...you STILL have LOTS MORE FRAMES (at least you would have to, if you were aiming to achieve similar SNR as with longer exposure imaging). Read noise compounds by sub count. No matter how you slice it, stacking a thousand 5 second subs is going to result in more read noise than stacking 50 longer subs. You get a unit of read noise per read, per sub. So, assuming you have say 1.2e- read noise, stack 1000 such frames, and you have SQRT(1000 * 1.2^2), or 38e-. Assuming you have 1.8e- read noise and stack just 50 such frames, you have SQRT(50 * 1.8^2), or 13e-. That is far greater than an imperceptible difference. There are still the FPN factors to consider with stacking lots of short exposure frames as well. 

Don't get me wrong...I'm not advocating for exposures tens of minutes long... Just saying that high gain imaging with very short exposures is not necessarily all that good. The read noise is really not that much lower than at lower gains, say the commonly used unity gain, or the HCG switchover gains for cameras that have an HCG mode. High gain imaging is, considering just how much amplification is being done, not really all that "low" of read noise, very restricted on dynamic range, and those two things together will hamper your ability to get optimal exposures, not enhance it.

So we are not talking about the quality of each exposure but the total quality for the equivalent amount of integration time.  So the DR stops of an individual frame don't matter, because DR increases with Log2(N) when stacking.  So if you shoot exposures 2x shorter you lose a DR stop, but when you integrate those 2 exposures you gain Log2(2) = 1 DR stop, so you are back to the same.  It didn't matter.


"So, assuming you have say 1.2e- read noise, stack 1000 such frames, and you have SQRT(1000 * 1.2^2), or 38e-. Assuming you have 1.8e- read noise and stack just 50 such frames, you have SQRT(50 * 1.8^2) or 13e-. That's is far greater than an imperceptible difference. There are still the FPN factors to consider with stacking lots of short exposure frames as well. "

Thanks for posting an example.  It helps to work one through, so let's take it and work it:

So I stated that if you ensured read noise was a small contributor to the total noise and was way below sky background noise, then it will also contribute a small amount to the stack.  A common goal and rule of thumb is to keep read noise to 5% addition over and above sky noise.  And with today's low read noise cameras, fast optics, and high gain mode, this is entirely possible with fairly short exposures in most situations.    So let's use this basic 5% addition.  

Here is what that looks like in the case you outlined:

You gave the example with 1.2e read noise and we are shooting short exposures but still contributing 5% additional noise from camera reads, and since noise adds in quadrature, that means:

Sqrt(Read_Noise^2 + Sky_Noise^2) = 1.05 * Sky_Noise

So solving for Sky_Noise... first square both sides:

Read_Noise^2 + Sky_Noise^2 = (1.05 * Sky_Noise)^2

Then:

Read_Noise^2 = (1.05 * Sky_Noise)^2 - Sky_Noise^2 = 1.1025 * Sky_Noise^2 - Sky_Noise^2 = 0.1025 * Sky_Noise^2

Sqrt both sides:

Read_Noise = 0.32 * Sky_Noise

So Sky_Noise = 3.12 * Read_Noise.  People who do this often will recognize this... you want sky noise to be just above 3x read noise to get to 5%.

So if read noise is 1.2e, sky noise is 3.12*1.2 =  3.75e

Now do 1000 frames with only sky noise and ZERO read noise:

Sqrt(1000*3.75^2) = 118.6    This is the perfect camera... no read noise at all.

And 1000 frames with Sky noise AND the 1.2e Read noise:

Sqrt(1000*1.2^2 + 1000*3.75^2) = 124.5   (5% more noise than 118.6 as expected... the math works!)

And now let's look at longer exposures, so we only have 50 subs, but read noise is 1.8.  Each sub is now 20x longer, so sky noise per exposure will be Sqrt(20) times higher:

3.75 * Sqrt(20) = 16.77      Sky noise per longer exposure

So now let's do the total noise for this stack of 50 images:

Sqrt(50*1.8^2 + 50*16.77^2) = 119.26   (the short-exposure stack is about 4.4% noisier than this, and this is only about 0.5% above the "perfect" camera).

So you have an improvement, for sure.  In fact, to bring the noise of the short exposure stack down to that of your longer exposures, you need to reduce the noise by 4.4%, and we know noise reduces by Sqrt(N), where N is the number of exposures.  So to get 1.044 down to 1 we need (1.044)^2 = 9% more exposure time.

So yes, there is a difference, you need 9% more exposure time.  Similar to what I had stated earlier... perhaps this is what you are referring to...  this last 8 or 9% of your imaging productivity?

I was curious what this 9% difference in noise looks like in terms of shooting at a lower Bortle number.  This always helps put it in perspective for me:

Well, for the range of Bortle 7 to Bortle 3, each Bortle class is, very roughly, 2.5x darker.  And sky noise is then Sqrt(2.5) = 1.58x lower.  So a 4.4% reduction in sky noise is less than a tenth of that, or, very roughly, like shooting in skies that are 0.08 of a Bortle class darker.

So not nothing, for sure, but that was kind of the difference I was trying to communicate.  It's pretty small in light of other things: if short exposures allow you to keep 9% more subs, it's a wash; or if those subs are a tiny bit sharper, perhaps it's a win.

Now certainly if the OP was shooting in B1 or a very narrow filter and with a super scope, or some similar situation, you can easily create a scenario where you aren't at 5% read noise but you're at 40% read noise and then it should be very visible and you'll need 2x the integration time to get that noise tamped back down.
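For anyone who wants to re-run the numbers above, here's a short sketch (same assumptions: the 5% criterion per short sub and equal total integration time; nothing here is measured data):

import math

read_short, read_long = 1.2, 1.8   # e- read noise per sub
n_short, n_long = 1000, 50
sky_short = 3.75                    # sky noise per short sub (3.12 x 1.2)
sky_long = sky_short * math.sqrt(n_short / n_long)  # 20x longer subs

perfect = math.sqrt(n_short * sky_short ** 2)                    # ~118.6
short = math.sqrt(n_short * (read_short ** 2 + sky_short ** 2))  # ~124.5
longer = math.sqrt(n_long * (read_long ** 2 + sky_long ** 2))    # ~119.3
print(perfect, short, longer, short / longer)  # ratio ~1.044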
TimH
Jon Rista:
Read noise compounds with SUB COUNT.


That is true of course.  But all the rest of the noise also compounds with sub count, because the total exposure time also increases proportionally, no matter if it all happens in one sub or a hundred.  So, for example, in numbers...

If the noise in a 10s sub comprises on average 1 ADU of read noise and say 20 ADU of sky noise, then a stack of 500 would comprise 500 ADU of read noise and 10000 ADU of sky noise.  The read noise is still only ca. 5% of the total noise.

A single 5000s exposure would theoretically be better, but not by much: it would comprise 10001 ADU of noise as opposed to 10500 ADU of noise.  A relatively trivial addition.

PS: The calculation above is in fact wrong; I forgot to add the noise in quadrature.  But the overall point is valid.  Steve Miller below has it correct.
jrista 8.68
Steven Miller:
Is that what we are talking about, getting rid of that last 5% and saving ourselves about 10% integration time?

Beyond that, I'm not sure what you are saying:

Are you saying a dimmer target makes the sky noise lower than the read noise or the read noise somehow grows to become a larger percentage of the total unwanted noise?   

If so, I just really want to see an analysis on this.  Perhaps you know of something you can point to.

Or if you are saying something else, like more integration time doesn't matter with shorter exposures, could you explain the fundamentals behind what you are saying, cite a resource that shows the analysis and math?


I wanted to more explicitly address this. To really simplify it...it's just about TOTAL noise.

Now, to be fair, I guess I am making some assumptions. Maybe the most critical would be: I am assuming your goal is to produce an image of equivalent (or as close as you can get) result with high gain short exposure imaging as with longer exposures at a lower (and specifically, more OPTIMAL) gain.

Given these assumptions, then, it is a NECESSITY to acquire and stack MORE subs with short exposure high gain imaging than with...well, let's just call it "normal" exposure optimal gain imaging. The key here is sub count. Read noise compounds with sub count, not time. Shot noise, either from photons or dark current, compounds with time. Even FPN compounds with time. Read noise, however, compounds over the number of subs integrated. Why? Because every individual sub, even at the 95% criterion, has ONE UNIT of read noise. More subs, more units, more read noise. 

Another assumption I guess I made is that you are either using an optimal gain (i.e. the HCG, or high conversion gain, mode of any camera that has it, or the gain at which maximum DR is first reached while read noise is the lowest possible for that dynamic range, for any camera that does not have an HCG mode) or the maximum gain (which I think is what you mentioned in your prior post.) Minimum gain is trickier; minimum gain on a lot of CMOS cameras, particularly those with lower bit depths, is not all that useful, as you often end up with a lot more read noise (due to quantization noise thanks to the lower precision ADC) than if you did a bit of experimentation and found the most "optimal" gain (where DR is maximum, but read noise is lowest for that gain.) With these assumptions, the difference in read noise between the "optimal" gain setting and the "max" gain setting is usually not all that large...in my experience, with most CMOS cameras, the difference is usually less than one electron. Comparing these two gains is the most useful comparison, IMO. More useful than say comparing minimum gain to maximum gain. In fact, I'd say that a lot of the time, neither extreme is particularly useful outside of special circumstances or unique goals. True lucky imaging, for example, is a prime case of a specialized goal where maximum gain could be quite useful (doubtful that you would need more than 8-9 stops of dynamic range!)
Steven Miller:
Before starting my journey doing AP with shorter exposures, I immersed myself in the physics and math of noise and all this is quantifiable.  "Just do the math" as a math professor I know likes to say.  Here's a guy that does the math:

https://www.youtube.com/watch?v=3RH93UvP358&t=2742s

And he makes it clear how to calculate when it's a large factor and when it's trivial.  If Dr. Robin Glover is wrong or is missing some big factor, could you explain why and how?

If there is something missing in the understanding of how read noise contributes, it would be very useful to me and others to be able to calculate it, if the current accepted practice is lacking.  Perhaps review the spreadsheets that Steve Bellavia posted above... are they wrong or missing something?  If so, what is the right spreadsheet?

All these people are putting this down on paper, which allows it to be peer reviewed.  I guess what I'm asking is whether there is an equivalent paper, presentation, or spreadsheet that demonstrates the concepts you are talking about and can be peer reviewed.  Then we can all learn what you know, or at least review it.

I am not saying that anything Steven (who I've conversed with a lot over the years when I was over on Cloudy Nights) has done is wrong. In fact, I love his spreadsheets; he put a tremendous amount of work into them and I think they are extremely useful for people. IIRC, however, their primary benefit is to help people figure out the key sub exposure factors for a given system. Individual sub factors. I don't think that any of the theory you rely on is "wrong"; I think it's just that there is one key thing about very short exposure imaging that is often missed, and that is how read noise compounds with sub count.

Stack 1000 subs, 1500 subs, 20000 subs...you are getting a unit of read noise with each and every single one of them. At those sub count levels, it doesn't much matter if you are at minimum read noise or not, you are going to end up with a lot of total read noise in the end, compared to using the most optimal gain for any given camera (along with appropriate sub exposure lengths for that optimal gain.)

The math is pretty simple. More noise, same signal, lower SNR. Weaker signal, same noise, lower SNR. The formula for determining the total amount of read noise you will have for a given stack is simply this:

NRtotal = SQRT(Csubs * Nread^2)

Csubs = count of subs
Nread = RMS of read noise

If you assume you can achieve the 10xRN^2 (95%) criterion with any given gain, then the difference is not about how much you swamped the read noise, but how many subs you stacked, and the TOTAL read noise you have accumulated in that stack. Unless your optimal gain has significantly more read noise (possible; some cameras may still have too much read noise at lower gains) than the maximum or very high gain you are comparing with, the fact that the high gain has "minimum" read noise will usually not matter here... Even a difference of an electron, or two, or possibly even more, is not enough to overcome the sheer volume of subs that is usually required for very short exposure imaging (imaging with just a few seconds, and then stacking hundreds to thousands.) 

Feel free to check my math out here. It may well be that whatever two gains you have been comparing for yourself might result in a negligible difference in total read noise. If it does, then I would shift your gaze to dynamic range, and consider other factors of image quality, and see if the short exposure image is indeed still better overall than longer exposure images at a lower gain. In any case, it's about TOTAL read noise in the entire stack, not just the relative read noise in each individual sub. If your short exposure high gain imaging results in stacking a ton more subs, it's probable that you are ending up with more total read noise...which in the end is most likely going to mean more total noise...which means your SNR will be lower (than if you used longer subs at the most optimal gain.)
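Taking my own "check the math" invitation literally, the 1500-sub vs. 50-sub example from earlier comes out like this (a two-line check, nothing more):

import math

print(math.sqrt(1500 * 1.0 ** 2))  # ~38.7 e- total: 1500 subs at 1.0 e-
print(math.sqrt(50 * 2.0 ** 2))    # ~14.1 e- total: 50 subs at 2.0 e-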
jrista 8.68
Tim Hawkes:
Jon Rista:
Read noise compounds with SUB COUNT.


That is true of course.  But all the rest of the noise also compounds with sub count, because the total exposure time also increases proportionally, no matter if it all happens in one sub or a hundred.  So, for example, in numbers...

If the noise in a 10s sub comprises on average 1 ADU of read noise and say 20 ADU of sky noise, then a stack of 500 would comprise 500 ADU of read noise and 10000 ADU of sky noise.  The read noise is still only ca. 5% of the total noise.

A single 5000s exposure would theoretically be better, but not by much: it would comprise 10001 ADU of noise as opposed to 10500 ADU of noise.  A relatively trivial difference

You are using ADU here. This is going to confuse you, as ADU are gain-dependent. You should stick with electrons (e-) because they are an absolute measure, not a relative measure. One ADU at a high gain could represent a tiny fraction of an electron, while one ADU at a lower gain could represent one electron, or more than one electron. It's extremely confusing to use ADU for comparisons like this, which is why I strictly stick to electron counts. An electron is an electron, period, regardless of gain. 

Another thing you are doing here is comparing sky noise.... Sky noise is a time factor. It grows with time. Doesn't matter how you slice it up, fat slices, thin slices: for a given total amount of TIME, the sky noise is going to be the same. HOWEVER, with fat slices you are going to have fewer units of read noise to compound than with thin slices. Read noise will compound more with thin slices than with fat (i.e. lots of short vs. fewer longer exposures). 

In other words, for a given amount of total exposure time, you will have the same amount of sky, dark, and object SHOT noise, regardless of exposure length. However the deepest stack will have the greatest read noise, and thus the lowest SNR.
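As a tiny illustration of the ADU trap (the gain values are invented for the example), the same physical read noise looks very different in ADU at two gain settings but is identical in electrons:

read_noise_e = 1.6                # one fixed physical noise level, in e-
gain_low, gain_high = 1.0, 0.25   # e-/ADU at a low and a high gain (invented)

print(read_noise_e / gain_low)   # 1.6 ADU at the low gain setting
print(read_noise_e / gain_high)  # 6.4 ADU at the high gain setting --
                                 # "worse" in ADU, identical in electrons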
ReadyForTheJetty 1.81
Jon Rista:
Steven Miller:
Is that what we are talking about, getting rid of that last 5% and saving ourselves about 10% integration time?

Beyond that, I'm not sure what you are saying:

Are you saying a dimmer target makes the sky noise lower than the read noise or the read noise somehow grows to become a larger percentage of the total unwanted noise?   

If so, I just really want to see an analysis on this.  Perhaps you know of something you can point to.

Or if you are saying something else, like more integration time doesn't matter with shorter exposures, could you explain the fundamentals behind what you are saying, cite a resource that shows the analysis and math?


Given these assumptions, then, it is a NECESSITY to acquire and stack MORE subs with short exposure high gain imaging than with...well, let's just call it "normal" exposure optimal gain imaging. The key here is sub count. Read noise compounds with sub count, not time. Shot noise, either from photons or dark current, compounds with time. Even FPN compounds with time. Read noise, however, compounds over the number of subs integrated. Why? Because every individual sub, even at the 95% criterion, has ONE UNIT of read noise. More subs, more units, more read noise.

Just checking the basics here:

I think we all agree that if you take shorter subs, it is a necessity to stack more of them to get the same amount of total integration time.  I thought what we were talking about was keeping integration time constant and only varying the length of the subs?

If read noise adds a 5% noise contribution to one sub, it also adds a 5% noise contribution to the stack of those subs.  Do you agree with that?
jrista 8.68
Steven Miller:
So you have an improvement, for sure.  In fact, to bring the noise of the short exposure stack down to that of your longer exposures, you need to reduce the noise by 4.4%, and we know noise reduces by Sqrt(N), where N is the number of exposures.  So to get 1.044 down to 1 we need (1.044)^2 = 9% more exposure time.

So yes, there is a difference, you need 9% more exposure time.  Similar to what I had stated earlier... perhaps this is what you are referring to...  this last 8 or 9% of your imaging productivity?

I was curious what this 9% difference in noise looks like in terms of shooting at a lower Bortle number.  This always helps put it in perspective for me:

Well, for the range of Bortle 7 to Bortle 3, each Bortle class is, very roughly, 2.5x darker.  And sky noise is then Sqrt(2.5) = 1.58x lower.  So a 4.4% reduction in sky noise is less than a tenth of that, or, very roughly, like shooting in skies that are 0.08 of a Bortle class darker.

So not nothing, for sure, but that was kind of the difference I was trying to communicate.  It's pretty small in light of other things: if short exposures allow you to keep 9% more subs, it's a wash; or if those subs are a tiny bit sharper, perhaps it's a win.

Now certainly if the OP was shooting in B1 or a very narrow filter and with a super scope, or some similar situation, you can easily create a scenario where you aren't at 5% read noise but you're at 40% read noise and then it should be very visible and you'll need 2x the integration time to get that noise tamped back down.

Well, how long would it take to acquire 9% more? Let's simplify and just say 10% more. How much longer does it take to acquire that additional 10%? Including all the various overhead costs...driving to a dark site, driving back, setting up, calibrating your system. Then the inter-frame overhead...dithering, focusing, etc. Then let's factor in sub loss...there is always some amount of sub loss. Maybe 3%? So you need to acquire 13% more data now... And there is all that additional overhead... So over an hour here, maybe two. 

Now, 9% isn't terrible; it could be a lot worse. However, what is the effort to acquire that additional 9% (which is just pure signal that we are talking about here, nothing else)? Time to acquire is the REAL cost of having more read noise. There are actually some good analyses on this buried in Cloudy Nights post archives...from years past. It depends on the camera and just how much the read noise differs, but usually the difference between stacking lots more short exposures vs. stacking a more reasonable number of longer exposures was measured in additional HOURS of time to acquire the additional data. 

Now, let's say you aren't dithering...this is often the first thing to go when people move to using shorter exposures. Ditching dithering means you will suffer more from signal quality issues. Even with dark calibration, undithered but DRIFTING subs will usually produce some form of walking noise (raining noise, correlated noise) which looks terrible. It's strongly patterned, easy for the human eye and mind to pick up. This is an overall IQ (image quality) factor. If you stop dithering, it's likely that your IQ will drop compared to properly dithered longer subs. I've even come across more and more threads these days of people doing short exposure imaging and skipping calibration altogether! That is sure to produce poorer quality results. 

It takes a lot of time to produce a decent image, regardless...so sadly, there are not that many direct comparison examples of long vs. short exposure final images to demonstrate the differences. I've been at this for a decade now...and it's pretty darn rare that a short exposure image really catches my eye compared to longer exposure images. Sky limited or not. Occasionally someone who has a really refined short-exposure processing workflow, such as exaxe, will share really good short (or even lucky) exposure images (often tens of thousands of subs deep!!! And undoubtedly FPN limited.) Even then, though, the signal DEPTH is relatively shallow compared to longer exposure images of the same total integration time. 

So, what is that additional 9% requirement REALLY costing you...in terms of actual time you would have to spend acquiring said data? Another two, three hours? That is non-trivial time to me, I guess. That's HOURS of my life (and missed sleep if I'm at a dark site!! ) And even with that additional 9% of the required data...how does the short exposure high gain image compare to longer exposures at an optimal gain? (There are a lot of factors there that could be considered...not just how much read noise you have; quality of noise, depth of signal, faintest signal detected, quality of fine structure, etc. etc.)
jrista 8.68
Steven Miller:
Jon Rista:
Steven Miller:
Is that what we are talking about, getting rid of that last 5% and saving ourselves about 10% integration time?

Beyond that, I'm not sure what you are saying:

Are you saying a dimmer target makes the sky noise lower than the read noise, or that the read noise somehow grows to become a larger percentage of the total unwanted noise?

If so, I just really want to see an analysis of this. Perhaps you know of something you can point to.

Or if you are saying something else, like more integration time doesn't matter with shorter exposures, could you explain the fundamentals behind what you are saying and cite a resource that shows the analysis and math?


Given these assumptions, then, it is a NECESSITY to acquire and stack MORE subs with short exposure, high gain imaging than with...well, let's just call it "normal" exposure, optimal gain imaging. The key here is sub count. Read noise compounds with sub count, not time. Shot noise, whether from photons or dark current, compounds with time. Even FPN compounds with time. Read noise, however, compounds over the number of subs integrated. Why? Because every individual sub, even at the 95% criterion, carries ONE UNIT of read noise. More subs, more units, more read noise.

Just checking the basics here:

I think we all agree that if you take shorter subs, it is a necessity to stack more of them to get the same total integration time. I thought what we were talking about was keeping integration time constant and only varying the length of the subs?

If read noise adds a 5% noise contribution to one sub, it also adds a 5% noise contribution to the stack of those subs. Do you agree with that?

If the only thing that varies here is the number of subs integrated, then yes, absolutely.
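
A quick numeric check of that claim, with illustrative numbers (sky level and read noise are made up): per-sub noise is sqrt(sky + RN^2), and for a stack of N identical subs everything scales by sqrt(N), so the read noise fraction is unchanged.

```python
import math

# If read noise inflates one sub's noise by X%, it inflates the
# stack's noise by the same X%, provided ONLY sub count changes.
# Values are illustrative, not measurements.

sky_e = 400.0   # sky signal per sub (e-); shot noise variance = 400
rn = 2.0        # read noise per sub (e- RMS)

def noise_inflation(n_subs):
    with_rn = math.sqrt(n_subs * (sky_e + rn**2))
    without_rn = math.sqrt(n_subs * sky_e)
    return with_rn / without_rn - 1.0

print(f"1 sub:    read noise adds {noise_inflation(1):.3%}")
print(f"100 subs: read noise adds {noise_inflation(100):.3%}")
# Both lines print the same percentage: the sqrt(N) factor cancels.
```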

If additional variables change...then perhaps, perhaps not. The difference between 8 stops and 14 stops is huge, and I think this is a key factor. You can fit a heck of a lot more signal range in 14 stops.

You mentioned stacking subs and recovering DR... I've always kind of taken issue with that concept. Dynamic range is a very explicit factor when it comes to hardware. You have a fixed voltage range; it cannot be enlarged. If your pixel signal would produce a voltage larger than allowed, you are stuck at maximum voltage. Simple.

What is DR in an image? I know what SNR is in an image...but what is DR? What is your maximum value? Technically speaking, if you really wanted to, you could expand "dynamic range" infinitely with digital data, simply by increasing the maximum value you can represent, without changing any existing signal values. I have a hard time with the concept of "expanding DR" by stacking alone. I'm perfectly fine with the descriptions of how SNR improves with stacking, but improvement in DR is a sketchy concept (at least in the digital domain).

Stacking two subs improves SNR. I'm not sure I would say it increases dynamic range, though...not like the shift from a mere 8 stops to 14 stops in hardware, anyway. What are the consequences of imaging with just 8 stops of DR while ALSO aiming to achieve that 10xRN^2 criterion? Are you sacrificing some of your brighter signals in order to swamp read noise by the desired amount? (In my own experience this is usually a requirement, but I don't know exactly what you are doing or how, with what scope, etc. Using a high gain with a Hyperstar would be a vastly greater challenge than using high gain with, say, an f/8 refractor with a smaller aperture.)
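
For concreteness, hardware DR in stops is log2(full well / read noise). A minimal sketch with placeholder sensor numbers (rough ballpark figures, not measured QHY600 specs) shows how max gain collapses DR even though read noise barely moves:

```python
import math

# Hardware dynamic range in stops: log2(full well / read noise).
# Sensor numbers below are rough placeholders; substitute your
# camera's measured gain curves.

def dr_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(f"HCG-like gain: {dr_stops(20000, 1.6):.1f} stops")  # ~13.6
print(f"Max gain:      {dr_stops(400, 1.4):.1f} stops")    # ~8.2
# Read noise barely changes, but the usable signal range collapses.
```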

This is where some actual experimentation and evaluation of the results is needed. I chose the numbers I did in my previous posts because they provided a "neat" example... What could we actually do in the real world, though? How much longer could we expose at the HCG mode of, say, a QHY600, vs. its maximum gain setting? Are we already over-committing the available dynamic range at max gain with 4 second (IIRC?) subs? Is the scope in use helping with stellar saturation rates? (A larger f-ratio combined with a smaller aperture has a huge impact on stellar flux rates, greatly reducing them compared to the flux rate of the background sky, which could make high gain imaging easier with less risk of clipping.) Thus, are we clipping good signal that would NOT be clipped at the HCG gain with 120 second subs? I sadly don't have a QHY600 yet to experiment with. I'm betting reality isn't quite as neat as the theorymongering.
ReadyForTheJetty 1.81
Jon Rista:
Steven Miller:
Jon Rista:
Steven Miller:
Is that what we are talking about, getting rid of that last 5% and saving ourselves about 10% integration time?

Beyond that, I'm not sure what you are saying:

Are you saying a dimmer target makes the sky noise lower than the read noise, or that the read noise somehow grows to become a larger percentage of the total unwanted noise?

If so, I just really want to see an analysis of this. Perhaps you know of something you can point to.

Or if you are saying something else, like more integration time doesn't matter with shorter exposures, could you explain the fundamentals behind what you are saying and cite a resource that shows the analysis and math?


Given these assumptions, then, it is a NECESSITY to acquire and stack MORE subs with short exposure, high gain imaging than with...well, let's just call it "normal" exposure, optimal gain imaging. The key here is sub count. Read noise compounds with sub count, not time. Shot noise, whether from photons or dark current, compounds with time. Even FPN compounds with time. Read noise, however, compounds over the number of subs integrated. Why? Because every individual sub, even at the 95% criterion, carries ONE UNIT of read noise. More subs, more units, more read noise.

Just checking the basics here:

I think we all agree that if you take shorter subs, it is a necessity to stack more of them to get the same total integration time. I thought what we were talking about was keeping integration time constant and only varying the length of the subs?

If read noise adds a 5% noise contribution to one sub, it also adds a 5% noise contribution to the stack of those subs. Do you agree with that?



You mentioned stacking subs and recovering DR... I've always kind of taken issue with that concept.

Dynamic range can be described simply as the ratio of the brightest and darkest elements in an image. But with respect to the number of useful tones or digital levels, dynamic range isn't primarily about the maximum and minimum values (a low-DR 8-bit image can still have a very high max and a very low min, but very few intermediate tones); it's the number of realized levels in an image. In other words, it's the range of tones between the high and low points in an image.

http://preservationtutorial.library.cornell.edu/tutorial/intro/intro-05.html

It’s the primary reason that stacking needs to occur at higher bit depths than the original images.  

It's the fundamental way planetary imagers (like me) take tens of thousands of 8-bit (sometimes even less than 8-bit) super grainy exposures and then stack them into a very smooth 16-bit result. If that isn't a clear demonstration of how stacking increases the number of useful levels, I just don't know what to say.
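
A toy numeric sketch of that effect (all values made up; real planetary stacking also involves alignment and quality weighting): many coarsely quantized, noisy frames of the same scene average to a result with far more distinct levels than any single frame.

```python
import numpy as np

# Toy demo: coarse (integer-ADU, "8-bit style") quantization of many
# noisy frames averages into a smooth result. Noise acts as dither,
# letting the stack resolve tones between the integer levels.

rng = np.random.default_rng(42)
true_scene = np.linspace(10.0, 11.0, 1000)  # smooth 1-ADU ramp
n_frames = 10000

# Each frame: add read/shot-like noise, then quantize to integer ADU.
frames = np.round(true_scene + rng.normal(0, 2.0, (n_frames, true_scene.size)))

single = frames[0]
stacked = frames.mean(axis=0)

print("distinct levels in one frame:", np.unique(single).size)   # a few integers
print("distinct levels in the stack:", np.unique(stacked).size)  # ~1000
```
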
dkamen 6.89
That's what I was going to say: planetary imagers take thousands of 8-bit images at pretty high gains, and they don't end up with crazy amounts of read noise.

Of course, the signal is pretty high to begin with, and the background is rendered as (and supposed to be) completely black, without any dust or glow.

Most of the discussion about read noise vs. sky background noise is relevant only to the sky background (and maybe the faintest parts of a DSO). Brighter regions are much less impacted by either.
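
A quick numeric illustration of that point, with made-up signal levels: read noise only inflates the total noise where shot noise is small, i.e. the faint background.

```python
import math

# Read noise matters only where shot noise is small.
# Signal levels below are illustrative, not measurements.

rn = 2.0  # read noise, e- RMS
for label, signal_e in [("bright planetary disk", 20000.0),
                        ("faint DSO background", 25.0)]:
    shot = math.sqrt(signal_e)             # photon shot noise
    total = math.sqrt(signal_e + rn**2)    # shot + read noise, in quadrature
    print(f"{label}: read noise inflates total noise by {total / shot - 1:.2%}")
# ~0.01% on the bright disk vs. ~8% on the faint background.
```
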
jrista 8.68
Steven Miller:
Jon Rista:
You mentioned stacking subs and recovering DR... I've always kind of taken issue with that concept.

Dynamic range can be described simply as the ratio of the brightest and darkest elements in an image. But with respect to the number of useful tones or digital levels, dynamic range isn't primarily about the maximum and minimum values (a low-DR 8-bit image can still have a very high max and a very low min, but very few intermediate tones); it's the number of realized levels in an image. In other words, it's the range of tones between the high and low points in an image.

http://preservationtutorial.library.cornell.edu/tutorial/intro/intro-05.html

It’s the primary reason that stacking needs to occur at higher bit depths than the original images.  

It's the fundamental way planetary imagers (like me) take tens of thousands of 8-bit (sometimes even less than 8-bit) super grainy exposures and then stack them into a very smooth 16-bit result. If that isn't a clear demonstration of how stacking increases the number of useful levels, I just don't know what to say.

With planetary you are working with extremely bright signals, in relative terms, and signals that don't require as much dynamic range. 

I agree that bit depth improves with stacking, but again, I don't really see that as the same as DR.

DR is the ratio of the brightest POSSIBLE signal to the noise floor. Hence why it's possible to expand dynamic range simply by increasing the numeric range in the digital domain, and why I don't really consider it a useful factor. Bit depth is a bit more restricted and defined.

Anyway, I don't want to take this too far off topic. The key, to me, is that the change in DR between, say, an HCG mode (or the most optimal gain) and the maximum gain is HUGE, while the change in read noise is usually quite minimal. The loss in DR forces you into a corner where you must choose whether to preserve bright signal range or sacrifice it in order to gain faint signal strength. Since read noise generally doesn't change much from optimal/HCG to max gain, but dynamic range does, the point I was trying to make earlier is that it's not necessarily best (or even viable) to image at the maximum gain just because it has the lowest read noise. It's not that simple. A difference of 0.3e- read noise should generally mean you could do exactly the same thing at the optimal gain (again, I'm referencing the QHY600 here, but there are many other cameras now with very similar noise characteristics) without really losing anything, while gaining something significant. If you are sufficiently swamping read noise at maximum gain with 4 second exposures, then, based on read noise alone, you should be able to use just 6-7 second exposures at the HCG gain and have the same SNR, but VASTLY more dynamic range.
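
A sketch of the scaling behind that 6-7 second figure: to swamp read noise by the same factor k, you need sky_rate * t >= k * RN^2, so the required exposure scales with RN squared. The 4 second exposure comes from the discussion above; the read noise values are placeholders chosen only to differ by 0.3e-, not measured specs.

```python
# Exposure needed to swamp read noise by the same factor scales
# with RN^2: sky_rate * t >= k * RN^2  =>  t2 = t1 * (RN2/RN1)^2.
# Read noise values are hypothetical placeholders.

rn_max_gain = 1.1  # e- RMS at maximum gain (hypothetical)
rn_hcg_gain = 1.4  # e- RMS at HCG gain (hypothetical, +0.3e-)
t_max_gain = 4.0   # seconds, assumed sky-limited at max gain

t_hcg = t_max_gain * (rn_hcg_gain / rn_max_gain) ** 2
print(f"Equivalent HCG exposure: {t_hcg:.1f} s")  # ~6.5 s
# A couple of extra seconds per sub buys back several stops of DR.
```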

For one, the fact that, based on read noise alone, the difference between the QHY600 max vs. HCG gain exposures is a mere 2-3 seconds tells me that either these exposures are under-exposed or background sky levels are quite high, possibly both. It also tells me that imaging with short exposures at the HCG mode is bound to deliver better results regardless, since you shouldn't be clipping anything (or far less, at the very least)...whereas at the maximum gain, given my own experience imaging with half a dozen different cameras at high gain, SOMETHING MUST be getting clipped (even with narrowband, I clip stars at maximum gain without even trying, and it usually only takes a few seconds).

If we want to get into DR improvement, stacking 1000 6-second subs from the HCG mode should produce better results than stacking 1000 4-second subs from max gain. You would have 13.6 stops per sub at the HCG gain and a mere 8.1 at max gain, so your starting point for "increasing DR" is a lot better with the HCG subs...whereas you would have to stack a lot more at max gain just to overcome the gap, let alone improve things beyond it. You stand to gain a lot more in real-world terms at the HCG mode. You don't have the minimum possible read noise, but at the HCG mode you have the most optimal camera configuration you could hope for, unless you have particularly unique and specific needs.
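
Even if one grants, for the sake of argument, the "stacking raises effective DR" framing being debated here (noise floor dropping by ~sqrt(N)), a quick sketch with the per-sub stop counts above shows the hardware gap carries straight through the stack:

```python
import math

# Under the contested "DR = max signal / noise floor" framing, the
# floor drops by sqrt(N) with stacking, adding 0.5*log2(N) stops to
# BOTH configurations equally. Per-sub stop counts echo the text above.

def stacked_dr_stops(per_sub_stops, n_subs):
    return per_sub_stops + 0.5 * math.log2(n_subs)

n = 1000
print(f"HCG gain, {n} subs: {stacked_dr_stops(13.6, n):.1f} stops")  # ~18.6
print(f"Max gain, {n} subs: {stacked_dr_stops(8.1, n):.1f} stops")   # ~13.1
# The 5.5-stop per-sub gap never closes with equal sub counts.
```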