Optimal exposure time [Deep Sky] Acquisition techniques

AndyM274 0.00
·  1 like
Hi guys,
I've recently transitioned from running a twin-rig 70mm triplet refractor setup with an Altair Astro 26C & 26M to mono only.
Basically changed my whole setup - now running an Altair 70EDQ with the 26M & a 7-position filter wheel containing both NB & LRGB filters.
I ran the SharpCap sensor analysis earlier today & also managed to get the optimum exposure calculator running in between the clag.  I can’t quite believe what the data is suggesting:
Basically it’s telling me that my optimum exposure time for LRGB is 8.5 sec at gain 200 & 148 sec for NB filters.
That just seems inherently wrong - any advice gratefully received.
cheers
Andy
[attached screenshots: C0829164-F6F2-46D8-A05F-A807EE1805E3.png, 5CF220F0-D8C6-40A8-9001-1B489071A982.png]
jonnybravo0311 7.83
·  2 likes
Optimal exposure calculators give you the minimum exposure time you'll need to exceed the "swamp" factor they use. Usually it's 10x. I'm not sure exactly what Dr Glover used in his implementation of SharpCap. He might discuss it in his video. If you haven't yet, I highly suggest giving it a watch. Ruzeen posted it on his YT channel: https://www.youtube.com/watch?v=3RH93UvP358&t=1s
barnold84 10.79
·  2 likes
Jonny Bravo:
Optimal exposure calculators give you the minimum exposure time you'll need to exceed the "swamp" factor they use. Usually it's 10x. I'm not sure exactly what Dr Glover used in his implementation of SharpCap. He might discuss it in his video. If you haven't yet, I highly suggest giving it a watch. Ruzeen posted it on his YT channel: https://www.youtube.com/watch?v=3RH93UvP358&t=1s

The limit is adjustable. There’s a drop-down field labeled “read noise limit”.
barnold84 10.79
·  3 likes
Hi Andy,

The data seems plausible, assuming that you have a fairly bright sky. Since we have a brightly illuminated moon, the measurement of the sky flux is probably biased. If you lived under a Bortle 2 sky, you’d get much higher values for the recommended exposure time.

However, as a general remark: at gain 200, the read noise is pretty low (seems to be around 1.5e) and therefore the minimum exposure time becomes much lower, compared to cameras with much higher read noise (the relationship is quadratic).

Björn
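To make the quadratic relationship concrete, here is a minimal Python sketch of the usual "swamp the read noise" rule of thumb (this is not Robin Glover's actual SharpCap code); the 10x swamp factor and the 2.5 e-/pixel/s sky rate are illustrative assumptions, not measured values:

```python
def min_sub_exposure(read_noise_e, sky_rate_e_per_s, swamp_factor=10.0):
    """Shortest sub-exposure for which sky shot noise swamps the read noise.

    Requires sky electrons per sub >= swamp_factor * read_noise**2,
    hence t_min = swamp_factor * read_noise**2 / sky_rate.
    """
    return swamp_factor * read_noise_e ** 2 / sky_rate_e_per_s


# Illustrative numbers only: a 1.5 e- read-noise setting vs a 3.5 e- one,
# under the same assumed bright-sky flux of 2.5 e-/pixel/s.
for rn in (1.5, 3.5):
    print(f"read noise {rn} e-  ->  t_min ~ {min_sub_exposure(rn, 2.5):.1f} s")
```

With these made-up numbers the low-read-noise case needs only ~9 s subs while the high-read-noise case needs ~49 s, which is the quadratic effect Björn describes.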
AndyM274 0.00
Hi Bjorn,
My sky is Bortle 7 so it is fairly bright, unfortunately.
I have re-run the sensor analysis, but this time selected a different driver so that I could use the ultra low noise option in the ASCOM driver.
Will see how I get on.
Avjunky 0.90
·  3 likes
I agree with what Jonny Bravo said. The term “optimal exposure” as applied in SharpCap (and NINA) is a misnomer. It really should be called a “minimum acceptable exposure”, in that it gives the shortest exposure time that will expose the image enough to swamp the targeted noise level. The name implies that increasing the exposure beyond the so-called optimal duration would hurt the image, when in fact the SNR continues to improve. That’s why it’s a misnomer.

What I’d love to see added to these tools is a maximum exposure calculation which would be the longest exposure possible that doesn’t saturate the sensor (ignoring stars).  The maximum exposure number would give the highest SNR for the equipment and sky conditions (more optimal lol).  An acceptable exposure then would be in between these two numbers, with the upper number being a better goal.  
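Something along these lines could serve as that upper-limit estimate; this is only a rough sketch assuming the brightest diffuse (non-stellar) pixel plus sky accumulates at a constant, known rate, and all of the rates, the full-well value and the headroom fraction are hypothetical:

```python
def max_sub_exposure(full_well_e, sky_rate_e_per_s, target_rate_e_per_s,
                     headroom=0.1):
    """Longest sub before the brightest *diffuse* pixel nears saturation.

    Stars are ignored (assumed to be handled separately), and a safety
    headroom fraction is kept below the full-well capacity.
    """
    usable_e = full_well_e * (1.0 - headroom)
    return usable_e / (sky_rate_e_per_s + target_rate_e_per_s)


# Hypothetical example: 18 ke- full well, 2.5 e-/s sky, 5 e-/s bright nebulosity.
print(f"t_max ~ {max_sub_exposure(18_000, 2.5, 5.0):.0f} s")  # roughly 36 minutes
```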

This all applies to diffuse stuff - nebulae, the non-core parts of galaxies, etc. Stars, on the other hand, are so much brighter than the diffuse stuff that they need a different discussion and their own exposure settings.

 One question to the poster - How narrow are your narrowband filters?  When I ran Sharpcap and NINA in my urban backyard, NINA came up with 9s for wide band (similar to yours) but my 5nm NB filters still had a very long “optimal exposure” time (I think 20 minutes if I recall correctly), much longer than yours.  

As an aside that is applicable to folks who shoot NB images blended with LRGB stars: since LRGB exposures are so much shorter than NB, I’ve started to shoot stars in wide band at gain 0 in order to get the benefit of maximum well depth. My exposure times need to be tested manually and set such that individual stars are not saturated (which prevents star colors from turning white). For NB I use the suggested gain setting that gives the best dynamic range and lowest read noise; for an IMX455 sensor, this works out to a gain of 100. I keep my subs at 15 min so that I collect enough subs to do sigma rejection, which also reduces the impact of clouds. In my case 15 min NB exposures are still shorter than the calculated optimal exposure number.
AstroNikko 3.61
·  1 like
The reason it's referred to as the "optimal exposure" instead of the minimum exposure is because beyond that point the SNR is subject to the law of diminishing returns. The rate at which you see improvements in SNR flattens out quickly. As you increase the length of your exposure, you also increase the chances of it being degraded by poor seeing, weather conditions, tracking/guiding issues, and artificial light sources.

Might be best to think about it from a lucky imaging perspective. Shorter exposures allow a better chance at isolating moments where seeing conditions are best. With shorter exposures, you can discard more of your subframes while retaining the best of your data. It's another way of improving sampling, with shorter frames leading to a higher frequency sample rate.

Keep in mind, you still have to aim for the same amount of total integration time as you would with longer exposures. This leads to the need for more storage space, but not nearly as much as traditional high frame-rate lucky imaging.
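The diminishing-returns behaviour can be illustrated with the standard stacked-SNR estimate (object and sky shot noise per sub plus read noise, combined across subs); the object rate, sky rate and read noise below are made-up numbers for illustration only:

```python
import math

def stacked_snr(total_time_s, sub_length_s, obj_rate, sky_rate, read_noise_e):
    """SNR of a faint target after stacking subs to a fixed total integration.

    Per sub: signal = obj_rate * t, variance = (obj_rate + sky_rate) * t + RN**2.
    Stacking N subs scales both the signal and the variance by N.
    """
    n_subs = total_time_s / sub_length_s
    signal = n_subs * obj_rate * sub_length_s
    variance = n_subs * ((obj_rate + sky_rate) * sub_length_s + read_noise_e ** 2)
    return signal / math.sqrt(variance)


# Hypothetical: 4 h total, target at 0.05 e-/s, sky at 2.5 e-/s, 1.5 e- read noise.
for sub in (10, 30, 120, 600):
    print(f"{sub:>4} s subs -> stacked SNR ~ {stacked_snr(4 * 3600, sub, 0.05, 2.5, 1.5):.2f}")
```

With these assumed numbers, going from 30 s subs to 600 s subs improves the final SNR by only a percent or two, which is the flattening of the curve being described.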

Another thing to keep in mind is that the "optimal exposure" value will tend to vary from night to night, and maybe even from hour to hour. To minimize the amount of work you have to do with respect to calibration frames, it can be helpful to round up to the nearest exposure time in your darks library.
StuartT 4.69
·  1 like
Yes, I must admit that I have never found these tools to be very useful. I generally centre my chosen target, check focus, then try out a few exposures (depending on how bright the object is) and when one looks right I just use that in the sequence. It's all very scientific.
Avjunky 0.90
·  1 like
Yup @AstroNikko, I agree with everything that you’ve said.  But I still think that calling it an “optimal exposure” is a misnomer because it leads to the perception that longer than optimal exposures could be detrimental to the image, when in fact the SNR continues to improve when one moves past the optimal exposure (even though it’s on the flat/diminishing side of the curve).  

There is a point where SNR stops improving though, and this happens when the non-star part of the image begins to saturate. This point is the upper limit, while the “optimal exposure” point provides the lower limit. That’s why I’d like to see both numbers presented, so that users could better understand the range and set accordingly, while also taking into account the other factors that you mention (guiding errors, clouds, etc). Calling the lower limit a “minimum acceptable exposure” also makes it clear that going below this number can lead to bad (unacceptable and easily preventable) noise.

Short exposure CMOS imaging is getting a lot of attention, and I see many people making the mistake of thinking it’s pointless to expose beyond the lower limit, and also mistakenly applying short exposures to NB imaging. I think the naming convention contributes to this problem.
AndyM274 0.00
@Mark Petersen it said that my NB should be 147sec.
Back in the day I was schooled by some of those that transitioned from film astrophotography into the modded DSLR space, who were adamant that longer subs were better.
Then I found Dr Glover’s talk on this very subject. It went against everything I was taught previously, but there you go.
I guess at worst case I could go longer for each LRGB channel for the nebula, bin the stars off them, & collect stars separately at the recommended exposure levels.
AstroNikko 3.61
·  1 like
Mark Petersen:
There is a point where SNR stops improving though, and this happens when the non-star part of the image begins to saturate. This point is the upper limit, while the “optimal exposure” point provides the lower limit. That’s why I’d like to see both numbers presented, so that users could better understand the range and set accordingly, while also taking into account the other factors that you mention (guiding errors, clouds, etc). Calling the lower limit a “minimum acceptable exposure” also makes it clear that going below this number can lead to bad (unacceptable and easily preventable) noise.

Been trying to reconcile your reasoning with my current understanding of the optimal exposure time.  As I see it, the problem with making saturation of the background the upper limit for an "acceptable exposure" time is that by that time you have long oversaturated the brightest parts of your image.

If my understanding is correct, what the "optimal exposure" is intended to do is establish the point at which you begin to see data in the background above the noise, and at which the brightest points of light are sufficiently saturated. Beyond that, bright points of light would continue to bloat and blow out.

It seems there's little point in exposing longer than the "optimal exposure", beyond what is convenient, because the results after stacking are basically the same with respect to the amount of light gathered when targeting the same total integration time.

I went poking around to find out more about the "read noise limit" because I didn't quite understand its impact on the Smart Histogram results, and found this post in the SharpCap forum. In a response to that thread, Dr. Robin Glover linked to a lengthy thread where he basically breaks down how the Smart Histogram works. It's worth a read, especially after watching his talks on Deep Sky Astrophotography with CMOS Cameras and Choosing the right gain for Deep Sky imaging with CMOS cameras.
skybob727 6.08
·  1 like
Stuart Taylor:
Yes, I must admit that I have never found these tools to be very useful. I generally centre my chosen target, check focus, then try out a few exposures (depending on how bright the object is) and when one looks right I just use that in the sequence. It's all very scientific.

I also think this is the best way and is what I always do. Within just a few minutes I can tell just how long I can go before I saturate a bright nebula or a galaxy core; stars are done separately to add later. I hear so much about how little time some of you have to image. I don't know how long these exposure calculators take, but it sounds like they take valuable imaging time away from your night.
AstroNikko 3.61
Running the Smart Histogram usually only takes a few minutes to spit out a result.

It can take a bit of work ahead of time though, as the Smart Histogram requires a sensor analysis of your camera. SharpCap comes preloaded with sensor analysis results for some of the more popular cameras, but not for my Player One cameras or my QHY268C. For the QHY268C, I needed to run the sensor analysis for each of the read modes.
mgutierrez 1.43
Björn:
Hi Andy,

The data seems plausible, assuming that you have a fairly bright sky. Since we have a brightly illuminated moon, the measurement of the sky flux is probably biased. If you lived under a Bortle 2 sky, you’d get much higher values for the recommended exposure time.

However, as a general remark: at gain 200, the read noise is pretty low (seems to be around 1.5e) and therefore the minimum exposure time becomes much lower, compared to cameras with much higher read noise (the relationship is quadratic).

Björn

completely agree
Avjunky 0.90
·  1 like
@Nikkolai Davenport Thanks for the links, I'll give them a deep dive later today.   First let me say that I'm a fan of Dr. Glover's efforts. I've watched both of the video presentations you linked and examined his sensor analysis tool in Sharpcap and it's great stuff.   It's technically sound and is a fantastic resource.  It enables users to come up with an exposure that will consistently deliver good results across a mix of equipment and sky gradients.  It also allows users to get the most out of their equipment.  So my comments are not to be taken as criticism of the analysis and results.  My sole point was that the terminology of optimum exposure could be improved.  

I did skim through the summary at the end of the lengthy thread and noticed that the takeaway is the same as in the video - that a 5% noise target gives a very good exposure duration that gets the user to the stable (less noisy) part of the SNR curve. He does go on to show that there are improvements beyond that point. My point is that since going beyond that point yields improvements, calling the noisier, shorter exposure "optimum" is confusing. Since the user is in effect choosing their acceptable noise level (5%), calling it "minimum acceptable" is arguably a better term. "Optimum" to me means that it works like, say, the electrical timing in a car engine - going below this number has disadvantages, as does going above it. But here "optimum" gives good results, while going longer gives better results (even if those improvements are small and diminishing).
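For reference, the connection between that 5% "extra noise" target and the roughly 10x swamp factor mentioned earlier in the thread falls out of adding read noise in quadrature with the sky noise. A quick sketch of the algebra (my own working, not lifted from SharpCap):

```python
def swamp_factor(extra_noise_fraction):
    """Sky electrons per sub required, in units of read_noise**2, so that total
    noise exceeds purely sky-limited noise by at most the given fraction.

    sqrt(1 + RN**2 / sky) <= 1 + p  =>  sky >= RN**2 / ((1 + p)**2 - 1)
    """
    p = extra_noise_fraction
    return 1.0 / ((1.0 + p) ** 2 - 1.0)


print(f"5% noise target -> swamp factor ~ {swamp_factor(0.05):.1f}x")  # ~9.8, i.e. ~10x
print(f"2% noise target -> swamp factor ~ {swamp_factor(0.02):.1f}x")  # ~24.8x
```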

It's also important to recognize, as Dr. Glover also mentions, that this analysis is targeted towards the dark, noise-limited part of an image. Shot noise and read noise are much less important in the bright parts of an image such as stars; here the noise is in the noise lol. Since stars can be processed separately, courtesy of new tools like Starnet and StarXterminator, they can also be acquired separately and the analysis is different: here maximizing dynamic range is key. Similarly, as Dr. Glover mentioned, with NB imaging long exposures are still needed, and with say 3nm filters the upper exposure limit will typically be lower than the "optimal exposure" value.

Having said all of that -

re: "As I see it, the problem with making saturation of the background the upper limit for an "acceptable exposure" time is that by that time you have long oversaturated the brightest parts of your image". 

Well, not exactly, because I suggested ignoring stars (since these can be processed separately) and then stopping before saturation of any of the diffuse stuff.

re: "It seems there's little point in exposing longer than the "optimal exposure", beyond what is convenient".

I don't disagree. But in this hobby we often see examples where folks go to great lengths to get the best possible image of an object, including multi-year, 40+ hr long exposures. I wouldn't say that those who go overkill are necessarily wrong in doing so.
Avjunky 0.90
Bob Lockwood:
Stuart Taylor:
Yes, I must admit that I have never found these tools to be very useful. I generally centre my chosen target, check focus, then try out a few exposures (depending on how bright the object is) and when one looks right I just use that in the sequence. It's all very scientific.

I also think this is the best way and is what I always do. Within just a few minutes I can tell just how long I can go before I saturate a bright nebula or a galaxy core; stars are done separately to add later. I hear so much about how little time some of you have to image. I don't know how long these exposure calculators take, but it sounds like they take valuable imaging time away from your night.

If it works, it works lol. The value in your case isn't to improve on your already excellent results, but it would be informational. I'd be curious to see what it comes up with across your varied setups, especially your ICX CMOS camera and your CCD. For what it's worth, it's just a short exposure to measure the sky gradient, so the process is very quick. You'll need some sensor information before doing this (read noise at your gain setting); this can be obtained from SharpCap or the manufacturer's spec.
barnold84 10.79
·  2 likes
Mark Petersen:
I don't disagree. But in this hobby we often see examples where folks go to great lengths to get the best possible image of an object, including multi-year, 40+ hr long exposures. I wouldn't say that those who go overkill are necessarily wrong in doing so.

In his presentations, Robin Glover emphasizes that his goal is to maximise dynamic range (w.r.t. the captured data - the dynamic range of the sensor itself is exposure-time independent and given by full-well depth over read noise). Just a general remark to avoid possible confusion.

Why do those much longer exposures work as well?
Simply because most objects don't need that dynamic range. For most deep sky imaging, only stars saturate. For example, doubling the "optimal" exposure time will add several more saturated stars, but for most targets it won't clip anything on the object itself.
However, for bright objects (e.g., the Andromeda Galaxy, the Orion Nebula), the optimal exposure time makes much more sense: you expose in such a way that the read noise in the sky background is "controlled" but without overexposing the brightest parts of the object (HOPEFULLY!). As we all know, M42 is a good candidate where this is likely not going to work. As soon as the dynamic range of the object exceeds the dynamic range of the sensor, the method will overexpose the brighter parts.

Björn
Avjunky 0.90
·  1 like
Björn:
Mark Petersen:
I don't disagree. But in this hobby we often see examples where folks go to great lengths to get the best possible image of an object, including multi-year, 40+ hr long exposures. I wouldn't say that those who go overkill are necessarily wrong in doing so.

In his presentations, Robin Glover emphasizes that his goal is to maximise dynamic range (w.r.t. the captured data - the dynamic range of the sensor itself is exposure-time independent and given by full-well depth over read noise). Just a general remark to avoid possible confusion.

Hi @Björn. Hmmm, my takeaway is a little different. His treatise is all about read noise, and when he talks about maximizing dynamic range it's given the constraint of minimizing read noise first. As you correctly point out, the maximum dynamic range of a sensor happens at gain 0, which utilizes the full well depth. But Dr. Glover never suggests using gain 0. He goes more into gain settings in his second video, but notice that he never even shows charts and graphs at gain 0 (most start at gain 100 and go up), and I assume he does this because the read noise at gain 0 would be generally awful.

Let's look at a specific example - an IMX455 (which has very low read noise). Its maximum dynamic range (~14 stops) happens at gain 0, which is also where it has its largest well depth at 51Ke, but it has its highest read noise here at 3.5e (3.5^2 = 12.25). At gain 100 the read noise drops to 1.5e (1.5^2 = 2.25), almost a 6x improvement in noise contribution (read noise contributes as its square); its full well depth is smaller at ~18Ke, and its dynamic range is only slightly lower than at gain 0 (~13.5 stops). So for dark objects where noise is a consideration, gain 100 is what most folks will target for their sub-exposures. But with bright objects like stars, the noise sources - shot noise and read noise - are much less of a concern. This is for the same reason we don't see noise in flats. Here maximum dynamic range is everything, and so using gain 0 makes sense.
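As a quick sanity check on those figures, dynamic range in stops is just log2(full well / read noise); plugging in the numbers quoted above for the IMX455 (nothing here beyond that arithmetic):

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops (powers of two): log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)


# Figures quoted in the post above for an IMX455-class sensor.
print(f"gain 0  : {dynamic_range_stops(51_000, 3.5):.1f} stops")  # ~13.8
print(f"gain 100: {dynamic_range_stops(18_000, 1.5):.1f} stops")  # ~13.6
```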

"Why do those much longer exposure work as well?"

Well Dr. Glover mentions that exposures that are longer than his optimal calculation will be better, albeit with diminishing returns that become influenced by other factors such as guiding and weather.  But I think the main reason folks use longer exposures is because 1) NB requires it and 2) it's a hold-over from CCD imaging.   Dr. Glover makes this point best in his presentation - "This explains at a stroke why extremely long sub-exposures have become common in deep sky imaging - it's because high read noise CCD cameras need them! It also makes it clear that if you have a low read noise camera and you use those very long exposures then you are still paying the price for a problem you don't have to solve!"
Avjunky 0.90
I should have also mentioned the third reason why folks take longer exposures, and this goes back to my earlier point: they are chasing the diminishing-returns part of the graph to get the best possible / deepest image achievable. These are the folks with expensive mounts where tracking and guiding are much less of an issue, where weather and seeing allow it, and where the sky gradient is less of a problem.
 