---
Your table applies only to your specific combination of aperture and sensor, so you can't draw a final conclusion from it. But yes, in general for AP we work in a strongly non-linear space, and a moderate amount of highlight clipping isn't seen as a bad thing, all told, if it applies to stars (and maybe some galactic nuclei). If you were to do photometry (and to a much lesser extent spectroscopy), however, you would rely on both linearity and the maximum FWC.
---
Christian, the dynamic range is what matters. As you've noted, you can have the same DR with different FWCs if you can tune the read noise.

Through dynamic range, our goal is to set different spatial locations into relationship (e.g., the brightness of star A vs. star B, or a spot of a nebula). If we want to capture that relationship properly with respect to intensities, then our measurement system (sensor plus all downstream electronics and software) needs to 1. capture both without clipping anything and 2. keep the weakest signal at a level where it can be measured appropriately against the noise. (Obviously, for nice color images, stretching throws the "exact" measurement out of the window unless you know the intensity transformation and can calculate backwards... which I wouldn't.)

Another way to think about it: why do you expose for several minutes? Because you're hunting the faint stuff (the gas shell of a planetary nebula). Since you may only have a few electrons per hour (!!) generated in a pixel looking at the faint part of the field of view, the noise contributions (e.g., read noise) may drown your signal, not to mention everything else drowning faint signal, like light pollution and a full moon. Unfortunately, the pixels looking at the bright star nearby got filled with photons and electrons. So if your goal is to capture the faint signal (which fights the noise "at the bottom of the well") and still put it in a realistic intensity relationship with the bright star, it only helps to make the well deeper. The alternative approach is to use exposures short enough not to saturate the star and capture an enormous number of frames. However, due to the many readouts, the faint object will need more integration time to reach the same SNR you would achieve by sticking with the long exposure times.

Regarding spectroscopy: a spectrometer disperses the object's signal over the sensor, so even a saturated star can produce a dim spectrum. Dynamic range plays a role here too, since one wants to measure the relative intensities of the spectrum now dispersed over the sensor, which leads to basically the same idea as in the paragraph above.

Björn
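The "many readouts cost you SNR" point can be sketched numerically. This is a rough model, not measured data: the signal rate, sky rate, read noise, and total integration time below are all assumptions chosen for illustration.

```python
# Sketch: stacked SNR for a faint target, one long-sub strategy vs. many
# short subs over the same total integration time. All numbers below are
# illustrative assumptions, not measured values from any camera.
import math

def stack_snr(signal_rate, sky_rate, read_noise, sub_len_s, total_s):
    """SNR of the stacked image for a given sub-exposure length.

    signal_rate and sky_rate in e-/s per pixel; read_noise in e- RMS per read.
    """
    n_subs = total_s / sub_len_s
    sig_per_sub = signal_rate * sub_len_s
    # Shot noise from target + sky, plus one read-noise hit per sub
    var_per_sub = sig_per_sub + sky_rate * sub_len_s + read_noise ** 2
    return n_subs * sig_per_sub / math.sqrt(n_subs * var_per_sub)

total = 4 * 3600   # 4 h total integration (assumed)
faint = 0.002      # ~7 e-/h from the faint shell (assumed)
sky = 0.05         # light pollution, e-/s (assumed)
rn = 1.7           # e- RMS (assumed)

print(stack_snr(faint, sky, rn, 600, total))  # 10-minute subs
print(stack_snr(faint, sky, rn, 30, total))   # 30-second subs
```

With these numbers the 10-minute subs come out noticeably ahead, because each extra readout adds its own read-noise variance while the faint signal per sub shrinks.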
---
This is very interesting. I use a 294M, most often in Bin 1, with a very shallow well depth. I have to use 45 s Lum exposures to keep fewer than 200 pixels at peak; at 1 minute it goes up to over 700 pixels. I would love to be able to take 2-minute subs. I’m currently at 911 Lum subs (91 MB each) on my current project, trying to bring out some faint dust.
---
I almost never care about saturated stars unless I want to do photometry. I (and arguably many other astrophotographers) want to image faint nebulas, and this requires extreme contrast stretching during processing. Under such stretching, even stars that are far from saturation in the raw files can have white cores in the final picture. So why bother?

Some people worry that saturated stars will not show color. This is rarely the case. A saturated core will not have color, of course. However, for every saturated core pixel, there are many more pixels around it that are not saturated, and those outer pixels can still show beautiful color. Indeed, a star with colorful outer pixels and a less colorful, nearly white core matches how our eyes perceive color (when brightness becomes very high, color fades away). Some processing tools try to make the cores of bright stars as colorful as their fainter outer pixels. That's never my cup of tea.

BTW, CMOS pixels are getting smaller and smaller. A natural consequence is that their full well also decreases. However, the full well per unit area does not necessarily decrease. When a pixel is smaller, the number of photons it receives in a fixed exposure time also decreases, so it doesn't necessarily become easier to saturate. Indeed, from photometry's point of view, small pixels with a shallower full well may help confine saturation to a smaller area (e.g., a saturated 9 um pixel replaced by a saturated 4 um pixel). The larger unsaturated area of a star can then be used to reconstruct the information in the saturated area, making a saturated core less of a problem.

I know the OP is comparing different settings of the same camera. Here I just want to remind people that when comparing different cameras, don't just look at the face value of full well capacity. Pixel size also matters if saturation is a concern.
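The pixel-size point can be made concrete with back-of-the-envelope numbers. The full-well values and the photon flux below are invented for illustration and don't correspond to any specific sensor.

```python
# Sketch: why a smaller pixel with a shallower well is not automatically
# easier to saturate. Full-well values and flux are illustrative assumptions.

def seconds_to_saturate(full_well_e, pixel_um, flux_e_per_um2_s):
    """Time for a pixel to fill, given a photon flux per unit sensor area."""
    area = pixel_um ** 2
    return full_well_e / (flux_e_per_um2_s * area)

flux = 10.0  # e- per um^2 per second from a bright star (assumed)

big = seconds_to_saturate(90000, 9.0, flux)    # 9 um pixel, 90 ke- well
small = seconds_to_saturate(20000, 4.0, flux)  # 4 um pixel, 20 ke- well

print(big, small)
# The small pixel collects ~5x fewer photons but holds ~4.5x fewer
# electrons, so the two saturation times end up quite close.
```

What matters for saturation is full well per unit area, not full well per pixel, which is exactly the reminder in the paragraph above.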
---
Well, the scope is outside now and I’ve switched from 45 s at gain 200 (basically 0 on the ASI version) to 180 s at gain 1600 (ASI gain 120). 10 hours of 45 s subs has only got me the faintest impression of some dust, so time to try something different and learn something. If this works out I may have to reshoot the RGB as well. But on the bright side, I have 10 hours of Lum and 4.5 hrs of RGB with perfect stars and galaxy cores (M81/82) that I can mix into the final image.
---
Hello, maybe I failed to point out that I only do "pretty picture" astrophotography and no photometry - probably like most of us. And of course my results only apply to my individual setup - although if your setup is somewhat similar, you might derive similar conclusions. No intention to draw a general conclusion for everybody!

@Joe Linington : You might want to use a dual approach: 1. Take long subs at high gain for the faint nebula (long meaning as long as your mount, guiding and light pollution can handle), and remove the stars from that image during post-processing. 2. Use the short subs for the stars only and insert those into your deep image. Maybe 2-minute subs are too short for getting the IFN around M81, unless you are using a Hyperstar f/2 telescope. You might want to check out my Leo Triplet with tidal tail: https://www.astrobin.com/2vwq9x/ I even used 10-minute subs here with my f/5.6 setup (no saturation of the galaxies at this exposure length and High Gain Mode / Gain 56!).

Another - purely practical - thing about short exposures: you'll end up with a terribly large amount of data. One sub from my QHY600 comprises 120 MB, and for standard deep sky I shoot somewhere around 30x 2-minute exposures each of R, G and B, plus 60-100 3-minute exposures of L. With that alone I'll end up with about 22 GB of data. Multiply that by 3 for a short-exposure approach and you will fill up a 1 TB hard drive within a year. Not to mention the much longer processing time in stacking / pre-processing...

CS Chris
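The storage arithmetic works out roughly like this. Sub counts and file size are the ones quoted in the post; the projects-per-year figure is my own rough extrapolation.

```python
# Sketch: rough storage estimate for the session described above
# (QHY600, ~120 MB per sub; sub counts are the ones quoted in the post).

mb_per_sub = 120
rgb_subs = 30 * 3          # 30 subs each for R, G and B
lum_subs = 80              # 60-100 three-minute L subs; take the middle

total_gb = (rgb_subs + lum_subs) * mb_per_sub / 1024
print(round(total_gb, 1))  # roughly 20 GB for one deep-sky target
# Tripling the sub count for a short-exposure approach triples this,
# so a 1 TB drive fills after roughly 16-17 such projects.
```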
---
Hi Chris, I might have missed this, but how are you determining that a star is saturated? Is it when the first pixel in the star core saturates, or are you considering a percentage of the entire disk? It would be interesting to know whether a higher FWC produces stars with a smaller diameter of saturation; in that case it might be more meaningful to the astro-imager. You touched on data management: for my part, I try to take the longest exposures I can, both to reduce the number of subs I need to store and process and to minimize the sky time lost to downloading and dithering. I've always assumed that EXT Mode at Gain 0 was best for broadband and High Gain at Gain 56 was best for narrowband (QHY600). My assumptions might be unfounded though, as you point out.
---
Chris White- Overcast Observatory: Chris, in Fitswork - which is free software - you can draw a rectangle over a star or a certain area and ask for a pixel evaluation. It provides the minimum and maximum pixel values, as well as the mean and the standard deviation. So I did not look for a certain portion of the stars' PSF to be clipped; I simply checked the maximum value in my area of interest. Since I had calibrated all my frames beforehand, there were no hot pixels left. Values are provided in the original 16-bit scheme - maximum possible value 65535. CS Chris
---
Christian Koll: Doesn't this answer the title question from one perspective? I also think it would be better to consider readout mode 3 - Extend Fullwell 2CMSIT. Nearly the same max full well but lower read noise, which makes Photographic Mode rather pointless to ever pick for AP.

In any case, goal number one is to take the shortest sky-limited sub-exposure, putting yourself on the maximal SNR-per-unit-time path for faint detail while not wasting any DR by saturating pixels for no good reason. That minimum sub length will always be proportional to the read noise squared, given the sky conditions. Therefore, when comparing two options for full well / read noise, the increase in FWC needs to be great enough to make up for the increase in read noise squared and the relative gain, all else being equal, to result in an actual DR improvement. At 5.9 e- read noise for mode 3, gain 0 vs. 1.7 e- at mode 1, gain 56, my subs in mode 3 would need to be at least 12x longer. 7145 e- would saturate a pixel in mode 1 (gain is 0.33 e-/ADU). Since mode 3, gain 0 is 1.3 e-/ADU, you start to beat mode 1 if the FWC is greater than 7145 / 1.3 * 12 = 65954 e-.

Now be warned: I have a QHY268M, and mode 3, gain 0 actually saturates short of 2^16-1 for some reason. I have to raise the gain to 22 (unity) to avoid this issue. That means the maximum full well is really only 65535 in practice. I'm not sure if that happens with the 600M, but if it does, then it's a wash (even a slight loss), because the read noise is basically flat in mode 3 regardless of gain.

In short, if the increase in FWC is > RN_high^2 / RN_low^2 * gain_low / gain_high, then you're better off switching to the higher-FWC, higher-read-noise option. I mentioned before that Mode 0 - Photographic Mode was pointless. That's because this formula implies it would need 75% more FWC than mode 3 @ gain 0 to make sense, and it clearly doesn't. It would need 34% more than mode 3 @ unity, and it doesn't.

That said, practical considerations also matter. I might be willing to accept less dynamic range in trade for fewer subs, which makes readout mode 3 attractive, if sub-optimal from a DR + sky-limited perspective. With a fast system, High Gain Mode would lead to thousands of subs at the minimally sky-limited sub-exposure time, and integration could take an unbearably long time and a lot of resources.
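The "at least 12x longer" figure follows directly from the read-noise-squared scaling of the minimally sky-limited sub length. A minimal sketch, using only the read-noise values quoted in the post:

```python
# Sketch: how much longer a sky-limited sub must be when read noise rises.
# To keep read noise swamped by sky shot noise to the same degree, the
# minimum sub length scales with read noise squared (values from the post).

def min_sub_ratio(rn_high, rn_low):
    """Factor by which the sky-limited sub length grows with read noise."""
    return (rn_high / rn_low) ** 2

ratio = min_sub_ratio(5.9, 1.7)  # mode 3 / gain 0 vs. mode 1 / gain 56
print(round(ratio, 1))           # ~12x longer subs needed
```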
---
Nice experiment, Christian. Always very informative to see actual data on topics that seem pretty straightforward at the surface.

From your first sheet, it appears that Gain 0 takes 4 times as long to saturate as Gain 56. This is in line with expectations. The pixels don't shrink in capacity; they just reach their maximum ADU value sooner because the signal gets amplified. That's good news for weak signals, bad news for star centers. But the most important parameter is dynamic range. Modern sensors switch to different analogue circuitry on the chip in High Conversion Gain mode, which causes a sudden drop in read noise. DR is a function of FWC and read noise, and the result is an almost equal DR compared to Gain 0. So with almost no penalty you can switch between Gain 0 (good for high signal) and Gain 56 (good for low signal). That is what makes these modern sensors so interesting. Which gain to use depends on your target: for star clusters I typically use Gain 0, whereas for a faint galaxy I use the equivalent of Gain 56.

Things are different when you're comparing different systems. When I started this hobby I bought the ASI1600, which has a FWC of 20k. My much newer ASI6200 has a FWC of 51k. That is a 2.5-fold increase in FWC, and it is a pure win, no trade-offs: all the values you mention get better by a factor of 2.5. Another aspect to take into account is pixel scale. Your system has a pixel scale of almost 1.5. If you put the same camera on a 1000 mm scope, the pixel scale is about 0.8. Your scope concentrates about 4 times as much sky on the same pixel, and thus saturates pixels 4 times faster; that is why it's called a 'faster' scope. That's good for faint signal, not so good for star centers. So an ASI6200/QHY600 on a 1000 mm scope at Gain 0 will saturate stars 4*4*2.5 = 40x slower than an ASI1600 on a 550 mm scope at Gain 56. Both are very common scenarios, but with very different outcomes.

A whole different question is how bad it is to have the center pixel of a star clipped. The brighter the star, the more pixels it covers, and they show a Gaussian distribution in brightness around the center point. The way we experience that star is through the whole disc. So as long as the majority of pixels in that disk are not saturated, we're good, and we can process, deconvolve, colorise, etc. So to answer your question: yes, FWC does matter. More is always better, but a clipped star center is difficult to eliminate completely. Not all systems are created equal, though, and clipping does not have to occur within seconds.
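For anyone wanting to check the 40x estimate, the three factors multiply out as follows. All numbers are the ones quoted in the post; each is a ratio of times-to-saturation.

```python
# Sketch: unpacking the "40x slower to saturate" estimate above.
# Each factor comes from the post and multiplies the time to saturation.

gain_factor = 4      # Gain 0 vs. Gain 56: ~4x longer to saturate
fwc_factor = 2.5     # ASI6200 (51 ke-) vs. ASI1600 (20 ke-)
scale_factor = 4     # ~0.8"/px vs. ~1.5"/px: ~4x less sky per pixel

print(gain_factor * fwc_factor * scale_factor)  # 40.0
```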
---
Christian Koll: So it sounds like you are just looking at the time for the first pixel in a star core to saturate. If so, it ignores how much of the core is ultimately clipped in normal DSO imaging. In my experience, a smaller FWC clips a larger area within the star. My 6120 had only a 9000 e- FWC, and it clipped stars like it was its job with certain scopes; the same scopes with a deeper-well camera clipped far less of the star. The same would apply to different gains and modes that affect FWC. It would be an easy test to see whether your calculated conclusions produce a satisfactory result. If so, you have your answer. Of course this is a subjective criterion.
---
I'm not certain where or when you were told that a read mode with a high full well capacity was preferable. For broadband imaging, I tend to prefer whatever mode provides the greatest dynamic range, not whichever has the highest full well capacity. In the case of the QHY600, that is Mode 3 / Gain 0. As you pointed out, though, dynamic range is very similar among Mode 0 / Gain 0 ("Photographic Mode"), Mode 3 / Gain 0 ("Extended Fullwell 2CMS"), and Mode 1 / Gain 56 ("High Gain Mode"). Any of the three would provide comparable results for broadband imaging, just with different sub-exposure durations. I tend to choose Mode 3 / Gain 0 simply to reduce the total number of subs I'm taking, since I'm imaging at f/3.8 and in High Gain Mode I would have an awful lot of very short exposures, using more storage and requiring additional processing time.

For narrowband, I lean the other way. I use Mode 1 / Gain 56, since I'm only just starting to swamp read noise at five minutes in that mode, and I would need fifteen-minute or longer exposures to keep read noise from damaging image quality if I used one of the higher full well capacity modes.

Aside from narrowband, I can choose a sub-exposure duration that saturates whatever number of stars I deem "acceptable" for a given subject and get substantially similar results with Mode 0, Mode 1, or Mode 3. Technically there is a slight edge in dynamic range with Mode 3, but it's pretty small.
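The five-vs-fifteen-minute narrowband figures are consistent with the sub length scaling as read noise squared. A sketch, assuming ~3.0 e- read noise for the extended-fullwell mode; that value is my assumption for illustration, not a figure from the post.

```python
# Sketch: why higher-FWC modes push narrowband subs from five toward
# fifteen minutes. Keeping read noise equally swamped by the faint
# narrowband sky signal means sub length scales with read noise squared.

rn_high_gain = 1.7  # e-, Mode 1 / Gain 56 (quoted earlier in the thread)
rn_ext_fw = 3.0     # e-, assumed value for an extended-fullwell mode
five_min = 5.0      # minutes, the sub length quoted for Mode 1 / Gain 56

needed = five_min * (rn_ext_fw / rn_high_gain) ** 2
print(round(needed, 1))  # ~15.6 minutes
```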