Bias versus flat dark frames · [Deep Sky] Processing techniques · Andy Wray

andymw 11.01
I'm relatively new to all the pre-processing stuff, but I had one question:

Do most people on here take flat dark frames and darks for the lights and not use bias frames at all?  That seems to be what I read on-line and seems to make sense, but just thought I would ask?
si-cho
·  2 likes
Hi. When I use my DSLR camera I do use bias frames (and no dark flats), but with dedicated (cooled) OSCs I only use darks, flats and dark flats. It works for me, so I don't change my routine. I have read a number of opinions, but in the end I use what works best for me and don't question the process any further.
andreatax 7.42
·  3 likes
I use bias frames for all my work; always have, always will. If you are using flat darks, that will probably work just as well for a CMOS camera. It won't do for CCDs though: you'll need biases.
si-cho
andrea tasselli:
That won't do with CCDs though. You'll need biases


That's a fair point. I'm close (still in doubt) to getting a CCD camera, and as you said, it will need bias frames. My concern is that the model I have in mind (QHY10) works as in the old days: you must follow a strict routine when connecting the camera and reverse the order when disconnecting... so it seems you must not make mistakes with these sensors, on top of keeping bias frames in your calibration library.
dkamen 6.89
·  2 likes
There is no difference between flat darks and bias with my D7500. They produce pretty much the same master, which by the way is very close to zero almost everywhere and has no noticeable effect on flat-field correction. I feel a little silly for using it, but it doesn't cost me anything.

There is a difference with the ZWO 178MM and MC, where I take dark flats.

The thing is, flats are usually short: less than a second. At such short durations thermal signal is not really a factor, and both kinds of subs contain essentially the same readout noise (which, by the way, is quite high on CCDs).

The longer your flats though, the more sense it makes to use dark flats, because dark current, hot pixels and amp glow become significant. If your flats are longer than 3 seconds (not unusual in narrowband, or if your flat source is too dim) you will probably start seeing those things, and then it's time for dark flats. This holds regardless of sensor tech (CCD or CMOS), although on CCDs the read noise is higher, so simple bias frames typically keep working for longer flats than on CMOS.

In some CMOS cameras, including my ZWO 178, the circuitry does not behave consistently at very short exposures, so you naturally gravitate towards longer flats (and dark flats).

You can test what works best for you. 

Cheers,
Dimitris
Krizan 5.73
It depends on the sensor/chip. Typically, Sony HAD CCD chips do not need dark frame subtraction at all. Dark frame subtraction can actually add noise to the sub frames with those chips; only bias frame subtraction is used, for all types of sub frames. I have used Starlight Xpress CCD cameras since 2005. The older Sony chips, such as the one in the HX-916, did require dark frame subtraction, but from the H9 on they did not.

Kodak CCD chips have always required dark frame subtraction with the image subs. I now use a Starlight Xpress Trius SX46 (Kodak KAF 12600). I use dark frame subtraction on image subs but not flat frames. Dark noise has always been higher with the Kodak chips. They are far inferior to the Sony CCDs, just bigger: if you wanted a large CCD chip, you had to go Kodak. Now the Sony IMX455 CMOS is changing all that.

However, as the post above states, flat frames are usually shorter than a second and do not require dark frame subtraction, at least with the Sony HAD CCD chips, despite their higher read noise than CMOS. The bias takes care of that.

As far as CMOS goes, I have no experience other than DSLRs and planetary video.

Lynn K.
Menthos 0.00
On both my ASI294MC Pro and my ASI2600MM Pro, it makes no difference whether I use bias or dark flats.
Linwood 5.76
·  3 likes
There is an aspect to this which may be getting lost. 

A bias and a dark flat are essentially the same thing on modern CMOS cameras, since flats are taken at short exposures, short enough that there is little difference in noise.

But the USE of a bias and a dark flat may be different, especially with regard to lights and dark scaling. Dark scaling (sometimes called "optimize", a really bad term) is when you match a dark of, say, 600 seconds to a light of 300 seconds by dividing it in half. Except that to divide just the dark signal in half, you have to subtract a bias first (from both light and dark). Think of the bias as the intercept on the y-axis of a linear equation: to make the dark signal proportional to exposure time, the line has to pass through the origin, and subtracting the bias is what achieves that.
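That subtract-then-scale arithmetic can be sketched in a few lines (a minimal NumPy sketch; the BIAS and DARK_RATE values are made up for illustration, not any real camera's figures):

```python
import numpy as np

rng = np.random.default_rng(0)

BIAS = 500.0       # constant electronic offset in ADU (made-up value)
DARK_RATE = 2.0    # dark current in ADU per second (made-up value)

def simulated_dark(t, shape=(8, 8)):
    """Toy master dark: bias offset plus dark current proportional to time."""
    return BIAS + DARK_RATE * t + rng.normal(0.0, 0.1, shape)

dark_600 = simulated_dark(600.0)         # library dark taken at 600 s
target_300 = BIAS + DARK_RATE * 300.0    # what a matched 300 s dark averages to

# Naive scaling halves the bias along with the dark current: wrong.
naive = dark_600 * (300.0 / 600.0)

# Correct scaling: subtract the bias first, scale only the dark current,
# then add the bias back.
scaled = BIAS + (dark_600 - BIAS) * (300.0 / 600.0)

print(abs(naive.mean() - target_300))    # large error, roughly BIAS / 2
print(abs(scaled.mean() - target_300))   # close to zero
```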

I have had awful luck trying to use dark frame scaling. It is much easier to just build a dark library with all the exposure times you use (or, conversely, only use exposure times you have darks for). Without optimization/scaling there is generally no need for bias (I have also heard of people doing temperature scaling and using bias for that; I have no experience with it, so I won't say "no need" firmly, but mostly there is none).

Just remember, for short exposures on CMOS cameras, bias and dark flats are essentially identical frames; the difference is only in how they are applied in calibration.

One more thing merits mentioning: subtracting a bias from an (almost identical) dark flat yields values far too close to zero to be meaningful, since a large portion gets truncated to zero. That is why you hear people say not to use bias and darks to calibrate flats at the same time, even though they then use bias and darks for lights. This presumes (and it is not always true) that the light is a long enough exposure that the corresponding dark DN values are much larger than the bias, so the math works.
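The truncation effect is easy to demonstrate numerically (a hypothetical simulation; the offset and read-noise values are made up but plausible):

```python
import numpy as np

rng = np.random.default_rng(7)

# A dark flat and a bias frame that differ only by read noise:
# offset 500 ADU, read noise ~3 ADU.
bias      = rng.normal(500.0, 3.0, 100_000)
dark_flat = rng.normal(500.0, 3.0, 100_000)

# Calibration in unsigned-integer data clips negative differences to zero.
diff = np.clip(dark_flat - bias, 0, None).astype(np.uint16)

print((diff == 0).mean())   # over half the pixels end up exactly 0
print(diff.mean())          # the surviving pixels drag the mean above 0
```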

Bias: Just say no (for modern cameras).
andreatax 7.42
Linwood Ferguson:
Bias: Just say no (for modern cameras).

Modern CCD cameras won't oblige, I'm afraid; they still need biases.
Linwood 5.76
andrea tasselli:
Linwood Ferguson:
Bias: Just say no (for modern cameras).

Modern CCD cameras won't oblige, I'm afraid; they still need biases.

Sorry, should have said CMOS.  

That presumes of course there are modern CCD cameras.       <<< Joke, joke. <<< don't send the drones after me!
andreatax 7.42
Ah! They were already spinning ready for take off...
tim@the-hutchison-family.net 12.30
·  29 likes
I almost didn't want to post this message because I'm sure it will draw some negative responses and strong opinions. But in the end I just wanted to make sure that everyone has a clear understanding of what is going on.

I am an electrical engineer and have experience with electronics. Perhaps that credential will lend some credibility to this post.

There are multiple types of noise that we deal with in ap, but here we are talking about 2 types of noise. 

Bias noise is noise introduced into the image by the camera itself. Typically the electronics of the camera introduce interference into the circuitry, resulting in noise when the signal is read. 

Dark current noise is caused by heat that builds up in the sensor as it is being read. This noise is dependent on the exposure time (more heat builds up over time) as well as the ambient conditions. 

There are other types of noise, but we'll leave them out of this discussion.

Both of these types of noise are present in every image that we take.  EVERY image! 

What is interesting is that, for most CCD cameras, the bias is largely the same across the range of operation, and the dark current tends to scale very linearly with exposure time if held at a consistent ambient temperature. CMOS behaves differently: the dark current doesn't scale linearly (it's closer to exponential), and the bias isn't always consistent either. 

Regarding CCDs, because of the behavior of the bias and dark current, it became commonplace to extract the bias signal in a separate master so that it could be subtracted from the master dark. Once you subtract the master bias from the master dark, the master dark is left with only the dark current noise, that can be scaled based on exposure time. So, a single master dark could be used on both the light frame and flat frame. You would subtract the bias, and the scaled, bias subtracted master dark, from the light frame and from the flat frame. It was kind of a processing shortcut made possible by the way the CCD sensors behave.

CMOS sensors (the majority of them) are not so well behaved. Both the changes in bias with exposure, brightness, etc., and the non-linear way dark current changes with exposure time made this impossible.

So for CMOS, it became common and recommended to take a flat dark whose temperature and exposure length matched your flat frame and use it to calibrate your flat frame, and to take a dark whose temperature and exposure match those of your light frame and use that to calibrate your lights, taking no bias frames at all for either. This does not mean that there is no bias correction! It just means that the bias AND the dark current for that particular exposure/temperature are present in the master dark associated with each type of sub.

Taking and building a master flat dark to calibrate your flats, and taking and building a master dark to match your lights and ignoring bias completely will always work for both CMOS and CCD sensors. Again, this is because the appropriate bias signal will always be present in the master dark. By ignoring the bias frame we are just using that bias signal that is already present in the master dark.
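As a minimal sketch of that matched-dark workflow (NumPy; the toy frame values and the unity-mean flat normalization are illustrative assumptions, not anyone's actual pipeline):

```python
import numpy as np

def calibrate_light(light, master_dark, master_flat, master_flat_dark):
    """Matched-dark calibration with no bias frames anywhere.

    master_dark matches the light's exposure/temperature and
    master_flat_dark matches the flat's; the bias signal is removed
    implicitly because it is contained in both master darks.
    """
    flat = master_flat - master_flat_dark     # bias + dark removed from the flat
    flat_norm = flat / flat.mean()            # unity-mean flat field
    return (light - master_dark) / flat_norm  # bias + dark removed, then flattened

# Toy 2x2 frames (made-up numbers) just to show the arithmetic:
vignette = np.array([[1.0, 0.8], [0.8, 1.0]])      # optical light fall-off
light = 100.0 + 20.0 + 500.0 * vignette            # bias + dark + sky signal
master_dark = np.full((2, 2), 120.0)               # bias + dark at light exposure
master_flat = 100.0 + 2.0 + 10000.0 * vignette     # bias + small dark + panel
master_flat_dark = np.full((2, 2), 102.0)          # bias + dark at flat exposure

cal = calibrate_light(light, master_dark, master_flat, master_flat_dark)
print(cal)   # every pixel now reports the same flattened sky value
```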

It is very important, if you do this, to NOT scale the master darks. Turn off the "optimize" setting in PixInsight and make sure you are using a master dark that was built from frames taken at the same temperature and exposure as the type of frames you are processing.

It is my opinion that this is the appropriate way to calibrate most CMOS sensor images (I would say ALL but someone will find an edge case that I don't know about, so I'll say MOST). It will also work just fine for CCD sensor images, but it is also appropriate for CCD subs to take the shortcut, use bias frames, and scale the darks. 

I hope that is helpful and clear. 

Best.
Tim
dkamen 6.89
·  2 likes
It says something about the level of discussion on Internet forums that anyone would hesitate to post something like this for fear of causing strong negative responses.
kuechlew 7.75
·  2 likes
Tim Hutchison:
...

I hope that is helpful and clear. 

Best.
Tim

Best explanation of image calibration I have ever read. As a beginner in AP I hadn't understood the point of dark flats; now it has clicked. Thanks a lot for this contribution.
It says something about the level of discussion on Internet forums that anyone would hesitate posting something like this for fear of causing strong negative responses

Fortunately I have come to know AstroBin as a very mature learning environment where we don't engage in ego fights. For a beginner in particular, the advice here is priceless. I will certainly increase my membership level in the near future.

Best regards
Wolfgang
andreatax 7.42
From a practical standpoint, having a master bias, master darks and flats makes sense to me. First, because I can decouple the fixed bias pattern from the variable dark current and amplifier glow, if present. With line and point defects in CCDs I can isolate them from the darks, and keep my rather expensive (to produce) master dark library free from revisions (at any fixed temperature point) when the bias pattern changes. And second, because I can scale darks, which I otherwise couldn't do if ever in need.
tim@the-hutchison-family.net 12.30
·  1 like
Thanks a lot for this contribution.


I'm glad you found this helpful. 

Best.
Tim
jhayes_tucson 22.40
·  7 likes
Tim Hutchison:
I almost didn't want to post this message because I'm sure it will draw some negative responses and strong opinions.

...

Best.
Tim

Tim,
I want to clarify and correct a few points here.

First, there are two things that are important to understand when it comes to calibrating an image. The first is signal and the second is noise, and it is VERY important to understand the difference and to use the correct terms. Signal is what we get when we average many measurements. The signal that we are most concerned about is the one that comes from the object itself, but it is important to understand that we also have other (unwanted) signals mixed into a raw image, including both dark and bias signals. The second thing that we get is noise. Noise IS NOT unwanted signal! Noise is the variation in the signal that we measure about the average across many measurements, and it is characterized by the standard deviation of the distribution. Noise is almost always a by-product of signal (read noise is a notable exception). It comes from the quantum nature of light and small particles, and it follows Poisson statistics. So, be careful to distinguish between signals and noise! Calling everything "noise" is not only confusing, it is incorrect. Signals and noise are very different and do not behave mathematically in the same way, which leads to my next point.

Remember that signals can add, subtract, multiply, and divide; but when you add or subtract signals, noise can only add in quadrature. (This is relatively easy to show mathematically, but that's unnecessary here.) I should also point out that when you multiply or divide signals, noise always increases, but there it is the relative (fractional) noises that add in quadrature.
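The quadrature rule for addition and subtraction is easy to verify numerically (a quick Monte Carlo sketch with made-up sigmas):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

a = rng.normal(0.0, 3.0, n)   # signal A with noise sigma_a = 3
b = rng.normal(0.0, 4.0, n)   # signal B with noise sigma_b = 4

# Whether the signals are added or subtracted, the noises add in quadrature:
print(np.std(a + b))   # ~5.0 = sqrt(3**2 + 4**2)
print(np.std(a - b))   # ~5.0 as well: subtraction does NOT cancel noise
```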

When we measure bias, we get the offset that is set by the electronics in the camera, which is typically there to keep the lowest possible output above zero. That offset (which is a signal) creates a little bit of noise. We also get a contribution from read noise and a few other more esoteric sources in a CMOS device. So removing bias serves to subtract any electronic offset from your data, at the expense of increasing the noise in the result by a little bit. The bias offset may be important in the calibration process, since offsets can play havoc when you divide by the flat data. A lot of folks get away without removing the bias offset when calibrating, but only because the offset is so small that it doesn't matter. If the bias signal is not very close to zero, it is not true that "ignoring bias completely will always work for both CMOS and CCD sensors". Yes, the bias signal is in both the image data and the master dark data, but it is also in the master flat data. When you subtract the dark data from the image, you do indeed remove the bias offset; however, it is still in the flat data, which divides the dark-subtracted image.
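A toy numeric illustration of that last point (all numbers made up): an offset left in the flat "flattens" the flat field and skews the correction.

```python
import numpy as np

bias = 500.0                         # electronic offset in ADU
vignette = np.array([1.00, 0.50])    # corner pixel receives half the light
flat_signal = 20000.0 * vignette     # true flat illumination
flat_raw = bias + flat_signal        # recorded flat still contains the offset

light_sky = 1000.0 * vignette        # dark-subtracted light (offset removed)

good = light_sky / (flat_signal / flat_signal.mean())  # offset-free flat
bad = light_sky / (flat_raw / flat_raw.mean())         # offset left in flat

print(good)  # both pixels equal: vignetting fully corrected
print(bad)   # residual vignetting: the offset weakened the flat field
```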

Regardless of how it might vary between sensors, the most important thing to understand about dark current is that it is very repeatable for any given exposure time and temperature.  When you do dark calibration, you are subtracting the dark signal and adding noise to the result.  Dark correction only works well when you match both temperature and exposure to your light data.  That can be difficult for DSLR users.

Finally, I am not familiar with the notion that CMOS changes bias levels with exposure time; that runs completely counter to the very definition of bias signal. Bias signal is defined as the signal that you measure with a vanishingly small (i.e. zero) exposure time. As I've said, in most cases (when the electronics are properly set up to have near-zero offset), a bias frame mostly shows read noise. I think it is correct to say that dark current and bias levels can vary with the mode, gain, and offset values that you select in a CMOS camera. It is almost NEVER going to work if you take data at one setting and then take calibration data at another. THAT is a big no-no with a CMOS camera! If basic CMOS characteristics such as bias levels, linearity, or gain were to vary with exposure time, CMOS would be useless as an imaging device for astronomy.

Flat data can be taken using relatively short exposures with both CCD and CMOS cameras.  In general, it is possible to take flat data using short enough exposures (1-8 seconds) that flat-dark data is not needed. (Remember that dark current is proportional to exposure time.)  Some flat panels aren't bright enough to take narrow-band flat data with exposures less than a minute or two and in that case, flat darks may become more important; although, I've never seen a problem with it.  I also want to add that flat calibration is extremely important with CMOS sensors.  Flat correction corrects for vignetting and radiometric light fall off; however don't forget that it also corrects for PRNU, which looks like spatial noise that is linear with signal strength.  PRNU is due simply to the variation in responsivity between pixels across the sensor and because a CMOS device has a separate amp for every pixel, it can be a serious issue with some sensors.  I don't know about modern sensors, but older ones corrected for this effect during read out using on-board "trim" data stored in a LUT.

John
mc0676 1.20
With the ASI2600MC I've used only bias, without any problem.
With the 2600MM the same thing, but I've also tested dark flats and I can confirm that there are NO differences.
kuechlew 7.75
·  2 likes
John Hayes:
...
Taking flats using the same exposure time as the lights makes absolutely no sense to me.
...

Dear John,

Tim didn't claim to take flats with the same exposure time as lights.
Tim Hutchison:
...

So for CMOS, it became common and recommended to take a flat dark where the temperature and exposure length matched you flat frame and use it to calibrate your flat frame. And take a dark where the temperature and exposure match that of your light frame and use that to calibrate your lights, and take no bias frames at all for either. 

...

What he writes makes perfect sense to me.

Best regards
Wolfgang
jhayes_tucson 22.40
·  3 likes
Wolfgang,
Thanks for catching my error. In re-reading it, I can see that I misunderstood Tim's discussion about taking flat darks, and I've corrected my comments (sorry Tim!). BTW, flat darks aren't required just for CMOS. They are also required for CCD data whenever a longer exposure is needed, as is the case when taking twilight flats.

John
HegAstro 11.91
John's response is excellent. The purpose of dark frame subtraction is to subtract mean values of the dark current. It is impossible to subtract out noise, because noise is, by definition, random and unpredictable.

The whole purpose of calibration in astrophotography is to convert your data into pure linear form -> x amount of photons incident on the sensor translates, in an average sense, to A*x ADU units, where A is some constant.
John Hayes:
Flat correction corrects for vignetting and radiometric light fall off; however don't forget that it also corrects for PRNU, which looks like spatial noise that is linear with signal strength.  PRNU is due simply to the variation in responsivity between pixels across the sensor and because a CMOS device has a separate amp for every pixel, it can be a serious issue with some sensors.

A good example of this is the ZWO ASI294MM sensor. Take a look at an H-alpha flat and what looks like PRNU is clearly visible.
Linwood 5.76
·  1 like
So John (or whomever would like to take a shot at it), since we are a bit down a rabbit hole anyway, perhaps this question is relevant.

Quite a few people, some of them very smart and experienced, have recently made the case that modern CMOS cameras (notably my ASI6200MM Pro) have such low noise that we should skip darks (and bias) completely. The argument goes something like this: subtracting low noise from low noise actually increases the noise in the result, so you are better off stacking lights calibrated only with flats, with no darks or bias. 

I have not done any experimentation, but that pretty knowledgeable people make the argument has been giving me pause whether keeping up a dark library is even worth it. 

FWIW I mostly shoot between 120s and 300s. 

Linwood
andreatax 7.42
·  1 like
The short answer is that this is a red herring. As John and others mentioned above, you subtract mean values; no noise is, or should be, involved. And if you're smart you create your master bias from plenty of frames (I use at least 50) and apply the SuperBias process in PixInsight to get rid of even the smallest vestiges of what could be construed as "noise". The key is to keep noise at bay by using the right methods when creating your master dark library and, most importantly, your baseline flat frames: since you divide your light frames by them, reducing their noise is of paramount importance.
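Stacking many bias subs helps because the residual noise of the master falls as 1/√N (a hypothetical simulation; the read-noise figure is made up, and SuperBias itself does more than simple averaging):

```python
import numpy as np

rng = np.random.default_rng(1)
OFFSET = 500.0      # bias offset in ADU (made up)
READ_NOISE = 3.5    # per-sub read noise in ADU (made up)

def master_bias(n_subs, shape=(100, 100)):
    """Average n_subs simulated bias frames into a master."""
    subs = OFFSET + rng.normal(0.0, READ_NOISE, (n_subs, *shape))
    return subs.mean(axis=0)

for n in (1, 10, 50):
    print(n, master_bias(n).std())   # residual noise ~ READ_NOISE / sqrt(n)
```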
dkamen 6.89
·  2 likes
John Hayes:


Finally, I am not familiar with the notion that CMOS changes bias levels with exposure time and that runs completely counter to the very definition of bias signal.  Bias signal is defined as the signal that you measure with an exposure time vanishingly small (i.e. zero).   As I've said, in most cases (when the electronics are properly set up to have near zero offset), a bias frame mostly shows read noise.

Hi John,

it is correct that bias signal is defined like so, but the definition doesn't tell us much on its own. For starters, it is not possible to expose for zero seconds (also by definition), so we have to approximate the bias signal with the shortest exposure possible on a camera, typically a few milliseconds. The question is whether subtracting the value of that "ultra-short dark frame" (because that is what it is) makes sense.

The underlying assumption is that the undesired signal (ignoring the associated noise) has two components: the bias signal, which is more or less constant, and the dark current/thermal signal, which is a monotonically increasing function of exposure time (linear for a CCD, nonlinear for a CMOS):

U = B + D(t)

where U = undesired signal, B = bias signal and D(t) = time-dependent signal (dark signal + thermal stuff). Or, to put it more accurately, since our "bias" signal is in fact the shortest possible dark signal:

U = D0 + D(t)

In reality, there are a number of reasons why a complex half-analog/half-digital circuit like a CMOS sensor can be non-linear even at the pixel level. For example, it might be that a component simply needs some time T to produce a "stable" readout, otherwise junk from its initialisation still lurks around. That applies to every circuit, but even more so in CMOS, because every pixel has its own circuitry and they all feed a central one for further processing, so there are simply too many degrees of freedom, too many nonlinearities. If we symbolise the "short" part of the exposure (to the left of T) as t1 and the "longer" remainder of the exposure (to the right of T) as t2, then actually we have:

U = D1(t1) + D2(t2)

Where D1 is a strongly nonlinear function and D2 is linear (or at least monotonically increasing). 

The core issue is that traditional bias frames, by definition, are of the D1 type. I was actually curious whether this is just a myth but I realised that it is easy to confirm.

Here is a plot I just did of mean values in ASI178MC dark frames (stacks of 15 subs) of 0.001, 0.02, 0.05, 0.1, 0.2, 1, 2.5 and 3.5 seconds, taken at the same temperature with the same gain (300) and offset (200):

[Attached plot: mean ADU of the dark stacks versus exposure time]

It is obvious that things are not linear at all, let alone monotonically increasing, until about 1 second. If your flats are half a second and you subtract a bias taken at 0.001 seconds, you are basically subtracting a signal that is 25% higher than the actual undesired signal in your flats. If you use 0.02 seconds the difference in the offset signal is -5%, although the thermal signal and the dark current remain practically zero. Between 0.1 and 0.2 seconds, the undesired signal is increasing faster than it does between 1 and 2 seconds, and between 0.2 and 1 seconds it is slightly decreasing.

Does this mean the CMOS technology used by the ASI178MC is useless? Obviously not, I mean people take pretty good pictures with that and similar cameras (including scientific ones). It simply means that the complex variation of the undesired signal  cannot be described by the traditional B+D(t) function (especially for sub-second exposures), and you simply need to think of it as U(t) and correct it by taking dark subs that have exactly the same duration as your flats. It also means that if you want things to be less noisy and more predictable, you generally want to use exposures longer than a second or two for your flats.

Kindest regards,
Dimitris
andymw 11.01
OK, my original question has obviously generated quite a few different views.  I plan to now take dark flats aligned with my flats and to not use bias frames for them.  Not for any scientific reason, but just because I can get my head around matching darks with flats and lights. The KISS method I guess.  FWIW:  I've turned my LED tracing panel that I use for flats down to its lowest brightness to try and get my exposures out of that awkward region that my CMOS camera has at fast exposures.
 