How many people drizzle their data and, if so, why and how? · [Deep Sky] Processing techniques · Andy Wray

andymw 11.01
I'm just looking at drizzling for the first time.  I just wondered how many on here actually use it?  Does it help with mono images?  Do I actually need to dither as well?

Any thoughts would be appreciated.

dkoslicki 1.51
I drizzle every image I take with my f/7, 714 mm scope and 3.69 µm, 12.5 mm × 10 mm sensor (so pixel scale 1.07"/pixel). Here's why I drizzle: the following is a stack of a recent image I took without drizzle (only stretched and cropped):
[image]

And here is the exact same stack but with drizzling set to 2x and droplet size of 1 (which I have determined to be a nice balance between sharpness and noise):
[image]

So a touch more noise, but much clearer, sharper results.

As I understand it, you do need to dither, but I personally have never experimented with drizzle + no dither. Perhaps my finicky mount and usually bad seeing would be enough, but it really costs nothing to have dithering turned on.
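
For reference, that 1.07"/pixel figure follows from the standard plate-scale formula, scale ["/px] = 206.265 × pixel size [µm] / focal length [mm]. A minimal Python sketch (the function name is just for illustration):

```python
def pixel_scale(pixel_um: float, focal_length_mm: float) -> float:
    """Arcseconds per pixel: 206.265 is arcseconds per radian divided by 1000."""
    return 206.265 * pixel_um / focal_length_mm

print(round(pixel_scale(3.69, 714), 2))  # 1.07 "/pixel for the setup above
```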

Anderl 3.81
Another question regarding this topic.
CFA drizzle OSC images for better color?
It seems a few people do that. Is there really a benefit?

Kanadalainen 6.10
David Koslicki:
I drizzle every image I take with my f/7, 714 mm scope and 3.69 µm, 12.5 mm × 10 mm sensor (so pixel scale 1.07"/pixel). Here's why I drizzle: the following is a stack of a recent image I took without drizzle (only stretched and cropped):
[image]

And here is the exact same stack but with drizzling set to 2x and droplet size of 1 (which I have determined to be a nice balance between sharpness and noise):
[image]

So a touch more noise, but much clearer, sharper results.

As I understand it, you do need to dither, but I personally have never experimented with drizzle + no dither. Perhaps my finicky mount and usually bad seeing would be enough, but it really costs nothing to have dithering turned on.

Great thread topic!  I do dither in my subs, but have never stacked with drizzling.  I use APP for stacking and really want to try this. 

Thanks and clear skies,

Ian

stevendevet 6.77
I don't, because my sampling is good for my current setup, and results have been mixed when I've applied drizzling, so I don't bother. I don't feel I get a lot of extra detail, and it takes a lot more time and space to do it all.

But I'm working on a rig that will be undersampled (basically, the pixel size is too big to capture fine details).
I will see how the data turns out, but I will most likely drizzle that data to combat the undersampling.


And yes, drizzling requires dithering. It uses the sub-pixel shifts between frames that dithering produces to calculate what data "should" be there and then fills it in. So without dithering you're not providing the software with enough data to do the drizzle, and you might not get good results.
Automatic dithering is best; or do it manually with a little shift after every 4-5 shots. Even some drift in guiding might already be enough.
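
A toy numpy sketch of that point (a deliberate simplification with point-sized droplets, not any stacker's actual code) shows why the shifts matter: it just counts how often each cell of a 2x output grid receives data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subs, h, w, scale = 50, 10, 10, 2          # 50 subs, 10x10-pixel sensor, 2x drizzle
coverage = np.zeros((h * scale, w * scale))  # how often each fine cell receives data

for _ in range(n_subs):
    # random sub-pixel dither offset in sensor pixels; set to (0, 0) for "no dither"
    dy, dx = rng.uniform(0, 1, size=2)
    for y in range(h):
        for x in range(w):
            # drop each pixel's value at its (shifted) centre on the fine grid,
            # wrapping at the edge to keep the toy simple
            fy = int((y + 0.5 + dy) * scale) % (h * scale)
            fx = int((x + 0.5 + dx) * scale) % (w * scale)
            coverage[fy, fx] += 1

print("empty fine-grid cells:", int(np.sum(coverage == 0)))
# with random dithers this comes out 0; with dy = dx = 0 three quarters of the
# fine cells can never be hit -- gaps that no amount of stacking will fill
```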


But looking at your setup, the 1600MM and the 200PDS, your sampling is good.
So there might not be a "requirement" for you to drizzle. But others with more experience might be able to provide better suggestions.
[image]

dkoslicki 1.51
@Ian Dixon I actually use APP too! In case it helps, I've found that the type of kernel you pick (topHat, point, square, Gauss) doesn't matter too much, as long as you don't pick point. I've tried varying the droplet size and found that smaller droplets = sharper but more fine-scale noise. I haven't checked the theory, though, to see if this is just particular to my setup or a general fact.

andreatax 7.46
Anderl:
Another question regarding this topic.
CFA drizzle OSC images for better color?
It seems a few people do that. Is there really a benefit?

In general it is advisable to drizzle CFA images at 1x to get better detail, at the expense of slightly lower SNR. Natural (i.e. drift) or imposed dithering is obviously a requirement. The same is true if you're undersampling your seeing by a meaningful factor.

Kanadalainen 6.10
@David Koslicki thank you very much. I am going to try this with some recent M104 data (sampled at 0.92 arcsec/pixel), for which I used a 120 mm Esprit apo at 840 mm focal length and my 2600MC Pro with 3.76 µm pixels.

I have lots of other data with the same camera and my C8 Edge, which is 2032 mm - my sampling there is 0.38 arcsec/pixel - so I am oversampling with this rig.

Would drizzling help me in this case (or just inject pain)?

Thanks,

Ian

Kanadalainen 6.10
David Koslicki:
@Ian Dixon I actually use APP too! In case it helps, I've found that the type of kernel you pick (topHat, point, square, Gauss) doesn't matter too much, as long as you don't pick point. I've tried varying the droplet size and found that smaller droplets = sharper but more fine-scale noise. I haven't checked the theory, though, to see if this is just particular to my setup or a general fact.

Thanks!

DarkStar 18.84
Drizzling only makes sense for undersampled data. Drizzle should reduce the stepping effect you see on undersampled stars. Drizzle adds no information at all. It is just smoothing “gaps” on an edge of three neighboring pixels based on anti-aliasing algorithms. Dithering makes no difference.
The closer you work to oversampling, the more counterproductive drizzling gets, because in the end you have to downsample again and you only blow up processing time and storage by a factor of 4.

To sum up: drizzle does not add any information, it only smooths undersampled structures.

dkoslicki 1.51
@Ian Dixon I doubt that drizzle will gain you much when sampling at 0.38"/pixel. Ballparking it: I imagine your tracking would need to be under ~0.4 arcseconds RMS, with excellent seeing, for you to gain anything from drizzle, and then only if you are imaging something really small whose detail you want to bring out. My rationale is that if your tracking accuracy is any worse than this, then the photons emitted from one point source of light are already falling on multiple pixels in each exposure. From what I've looked into, drizzle is for the opposite problem: multiple point sources of light falling on a single pixel.

My usual strategy, however, is to always test theory with data. Since it only costs hard drive space and CPU cycles, it never hurts to try and see what the results are! When I do this, it often helps improve my understanding of the underlying processing algorithms.

Kanadalainen 6.10
Ruediger:
Drizzling only makes sense for undersampled data. Drizzle should reduce the stepping effect you see on undersampled stars. Drizzle adds no information at all. It is just smoothing “gaps” on three combined pixels based on anti-aliasing algorithms. Dithering makes no difference.
The closer you work to oversampling, the more counterproductive drizzling gets, because in the end you have to downsample again.
To sum up: drizzle does not add any information, it only smooths undersampled structures.

Thanks @Ruediger - got it

dkoslicki 1.51
@Ruediger From reading the drizzle paper (https://iopscience.iop.org/article/10.1086/338393/pdf), it's more complicated than a simple anti-aliasing algorithm. While that's the most visible feature (smoothing out the jagged edges), since the shift of the image (dither) is being used to infer what smaller pixels (the drizzle "drops") would have measured, you really are dividing the input from a single pixel between several output pixels (see Section 7 of that paper). So you really are teasing out extra information. To see this in action, find a pair of stars/objects that are really close to each other and notice how the resolution changes. Using a different crop of the image I posted above, note how the faint smudge on the lower right of this star is better resolved when drizzled (hence, more than just smoothing edges):
No drizzle:
[image]
Drizzle:
[image]

Also, drizzle will definitely not work if images are not dithered or somehow moved from sub to sub. The algorithm would have nothing to work with then.

andymw 11.01
I tried drizzling tonight on my latest image:

[image: M81 and M82 using very old PixelMath]
What I have noticed is that without drizzling I had really nasty sharp edges and artefacts on the stars, and with drizzling they are not too bad (still a bit rough). Is this one thing that drizzling can help with?

CCDMike 5.02
Happy to share some experience here with 1,000 mm FL and an ASI295MC.
This combination is neither over- nor undersampled.
I did it in APP, mostly with 2x drizzling and droplet 0.5.
I found that it sometimes helps a little with resolution, but at the cost of noise.
As others have stated here, the processing time and file sizes are another disadvantage.
I assume you only really benefit from drizzling with undersampled data, as many others have said before.
You will find nice explanations of the pros and cons from Mabula and others on the web, which I highly recommend.

But if you have the time, it's worth pushing some buttons and comparing the results with your own eyes😉

Good luck
Mike

DarkStar 18.84
David Koslicki:
@Ruediger From reading the drizzle paper (https://iopscience.iop.org/article/10.1086/338393/pdf), it's more complicated than a simple anti-aliasing algorithm. While that's the most visible feature (smoothing out the jagged edges), since the shift of the image (dither) is being used to infer what smaller pixels (the drizzle "drops") would have measured, you really are dividing the input from a single pixel between several output pixels (see Section 7 of that paper). So you really are teasing out extra information. To see this in action, find a pair of stars/objects that are really close to each other and notice how the resolution changes. Using a different crop of the image I posted above, note how the faint smudge on the lower right of this star is better resolved when drizzled (hence, more than just smoothing edges):
No drizzle:
[image]
Drizzle:
[image]

Also, drizzle will definitely not work if images are not dithered or somehow moved from sub to sub. The algorithm would have nothing to work with then.

Hi David,

Many thanks for your reply. There is a misunderstanding about what the term “information” means. There is no way to generate additional “information” with any algorithm. You can only generate a smoothing effect, but no actual new information. Drizzling is a purely visual effect, as your own example shows: the first star displays the effect of undersampling, and the second is simply a blurred, smoothed shape at that zoom level.
You can achieve the same illusion just by zooming in indefinitely: you always get an undersampling effect, which you could mitigate with drizzle.

Drizzle was originally developed for cameras with low resolution compared to screen resolution or print media. If your camera already provides high resolution, close to or even beyond oversampling, drizzle won't improve your image. Or the other way round: it makes no sense to drizzle a full-frame image and then scale it down to 50% in order to post on AB. Or even convert it to jpg🫤

rveregin 6.65
Hopefully I can clarify a few points with respect to dithering and drizzle; my comments apply to both mono and color CMOS and CCD cameras.

Dithering: a random (ideally) movement of the camera sensor pixels relative to the image target, done between subexposures. For deep-sky targets you should be doing this whether or not you drizzle. If you don't dither, you will see background artifacts due to pattern noise from your sensor. If you are guiding, the image moves very little, so the pattern noise gets burned into your final image. If you are not guiding, so there is some drift between frames, you still need to dither at least occasionally; otherwise you end up with a pattern called walking noise, since non-guided drift runs mostly in a single, non-random direction. Again, all of this is independent of whether you want to drizzle in processing. It is important that you dither.

Drizzling: a processing step that maps the original coarser subexposure grid, with its pixel values, onto a finer pixel grid with 4x more pixels (for 2x drizzle) or 9x more pixels (for 3x drizzle). In this way you actually do potentially capture more image detail. The method was specifically developed for the case of undersampling, when your pixels are too large for your plate scale and seeing. See Fruchter and Hook (https://www.jstor.org/stable/10.1086/338393?seq=5) for the math behind this transformation. In a nutshell, this trick can recover detail that was lost in each individual subframe but is recoverable by adding the contributions of all the shifted subframes. Note that for planetary lucky imaging, images are often oversampled to improve resolution, and even there drizzle can help increase the detail captured. So a case can be made that for deep-sky work with shorter, lucky-imaging-style exposures, drizzle should help even if you are oversampled. My own personal experience with non-guided lucky imaging (less than about 20 to 30 seconds) is that 2x drizzle reduces my FWHM even though I am in theory at the sweet spot for sampling without drizzle (plate scale of 0.5 arcsec, so x2 to 3 for Nyquist sampling gives me 1 to 1.5 arcsec, good enough for all but the absolute best seeing). Drizzle is very helpful for bringing out fine detail, such as in planetary nebulae. In my tests I see no impact of 2x drizzle on S/N, and 3x drizzle did not improve my resolution over 2x. So I do use drizzle when I can. I use DeepSkyStacker for drizzle; the only reason I don't drizzle is if I am using the full frame of my 50 MB raw images, because then DSS crashes on my computer. The issue is the 4x increase in pixels: a 50 MB image becomes 200 MB in the stacking! So for drizzle I either capture a smaller frame or set a smaller frame size in DSS.
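
That 2-3 pixels-per-FWHM Nyquist rule of thumb is easy to encode if you want to check where your own rig sits; a rough Python sketch (the thresholds and names here are just illustrative):

```python
def sampling_check(plate_scale_arcsec: float, seeing_fwhm_arcsec: float) -> str:
    """Rough Nyquist-style test: aim for roughly 2-3 pixels across the seeing FWHM."""
    pixels_per_fwhm = seeing_fwhm_arcsec / plate_scale_arcsec
    if pixels_per_fwhm < 2:
        return "undersampled: drizzle is likely to help"
    if pixels_per_fwhm > 3:
        return "oversampled: drizzle mostly adds processing time and file size"
    return "well sampled: drizzle optional, test it"

print(sampling_check(0.5, 1.2))   # 0.5 "/px in decent seeing: well sampled
print(sampling_check(15.0, 4.0))  # a camera-lens rig: heavily undersampled
```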

In short: dither. Try with and without drizzle processing on different targets and different nights and compare. Everyone's setup and conditions are different; I really encourage you to experiment with it and learn if and when to use it.

Clear skies and fast CPUs
Rick

dkoslicki 1.51
@Ruediger Fair enough: my interpretation of "information" is the Shannon entropy-esque definition. Hence a single (undrizzled) pixel with normalized value 1 will have entropy 0, while four (drizzled) pixels with, say, normalized values {0.4, 0.3, 0.2, 0.1} will have positive entropy (up to Log[4] in the uniform case), hence more information.
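
To make that concrete (treating the drizzled pixel values as a normalized flux distribution, which is just my framing):

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a normalized distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

print(shannon_entropy([1.0]))                     # one undrizzled pixel: 0.0
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform 2x2 split: log(4), about 1.386
print(shannon_entropy([0.4, 0.3, 0.2, 0.1]))      # unequal split: about 1.28
```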

RogerN123456 4.57
I usually drizzle, particularly for smaller galaxies and for the stars I use to replace the narrowband ones. Usually I drizzle x2; my native scale is 1.41"/pixel, so that takes it to about 0.7"/pixel, which gives me visible benefits. I also find the drizzled noise easier to deal with, as it is finer. Occasionally I drizzle x3 for tiny stuff - e.g. the Waterbug Galaxy and friends showed improvement at 3x over 2x: https://www.astrobin.com/itma6w/?nc=user

frederic.auchere 3.61
You'll get a description of the algorithm in this paper, and here as part of the documentation of the DrizzlePac software developed for the HST. In a nutshell, the principle is to resample each subframe on a finer grid and to replace each original pixel by a smaller 'droplet', the simplest case being a smaller square, though other shapes (e.g. a Gaussian) are possible. The result for a single frame is an image filled with droplets, with gaps in between (depending on how small the droplets are compared to the original pixels). When stacking enough dithered frames, the gaps eventually all get filled in by a 'drizzle' of droplets. @David Koslicki, it is normal that SNR decreases with smaller droplets: on average, fewer droplets contribute to the signal at any given point of the resampled final image.
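
A minimal Python/numpy sketch of the square-droplet case (each input pixel deposits its flux weighted by the area overlap between its shrunken droplet and each output pixel; no rotation or distortion handling, unlike the real DrizzlePac, and all names here are illustrative):

```python
import numpy as np

def drizzle_add(sub, dx, dy, out, wgt, scale=2, pixfrac=0.5):
    """Deposit one dithered subframe onto a finer output grid (square droplets).

    sub      -- 2-D subframe
    dx, dy   -- this frame's dither offset, in input pixels
    out, wgt -- running flux and weight accumulators, sub.shape scaled up by `scale`
    pixfrac  -- droplet edge length as a fraction of an input pixel
    """
    h, w = sub.shape
    half = pixfrac / 2.0
    for y in range(h):
        for x in range(w):
            # droplet footprint in output-grid coordinates
            y0 = (y + 0.5 + dy - half) * scale
            y1 = (y + 0.5 + dy + half) * scale
            x0 = (x + 0.5 + dx - half) * scale
            x1 = (x + 0.5 + dx + half) * scale
            for oy in range(max(0, int(y0)), min(h * scale, int(np.ceil(y1)))):
                for ox in range(max(0, int(x0)), min(w * scale, int(np.ceil(x1)))):
                    # share flux according to droplet/output-pixel overlap area
                    a = (min(y1, oy + 1) - max(y0, oy)) * (min(x1, ox + 1) - max(x0, ox))
                    if a > 0:
                        out[oy, ox] += a * sub[y, x]
                        wgt[oy, ox] += a
```

After looping this over all the dithered frames, take image = out / np.maximum(wgt, 1e-9); any output cells still at zero weight are exactly the gaps that more dithered frames would fill in.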

The idea of filling in gaps by dithering can be extended to the debayering process. Each color plane of a raw RGB image is made of pixels and gaps. The gaps in each plane can be filled in with dithered frames, and the process is referred to as 'bayer-drizzle'. In that case, if the objective is not to increase the resolution (though it can), the native sampling can be used with no droplet shrinkage.

I always dither my frames anyway, so bayer-drizzle comes for free. A resolution increase can also be achieved if the images are undersampled. Here is an example showcasing my own results:

[image]
The ASI178 barely samples my Samyang 135 (5.6" measured PSF for 3.7" pixels, see revision B) at the center of the field of view. The image was bayer-drizzled with x4 resampling and 0.5 square droplets. I'm using a home-brewed Python program that is limited to square droplets, so I never experimented with other shapes.

Another example with x4 resampling and 0.5 droplet size (left) from a simulated sequence of 200 dithered images of a resolution target (right). Note that the input was actually RGB, hence the subtle color artefacts in the drizzled image.

[image: drizzle_astrobin.png]

CS,

Frédéric

VicV 3.77
The algorithm was designed to recover resolution from a set of dithered, undersampled images. 
I think the example star image shown above by David is already well sampled and not suited for Drizzle processing. 

Drizzling works really well when I use short focal-length setups such as camera lenses.
The images below are from a 50 mm f/1.8 Canon lens with an ASI1600MM (effective 15"/pixel resolution). This type of data is significantly undersampled: stars are typically just one or two pixels wide and appear very blocky/square. With 2x drizzle the stars appear much rounder and you gain a small amount of resolution. Upscaling with the widely used Lanczos3 algorithm instead results in dark artefacts around the blocky stars and no improvement in resolution.

[image]

Drizzled image files are huge and post-processing them is very taxing on most systems. That's why I typically downsample the 2x drizzle master file back to the original resolution. You get much nicer star shapes without an increase in file size.
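
One simple way to do that final downsample is a plain 2x2 block average (a minimal sketch; stacking tools offer fancier kernels such as Lanczos):

```python
import numpy as np

def downsample_2x(img):
    """Average non-overlapping 2x2 blocks: 2x-drizzle master back to native resolution."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # trim any odd edge row/column
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```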

[image]

DarkStar 18.84
David Koslicki:
@Ruediger Fair enough: my interpretation of "information" is the Shannon entropy-esque definition. Hence a single (undrizzled) pixel with normalized value 1 will have entropy 0, while four (drizzled) pixels with, say, normalized values {0.4, 0.3, 0.2, 0.1} will have positive entropy (up to Log[4] in the uniform case), hence more information.

Hi David,
I am arguing from the same definition, based on Shannon's theorem: since the information comes from a deterministic algorithm, its probability = 1, hence the added information is zero. You cannot generate more information than is contained in the raw data; that would violate information theory, since the information sink would then contain more information than the source.

But forgive me if I am wrong, my studies of information theory are 30 years in the past. 🫣
Maybe we can agree on an empirical approach? Just try it out and do what looks better 🤔

rveregin 6.65
Ruediger:
David Koslicki:
@Ruediger Fair enough: my interpretation of "information" is the Shannon entropy-esque definition. Hence a single (undrizzled) pixel with normalized value 1 will have entropy 0, while four (drizzled) pixels with, say, normalized values {0.4, 0.3, 0.2, 0.1} will have positive entropy (up to Log[4] in the uniform case), hence more information.

Hi David,
I am arguing from the same definition, based on Shannon's theorem: since the information comes from a deterministic algorithm, its probability = 1, hence the added information is zero. You cannot generate more information than is contained in the raw data; that would violate information theory, since the information sink would then contain more information than the source.

But forgive me if I am wrong, my studies of information theory are 30 years in the past. 🫣
Maybe we can agree on an empirical approach? Just try it out and do what looks better 🤔

There is no information created by drizzling; information is only recovered. The information at the sub-pixel level is contained in the total set of data: the collection of subexposures that are randomly moved with respect to each other. By simply adding them pixel by pixel we lose the extra information that the random motion produced. In drizzling we have the opportunity to recover this lost information. An oversimplified analogy would be sieving a sample of two different-sized powders: if we use a large sieve, all the particles go through and it looks like they are all the same size. If we use a smaller sieve, some go through and some stay behind. We have more information about the sample, but did not create information.

kuechlew 7.75
Is my understanding correct that with 2x drizzle you reduce SNR by a factor of 2, because on average you only have 1/4 of the signal per "drizzle pixel"? Or am I missing something?

Thank you for starting this discussion. Highly interesting for me, since I'm currently still doing very wide-field work with camera lenses, which is significantly undersampled. I will certainly give drizzling a try.

Clear skies
Wolfgang

tboyd1802 3.34
For what it's worth: I always dither my datasets and produce drizzled and non-drizzled versions of the raw stack. Unless the stack is undersampled, I almost always use the non-drizzled stack. To test for undersampling I use the PI FWHMEccentricity script and examine the Median FWHM value: if it's less than 2 pixels, you're undersampled.

If you are a PI user and you don't have a copy of Inside PixInsight by Warren Keller, I would strongly recommend it.