Pixel Scale Advice Appreciated · Generic equipment discussions · DanRossi

DanRossi 4.72
Hello!

I've found conflicting information on what's considered oversampling, good sampling, and undersampling.

Mainly for oversampling, Astronomy Tools says that between 1.0"- 2.0"/pixel is good, under 1.0" is oversampling, and over 2.0" is undersampling.

I found another article from a retailer that said anything under 2.0"/pixel is oversampling.

I know that, in general, anything between 2.0"-4.0"/pixel is still considered good, no matter what.

I have two scopes (William Optics Z103, RedCat51) and the ASI533MC Pro.  Using the Redcat I get great images...no complaints. Using the Z103 the stacked images are consistently turning out noisy with soft stars.  AstronomyTools says this setup is 1.37"/pixel (with the 0.8x reducer/flattener), and I'm starting to think this setup is just oversampled.  I've used the Z103 with a DSLR a couple of times and those images worked out OK, so I'm narrowing down the conclusion to oversampling.
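For reference, the image scale follows directly from pixel size and focal length. A quick sketch (the pixel size and focal lengths below are assumptions taken from the published specs of this camera and these scopes, not values stated in the thread):

```python
def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds of sky per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Assumed specs: ASI533MC Pro pixels 3.76 um; Z103 native 710 mm
# (568 mm with the 0.8x reducer); RedCat 51 is 250 mm.
print(f'Z103 + 0.8x: {image_scale(3.76, 710 * 0.8):.2f}"/px')  # ~1.37
print(f'RedCat 51:   {image_scale(3.76, 250):.2f}"/px')        # ~3.10
```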

Any thoughts? At this point I'm becoming very reluctant to use the Z103 with the ASI533.

Thanks!
morefield 11.07
Over-sampling depends on your seeing and any other limitation in your system’s ability to resolve detail.  With my Planewave CDK14 which is kept at a remote site with seeing down to about 1.0”, I found that my 0.73” image scale is quite under-sampled.   When I moved to a smaller image scale of 0.60” the FWHM of my best subs dropped by about 0.20” to around 1.6”.   At 0.40” I got down to FWHM of 1.5” but at 0.30” it was no better.  So I started hitting some limitations beyond seeing when I got below about 1/3 of the seeing.

If the smallest resolvable detail is 1” (that’s what 1” seeing is supposed to mean), you can’t capture that detail with a 1” image scale.  Nyquist’s theorem says you need two samples to define the crest and trough of a wave.  It was originally stated for audio signals but applies to light waves as well, and some have suggested that three samples are better for photography.  So, assuming your system’s resolving power is limited by seeing (along with guiding, focus, aperture, etc.), the right image scale is probably your seeing divided by two or three - or somewhere in between.

My experience shooting with my FSQ106 at a scale of 1.46” bears this out.  With good seeing I get subs that are measured at a FWHM of just below 2 pixels at times. But I believe that’s just some sloppy measuring when the number goes below 2 pixels.  This system would be better off with a smaller image scale.  But, with only a 106mm aperture seeing may not always be the limiting factor.   So my guess is that around 1” image scale would be best with the FSQ106.

TL;DR: for a small system in average seeing, divide your seeing by 2; for a large system in great seeing, divide your seeing by 3 to wring out the last bit of resolution.
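That rule of thumb reduces to a one-liner; the seeing values below are just illustrative:

```python
def target_image_scale(seeing_fwhm_arcsec: float, samples_per_fwhm: float) -> float:
    """Image scale ("/px) that puts the given number of pixels across the seeing FWHM."""
    return seeing_fwhm_arcsec / samples_per_fwhm

print(target_image_scale(2.0, 2))  # small system, average 2" seeing -> 1.0 "/px
print(target_image_scale(1.0, 3))  # large system, 1" seeing -> ~0.33 "/px
```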
andreatax 7.42
I'd say that 3 pixels per FWHM could be considered good sampling, with above and below that being over- and under-sampling respectively. With OSCs the situation is a bit more complicated, in that you partially compensate for the uneven sampling of the colour array by oversampling the PSF. In that case I'd probably go for 4 pixels per FWHM.
DanRossi 4.72
That's a great point about OSCs... the RGGB array... of course!
ODRedwine 1.51
If you assume a Gaussian spread function, a Gaussian blur followed by downsampling should get you very close to critically sampled noise and resolution when using an over-sampled imaging system.  The trick is to do this before the stretch, while the data is still linear.
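A minimal sketch of that idea in pure NumPy; the sigma = 0.5 × factor heuristic for matching the Gaussian to the new Nyquist limit is my assumption, not the poster's exact recipe:

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1-D normalized Gaussian, truncated at ~3 sigma."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_and_downsample(linear_img: np.ndarray, factor: int) -> np.ndarray:
    """Gaussian low-pass filter, then factor x factor binning, on linear data."""
    k = gaussian_kernel(0.5 * factor)
    # Separable convolution: rows, then columns.
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, linear_img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, sm)
    # Crop to a multiple of the bin factor, then average each factor x factor block.
    h = (sm.shape[0] // factor) * factor
    w = (sm.shape[1] // factor) * factor
    return sm[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

Run it on the calibrated, unstretched stack; once the data has been stretched the noise is no longer linear and the averaging no longer behaves as expected.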
jhayes_tucson 22.40
There are a number of considerations to determine the optimum sampling rate and it helps to understand some of the basic factors that go into the rules of thumb that often get tossed about.

1) Start with the fact that an optical system on its own is a bandwidth-limited system.  Spatial frequencies (which look like cosinusoidal waves in the image plane of the form I(1 + B cos(wx)), where B is the optical contrast and w is the spatial frequency) are limited simply by the focal ratio of the optical system.  No spatial frequency above 1/(F*lambda) can be modulated by the optical system.  At that frequency, there are 4.88 samples across the Airy disk—and that’s a hard limit for all telescopes.  A small complicating factor is that we don’t actually sample the image with point sensors, and when you throw in the fact that pixels are generally square, things get modified a bit.   Explaining how that complication works gets a little messy and the changes are mostly minor, so for most practical purposes we simply use the Nyquist limit of 4.88 samples across the Airy disk.
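A sketch of where the 4.88 comes from: the Nyquist pixel pitch for the cutoff frequency 1/(F*lambda) is F*lambda/2, and the linear Airy diameter is 2.44*F*lambda, so the ratio is 4.88 regardless of focal ratio or wavelength:

```python
def cutoff_cyc_per_mm(f_ratio: float, wavelength_mm: float = 550e-6) -> float:
    """Incoherent diffraction cutoff: no spatial frequency above 1/(F*lambda) survives."""
    return 1.0 / (f_ratio * wavelength_mm)

def samples_across_airy_at_nyquist(f_ratio: float, wavelength_mm: float = 550e-6) -> float:
    airy_diameter = 2.44 * f_ratio * wavelength_mm  # linear size in the image plane
    nyquist_pitch = 1.0 / (2.0 * cutoff_cyc_per_mm(f_ratio, wavelength_mm))
    return airy_diameter / nyquist_pitch            # 4.88, always

print(samples_across_airy_at_nyquist(5.0))   # 4.88
print(samples_across_airy_at_nyquist(10.0))  # 4.88
```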

2) Unless you are working with a telescope in space, the atmosphere adds an additional limitation to the maximum spatial frequency that you can detect.  On an instantaneous level, the atmosphere simply distorts the wavefront at the entrance pupil.   The time-averaged effect of the distortion is what blurs the image, increasing the size and altering the form of the point spread function of the whole system (telescope + atmosphere).  The net result is that the typical atmospheric conditions play a major role in determining how small the time-averaged star image is in the image plane.  That’s why it’s hard to come up with a single, clean number for how to best sample the image—and that’s where rules of thumb become useful.  The important thing to understand is that smaller telescopes can actually achieve diffraction-limited performance under reasonably good seeing, so there’s a dividing point between where you compute the size of the PSF using the diffraction limit and where you use the seeing limit.  In general, if the aperture of your scope is less than about 200 mm and you have “pretty good seeing” (<~2 arc-sec), you can generally get away with using the diffraction-limited PSF to determine the blur diameter.   For larger apertures, you should estimate the diameter of the PSF using a conservative estimate of the best seeing conditions.  In general, setting the sampling anywhere between 2-3 samples across the blur diameter works pretty well.
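That dividing point can be sketched as a simple decision rule; the 200 mm and 2" break points are the ones in the post, and the 550 nm wavelength is an assumption:

```python
def blur_diameter_arcsec(aperture_mm: float, seeing_arcsec: float,
                         wavelength_nm: float = 550.0) -> float:
    """Blur diameter to sample across: the Airy diameter for small scopes in
    good seeing, otherwise the seeing disk."""
    airy_arcsec = 2.44 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265
    if aperture_mm <= 200 and seeing_arcsec <= 2.0:
        return airy_arcsec          # diffraction-limited regime
    return seeing_arcsec            # seeing-limited regime

print(blur_diameter_arcsec(80, 2.0))   # ~3.5" (Airy diameter of an 80 mm scope)
print(blur_diameter_arcsec(355, 2.0))  # 2.0" (14" scope: seeing wins)
```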

3) CMOS cameras are now available with really small pixels, so why not simply get a camera with the smallest possible pixels?  The problem with that approach is that the signal strength is determined by the area of the pixel multiplied by the responsivity of the sensor (times the incident optical irradiance).  Making the pixels smaller decreases the output, and that’s the trade-off.  In order to maximize the signal, you want the largest possible pixel, but you don’t want to make it so small that you gain nothing further in terms of resolving detail in the image.  And this is where the rule of thumb of using 2-3 samples across the blur diameter comes from.  This isn’t a hard number.  The rule of thumb provides a range of “pretty good” configurations—and that’s a good place to start.

John
RAD
John Hayes:
The rule of thumb provides a range of “pretty good” configurations—and that’s a good place to start.


So is this why, when I image at 2.47 arcsec/pix, I can easily achieve a decent FWHM using 2 pixels across the blur diameter, but when imaging at 0.77 arcsec/pix I need to use 3 pixels across the blur diameter (which I assume is the Airy disk)?  If I use 3 when shooting at 2.47 arcsec/pix I get a FWHM of 7--which is not keeper material.  If I use 2 I get a FWHM of 4.9--which at that pixel scale is not too bad, as it is 2 pixels across the FWHM.

Anyway--I am trying to come to terms with what is a decent FWHM.  When I shoot at 0.77, there is no way I can use 2--for that would mean a FWHM of 1.5... Not in my sky!  But 2.5 I do get often enough, so I usually keep anything less than 3.

But maybe these issues are unrelated and in that case, I am still wondering why my FWHM varies depending on pixel scale.
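One thing that may help untangle this: a FWHM quoted in pixels depends on the image scale, so the same star image "measures bigger" in pixels at a finer scale. Converting to arcseconds puts both setups on the same footing; the conversion is just a multiplication, using the two scales quoted above:

```python
def fwhm_arcsec(fwhm_px: float, scale_arcsec_per_px: float) -> float:
    """Convert a FWHM measured in pixels to arcseconds on the sky."""
    return fwhm_px * scale_arcsec_per_px

# Roughly the same ~4.9" star image, measured at two different image scales:
print(fwhm_arcsec(2.0, 2.47))  # ~4.9"
print(fwhm_arcsec(6.4, 0.77))  # ~4.9"
```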
morefield 11.07
John Hayes:
And this is where the rule of thumb of using 2-3 samples across the blur diameter comes from.


John,

What's the best way to determine the blur diameter?  Is there a formula?  I'm imagining it is sort of a waterfall of limitations such as the seeing, aperture, focus (hopefully this can be ignored), and Strehl of the optics.  But is it simply the worst of the four on that list or do they interact in some way?

Thanks for your expertise!

Kevin
jhayes_tucson 22.40
Kevin Morefield:
John Hayes:
And this is where the rule of thumb of using 2-3 samples across the blur diameter comes from.


John,

What's the best way to determine the blur diameter?  Is there a formula?  I'm imagining it is sort of a waterfall of limitations such as the seeing, aperture, focus (hopefully this can be ignored), and Strehl of the optics.  But is it simply the worst of the four on that list or do they interact in some way?

Thanks for your expertise!

Kevin

Kevin,
That's a good question that requires just a little bit of background to answer.  First, we generally toss out the issues of focus and Strehl to limit the answer to a system operating at the diffraction limit.  Obviously, alignment, fabrication errors, and focus errors will make things worse but those are also things that we can control.  So let's assume a "high quality", well-aligned, well-focused scope operating at the diffraction limit.  In that case, scopes with an aperture of less than about 200 mm can operate as a diffraction limited system under good seeing conditions.  Obviously, you need really good conditions (like ~0.5" seeing) to get there at 200 mm, but an 80 mm scope can be diffraction limited under much more common conditions.  For this class of telescopes, we generally set the blur diameter to equal to the Airy disk diameter, which is given by:  2.44*lambda/D (in radians, in object space).   Using a wavelength of 550 nm works fine for most cases.  There is one "little" glitch here and that is that the Airy diameter really represents the diameter at the first zero in the Bessel function that forms the PSF--so it's not the FWHM of the central lobe.  That actually doesn't make much of a difference because we normally compute everything relative to the Airy disk diameter so we'll ignore that minor detail.

Once you start considering a scope larger than about 200 mm, the atmosphere becomes the dominant factor in determining the PSF blur function--pretty much no matter how good the conditions are.   As you know, the seeing blur function is a Moffat function, which is mathematically a modified Lorentzian function (as described by Moffat).  So when you use your 14" scope under 2 arc-second conditions, the time-averaged FWHM of the Moffat function will be about 2 arc-seconds in the image plane.  That's what the seeing number means. When we measure the integrated optical irradiance that forms the blur function with a sensor array, we will get a result that's a little bigger than the irradiance pattern simply because of the finite size of the pixels.
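For illustration, the Moffat profile can be written down directly; the shape parameter beta = 2.5 used here is a typical assumption for seeing-limited profiles, not a value from the post:

```python
import math

def moffat(r: float, fwhm: float, beta: float = 2.5) -> float:
    """Peak-normalized Moffat profile; alpha is set so the profile has the given FWHM."""
    alpha = fwhm / (2.0 * math.sqrt(2.0 ** (1.0 / beta) - 1.0))
    return (1.0 + (r / alpha) ** 2) ** (-beta)

print(moffat(0.0, 2.0))  # 1.0 at the peak
print(moffat(1.0, 2.0))  # 0.5 at half the FWHM, by construction
```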

The trick to designing a critically sampled system for a larger scope is to first pick an appropriate blur diameter that represents the best possible seeing conditions for the site.  For DSW and SRO, I'd probably start with a 1 arc-second blur diameter.  Then pick a camera that can sample 2-3 pixels across that diameter in the image plane.   In my NEAIC talk, I showed the results of a calculation I did showing how MTF is affected by seeing and where the 2-3x sampling number comes from, but that's probably more than we can get into here.  So, you'll just have to trust me on that one!  )))
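That design recipe reduces to one line of arithmetic. A sketch (the 2540 mm focal length is a made-up example, not any particular scope's spec):

```python
def required_pixel_um(blur_arcsec: float, samples: float, focal_length_mm: float) -> float:
    """Pixel size (um) that puts `samples` pixels across a blur of `blur_arcsec`."""
    target_scale = blur_arcsec / samples              # target "/px
    return target_scale * focal_length_mm / 206.265

# e.g. 1" blur diameter, 3 samples across it, 2540 mm focal length:
print(f"{required_pixel_um(1.0, 3, 2540):.1f} um")  # ~4.1 um
```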

John
lucam_astro 9.15
Hi John,

Thanks for these two posts. At first I was surprised by the statement that the MTF for an ideal optical system vanishes exactly at the critical frequency. I would have guessed that it behaves like a bandpass filter and gets cut off with some power law at high frequency. I went back to Fourier optics and there is a very intuitive way to understand that result, because the MTF can be calculated as the auto-correlation function of the entrance pupil through the convolution theorem: of course, as you slide the finite-diameter circular pupil across itself, at some point the overlap goes exactly to zero. Brilliant.
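For a circular pupil that autocorrelation has a closed form, and it does indeed hit exactly zero at the cutoff. A quick sketch, with frequency expressed as a fraction of the cutoff 1/(F*lambda):

```python
import math

def diffraction_mtf(nu_over_cutoff: float) -> float:
    """Diffraction-limited MTF of a circular pupil: the normalized overlap
    area of the pupil with a copy of itself shifted in proportion to the
    spatial frequency."""
    x = min(max(nu_over_cutoff, 0.0), 1.0)
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

print(diffraction_mtf(0.0))  # 1.0
print(diffraction_mtf(0.5))  # ~0.39
print(diffraction_mtf(1.0))  # 0.0 -- vanishes exactly at the cutoff
```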

I did hear your NEAIC talk a couple of years ago but I had missed this point. Thank you for these excellent pointers on design principles for a high-resolution imaging system.

Cheers,

Luca