I constructed an Automated Aperture Photometry system for myself, to check out just what the little Seestar S50 can provide. The graph shown here compares the photometric accuracy that is possible using only a single 1x10s frame (red curve) versus a variety of 10x10s stacked images.
[ Note: An earlier version of this info was retracted after I ran into repeatability problems in the data. I think those have now been solved.]
When it comes to stacking images, you can either let the Seestar do it for you, or you can do it yourself with the individual FITS frames making up the stack. The graph suggests that some improvement can be had by doing it yourself. Something about the de-Bayering process of interpolating adjacent pixel values in general, and the way the Seestar handles the noise floor in particular, causes suboptimal results when you use the stacked image provided by the Seestar telescope.
The red curve shows the result of analyzing a single raw FITS image from the Seestar. It shows that we should be able to reach ±0.1 mag accuracy in differential photometry for stars brighter than 11.5 mag. I think that is remarkable for a mere 10s exposure.
Don't pay too much attention to the extreme ends of each line on the graph. At the faint extreme you start failing to detect stars that are present, and the statistics flounder due to small numbers and a background noise level nearly as strong as the stars themselves. At the bright end, there are so many lit-up pixels surrounding a star image that you become especially sensitive to centroiding errors between the photometry aperture and the true center of the star. This quick little photometric analyzer didn't pay close attention to centroids, and that can cause starlight to bleed into the nearby background, producing larger uncertainty and repeatability problems once the analysis is down near ±0.01 mag.
While it looks like you really could reach 0.01 mag for brighter stars, I wouldn't trust this quick analysis to that extreme level. But certainly at the 0.1 mag level, things are pretty repeatable. These results also suggest that a more careful design of the photometry analyzer could reach repeatable 0.01 mag differential photometry.
What is striking is that by stacking images you gain some reach. The most carefully stacked image in this analysis (the green line) was obtained by demosaic'ing the raw FITS frames of a 10x10s stack using PixInsight, grabbing one of the Green channels, aligning the frames with StarAlignment, then integrating with Winsorized sigma clipping to remove hot and cold pixels. Doing this allows us to reach another 1.25 mag deeper into the faint end at 0.1 mag accuracy. Getting 0.1 mag accuracy at 13 mag, in only 100 sec on a little toy telescope, is amazing to me!
In comparison, the grey line represents the analysis of the FITS stack coming straight out of the Seestar. It only offers an additional < 0.5 mag for a stack of 10x10s frames. If you have multiple images worthy of a stack, then you'll gain so much more by demosaic'ing them and stacking them yourself.
The blue line represents a typical CFA color processing chain in PixInsight on the 10x10s raw FITS frames, then separating out the Green channel of the integrated image for analysis in this tool. It performs more poorly than focusing directly on the Green pixels of the CFA. I think the deBayering interpolation has much to do with this. You are trying to invent information in adjacent pixels that was never directly measured by the sensor. There is a slight gain over the Seestar stacked image, but not by much, considering the extra effort involved.
The little Seestar S50 isn't just as good as grandpa's photometric setup, it's remarkably better!
So - how did I make this graph?
My Automated Aperture Photometry system allows me to add 100 fake stars, randomly placed onto the actual star-field image, but not too close to other stars or fakes already planted. I know exactly how bright these fake stars are, and I sit back and watch the automated system try to find and measure them along with the real stars. Then I do a cross check of found stars against my list of known fakes, and see how many the system recovered and what it thinks their magnitudes are.
Take the mean and standard deviation of the measured magnitudes of all the recovered fakes and you get an estimate of how well you can measure stars of that magnitude. The percentage of fakes recovered tells you about the probability of detecting stars at that magnitude.
I repeat this process for every magnitude from 9.0 to 16.0 in steps of 0.5 mag. Those data are collected and graphed as shown above. Now the results are subject to some random variations - depending on where your fake stars land, which image you are using, and what its background noise floor looks like - but using 100 fake stars helps mitigate that variation from one run to the next.
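For anyone who wants the gist in code form, here is a rough Python/NumPy sketch of that plant-and-harvest loop (my real system is written in Common Lisp; the detect_and_measure call below is only a placeholder standing in for the full blind-detection pipeline):

    import numpy as np

    def plant_fakes(image, n_fakes, mag, stamp_fn, zp, min_sep=15, rng=None):
        """Add n_fakes fake stars of known magnitude at random positions,
        keeping them at least min_sep pixels apart.  (The real system also
        avoids landing too close to genuine stars.)"""
        rng = np.random.default_rng() if rng is None else rng
        img = image.astype(float).copy()
        stamp = stamp_fn(mag, zp)            # fake-star image, e.g. the Gaussian sketched below
        half = stamp.shape[0] // 2
        placed = []
        while len(placed) < n_fakes:
            y = int(rng.integers(half, img.shape[0] - half))
            x = int(rng.integers(half, img.shape[1] - half))
            if all((x - px) ** 2 + (y - py) ** 2 >= min_sep ** 2 for px, py in placed):
                img[y - half:y + half + 1, x - half:x + half + 1] += stamp
                placed.append((x, y))
        return img, np.array(placed, dtype=float)

    def score_recovery(found, planted, match_radius=3.0):
        """Cross-check detections (rows of x, y, mag) against the planted fakes.
        Returns recovery fraction, and mean/std of the recovered magnitudes."""
        found = np.asarray(found, dtype=float).reshape(-1, 3)
        recovered = []
        for px, py in planted:
            d2 = (found[:, 0] - px) ** 2 + (found[:, 1] - py) ** 2
            if d2.size and d2.min() <= match_radius ** 2:
                recovered.append(found[np.argmin(d2), 2])
        frac = len(recovered) / len(planted)
        if not recovered:
            return frac, np.nan, np.nan
        return frac, float(np.mean(recovered)), float(np.std(recovered))

    # Sweep the fake magnitude from 9.0 to 16.0 in 0.5 mag steps:
    # for mag in np.arange(9.0, 16.01, 0.5):
    #     fake_img, planted = plant_fakes(image, 100, mag, gaussian_stamp, zp=9.4)
    #     found = detect_and_measure(fake_img)   # the real photometry pipeline goes here
    #     frac, mean_mag, sig_mag = score_recovery(found, planted)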
This is a test of both your ability to find the stars in the first place, and the accuracy of your photometry. I also found that the results can vary with the shape of your fake star. For this test, I finally settled on a simple 2D Gaussian shape. I originally used a stack of star images from the whole image frame, chosen from intermediate brightness stars - not too faint and not too bright. I actually used stars from the 25-75th percentile in SNR to form a composite fake star.
I measured that composite, which looked pretty good to me, and found its FWHM from vertical and horizontal cross-sections. I found that the Seestar S50 was producing images of stars with FWHM of 2.85 pixels, or 6.8 arcsec. That figure matches prior measurements of the dark-sky seeing from my backyard, here in Bortle 6.5 country.
But I also found that my composite fake star had faint garbage around its edges, and when I amplified that fake to 9 mag, the garbage grew visibly strong and affected the measurements. I wanted something cleaner that could be used across a collection of images. So I matched the core of the composite fake with a Gaussian, and found that for deBayered 1080x1920 images, where every pixel is sampled, I needed an 11x11 fake to add into my star frames. The sigma for that full-resolution Gaussian works out to be about 1.3 pixels. And for these kinds of images, my aperture needs a radius of 5 pixels, with noise annulus radii of 7 and 11 pixels.
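Here's what that clean fake star looks like in a quick Python/NumPy sketch (again, my system is Common Lisp; the zp argument is the additive magnitude offset described a couple of paragraphs below):

    import numpy as np

    def gaussian_stamp(mag, zp=9.4, size=11, sigma=1.3):
        """An 11x11 2D Gaussian fake star whose summed flux corresponds to the
        requested magnitude via mag = -2.5*log10(flux) + zp.  A sigma near
        1.2-1.3 px is consistent with the measured FWHM of ~2.85 px
        (FWHM ~ 2.355 * sigma)."""
        flux = 10.0 ** (-0.4 * (mag - zp))
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        return flux * psf / psf.sum()    # normalized so the stamp sums to exactly 'flux'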
I'm using square apertures, so my core aperture contains 121 pixels (-5 to +5 in X,Y), and my noise sampling annulus contains 304 pixels. For measuring the magnitude, I first find the median and MAD of the 304 pixels in the ring, then I sum up the core pixels after first subtracting off the median from each. Then the final engineering magnitude is -2.5 Log10(Flux), where Flux is that median-removed sum of core pixels.
I use the ratio SNR = Flux / σ to gauge the quality of the measurement, where σ = 1.4826 × MAD. The σ here refers to the probability distribution of the background noise, not the star's Poisson noise. We want to understand how much more significant the star signal is compared to the surrounding background. Stars are considered if their peak amplitude is at least 5σ above the whole-image median, and collected if their integrated median-removed flux remains at least 5σ above the nearby noise in the aperture ring.
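For concreteness, here is a minimal NumPy sketch of that square-aperture measurement, assuming the star is already reasonably centered on (x, y) and sits well away from the image edges:

    import numpy as np

    def square_aperture_photometry(image, x, y, r_core=5, r_in=7, r_out=11):
        """Square-aperture photometry as described above: an 11x11 core (121 px)
        and a square 'ring' between half-widths 7 and 11 (304 px)."""
        x, y = int(round(x)), int(round(y))
        cut = image[y - r_out:y + r_out + 1, x - r_out:x + r_out + 1].astype(float)
        yy, xx = np.mgrid[-r_out:r_out + 1, -r_out:r_out + 1]
        cheb = np.maximum(np.abs(xx), np.abs(yy))   # square "radius" of each pixel
        core = cut[cheb <= r_core]
        ring = cut[(cheb > r_in) & (cheb <= r_out)]
        med = np.median(ring)
        mad = np.median(np.abs(ring - med))
        sigma = 1.4826 * mad                        # Gaussian-equivalent sigma from the MAD
        flux = np.sum(core - med)                   # median-removed sum of core pixels
        snr = flux / sigma if sigma > 0 else np.inf
        eng_mag = -2.5 * np.log10(flux) if flux > 0 else np.nan
        return flux, snr, eng_mag

    # A candidate is kept only if its peak is at least 5*sigma above the whole-image
    # median AND its median-removed flux is at least 5*sigma above the ring noise.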
But those magnitudes are in engineering units, not very recognizable to humans accustomed to the sky. So I find an additive adjustment by taking a high-SNR image of 3c273 and choosing the additive constant which, when added to the engineering magnitude of 3c273, produces a magnitude of 12.9. The measured flux on 3c273 was 414σ. Now my reported magnitudes, although quite probably incorrect in detail, are at least recognizable. We all know what 9-16 mag stars look like. (My offset works out to +9.4.)
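In code form, that calibration step is just an additive constant (12.9 being the reference value I used for 3c273, as above):

    def calibrated_mag(eng_mag, eng_mag_ref, ref_mag=12.9):
        """Shift engineering magnitudes by a constant chosen so the reference
        object (3c273 here) comes out at the adopted value; for my images that
        offset works out to about +9.4."""
        return eng_mag + (ref_mag - eng_mag_ref)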
Now when you work directly with a single-channel image pulled from the CFA, your pixels are effectively double size in each dimension, the image itself measures 540x960 pixels, things move along a whole bunch faster in computing, and you need a smaller Gaussian sigma ≈ 0.75, and also a smaller aperture of radius 3 pixels. Same noise annulus radii, 7 and 11 pixels.
Let's compare apples to apples for a moment. The two deBayered image stacks use pixels half the size in each dimension of those in the raw single-frame and stacked single-channel images. So let's compare the two most directly comparable images - the single-frame G channel against the stack of 10 G channel images (the red and green curves in the graph).
When I examine the SNR of these two images at each magnitude, the stacked image shows SNRs about 3.2x higher than the single-frame image. That stands to reason, assuming that the noise process is mostly dominated by the Poisson noise of the star signal and not much affected by the background. A stack of 10 images of the same exposure duration should show Sqrt(10) ≈ 3.2x as much signal to noise. And that much increase in SNR implies that we should reach about 1.25 mag [ = 2.5 Log10(Sqrt(10)) ] deeper into the faint end for the stacked image.
But note: I did not multiply my fake stars by 10x to account for adding them to the stacked image. I used the same scaling for both the raw and stacked images, and it worked perfectly for me. That tells me that PixInsight is keeping roughly the same background median value, and scaling the amplitudes above that level to keep a similar amplitude in the stack as in individual frames. It scales down the MAD by 3.2x instead of scaling up the amplitude of the summed star images by that much. The math works out the same either way. But I have to thank PixInsight for having the foresight that I lacked when I just went forward with the same scaling on every image.
The two deBayered images use smaller pixels, and have more of them across each star image. The aperture radius used on those images is 5/3 times as big as the aperture used on the direct-channel images. And their SNR is higher still, by about 5/3 [ Sqrt((5/3)^2) = 5/3 ], than that of the stacked single-channel image. So the separation between the curves seems explainable. The two deBayered images should show about the same 3.2x improvement over a single image made with the same-sized pixels. Unfortunately, we can't get such an image from the Seestar to compare against.
So, whereas deep-sky photographers stack images in an effort to reduce the noise floor in the image, to see ever fainter diffuse nebular wisps, here we are stacking to increase SNR in the stars of the image. In deep sky photos, the noise floor drops by 1/Sqrt(N) for N images. For us, the SNR grows by Sqrt(N) on the star images themselves. And that growth in SNR leads to making fainter magnitude stars visible. The two aims are related and similar, but work from opposite directions. Our noise sources are completely different.
So... how many 10s frames do we need to stack in order to see a 20 mag star or galaxy? I did mention that I count the number of fake stars reaped during the photometric analyses. For the single-frame image you have a 50% probability of detecting a star as faint as 14 mag. For the stacked single-channel image that limit is between 15 and 15.5 mag (probably 15.25, eh?). To reach 20 mag, we need about 6 magnitudes of extension from stacking. That's a factor of 250x in flux, so we need the square of that number, or a 63,000x10s stack!! (That's about 7.3 days, or 175 hours, of integration. Doable?)
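That frame-count estimate generalizes to any target gain in depth; here is the same arithmetic as a tiny helper (same sqrt(N) assumption as above):

    def frames_needed(delta_mag, frame_seconds=10.0):
        """Number of equal frames to stack to push the detection limit deeper by
        delta_mag, assuming SNR grows as Sqrt(N): N = (10**(delta_mag/2.5))**2.
        For delta_mag = 6 that is ~63,000 frames, about 175 hours of 10s exposures."""
        n = (10.0 ** (delta_mag / 2.5)) ** 2
        return n, n * frame_seconds / 3600.0    # (frames, total hours)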
I first read about using this star planting and harvesting technique decades ago, when reading PASP articles by Peter Stetson of DAO. I highly recommend reading everything you can from Peter.
Source code for the automated aperture photometry system is free for the asking. It, along with all the graphics and image viewers, was written by me over several decades, all in (LispWorks) Common Lisp.