Quality Control Astrobin Community Survey · James Tickner

james.tickner 1.20
Starting a new topic to discuss quality control issues
james.tickner 1.20
@Michael Ring @Brian Boyle @Astrogerdt Appreciate your thoughts on the following approach to one part of image QC, namely determining noise levels.

I've taken some raw images and run them through the registration and colour calibration steps of my pipeline. At this point I have images on a well-controlled 10x10" grid with known RGB intensities. Currently I'm setting the scales for each colour so that stars of Mag 13 have unit integrated flux. 

I then apply two 2D convolution filters with different kernels:

[-1/8  -1/8  -1/8]        [1/9  1/9  1/9]
[-1/8   +1   -1/8]   and  [1/9  1/9  1/9]
[-1/8  -1/8  -1/8]        [1/9  1/9  1/9]

The first calculates the difference between each pixel and the average value of the 8 surrounding pixels. The second calculates the average intensity in a 3x3 grid of pixels.

I then construct a histogram of the values of the first kernel for pixels where the value of the 2nd kernel lies within a range that corresponds to the sky background image intensity. Typically, a histogram of the values of the 2nd kernel shows a prominent pedestal peak, and this is the range that I select.

The result of this process is an approximately Gaussian curve that illustrates the magnitude of pixel-to-pixel variations for portions of the image that correspond to sky background. The standard deviation (width) of this Gaussian peak should be equal to the individual pixel noise for sky background multiplied by a factor of 1.06 (= sqrt(1 + 1/8), which corrects for the noise contribution of the 8 surrounding pixels). So by fitting the noise curve, determining its standard deviation and correcting by the factor of 1.06, I can find the noise level in arbitrary flux units. And because the colour calibration tells us that unit flux corresponds to magnitude 13, we can calculate the noise level in magnitude units from

Noise (mag) = 13 - log(noise(flux)) / log(2.512) = 13 - 2.5 log10(noise(flux))
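
A minimal sketch of this calculation in Python, assuming `img` is a registered, colour-calibrated single-channel array scaled so that a mag 13 star has unit integrated flux. The kernels, the sqrt(1 + 1/8) correction and the mag 13 zero point follow the description above; the way the 'pedestal' range is selected, and the use of a plain standard deviation rather than a fitted Gaussian, are simplifications:

```python
import numpy as np
from scipy.ndimage import convolve

def background_noise_mag(img, zero_point_mag=13.0):
    """Estimate the 1-sigma sky background noise of a calibrated image, in magnitudes."""
    # Kernel 1: pixel minus the mean of its 8 neighbours (zero-sum difference kernel).
    k_diff = np.full((3, 3), -1.0 / 8.0)
    k_diff[1, 1] = 1.0
    # Kernel 2: mean intensity over the 3x3 neighbourhood.
    k_mean = np.full((3, 3), 1.0 / 9.0)

    diff = convolve(img, k_diff, mode="reflect")
    local_mean = convolve(img, k_mean, mode="reflect")

    # Pick 'dark sky' pixels: those whose local mean lies close to the pedestal
    # (modal) value of the local-mean histogram.
    counts, edges = np.histogram(local_mean, bins=1000)
    i = np.argmax(counts)
    mode = 0.5 * (edges[i] + edges[i + 1])
    sky = diff[np.abs(local_mean - mode) < 3.0 * (edges[1] - edges[0])]

    # Width of the (roughly Gaussian) difference histogram, corrected for the noise
    # contributed by the 8 neighbouring pixels: sqrt(1 + 1/8) ~ 1.06.
    sigma_flux = np.std(sky) / np.sqrt(1.0 + 1.0 / 8.0)

    # Unit flux corresponds to the zero point (mag 13), so convert to magnitudes.
    return zero_point_mag - 2.5 * np.log10(sigma_flux)
```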

Testing this quickly for 4 typical images, I get the following:

Brian image (Field 009): noise = mag 22.3
Michael image (Field 618): noise = mag 21.3
James image (DSLR, Field 002): noise = mag 21.0
James image (cooled OSC astro camera, Field 089): noise = mag 22.0

Which, if I've got my maths right, shows that New Zealand is nice and dark and that a dedicated cooled camera outperforms a cheap DSLR!

A couple of caveats: my curve fitting is pretty rough and ready, so I'd like to tidy this up; I've only looked at one image from each of us so far, so once I've automated everything I'd like to run it over the whole data set; and finally, I've only looked at the red channels so far, and will extend this to green and blue shortly. But before getting stuck into that, it would be good to get your feedback on whether this approach makes sense.
Astrogerdt 0.00
Hi James, 

Those are some excellent ideas!

Your tests show that the general concept is already pretty good, I guess, as it nicely fits what we would expect. 

However, I see one possible problem here, which could be tested pretty easily I think. 

I remember a discussion in the PixInsight forums from about two years ago about problems with their SNR weighting, and I suspect that your algorithm is affected by the same problem. 
Let's suppose you image an object at the zenith. As the object descends closer to the horizon, the sky background becomes brighter (normally there is more light pollution close to the horizon) and the background sky noise increases. In your case, the result of the first convolution filter should increase more slowly than the result of the second convolution filter, because the noise of the sky is proportional to sqrt(t*skyFlux).
This leads to the SNR estimate decreasing more slowly than it really should.

In the original discussion, this was proven by analyzing a series of images taken from high up in the sky down to closer to the horizon and plotting the SNR estimate. The result was that the SNR weight stayed nearly the same the whole time, even though it was clearly visible that the images became worse over time.

I am not entirely sure if my suspicion is correct, but maybe you could test your algorithm with such a data set. 

CS Gerrit
MichaelRing 3.94
I cannot contribute much to the math, but one thing I am thinking about is how much the effect of gradient removal should be taken into consideration. If I understand what you are doing correctly, gradients could influence the result quite a bit, contrary to what Gerrit wrote, but as I am a dummy in the science part I may have gotten that wrong... Anyway:

I tried this on Field 618 and the differences in SNR are quite significant between versions without and with GraXpert applied. For the SNR calculation I used Script -> SNR in PI, and the results were

36 dB, 37.8 dB, 34.5 dB (R, G, B)

After applying GraXpert, the values changed to 42 dB, 43.8 dB, 43.4 dB.

Not sure what your algorithm will do, but it could be worth checking how much the results differ after the gradients are gone.

Grasping for every straw here… the field you looked at had over 4 hours of integration; it would be painful to have to increase the average integration to 300+ minutes….

Michael
profbriannz 16.52
Great work, James. Gerrit's point should be tested, if possible. But I am struggling a little with the concept.


Is the measure of noise 1 sigma? If so, for a noise of mag 22, this suggests that an SNR of 40 would only be obtained at 22 - 2.5 log(40) ≈ mag 18, which seems rather bright! Clearly I have misunderstood - so happy to be corrected.

CS Brian
james.tickner 1.20
Thanks for the feedback guys. The whole SNR thing is complicated and there seem to be lots of different definitions which never helps! To try to answer some of the questions raised:

@Astrogerdt I'm not quite sure I follow the argument you're making. The expectation value (mean) of the first kernel - the one I'm using to determine the pixel-to-pixel noise level - should be zero for empty regions of sky away from any stars. This is by construction, as the sum of the kernel values is zero. The random variations about the expectation value should capture the noise fluctuations and hence vary like sqrt(sky brightness) if we assume we can ignore read noise. The output of the second kernel should be proportional to sky brightness (assuming there is no pedestal offset). I'm not using this value directly in my evaluation - its role is just to identify 'dark' regions of sky to ensure that I'm calculating the noise level away from any stars. Also, I'm only applying my approach to the final stacked image, not to select and weight individual subs.

@Michael Ring By construction, I think my approach is insensitive to gradients. I'm not really calculating a signal-to-noise ratio because there's no 'signal' as such. Rather, I'm trying to calculate the noise level expressed in units of magnitude. For each pixel that's in a 'dark' part of the sky (away from stars) I calculate the fluctuation of that pixel compared to its immediate neighbours. If there were a gradient, it would lift the values of all pixels in the little 3x3 neighbourhood that I consider by the same amount, so the difference between the centre pixel and its neighbours wouldn't change. Of course, a strong background light level would also increase the noise level in all of the pixels and so the size of the random fluctuations would increase, but this is precisely what I'm trying to measure.
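
A quick numerical check of this point (a toy example only; the image size, gradient amplitude and noise level are arbitrary). Because the difference kernel sums to zero, a smooth linear gradient leaves both the mean and the spread of its output essentially unchanged:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

k_diff = np.full((3, 3), -1.0 / 8.0)   # the zero-sum difference kernel from above
k_diff[1, 1] = 1.0

noise = rng.normal(0.0, 2e-4, size=(500, 500))
gradient = 1e-2 * np.arange(500) / 500.0   # broadcasts as a smooth left-to-right ramp

for label, img in [("flat sky", 0.5 + noise), ("sky + gradient", 0.5 + gradient + noise)]:
    d = convolve(img, k_diff, mode="reflect")
    print(f"{label}: mean {d.mean():.2e}, std {d.std():.2e}")
# Both cases give a mean near zero and essentially the same standard deviation:
# a smooth gradient does not bias the pixel-to-pixel noise estimate.
```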

@Brian Boyle  Yes, the estimated noise level is one sigma. So a SNR of 40 would be achieved for a mag 18 object (approx) assuming that it occupies just one 10x10" pixel. For a faint star occupying 5x5 pixels (which seems fairly typical in our images), the SNR for a mag 18 object would be about 8. Zooming in and performing some extreme brightness scaling on some of our images, the faintest stars I can convincingly see are around mag 18-19, so this seems consistent.

Here's a plot to illustrate things. The curves show histograms of my 1st kernel values (i.e. the difference of each pixel from the average of its neighbours) for 'dark' regions of Brian's Field 009 image. The red, green and blue curves are results for R, G, B pixels respectively. The image intensity is scaled such that unit flux corresponds to mag 13. The widths (1-SD) of the curves are roughly 2E-4 (red a bit better, blue a bit worse). A flux of 2E-4 corresponds to about 9 magnitudes, so the noise levels (the SDs expressed in magnitudes) are approximately 13 + 9 = 22.

image.png

At the end of the day, I think this noise tool might be most useful for providing a relative indication of noise for images obtained from different sites and using different equipment. If we determine a certain performance is 'acceptable' by some subjective criterion then it would help us quickly figure out whether submitted images meet this requirement or not. Definitely more thought needed though!
profbriannz 16.52
@James Tickner  great work again!

A few observations

1) Our original metric was to reach SNR=30 for V=22.5 mag/arcsec**2 in a 10 arcsec pixel. I am not sure how to calculate this from these data.
2) I am still trying to wrap my head around what is being measured in the histogram - it is a linear histogram of differences, correct? If so, why does the peak not correspond to zero? I measure the FWHM to be around 8e-4 on the x-axis, so I am not quite sure how you get 2e-4 (what is the SD in 1-SD? Standard deviation?) [I have forgotten a lot of statistics and I have no doubt you have done the correct thing!]
3) A 5 x 5 pixel footprint [at 10 arcsec/px] seems very big for a star. Normally star detection would be done with a small aperture about 2-3 times the seeing disk - which would contain the bulk [>95%] of the stellar flux. For this survey that would correspond to 1 px.

The challenge here is the horrible magnitude system - so could I just suggest a quick check:

1) Using the 2nd kernel to select, what is the value in counts/pixel of the "sky" region in each image? [Not the difference as above]
2) This can be converted to mag/pixel using the scaled counts derived for the mag 13 star in a suitable aperture, correcting for the aperture size. [I would use 2 pix max, since very little flux from the star should fall beyond a 20 arcsec radius.]
3) This gives us an estimate of the sky brightness in each image. This will vary with zenith distance (a little), location (a lot) and with moon (a little, if we stick to <25% moon).
4) Staying in the counts domain, what is the 1-sigma variation about the peak of the sky counts? I believe this is your 2e-4. But what is your scaled sky absolute value? The ratio of these numbers should give the SNR per pixel. And since we know the sky in mag/px [from 2 above], we can derive a surface brightness detection limit for a given SNR - and a stellar detection limit for a given SNR and aperture.
5) Hopefully this check [close to the one I used as a postdoc many many years ago - and my memory is fallible] comes up with the same answers as your method above. (A rough sketch of steps 1-4 follows below.)
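
A rough sketch of steps 1-4 in Python, using the conventions already established in the thread (images scaled so a mag 13 star has unit integrated flux, 10x10" pixels). The function and argument names are illustrative only, not part of James's pipeline:

```python
import numpy as np

def sky_check(sky_counts, sigma_counts, counts_mag13=1.0, pixel_arcsec=10.0):
    """Sky brightness and per-pixel SNR from the raw sky statistics.

    sky_counts   -- modal 'sky' level per pixel (selected via the 2nd kernel)
    sigma_counts -- 1-sigma scatter of the sky about that mode
    counts_mag13 -- integrated counts of a mag 13 star (1.0 after James's scaling)
    """
    # Steps 1-3: sky level in mag per pixel, then per square arcsecond.
    sky_mag_per_px = 13.0 - 2.5 * np.log10(sky_counts / counts_mag13)
    sky_mag_per_arcsec2 = sky_mag_per_px + 2.5 * np.log10(pixel_arcsec ** 2)

    # Step 4: the ratio of the sky level to its scatter gives the SNR per pixel.
    snr_per_px = sky_counts / sigma_counts
    return sky_mag_per_px, sky_mag_per_arcsec2, snr_per_px
```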

CS Brian
james.tickner 1.20
Some quick thoughts (it's been a long day already!):
  1. Here's my go at translating my noise metric to yours. V = 22.5 mag/(1")^2 = 17.5 mag/(10")^2 (reasoning: a 10 x 10" pixel = 100 arcsec^2 of area; 100x the brightness of a 1 arcsec^2 pixel = 5 mag difference). An SNR of 30 = 3.7 mag difference, so we want a noise level in 10 x 10" pixels of 21.2 mag (see the short sketch after this list). So with our current range of noise levels of 21-22 mag per pixel (approx) we're pretty much on the money.
  2. The histogram is generated by measuring the difference of each pixel value from the average of its 8 immediate neighbours and then plotting the frequency with which different values occur. I also wondered about the (small) offset from zero. My suspicion is that the 'cut' I apply to restrict the calculation to 'dark' parts of the sky introduces a small bias, but I haven't explored this carefully. When I tested the algorithm on a synthetic image containing just noise and no stars I didn't see an offset, so I think the algorithm is working OK. A FWHM of 8e-4 should first be corrected by the factor of sqrt(1+1/8) = 1.06 to account for noise from the 8 neighbouring pixels - this reduces it to about 7.5e-4. To convert to a standard deviation (for a Gaussian peak), divide by sqrt(8*ln(2)) = 2.355, which brings it down to about 3E-4. My 2E-4 value was a fairly quick-and-dirty estimate (I'll get some proper fitting going over the weekend), but it is in the same ballpark as your value.
  3. Generally the reported half-flux diameter values for our images are in the range 3.5-4.5 pixels (at least that's what I'm seeing from DSS and ASTAP), and zooming right in shows that fainter stars generally occupy an area of ~5x5 pixels. I also wondered why this is so much bigger than would be expected from seeing and diffraction. I don't have a great answer - maybe scattering in the Bayer filter?
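
For reference, the arithmetic in points 1 and 2 as a few lines of Python (the numbers are those quoted above; nothing beyond the stated conversions is assumed):

```python
import math

# Point 1: 22.5 mag/arcsec^2 expressed per 10x10" pixel, and the noise needed for SNR = 30.
sky_per_pixel = 22.5 - 2.5 * math.log10(100)          # 100 arcsec^2 per pixel -> 17.5 mag
noise_target = sky_per_pixel + 2.5 * math.log10(30)   # SNR 30 -> ~3.7 mag fainter -> ~21.2 mag

# Point 2: FWHM of the difference histogram -> 1-sigma single-pixel noise.
fwhm = 8e-4
sigma = fwhm / math.sqrt(1 + 1 / 8) / math.sqrt(8 * math.log(2))   # ~3.2e-4

print(round(sky_per_pixel, 1), round(noise_target, 1), f"{sigma:.1e}")
```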

On your second list of suggestions:

Points 1-3  Good idea! I'll take a look over the weekend.

Points 4-5 - Agree on calculating the SNR as you outline, but see also my response on point 1 above. I think we are probably in the right ballpark.

However, I'd argue that introducing an arbitrary signal level (aka the sky background level) and then using this to calculate an SNR value is unnecessary, as the noise value alone tells us everything that we need to know.

Imagine a dark area of sky with no stars or other features. The pixel values for this piece of sky will have some mean value (corresponding to the sky brightness) with superimposed random noise. The mean level isn't particularly interesting in itself as it can be subtracted off if needed. The noise is approximately normally distributed (well, probably Poisson distributed + readout noise, but normal is hopefully close enough) with a standard deviation given by the values I calculated in my previous answers - say mag 22 for your images. So, a real mag 22 star whose light appeared in one pixel would sit above the noise level by just 1 standard deviation (SNR = 1). A mag 21 star would be a factor of 2.5 above the noise level (SNR = 2.5); a mag 20 star would have SNR = 6.25 and so on. But for our imaging systems, real stars actually occupy ~5x5 pixels. The combined noise in a 5x5 region is larger than the noise in a single pixel by a factor of sqrt(25) = 5, i.e. about 1.75 mag. So the noise level in an area corresponding to a star image is about mag 20.25. The faintest stars I can confidently see in our images are around mag 18.5, which corresponds to an SNR of about 5, which seems reasonable.
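
The same reasoning as a short sketch, using the per-pixel noise of mag 22 and the 5x5-pixel star footprint quoted above (the function itself is purely illustrative):

```python
import math

def star_snr(star_mag, noise_mag_per_px=22.0, footprint_px=25):
    """SNR of a star of the given magnitude spread over footprint_px pixels."""
    # Combined noise over N pixels is sqrt(N) times the single-pixel noise,
    # i.e. the noise level 'brightens' by 2.5*log10(sqrt(N)) (~1.75 mag for N = 25).
    noise_mag_aperture = noise_mag_per_px - 2.5 * math.log10(math.sqrt(footprint_px))
    # SNR is the flux ratio between the star and that aperture noise level.
    return 10 ** (0.4 * (noise_mag_aperture - star_mag))

print(star_snr(18.0))   # ~8, as estimated earlier in the thread
print(star_snr(18.5))   # ~5, the faintest stars convincingly visible
```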
Astrogerdt 0.00
@James Tickner I re-read and re-thought your initial suggestion, and you are indeed correct that my suspicion is incorrect. My problem was a slight misunderstanding in the use of the second kernel. 

CS Gerrit
profbriannz 16.52
hi James,

All good here. I agree with your points and thanks for the explanation. My only reason for wanting to see the actual sky value is as a metric for the site/conditions under which each field was taken.

CS Brian
james.tickner 1.20
Here are some first-cut auto-generated QC reports.

qcreport_F005.pdf - collected by me, 135 mm Samyang F/2, uncooled and unmodified DSLR
qcreport_F009.pdf - collected by Brian, 200 mm lens, modded DSLR
qcreport_F066.pdf - collected by me, 135 mm Samyang F/2, cooled OSC astro camera
qcreport_F618.pdf - collected by Michael, 135 mm (?) lens, camera (?)
qcreport_F1051.pdf - collected by Dan, 135 mm (?) lens, camera (?)

From top-to-bottom, the reports show:
  • Results of plate solve, including field alignment, rotation, pixel size and number of stars used in solve. The green rectangle indicates the 'ideal' 10 x 6 deg field and the red rectangle the solved field
  • Star quality and colour. For this, I divide the original image into a grid of 3 x 3 tiles and calculate the mean HFD for each tile. The HFD in pixels and arc-seconds is reported for the centre tile. The graphic illustrates the relative HFD values in the tiles normalised to the centre value. Ideally the values are all equal, resulting in the green square. If the lens shows tilt or optical distortions then the plotted values form a non-square polygon (this idea is shamelessly lifted from ASTAP's presentation). Tilt is the ratio of the largest to smallest corner HFD values, and off-axis aberration is the mean of the 4 corners divided by the centre value. The unit flux magnitude is the stellar magnitude whose integrated flux is unity, calculated separately for R, G and B values. The colour correction factors are the values by which the R, G and B images are multiplied to ensure that the integrated flux is unity for a nominal stellar magnitude of 13.
  • Sky brightness and image noise. Brightness is calculated (separately for R, G and B) from the mode histogram value for regions of the image that are 'far' from stars. The histograms are shown on the right. The 10x10" pixel value is already in magnitude units after the colour correction step (unit value = mag 13) and is converted to a brightness per arcsec^2 by adding 5 magnitude units (i.e. a factor of 100 going from 10x10" to 1x1"). Obviously this assumes that the dark correction is accurate. The 1-SD noise value is calculated as discussed above: histograms of differences between pixels and their neighbours (for regions 'far' from stars) are plotted on the right, and the 1-SD value is calculated from the FWHM of this histogram divided by 2.355 (the correct factor for a Gaussian peak). Finally, Brian's preferred SNR for V=22.5 mag/arcsec^2 is calculated straightforwardly from the 1-SD noise value (a sketch of these calculations follows this list).
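
A compact sketch of how these report quantities could be computed, assuming the 3x3 HFD grid, the sky mode and the difference-histogram FWHM have already been measured; variable names are illustrative and this is not the actual report-generation code:

```python
import math

def qc_metrics(hfd_grid, sky_mode_flux, diff_fwhm):
    """hfd_grid: 3x3 nested list of mean HFD values; sky_mode_flux and diff_fwhm are
    per-channel values on the mag-13-equals-unit-flux scale used throughout."""
    corners = [hfd_grid[0][0], hfd_grid[0][2], hfd_grid[2][0], hfd_grid[2][2]]
    centre = hfd_grid[1][1]

    tilt = max(corners) / min(corners)            # largest corner HFD / smallest corner HFD
    off_axis = (sum(corners) / 4.0) / centre      # mean of the 4 corners / centre HFD

    # Sky brightness: the pixel value is already on the mag-13 flux scale, so convert
    # to magnitudes and add 5 mag to go from a 10x10" pixel to 1 arcsec^2.
    sky_mag_px = 13.0 - 2.5 * math.log10(sky_mode_flux)
    sky_mag_arcsec2 = sky_mag_px + 5.0

    # 1-sigma noise from the FWHM of the pixel-difference histogram (Gaussian assumption).
    sigma_flux = diff_fwhm / 2.355
    noise_mag = 13.0 - 2.5 * math.log10(sigma_flux)

    # SNR against a 22.5 mag/arcsec^2 reference background (= 17.5 mag per 10" pixel).
    snr_ref = 10 ** (0.4 * (noise_mag - 17.5))
    return tilt, off_axis, sky_mag_arcsec2, noise_mag, snr_ref
```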

I think there's a fair bit of uncertainty in the sky brightness and noise values. First, converting from sky brightness measured using RGB filters to an equivalent SQM sky brightness is quite uncertain (there's a good paper on the topic: 'Sky Quality Meter measurements in a colour-changing world', MNRAS, available via Oxford Academic (oup.com)). Second, the sky brightness calculation assumes that the dark frame correction is very accurate. For example, I'm getting different values for my two cameras, and both are pretty bad (19-20 mag/arcsec^2); I'm inclined to blame my darks.

Similarly, the determination of the 1-SD noise level and hence the SNR value is complicated by the fact that the noise distribution is far from Gaussian, meaning that a somewhat arbitrary determination of the SD has to be made.

Overall, the values might be useful for comparisons between different setups (and sky conditions), but I'd be careful about reading too much into the absolute values.

Appreciate thoughts and feedback!
MichaelRing 3.94
Interesting stuff... ASTAP agrees with your tilt numbers and HFR seems also in the same ballpark for my picture.

What surprised me were the differences in your numbers: exposure time is roughly in the same ballpark but the SNR is very, very different. Were conditions very bad for your cooled cam pic, or does it have to do with what's actually imaged in that region? (I guess not.)

What would be nice to have is a batch run to see the differences between the results of one camera/lens combination.... it does not need to include pictures, just the raw numbers in CSV or any other text format that can be parsed.

Once again, impressive work!!!!!

Michael
profbriannz 16.52
Hi James, 

Very impressive work indeed. For me, the results confirm my best/worst expectations, i.e. I have a dark sky, but my camera suffers from significant tilt and some aberration. [One minor point: my camera is an ASI6200MM.]

As Michael has said, it would be good to get data for all fields. For me, it would help in tracking down the origin of the tilt.... does it maintain the same orientation with respect to the sensor [fixed tilt of the sensor within the camera] or does it vary with position on the sky [variable tilt between camera and lens caused by gravity]?
I was pretty sure I had fixed the former [using a table top rig], but worry about the latter. 

Now that we have those QC numbers, what is deemed an acceptable image? I guess that will depend on the extent to which a uniform mosaic can be made.


CS Brian
MichaelRing 3.94
James, would you mind running your scripts over the following two files:

https://www.mycloud.ch/s/S00C073A960C8040C1724367495A5F712508FACBA45

The files are from my new double-eyed scope; as the cameras differ in resolution I'd love to see what the stats for the files look like. The quality of the subs is not good (recorded tonight at 98% moon and only 11 frames per integration), nothing to use, just a test to see if something obviously wrong pops up.

Thank you,

Michael
james.tickner 1.20
Michael Ring:
James, would you mind running your scripts over the following two files:

https://www.mycloud.ch/s/S00C073A960C8040C1724367495A5F712508FACBA45

The files are from my new double-eyed scope; as the cameras differ in resolution I'd love to see what the stats for the files look like. The quality of the subs is not good (recorded tonight at 98% moon and only 11 frames per integration), nothing to use, just a test to see if something obviously wrong pops up.

Thank you,

Michael

Will do as soon as I get a chance
james.tickner 1.20

Here you go. I've renamed the files to their corresponding fields, as my code uses the filename to get the field number and hence the expected (RA, DEC) position. F527 corresponds to the 4938 x 3282 image and F575 to the 6217 x 4167 image. From the image appearance I assume there's no flat field correction, so the background and noise estimates will (probably) be garbage.

Star HFD in the image centre is very similar in both cases (~20"). Both cameras show left-to-right tilt, more pronounced for F575. 

qcreport_F527.pdf
qcreport_F575.pdf
james.tickner 1.20
@Brian Boyle @Michael Ring

I agree that generating reports for all fields (and providing the numbers in CSV format as well) would be helpful. Unfortunately, when I performed the original plate solving I didn't keep the star location (*.axy) files, which I now use to generate the HFD results.

When I get a chance I'll rerun all the plate solves to generate these files and then run the report generation. 

It seems that all three of us have varying degrees of tilt, either left-to-right (ie along the long axis of the camera) or more prominently in one corner. As you say, it will be good to see if this is constant in orientation or not. 

...

Just raced out to retrieve my rig! The sky was clear and a sudden hail-and-thunder storm has blown out of nowhere. Brushing hailstones out of the lens caps 
james.tickner 1.20
@Brian Boyle @Michael Ring

With apologies for the lengthy delay, I've now processed all available images (305 fields, but including some duplicates) through to Stage 1 and uploaded to my Google Drive (link here https://drive.google.com/drive/folders/1C1hDwh87flEKv8X7y_hc3zHlQfpEjvXI?usp=sharing). If you navigate to the Stage 1 folder, you'll find 3 subfolders:
  • images
  • metadata
  • reports

The 'images' folder contains (duh!) reprocessed images in 3 subfolders:
  • equatorial - 10x10" pixels, cylindrical Gall projection, centred on RA = 0h with equal RA/DEC scaling at DEC = +/-30 degrees. Generated for fields with |DEC| < 50 deg.
  • polar - 10x10" pixels, zenith equidistant projection, centred on DEC = +90 (N pole) or -90 (S pole) with RA = 0h pointing vertically upwards. Generated for fields with |DEC| > 35 deg.
  • thumbnail - 100x100" pixels, whole sky, equal-area Mollweide projection centred on RA = 0h. Generated for all fields.

All images are cropped to the minimum-area rectangle that is just large enough to encompass the original image frame. They are stored as 32-bit floating point compressed TIF files. The plate solving uses a 3rd order polynomial 'tweak' function to deal with non-ideal optical behaviour of the imaging lens and, with sufficient stars used in the solve, ensures that the rebinned image is usually accurate at a sub-pixel level. The images have been colour corrected as described previously, and are normalised separately in the R, G and B channels such that a 13th mag star has a total area-integrated flux of unity. This means that the images are not clipped to a 0-1 intensity range: indeed, the sky background in the thumbnail images is itself usually > 1.

So far I haven't performed any gradient subtraction or equalisation between images - that's the next step!

The 'metadata' folder contains image metadata in JSON format. Whilst designed for machine readability, it's an ASCII format and pretty easy to interpret by hand. You can get a browser plugin that provides nice visual formatting. Probably of most interest are the polar -> offset and equatorial -> offset values, which are the x and y pixel coordinate offsets needed to stitch fields together. For example, if you grab each south polar field (DEC < -35 S) you can create a single polar master image by offsetting each polar field image by these offset values. The image centre (the south pole in this case) is assumed to be at location (0,0). Other values include the star quality (half-flux diameter, HFD, in a 3 x 3 grid), plate solve information, image scale, and RGB colour factors. Happy to provide more detail if required. What I've accidentally omitted is the sky background and noise information (included in the QC reports described below). If you'd like a summary of some of this information in a more convenient format, e.g. CSV, let me know.
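
As an illustration of how the offsets might be used, here is a sketch that pastes polar tiles onto a common canvas. The `polar -> offset` key comes from the description above; the filename patterns, the exact JSON layout, the canvas size and the use of tifffile for the 32-bit TIFs are assumptions:

```python
import json
import numpy as np
import tifffile   # handles the 32-bit floating point TIFs

def stitch_polar(field_ids, image_dir, meta_dir, canvas_px=8000):
    """Paste polar-projection tiles onto one canvas using the metadata pixel offsets.
    The projection centre (the pole, at offset (0, 0)) is placed at the canvas centre."""
    canvas = np.zeros((canvas_px, canvas_px, 3), dtype=np.float32)
    cy = cx = canvas_px // 2

    for fid in field_ids:
        with open(f"{meta_dir}/F{fid}.json") as fh:          # assumed filename pattern
            meta = json.load(fh)
        dx, dy = meta["polar"]["offset"]                     # assumed layout of the offset entry
        tile = tifffile.imread(f"{image_dir}/F{fid}.tif")    # assumed filename pattern

        h, w = tile.shape[:2]
        y0, x0 = cy + int(round(dy)), cx + int(round(dx))
        # Simple overwrite of the overlap region; real mosaicking would blend/feather
        # and apply the gradient/normalisation step that hasn't been done yet.
        canvas[y0:y0 + h, x0:x0 + w] = tile
    return canvas
```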

Last but not least is the reports folder which contains the QC reports for each field. These have been described previously, but key information includes the plate solution, star quality and color normalisation, and sky brightness and noise levels. Brightness is estimated from intensity of background pixels; noise is estimated from the standard deviation of background pixels compared to their neighbours. Here 'background' refers to pixels that are estimated to be well away from any star.

Most images achieve an SNR in the range 20-50 using Brian's preferred metric of comparison to a background of 22.5 mag/arcsec^2 in a 10" pixel. 

Once you've had a chance to digest, please let me know any comments, suggestions for improvements or bugs you might find. Some things that it would be good to get an independent check on include:
  • Do my colour corrections check out (I think PI should be able to answer this)
  • Do my background sky intensity and noise calculations make sense
  • Do neighbouring fields overlap properly (ie stars line up OK)

Next up I'll start to prepare some mosaiced fields (probably starting with the south pole) to test the gradient extraction and normalisation. I'll also build the 25% sky map image (actually 26% now!) for general sharing.
MichaelRing 3.94
Wow, a lot to digest; many thanks for your work. I will start by looking at the metadata/reports and playing with them. The 10x10 tiles will be very helpful; I am wondering what will happen when I put them into NINA as a new survey....

Michael
profbriannz 16.52
Thanks James.  Lots to digest there. Great job.  Will review, and I think I still have work to do with my sensor tilt.

I gave my 40mm Sigma lens a try out a couple of nights ago, using a star tracker and focussing by hand. Managed 2 x 30 min fields (30 x 1 min subs) before the moon rose. Used it wide open. It is fast, but unusable at f/1.4. Although I still see the same sensor tilt with the 2400MC with this setup as with the 200mm lens [confirming I still have work to do here], the image quality rapidly drops off from 1.5 pix FWHM in the centre to 3.5 pix [horrible comet-shaped objects] at the edges. Even half way out it looks bad. I suspect that even an APS-C sensor at f/1.4 will show the same issue.

@Michael Ring  is this consistent with your experience?  I could improve a little on focus with an EAF attached to the barrel. Or perhaps revert to my Canon body, where sensor tilt will be less of an issue.  [Michael, what brackets did you use?]  

I attach below a 30 x 1 min image of the region from the Pleiades to Orion, taken with the moon down and no cloud. The northernmost part of the field was only 13 degrees above the horizon, and you can see part of the obscuring mountains in the lower left of the image. [GraXpert, SPCC and a default STF stretch applied in post]

test1.jpg
MichaelRing 3.94
Wow, sorry to say, your stars are really, really messed up.... But as you are using an astro cam and all the comets fly away from the center of the lens, there is hope that with some back-focus adjustments things may get better...

I was also shocked at how bad my stars were yesterday (please see the mosaic thread) compared to my first results done from home (see the 'show off your hardware' thread), but they are much better than what you have. Hope we get this under control; I would have expected close-to-perfect results from this lens because of the hype about it.... My hope is that temperature plays a role in those issues: yesterday it was -11° Celsius, while at home temperatures were more in the range of 5° to 10°.

ASTAP tells me that my camera has quite some tilt; hopefully things get better tomorrow night when I switch to an astro cam, as then the weight of the lens will not play an important role anymore.

I designed and 3D-printed brackets for the AF; do you have access to a 3D printer? Then I can send you the STLs. I will now also design rings to attach the camera to a Vixen dovetail; hope I get them printed before I leave for another test tomorrow night.

Michael
profbriannz 16.52
Hi @Michael Ring, yes they are. And I plan to return to my [unmodded] Canon body when I next get clear weather. I am using the ZWO adapter for Canon lenses, which has worked well with others. Perhaps it is just my focussing, but the stars dramatically deteriorate, so I suspect you are right about the back-focus adjustment.
james.tickner 1.20
Just a thought. If the main intention of the 40 mm setup is to give us wide-field images for estimating true sky background structures, are poor-quality stars an issue (other than for aesthetic reasons, of course)? I would anticipate dividing the images into a grid of squares and estimating the background level in each of these, ignoring pixels that are part of a star. Of course, if stars are so bad that their light washes out completely then this would be hard, but even fairly distorted stars shouldn't be too much of a problem. Could you each upload a couple of images to my Google Drive so I can have a play with my background estimation algorithm?

Thanks!
MichaelRing 3.94
What I have learned so far from Steffen is that the stars should not play a huge role. What was important:
The color range of the camera should be as close as possible to the target images. I sent data from my unmodified Z6ii and there were issues with color casts when trying to fix the gradients of the 135mm pictures done with the astro camera. Another issue was high clouds.

You can download my first results from here:

https://www.mycloud.ch/s/S0012A31EE7B15509D2A9051E1B72E54FE47FEC8168

The files with the pattern Field N* are the right ones.
Astrogerdt 0.00
Hello guys, 

I'm a few days late again, it seems..... Had a lot to do with studying, unfortunately.

I looked at the post and the processed data that @James Tickner uploaded. I suspect I am getting something wrong somewhere, but I tried to check the color calibration of your processed data, as you asked for in your original post describing the progress.

For that purpose, I downloaded two images, Field 1120 and 965, the first in polar and the second in equatorial form. When I opened the images in PixInsight and did an unlinked STF (basically boosting the brightness and contrast of every channel in the same way, thus leaving the color balance untouched), I got a strong color cast in Field 1120 and pretty normal color in 965, although there still seems to be a slight color cast. Is that to be expected, or is this an error in the color calibration process? The latter option seems unlikely, since your all-sky image shows good colors, but it still seems somewhat odd to me.

This is a screenshot showing the images opened in PixInsight, the top row with unlinked STF (no color balance change), the bottom row with linked STF (color balance):
Screenshot 2023-12-10 120816.jpg

CS Gerrit