Mosaic Making · Astrobin Community Survey · Brian Boyle

profbriannz 16.52
I have been doing more investigation of the mosaicing possible with PI's MosaicByCoordinates/TrimMosaic/PhotometricMosaic routines:

1) It works *reasonably* well, but it is very painful to use.
2) Its other major (fatal) limitation is its insistence on treating overlap areas as rectangular regions within the larger mosaic, which does not take into account the rotation of individual fields across large stripes in the RA direction or near the poles.
3) Perversely, it appears to do slightly better if the images are mosaiced in the Dec direction first and then in RA. But this also requires the Dec stripes to be trimmed back to rectangular areas first, and MosaicByCoordinates run again to realign the Dec stripes.
4) I have also experimented with an order=1 ABE applied to each field to see if it improves the stitching without destroying the galactic gradient.

Here are some examples of the output from a 2 (RA) x 3 (Dec) super-field mosaic covering 15 x 15 degs centred on 16h -45 [pretty much overhead from here when I have been able to collect data]. 

Overall, it doesn't look like ABE reduces the stellar gradient much, although it does improve the visibility of faint structures. However, this comes at the expense of (perhaps) more uneven stitching. Trimming the Dec stripes may give a slight improvement, but at this stage I am beginning to see things that are not there.

My assessment is that PI is just OK for mosaics up to 6 fields or 15 x 15 degrees, but not beyond, and it is very cumbersome to use. Although I am impressed by how it does on 1 x n or m x 1 mosaics, it really starts to fall down with n x m mosaics.



Field with no ABE and no trimming of Dec stripes

test_noabe.jpg

Fields with ABE but no trimming of Dec stripes

test_abs_notrim.jpeg

Fields with ABE and trimming of Dec stripes.

tesb_abe_trim.jpeg
Astrogerdt 0.00
So guys, I'm back again, at least for a while. I just had too much to do over the last few weeks to actively participate in this project, but I have tried to catch up with what happened in the meantime.

Brian's work looks quite impressive, I have to say; it is great to see the results of your experiments.

But I have to note one thing regarding the results: to me, they look like completely different images. They show large-scale structures in one image that are not visible at all in another. This adds a huge uncertainty to our data, at least in my opinion.

Is there anyone here that has dark skies (Bortle 1 or 2 ideally) with good transparency and an all-sky camera? I would volunteer to do some experiments with such data regarding large-scale gradient reduction. For that purpose, we would need the all-sky data fully calibrated and preprocessed to achieve a high SNR. Then I could try to normalize the gradients based on these images. It may take me some time, as I plan to go on vacation in a few days, but before and after the vacation I can try to make it work.

CS Gerrit
james.tickner 1.20
@Astrogerdt @Brian Boyle Likewise, apologies for being a bit quiet for a while, but I've been continuing to work on the automated image processing. I've been focussing on Todd's north-pole data set, as this provides some interesting challenges: irregularly arranged and oriented fields surrounding the north pole, significant image gradients, and very faint IFN to tease out.

I've now got most of the pieces of the image-processing train together. This includes:

- Plate solving including 3rd order polynomial aberrations using the Astrometry package
- Image reprojection onto a standard gnomonic projection with nominated pixel size and centre point
- RGB colour correction using a database of star colours
- Background stripping of individual images. This proceeds by dividing the image into 0.5 x 0.5 degree tiles, determining the background sky colour in each tile, fitting a smoothed spline through these background levels, iterating to remove any tiles whose background level sits significantly above the fitted spline (with the aim of not fitting the background through nebulosity or other DSOs), and finally subtracting the fitted background from the image (a rough sketch follows this list)
- Background matching for overlapping images, using an iterative approach to fit differences in backgrounds in overlap regions and then apply these as corrections
- Final stitching of images with user-controlled blending
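
For the background-stripping step, here's a minimal sketch of the tile-and-spline idea (illustrative only, not the actual pipeline code; it assumes a single-channel numpy image on the 10 arcsec grid and borrows scipy's smoothing spline for the surface):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_background(img, tile=180, n_iter=5, clip=2.0):
    """Estimate a smooth background surface: tile the image (tile=180
    px is 0.5 deg on a 10 arcsec grid), take per-tile medians, fit a
    smoothing spline through them, and iteratively reject tiles that
    sit well ABOVE the fit (likely nebulosity or other DSOs)."""
    ny, nx = img.shape
    xs, ys, meds = [], [], []
    for y in range(0, ny - tile + 1, tile):
        for x in range(0, nx - tile + 1, tile):
            xs.append(x + tile / 2)
            ys.append(y + tile / 2)
            meds.append(np.median(img[y:y + tile, x:x + tile]))
    xs, ys, meds = map(np.asarray, (xs, ys, meds))
    keep = np.ones(len(meds), dtype=bool)
    for _ in range(n_iter):
        spl = SmoothBivariateSpline(xs[keep], ys[keep], meds[keep])
        resid = meds - spl.ev(xs, ys)
        # Only reject upward outliers: bright structure must not
        # drag the background estimate up.
        keep &= resid < clip * np.std(resid[keep])
    spl = SmoothBivariateSpline(xs[keep], ys[keep], meds[keep])
    yy, xx = np.mgrid[0:ny, 0:nx]
    return spl.ev(xx.ravel(), yy.ravel()).reshape(ny, nx)

# Usage: background-subtracted field
# flat = img - fit_background(img)
```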

Total run time for a 10 field mosaic is about 120 secs, so 12 s per image. I'd expect this to scale approximately linearly with the number of images processed. About half of the time is for the first step, which only needs to be performed once.

The image below shows the current result on Todd's Polaris mosaic. Stars were removed with Starnet to allow heavy stretching of the IFN background, before being re-added at the end. Processing is very basic - the main aim is to show the matching across the image seams. The edges of the mosaic show some artefacts where there are no neighbouring fields to constrain the background matching process, but overall the process seems to hang together.

Touching on the points you both raise above, this approach will remove any real, smooth structures whose scale is of order a few degrees or greater (i.e. several of my 0.5 x 0.5 degree tiles). Essentially this is unavoidable without external information: it's impossible to distinguish algorithmically between a smooth gradient across a field that arises from airglow, moonlight, camera artefacts etc. and a real structure of similar scale.

But I think there is an option to construct a whole-sky image on the 0.5 x 0.5 degree tile scale and use this to restore any smoothly varying structure. For example, stitching together some whole-sky images taken with a very wide-angle lens as you suggest, Gerrit, or it may be possible to reconstruct the background from overlapping portions of images taken under different conditions.

polaris.jpg
Astrogerdt 0.00
I am honestly impressed by your work, especially considering the short time in which you did all that on your own. Huge respect, and thanks for that!

Some time ago, I looked into the possibility of renting exposure hours on an all-sky camera under truly dark skies to generate some background models for myself. Do any of you guys know of a site that offers such capabilities? I would be fully willing to pay for that on my own, at least for large parts of the sky in case it is somewhat expensive, because I need those data for my own purposes as well, and they could also be beneficial for this project.

If it is OK with you guys, I will ask some of the bigger remote imaging providers at known locations if they could provide such a service for our survey. I can imagine that a lot of people and other projects could benefit from such data. Of course, we would need locations on different continents to cover the whole sky effectively.

CS Gerrit
Astrogerdt 0.00
I did some tests regarding my proposed idea of multiscale gradient reduction using images from all-sky cameras.

For the test, I used a JPG all-sky image created with a Canon EOS 6D and an 8 mm fisheye lens. It was a single shot and downsized, so the preconditions for the test were poor. But I wasn't able to find any higher-quality data from an all-sky camera during my research, at least none published in any forums.

Still, I wanted to make a little proof of concept to show that the approach is indeed applicable to all-sky data.

Here is the original image with stars removed: 
image.png
Here is the image with first-order ABE applied, with stars removed after ABE:
Screenshot 2023-07-15 004822.jpg
And here is the image with multiscale gradient reduction applied and then stars removed: 
image.png

Each image has an individual, unlinked STF applied. For reference, the images were taken under Bortle 4 skies; Simeis 147 is in the center. It was captured using a modified Canon EOS 200d and Nikon D7500, both with 50mm lenses.

Even though I only had access to a heavily compressed, single-shot JPG image, the method still managed to outperform ABE by a huge margin, in my opinion. Yes, there is still some uncertainty, but with better data, integrated over a longer period of time, I expect the quality to be even higher.

So for me, this looks like a promising approach for the large-scale gradients, especially the exact representation of the glow along the Milky Way and large IFN clouds. We still need the gradient-reduction approach by @James Tickner for smaller-scale gradients and frame adaptation, but this COULD solve some more issues.

Currently, the biggest problem is finding high-quality all-sky camera data. I have asked for some uncompressed data in another forum for further testing. When those tests are done and show a positive result (it may take me some time, as I am going on vacation without a laptop), I will look into a good source for such data, which should be located in a desert somewhere.

CS Gerrit
profbriannz 16.52
Great work @James Tickner @Astrogerdt

I wish I had your coding skills. For the time being, I will continue to take data and place it in the Dropbox once processed in WBPP using our pipeline process.

If it helps with testing of mosaic making, the 30 x 20 deg region comprising the following fields will be available by the end of today.

Dec    Fields
-35    251 250 249 248
-40    213 212 210 209
-45    176 175 174 173
-50    143 142 141 140

Fields 248 and 249 were previously made by stitching together four panes from my Scorpius mosaic.

This region runs through and just off the galactic plane, including the Dragons of Ara, Prawn, and Dark Wolf nebulae. So it is a good area to test stellar gradients and the visibility of bright and dark nebulae.

CS Brian
james.tickner 1.20
@Astrogerdt Thinking some more about your suggestion, I think it should be possible to combine our approaches. At the moment I do the field-by-field gradient removal after the image has been plate solved and reprojected, so at this point there is a simple relationship between the (x,y) coordinates in the image and (RA, Dec). As noted above, I estimate the gradient by calculating the background in 0.5 x 0.5 deg tiles (with an attempt to remove tiles where nebulosity or other DSOs contribute). Essentially, any background over a nominal clip level (currently 1% of full scale in R, G, B) is smoothly fitted and removed. But there is no reason that we have to remove the background down to this nominal constant floor. If we know what the average RGB luminosity in a tile should be (from your whole-sky image), then we can just adjust the background down to this level instead. That would ensure that large-scale real structures are preserved.

I've tried to illustrate this in the figure below, which is intended to represent a line (say one row of pixels) through a section of a mosaic formed from three overlapping fields. The solid curved lines represent the background level measured in the as-collected field images, including different unwanted contributions from skyglow, ampglow etc. In my current algorithm (top figure) these backgrounds are pulled down to a constant flat line (shown dashed): the arrows indicate the amount of background removed.

But (bottom figure) if we know the 'true' slowly varying brightness across the sky (indicated as the curved dashed line) we can pull the backgrounds from the individual field images down to this level instead. Net result: large-scale structures are preserved from information in the low-resolution whole-of-sky image, with small scale structures and stars being added from the field images.

Hope this makes some kind of sense!

image.png
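
In code terms, the change boils down to swapping the subtraction target (a toy sketch only; `tile_bg` and `sky_ref` are hypothetical arrays of per-tile field background and whole-sky reference brightness on the same tile grid):

```python
import numpy as np

def background_to_remove(tile_bg, sky_ref=None, floor=0.01):
    """Per-tile amount to subtract from the field image.

    Without a reference (top panel): pull everything down to a
    constant floor. With a whole-sky reference (bottom panel): pull
    down to the reference level instead, so genuine large-scale
    structure survives the subtraction."""
    target = np.full_like(tile_bg, floor) if sky_ref is None else sky_ref
    return np.clip(tile_bg - target, 0.0, None)  # subtract, never add
```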
james.tickner 1.20
And on a different topic, here's a mosaic of the south pole constructed from fields 2-7 (field 1 turned out to be redundant in the end as it was covered completely by the others). The final processing is pretty basic - as with the processing of Todd's north-pole data, the intention is to illustrate registration and background matching. Better treatment of colour and noise is definitely needed. I've stretched hard to pull out the IFN around the pole.

The field alignments were fairly close, with the pole almost dead centre. There is complete coverage south of 83 degrees and almost complete coverage to 82.5 degrees, with just a corner nipped out on the far right-hand side of the image.

south_pole.jpg
profbriannz 16.52
Wow @James Tickner, the south polar field is stunning! Great work.

It seems you guys are doing a great job on the mosaicing.

If I get a chance, I will prioritise the -80 strip to extend the coverage here.

The weather here continues to make for slow progress. Here is a PI-mosaiced 6 (RA) x 4 (Dec) region directly overhead me during the early-middle of the night.

I have used an arcsinh stretch and bumped up the saturation in an attempt to pick out any seams. It's not perfect, and it took many attempts to get even this close [PI was fussy about the order in which panes are mosaiced]. Not really an option for the survey, but good to see the quality of the data.

survey24_bin2.jpg
james.tickner 1.20
As we gradually collect more fields, I'm starting to think about how we can scale up the mosaicing.

Stage 1 processing (in order):
  1. Grab raw images (stacked + dark + flat corrected)
  2. Perform plate solving and distortion measurements
  3. Reproject images onto a standard 'grid'. I'm thinking of using a Zenith Equidistant projection for the polar caps, normalised at Dec = +/-60 degrees, and a cylindrical Gall projection, normalised at Dec = +/-30 degrees, for the equatorial region. The normalisation choices keep scale distortions below 8% to Dec +/-40 degrees for the polar caps and to +/-50 degrees for the equatorial strip. In other words, we can cover the whole sky in 3 pieces with a generous 10 degree overlap and still keep scale distortions below 8% everywhere (a sketch of both mappings follows this list)
  4. Perform colour calibration as described previously
  5. Perform image QC (more below)
  6. Save the first stage intermediate images
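
To make step 3 concrete, here is an illustrative sketch of the two forward mappings (textbook formulas only; the function names, the pixels-per-radian scale parameter and the generalisation of Gall to an arbitrary standard parallel are assumptions, and the Dec normalisation choices would just rescale the outputs):

```python
import numpy as np

def zenith_equidistant(ra_deg, dec_deg, scale):
    """North polar cap: radius proportional to angular distance from
    the pole, so the radial scale is exact and only the parallels
    stretch slightly. `scale` is in pixels per radian."""
    c = np.radians(90.0 - dec_deg)      # angular distance from pole
    theta = np.radians(ra_deg)
    return scale * c * np.sin(theta), -scale * c * np.cos(theta)

def gall_cylindrical(ra_deg, dec_deg, scale, dec0_deg=30.0):
    """Equatorial strip: cylindrical stereographic with standard
    parallel dec0_deg, where the scale is true (classic Gall uses
    45 degrees; here it is a free parameter)."""
    lam, phi, phi0 = (np.radians(v) for v in (ra_deg, dec_deg, dec0_deg))
    return scale * lam * np.cos(phi0), scale * (1 + np.cos(phi0)) * np.tan(phi / 2)
```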


So at this point we have colour calibrated images on a well-understood regular grid ready for further work.

Stage 2 processing (again, in order):
  1. Perform localised background removal for each field image as described previously. This could include correcting backgrounds using large-scale information from a whole-sky image as noted above.
  2. Equalise backgrounds between neighbouring images as described above (a toy sketch follows this list). This could be performed for any field once all of its neighbouring fields are available.
  3. Blend field edges with information from neighbouring fields. Again, this requires all neighbouring fields to have been collected.
  4. Save the second stage intermediate images
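
For step 2, the simplest pairwise version of the equalisation could look like this toy sketch (assuming two reprojected fields as numpy arrays with NaN outside their footprints; the real process iterates over all overlapping pairs):

```python
import numpy as np

def equalise_pair(a, b):
    """Estimate the background offset between two reprojected fields
    from the pixels they share, and split the correction between
    them so neither field is privileged as the reference."""
    overlap = ~np.isnan(a) & ~np.isnan(b)
    d = np.median(a[overlap] - b[overlap])
    return a - d / 2, b + d / 2
```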

At this point we have 'final' images which can be trivially combined by tiling, as all background variation, colour variation and blending has been performed. So an arbitrary mosaic can be built by just collecting the required second stage field images and assembling them.

Creation of any first stage image for a field can be performed whenever data for that field have been submitted. Creation of the second stage image for a field can be performed when data for that field and all of its neighbours are available. So I think the process should scale smoothly as more fields are added.

Final processing (stretching, star removal etc etc) can then be performed either on individual or tiled sets of the stage 2 images.

Some thoughts on image QC (in no particular order for a change!), which is to be performed before saving first stage images:
  • Coverage (i.e. does the field cover the expected part of the sky with acceptable accuracy)
  • SNR (we need to agree on a definition - I'm thinking something related to the standard deviation of background sky noise, expressed in terms of magnitudes per square arcsecond; a sketch of the conversion follows this list)
  • Star HFR and shape (I'll need to build some code for this)
  • Magnitude of stars with unit integrated flux (as described previously) - basically, are stars bright enough but not overly saturated
  • Magnitude of image distortions
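
For the SNR point, the conversion I have in mind is roughly the following sketch (`zp` is a hypothetical photometric zero point taken from the colour calibration step):

```python
import numpy as np

def noise_surface_brightness(bg_sigma, pixel_scale_arcsec, zp):
    """Express the std-dev of the sky background (flux per pixel) as
    a surface brightness in mag per square arcsecond, by scaling the
    per-pixel noise to a one-square-arcsecond patch."""
    flux_per_arcsec2 = bg_sigma / pixel_scale_arcsec**2
    return zp - 2.5 * np.log10(flux_per_arcsec2)

# e.g. on the 10"/pixel grid with bg_sigma = 2 and zp = 20:
# noise_surface_brightness(2.0, 10.0, 20.0) -> about 24.2 mag/arcsec^2
```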

Appreciate feedback and suggestions for improvement.
profbriannz 16.52
Hi James,

A very comprehensive summary.  I agree with the approach.  

A couple of points to add:

1) At what stage do we re-sample to the uniform plate scale of 10 arcsec/pixel? Given that the QC is applied to first-stage images, I might suggest that resampling occurs before first-stage QC, i.e. that the definition of a raw image is (stacked + dark + flat corrected + resampled); otherwise we end up comparing star shape/SNR at different plate scales.

2) Size of the second stage. This sounds like an opportunity to also publish the survey in large super-fields (15 x 15 deg, or around 100 per hemisphere) as discussed before. This would have the advantage of bringing out some of the low surface brightness features in those second-stage fields, e.g. around the celestial poles.

Brian
james.tickner 1.20
@Brian Boyle I agree that QC should be done after resampling. My intention was that the 'Stage 1' and 'Stage 2' processing steps should be performed in the order listed, but rereading my post I realise that wasn't clear! I've edited it for clarity.

Broadly, my logic around QC is that the various preprocessing steps furnish the information required to determine image quality. So, for example, the plate solve and distortion correction determine the FOV of the image and hence allow us to determine whether it covers the desired field. After the reprojection onto the standard 10 x 10" grid, we can calculate the noise statistics in a consistent way. After the colour calibration step, we can determine the star magnitude that corresponds to full saturation and determine whether the exposure settings are appropriate.

Summing up: Stage 0 corresponds to image collection and basic dark/flat corrections, Stage 1 performs registration and colour correction, and Stage 2 performs background correction, gradient matching and blending. Images are saved after each stage to allow us to go back and fiddle with the approach. For example, I anticipate a lot of fine-tuning of the background removal and blending.
Astrogerdt 0.00
I finally came around to reading your posts. Sorry for the delay, guys.

Regarding @James Tickner's pipeline for stage 1, I see one problem. If I remember correctly, color calibration was planned to be done with SPCC. Is that correct? If so, it becomes problematic to do this after the rescaling, because rescaling heavily alters star representation, especially for smaller and dimmer stars. So I would opt to first calibrate the color and then rescale the images.

Apart from that, your pipeline looks very well thought-through and comprehensive to me. 

CS Gerrit
MichaelRing 3.94
Seems I missed most of the fun; for some reason, my subscription to the survey disappeared...

Great progress, James!!!

Michael
MichaelRing 3.94
@James Tickner, @Brian Boyle: On resampling: we should resample as late as possible in the process. If it is possible to do Stage 2 processing without touching the resolution of the input data, that would be great, because then the pre-processed data can be made available to everybody in the best possible quality. Perhaps it makes sense to calculate the noise stats and star magnitude on reduced data to be consistent; the question is whether the findings can then be applied to the full-size image... But I am likely overlooking something here...

Michael
james.tickner 1.20
Thanks for the input everyone.

My reasons for doing the rescaling (aka reprojection) fairly early are practical. 
  • Most importantly, it means that I 'know' exactly where stars should appear in the rescaled image. In other words, after rescaling I know exactly what the RA/Dec coordinates of each image pixel are. This is important because it then makes the colour correction and luminosity normalisation really easy to do - I just compare the R/G/B flux integrated in a circle centred on a known star location with the R/G/B luminosity of that star. Actually, the luminosity integration is done using an FFT convolution with a circular kernel, which is both computationally fast and reduces the entire function to about 15 lines of code (see the sketch after this list).
  • A similar argument applies to normalising backgrounds in the areas where two images overlap. Once both images have been reprojected onto the same projection grid, equivalent regions of sky fall in the same overlapping pixels. So comparing and eliminating differences in the overlap regions reduces to the much simpler problem of minimising differences between equivalent pixels in the corresponding images.
  • Third, reducing the image size early reduces file sizes and improves processing times. This is less critical, of course.
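
The FFT trick looks roughly like this (a sketch only, not the actual 15 lines; it assumes star positions are already known in pixel coordinates on the reprojected grid):

```python
import numpy as np
from scipy.signal import fftconvolve

def integrated_fluxes(img, xs, ys, radius=5.0):
    """Sum the flux in a circle of `radius` pixels around each star.
    Convolving with a circular top-hat kernel makes the value at a
    star's pixel equal to the flux summed over the surrounding
    circle, for all stars at once."""
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (xx**2 + yy**2 <= radius**2).astype(float)
    summed = fftconvolve(img, kernel, mode='same')
    return summed[np.round(ys).astype(int), np.round(xs).astype(int)]
```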

It's worth noting that the reprojection algorithm I've written is 'exact' in the sense that each input pixel (from the original raw image) is mapped as a polygon onto the grid of output pixels, and the flux in the original pixel is distributed to the output pixels in proportion to the area of overlap (a simplified sketch below). This preserves flux and also seems to do a good job of maintaining star shape. Of course there is a loss of resolution going to 10" pixels.
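
A heavily simplified, per-pixel sketch of that polygon mapping (using shapely for the clipping; the real code is vectorised, and all names here are illustrative):

```python
import numpy as np
from shapely.geometry import Polygon

def deposit_pixel(flux, corners, out):
    """Distribute one input pixel's flux onto the output grid in
    proportion to overlap area. `corners` holds the pixel's four
    corner positions already transformed into output pixel coords;
    `out` is the output image being accumulated."""
    poly = Polygon(corners)
    x0, y0, x1, y1 = poly.bounds
    for j in range(int(np.floor(y0)), int(np.ceil(y1))):
        for i in range(int(np.floor(x0)), int(np.ceil(x1))):
            cell = Polygon([(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)])
            frac = poly.intersection(cell).area / poly.area
            out[j, i] += flux * frac  # total deposited flux is conserved
```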

Overall, I think the approach is a practical compromise between maintaining image quality and algorithm complexity. An option might be to reproject onto a finer grid (say 5 x 5") for the intermediate steps, but the image files are already large (approx. 300 MB per field), so I think this would cause its own problems.

Happy to hear alternate opinions though!
Astrogerdt 0.00
OK, with that explanation, it makes perfect sense to me to do CC and all that stuff after rescaling, for the sake of simplicity.

CS Gerrit
profbriannz 16.52
Dear ABC Surveyors,

Since I can't contribute much to the software development on the mosaicing side, I have focussed on gathering data. It has not been very successful, as the weather here has been awful.

But I have managed to recover a few more fields from an old mosaic and to get a few sub-frames in between clouds (I am running at over 65% rejection even when I can open up) to finish off a small number of fields that didn't get enough data last lunation - again due to bad weather.

Here is my latest "quick and dirty" mosaic of around 60 fields.

Using MosaicByCoordinates and GradientMergeMosaic is not going to produce a good result - particularly at edges/corners - but hopefully the data is encouraging, and I am looking forward to @James Tickner's routines. All of the data in this mosaic is on the Dropbox site, with a couple of exceptions where I still don't have quite enough good data. This is about 10% of the southern sky, and with James' work around the pole we are making good progress after only a couple of months. Of course, it may turn out that, once the QC is done, the data isn't good enough - but at least we will have learned a great deal.

My next priorities are the -55 and -60 strips.  

Enjoy.

MergeMosaic2.jpg
james.tickner 1.20
@Brian Boyle Great to see the mosaic coming together - it gives a sense of the area that has been completed already. Actually, I think we're closing in on 20% of the southern sky, largely due to your sterling efforts!

Apologies for being a bit quiet for the last few days. Apart from being laid low with a cold, I finally received my 'proper' APS-C OSC cooled camera, and I've been working to get it dialled in.

I should be able to get back onto the software and mosaicing in the next week or so.
MichaelRing 3.94
Congrats on the new camera; I own the same one in mono and colour. I saw your other message on the issues with it - glad the software update helped...

One question on bad-looking stars: you mentioned that you plan to look into this. My data shows imperfect stars in one corner; you could use it for your tests. Perhaps it is already enough to compare the HFR of your sub-tiles that come from two images; that could help to clean up issues with stars like mine, where better stars exist in another frame...

Michael
Astrogerdt 0.00
Wow, if that is the quick and dirty mosaic, I am really looking forward to a fully processed one @Brian Boyle. Awesome image and great color!

I really wish I had the skies and equipment to contribute to this.... 

@James Tickner if you need images with varying star quality, I also have a lot of raw data with star problems ;) Either with an f=50mm lens or an f=450mm APO. The first could also be an interesting test for your plate-solving routines, since the FOV is very wide with my APS-C sensor. If you need any of the data, just ask; I am glad to contribute what I can offer.

CS Gerrit
Astrogerdt 0.00
James Tickner:
[quoting James's earlier post above on combining the tile-based gradient removal with a whole-sky reference image, together with the figure]

Just to make sure I actually understand what @James Tickner said here.
Your second approach uses the information from a large-scale image to decide what the baseline brightness of the sky (represented by the dotted line in your second figure) actually is, and then flattens the background down to this brightness. This ensures that no real information (large dark nebulae, for example) is removed, because actual observational data from the large-scale image are used, and it combines this with your approach of matching the frames to each other for better mosaicing.

Is that correct?

I am asking because I am currently doing some tests with @Michael Ring's data and some of my own images to test the local normalization capabilities in PixInsight using large-scale and small-scale images. If I understood you correctly, the process of large-scale gradient removal you described is basically the same as the process performed by LN, though of course without the capability to match tiles as you described it, which is very much necessary near the edges.

In case my tests turn out to be successful, and you consider it to be helpful, I would be happy to provide an example of the procedure.

CS Gerrit
profbriannz 16.52
Update on Aug 20....

75-field quick and dirty mosaic - there are some field gradients in there. I am not sure if they are due to my data [variable image quality?] or the processing.
Since James is doing a great job on the software, I will leave it to him to investigate further. A few of these fields (about 6 or 7) don't have their full exposure, but I am trying to rectify this as quickly as I can. I have also obtained a few more around 19h to 20h tonight...


CS Brian


MergeMosaic75.jpg
james.tickner 1.20
@Brian Boyle That's starting to look like a decent chunk of sky!

I've now downloaded all of your, my and Michael's images from Dropbox and am working through the first-stage processing. Once this is complete, I'll drop the images onto an all-sky map to give a sense of the overall progress of the project. Actually, I got a bit distracted building the sky map! It is constructed programmatically using downloaded data for constellation boundaries and bright stars. I've chosen a scale 1/10 of our final resolution (i.e. 100 x 100"), which keeps the image to 'only' 80 Mpix (about 53 Mpix of actual sky). I've used an equal-area Mollweide projection, so we'll get an accurate portrayal of the portion of the sky that has been imaged. With the programmatic approach I can also centre the map on an arbitrary RA value, so as we move through the year I'll update it to keep currently accessible fields in the centre. A sketch of the underlying projection is below.
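
The projection itself is just the textbook Mollweide forward mapping with an adjustable central RA; a per-point sketch (vectorising it is straightforward, and scipy's brentq handles the auxiliary-angle equation robustly near the poles):

```python
import numpy as np
from scipy.optimize import brentq

def mollweide(ra_deg, dec_deg, ra0_deg=0.0):
    """Equal-area Mollweide map coordinates, centred on ra0_deg.
    The auxiliary angle t solves 2t + sin(2t) = pi*sin(phi); the
    standard forward formulas are then applied."""
    lam = np.radians(((ra_deg - ra0_deg + 180.0) % 360.0) - 180.0)
    phi = np.radians(dec_deg)
    t = brentq(lambda v: 2 * v + np.sin(2 * v) - np.pi * np.sin(phi),
               -np.pi / 2, np.pi / 2)
    x = (2 * np.sqrt(2) / np.pi) * lam * np.cos(t)
    y = np.sqrt(2) * np.sin(t)
    return x, y
```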

starmap.jpg
profbriannz 16.52
Wow, that's amazing work, James. Really looking forward to seeing the results.

I suspect a fair bit of my data will not pass the QC test, as I pushed into quite marginal conditions (even though I threw a lot of subs out - and re-took a lot of data), but I thought it would be good to have a lot to test with.

 I have another 6 fields to add (either new or re-dos) from Saturday night, and will process them when I return from Sydney. 

CS Brian