---
The question is how BXT was applied: full BXT or correct-only? Since release 4 this is the important question.... I can't say much about the green shift, as my eyes are not well suited to this color, but what I do see is that chromatic aberration gets reduced a lot, so the number of stars that have blue fringing and therefore appear blueish is reduced. I guess it is always a win/lose situation that we have to deal with.... Michael
---
I used correct-only mode, which I think is the most appropriate for this purpose. As far as I know, SPCC in PixInsight does PSF fitting separately for each RGB channel, so color fringing of any form shouldn't have much impact on the measured color. CS Gerrit
---
James Tickner: I also use a rather high offset (200 for gain 100 and 400 for gain 200); this is the way to go with the ToupTek IMX571 cams. My guess is that the fact that masters in DSS are created at 16-bit depth probably caused the side effects I saw when stacking with your masters; PI saves and uses masters as 32-bit floats. I also saw the crazy behaviour that small changes in exposure time can cause significant changes in the output of flat frames. For this reason I bought a used A3-size LED light table for $20, which serves me well as a light source for creating flats. I guess an iPad or computer monitor and a few sheets of white paper would do the same good job as a reliable light source for flats. Please ping me when you have some corrected fields ready, I would love to see if the very bad looking flux graphs of SPCC with some of your fields are now a thing of the past.... Michael
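The bit-depth point above can be illustrated with a toy calculation. This is a minimal sketch with made-up numbers (nothing to do with DSS or PI internals): quantizing a flat master to 16-bit integer levels introduces a step size of about 1.5e-5, which swamps the tiny pixel-to-pixel variations that a 32-bit float master preserves.

```python
import numpy as np

# Toy comparison (illustrative only, not DSS/PI internals): a flat master
# quantized to 16-bit integer levels has a step size of 1/65535 ~ 1.5e-5,
# while 32-bit floats resolve ~1e-7 near a value of 0.5.
rng = np.random.default_rng(0)
flat = 0.5 + rng.normal(0.0, 2e-6, 10_000)      # tiny real pixel variations

as_16bit = np.round(flat * 65535) / 65535.0     # round-trip via 16-bit levels
as_32bit = flat.astype(np.float32).astype(np.float64)

err_16 = np.abs(as_16bit - flat).max()          # ~half a 16-bit step, ~7.6e-6
err_32 = np.abs(as_32bit - flat).max()          # float32 rounding only, ~3e-8

print(err_16 > 100 * err_32)                    # 16-bit quantization dominates
```

The actual variations lost in a real master depend on the camera and exposure, but the relative scale of the two error floors is the point here.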
---
Hi, Thanks @Michael Ring for prompting some good discussion here. The process of deconvolution is (mathematically) flux-preserving - otherwise much of professional research in astronomy wouldn't work. Flux is energy/cm^2/s/A - but the cm^2 here is the aperture of the telescope, not the area over which it falls on the detector. Now, it may be that BXT does something in addition that is not flux-preserving - perhaps I could ask Russell - unless @Astrogerdt already knows. Certainly my experience matches @Astrogerdt's in that BXT produces a noticeably greener image if run before SPCC rather than after (see below). Unsurprisingly, the rms residuals on SPCC are higher if run after BXT than before, and I suspect this [and indeed the visual appearance] has as much to do with the sky background as the stars. However, BXT - and in particular v2/AI4 - is such a game changer for our purpose that we would be mad not to use it. I have done a few tests on a 4-pane mosaic of my data using the sequence below, and with a sequence in which step 7 is run before step 3.
The slightly greener image with the RC-default process is uniform and can simply be taken out with SCNR. In the end, it comes down to what we wish to use this survey for; once that is decided, I can assess any differences. Comparison of BXT run after SPCC (top) and before SPCC (bottom): 4-pane mosaic using the method above with arcsinh stretch. Weak seams between panels are still seen using photometric mosaic [step 5 remains the weakest link in our chain]. CS Brian
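The flux-preservation argument above can be checked with a toy example: convolving with a PSF whose kernel sums to one leaves the total counts unchanged, which is why an ideal deconvolution (its inverse) must preserve flux as well. A minimal numpy sketch, with all numbers illustrative:

```python
import numpy as np

# Toy check of the flux-preservation argument: convolving with a PSF whose
# kernel sums to 1 leaves the total counts unchanged, so an ideal
# deconvolution (its inverse) must preserve flux too.
rng = np.random.default_rng(1)
signal = rng.random(256)                 # stand-in for a row of pixel fluxes

psf = np.exp(-np.linspace(-2.0, 2.0, 9) ** 2)
psf /= psf.sum()                         # normalized PSF: kernel sums to 1

blurred = np.convolve(signal, psf, mode='full')   # 'full' keeps every count
print(np.isclose(signal.sum(), blurred.sum()))    # True: total flux preserved
```

Whether BXT's neural-network sharpening holds to this exactly is the open question in the thread; the identity above only covers classical (de)convolution.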
---
As far as I know, Russell himself speaks of BXT as being flux-preserving, but I don't have any insight into that beyond the publicly available information on his website and some YouTube videos. However, the general consensus in the PixInsight forum is that it still affects SPCC, which supports our own observations. Regarding the processing pipeline, I see no problem in doing BXT directly after SPCC. This would let us do the accurate color calibration without any impact from BXT and then use its clear benefits for our project. Or are there any problems I am overlooking here? CS Gerrit
---
Astrogerdt: As far as I know, Russell himself speaks of BXT as being flux-preserving. But I don't have any insight on that, except the publicly available information on his website and some YouTube videos. Hi Gerrit, I agree with running BXT after gradient removal and photometric calibration. I suspect Russell's advice is based on the fact that step 5 (whatever it is) will change the colours anyway, and doing SPCC at the end will get the most accurate colours (at least in terms of zero point, if not rms). CS Brian
---
Another thing I noticed when looking at many of the images over the last few days: many star cores seem to be destroyed in our images. This is a crop from F226: And here is a crop from F117: Note how many of the smaller stars have holes in their centers. I know very similar artifacts from applying CosmeticCorrection in PixInsight with auto-detect hot pixels on widefield images. Sometimes the process detects star cores as hot pixels and tries to remove them. @James Tickner could this be an explanation for some of your problems? This problem is typically more severe in the image center, where stars are smaller, and less severe near the edges, where stars get larger and are thus not falsely detected as hot pixels during pre-processing. It may be a little late to notice this, but I guess it will affect a lot of star measurements. CS Gerrit
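The failure mode described above is easy to reproduce with the usual local-median style of hot-pixel test. This is a hypothetical sketch (the function and threshold are illustrative, not PixInsight's actual CosmeticCorrection algorithm): an undersampled star core, with nearly all its flux in one pixel, trips the same test as a genuine hot pixel, while a well-sampled core does not because its wings raise the local median.

```python
import numpy as np

# Hypothetical sigma-above-local-median detector (illustrative only, not
# PixInsight's actual CosmeticCorrection code).
def flags_as_hot(patch, k=5.0):
    """Flag the central pixel of a 3x3 patch if it exceeds the median of
    its 8 neighbours by k times the neighbours' standard deviation."""
    centre = patch[1, 1]
    neighbours = np.delete(patch.ravel(), 4)      # drop the centre pixel
    return centre > np.median(neighbours) + k * neighbours.std()

hot_pixel = np.array([[10, 11, 10],
                      [10, 500, 11],              # isolated spike
                      [11, 10, 10]], float)

tiny_star = np.array([[12, 15, 12],               # undersampled core: almost
                      [14, 300, 15],              # all flux in one pixel
                      [12, 14, 13]], float)

big_star = np.array([[80, 120, 80],               # well-sampled core: wings
                     [120, 190, 120],             # raise the local median,
                     [80, 120, 80]], float)       # so it passes the test

print(flags_as_hot(hot_pixel), flags_as_hot(tiny_star), flags_as_hot(big_star))
```

This matches the center-versus-edge pattern in the post: stars near the field edges are broader, so they look like `big_star` and survive.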
---
Hi Gerrit, I keep all my original sub-frames, so I could re-process all my data [including a final SPCC/BXT if we decide to go that way]. In general, I am quite reassured that the software solutions from GraXpert (gross gradient removal), BXT (star shapes), SPCC (photometric calibration) and @James Tickner (mosaic gradient correction) will rescue data that I thought we might have to re-take. CS Brian
---
Hi guys, Interesting discussion around the colour balancing before/after BXT. Reading the description of SPCC, it seems that the flux of each star is estimated by fitting a radially symmetric profile; in contrast, my approach is to simply integrate the total flux inside a circle of a certain radius and subtract off the average background flux estimated from an annular region just outside the circle. The SPCC approach should be more robust against distortions from nearby or almost-overlapping stars. My approach should be more robust against distortions of star shape, e.g. asymmetry due to tilt or back-focus issues. Perhaps this explains why SPCC produces different results before/after application of BXT: even if BXT is flux-preserving, the change in star shape or asymmetry could affect the profile fitting and hence the overall colour calibration. Of course, if BXT is not flux-preserving (i.e. it increases or decreases the total flux within a star, rather than just shifting flux between pixels) then all bets are off! It should be fairly easy to check. If someone could run BXT on a few of our raw images and upload them, I can try my colour correction algorithm on both the original and BXT'd images and see if there are any differences. I can also generate plots of measured star flux before/after the application of BXT and see directly whether they change. Regards, James
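The circle-plus-annulus measurement described above can be sketched in a few lines. This is a rough illustration only (function name, radii and the synthetic star are assumptions, not James's actual code): sum the pixels inside an aperture circle, estimate the per-pixel sky level from the median of a surrounding annulus, and subtract.

```python
import numpy as np

# Rough sketch of circle-plus-annulus aperture photometry (names and radii
# are illustrative, not James's actual implementation).
def aperture_flux(image, x0, y0, r_ap=4.0, r_in=6.0, r_out=9.0):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)                # distance from star centre
    aperture = r <= r_ap                          # pixels inside the circle
    annulus = (r >= r_in) & (r <= r_out)          # background ring outside it
    sky = np.median(image[annulus])               # robust per-pixel sky level
    return image[aperture].sum() - sky * aperture.sum()

# Synthetic check: flat sky of 100 counts/px plus a 5000-count Gaussian star
img = np.full((32, 32), 100.0)
yy, xx = np.indices(img.shape)
profile = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 2.0)
img += 5000.0 * profile / profile.sum()

print(aperture_flux(img, 16, 16))                 # close to 5000
```

The median in the annulus is what gives this approach its robustness to nearby faint stars; as the post notes, it trades that for sensitivity to flux falling outside the fixed circle when star shapes are distorted.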
---
Astrogerdt: Another thing I noticed when looking at many of the images over the last few days: many star cores seem to be destroyed in our images. Hi Gerrit, Can I check - are these images the original files (stage0 directory, if you downloaded from my Google Drive) or the rebinned files (stage1 directory)? What processing have you applied? I've had a close look at the images for the fields you show and I can't see the 'donut' stars that you're finding. Thanks, James
---
Hi James, The files were straight out of WBPP in PixInsight, according to their file names. I downloaded them from the Dropbox account provided by Brian, if I remember correctly. I just checked and downloaded F226_WBPP from the stage 0 folder on your Google Drive account. After zooming in to 400% in the center, I see the same artifacts as in the screenshot. For the BXT tests, I will download some of the RAW images and upload them so you can do the comparison. Will be interesting to see the results! CS Gerrit
---
I downloaded 9 random files from the stage 0 folder on @James Tickner's Google Drive account and ran BXT with correct-only on each image. Right now, I'm uploading them to a new subfolder in the inbox folder. Is that OK? CS Gerrit
---
There's one thing to be aware of: data from Brian especially needs special handling in SPCC when BlurX was applied before. The reason is that when stars are very well exposed, they get rejected by SPCC and only the remaining stars are taken into account for color balancing, and BlurX seems to have a tendency in this situation to create stars that have one or more pixels in pure white. Adam Block has recently published a video on this topic on his website; he recommends setting the "Saturation Threshold" slider of SPCC to 1.0 when stars are well exposed, so that more stars are taken into evaluation. When I ran James's data and part of Brian's data through SPCC and BlurX, I saw exactly this effect on some of the fields: on very well exposed fields only 200-300 stars were taken into account by SPCC, while on fields with a bit more headroom the numbers went up to 5000-7000 stars. The effect will likely not be that strong when we run SPCC before BlurX, but it is still something to keep an eye on.... Michael
---
Michael Ring: @Michael Ring I didn't apply BXT to any of my files. 95% of my data is straight out of WBPP [as per the agreed pipeline]. Some historic data has also had SPCC applied, but no BXT and no gradient removal either [GraXpert, ABE or DBE]. @James Tickner I just completed another 4 fields last night. 2.5 fields are now complete [at least as a first pass] at -60 and below. Brian
---
@Brian Boyle, I guess we misunderstood each other; what I wanted to say is that when we apply SPCC we need to be careful, especially when we apply it first as recommended (but we already found reasons not to do BXT before SPCC). Do you have a subscription to Adam Block's website? This video explains it all: https://www.adamblockstudios.com/articles/ngc-1491-bxt-correct-only-dbe-and-spcc @James Tickner I also found this comment from Adam in the description of the video above: "This can actually be beneficial to the SPCC process since BXT now conserves flux."
---
Michael Ring: @Michael Ring @Astrogerdt @James Tickner To clarify my current understanding:
1) Although deconvolution (and BXT) preserves flux, something in BXT clearly affects the visual appearance (at least) of stars if it is applied before SPCC.
2) We are working towards an enhancement/addition to the pipeline, which would see the a) application of gradient removal [most likely GraXpert], b) deconvolution [most likely BXT - correct only] and c) SPCC, before @James Tickner runs his plate solving and mosaicing. Note that we could also choose to run BXT [in its full deconvolution mode] and re-run SPCC after James' software, to "tweak" the photometric calibration and image sharpness over the survey. But at that stage we would have to decide on the size of the area we wish to do it over (whole sky??). This brings us back to the issue of superfields. I think we should also be aiming for a release of the survey both a) as a whole and b) in 30 x 30 deg superfields on 25 x 25 deg centres [approx 100 in the whole sky] - about the size of your average constellation. @James Tickner could you run your field centre program to come up with suitable superfields at this scale? I could then try the RC pipeline above to see how far I can get with our existing data and the current [poor] matching software that PI has.
3) For 95% of my files (i.e. those with _WBPP names), this would involve running GraXpert and BXT [with TBD parameters]. I am very happy to do this, but it will take time, as I haven't yet found a way to get GraXpert to work on PI image containers. For the _SPCC fields I could either a) go back and process from scratch, or b) simply run GraXpert on them again, then BXT and SPCC (again). Since everything remains in the linear regime, this should work.
4) Note that this is a shift from our earlier discussion, where there was strong and vocal opposition to running gradient correction, photometric correction and deconvolution. This was against my own experience in astronomical image processing, but I went with the majority view. I think the majority may be coming round... I haven't subscribed to Adam Block's website. Although I am not well-versed in modern coding, I suspect I started doing astronomical CCD data processing before Mr Block was born and have a reasonable feel for what is going on. I did a lot of processing work as a founder member of the Supernova Cosmology Project, which turned out OK. But that is, of course, ancient history now. CS Brian
---
Brian Boyle: @Brian Boyle I just double-checked, and using GraXpert with image/process containers works fine for me. The thing is that you need to have an image open to be able to create a process icon for GraXpert on the PI desktop (which you can then drop into the Process Container). I am using the Toolbox plugin to integrate GraXpert; the link to the repository for PI is here: https://www.ideviceapps.de/PixInsight/Utilities/ I think it also helps to have no pictures open when you drop the image container on the Process Container. I processed around 50 pictures from an image container without any issue before Christmas. Hope this helps, Michael
---
A quick update ...
---
Brian Boyle: Thanks for the great summary! Just a couple of thoughts from my side:
- I'm keen to run some empirical tests on whether BXT is best run before or after colour calibration - unless I've missed something in the discussion and this is a settled question. In particular, I'd be looking to see whether BXT improves or worsens the dispersion of measured vs true magnitudes, whether it introduces any flatness issues (e.g. stars in image corners undergoing greater corrections and gaining or losing brightness), and whether it introduces any overall colour shifts. Hopefully the answer to all of these is positive, but I think it is worth a quick empirical check.
- I would still argue for doing gradient extraction later in the pipeline, unless we can demonstrate that significant gradients cause problems with the earlier steps. My argument is that gradient removal is potentially a destructive step (it could remove large-scale structure) and potentially one that we will want to 'tweak' in the production of final images. If we do it early in the pipeline and change our minds about the parameters later on, then we have to rerun the entire pipeline.
Just my two bobs' worth - I wouldn't die in a ditch over it! I think it useful to have at least two alternatives for each part of the pipeline - it's a great way to flush out bugs and other issues. I believe Michael is putting some code together, and along with the PI route and my code, that should give us 2-3 options to compare at each step. With several sets of eyes making comparisons, I'm optimistic we should find most of the problems.
---
James Tickner: I was preparing myself for the day when I might have to re-process all my images. I suspect I do have holes from my cosmetic correction in WBPP. |
---
@James Tickner I agree that the post-WBPP pipeline very much depends on what you can correct for in the photometric mosaic program. I will hold off further analysis until you have some results from there. It is the key outstanding item in the pipeline, after all... CS Brian
---
Crux to Carina using 6 fields and following the RC post-processing routine, plus arcsinh stretch, HDRMT, and Curves transformation.
---
I would strongly opt for @James Tickner's idea to do the gradient reduction after mosaicing. Yes, this introduces some problems (i.e. sharp edges in the gradients and more complex color calibration), but if we do it too early on single panels, we can't judge whether what we are seeing is a gradient or an actual image feature. Take the mosaic of the Milky Way we once had as an example: in a single image, I would probably think that what I'm seeing is a gradient, even though in reality it is just the glow of the Milky Way. @Brian Boyle you have such nice objects down there in your hemisphere.... Really nice looking image! CS Gerrit
---
Astrogerdt: I would strongly opt for @James Tickner's idea to do the gradient reduction after mosaicing. Yes, this introduces some problems (i.e. sharp edges in the gradients and more complex color calibration), but if we do it too early on single panels, we can't judge whether what we are seeing is a gradient or an actual image feature. Hi everyone, I think we all agree that @James Tickner's work on the mosaicing is the key step which will help us decide on the post-processing pipeline. Brian
---
Astrogerdt: I would strongly opt for @James Tickner's idea to do the gradient reduction after mosaicing. Yes, this introduces some problems (i.e. sharp edges in the gradients and more complex color calibration), but if we do it too early on single panels, we can't judge whether what we are seeing is a gradient or an actual image feature. One small correction: I suspect the gradient removal will need to be done in (at least) two stages. Stage 1 is removal of 'gross' gradients from individual frames, performed on a frame-by-frame level. I think this will be necessary, as dealing with sharp boundaries between frames in a mosaiced image will otherwise be very difficult. Almost by definition, you can only really fit a gradient with a smooth function (to avoid removing real structures), and smooth functions won't play nicely with sharp edges. Stage 2 is 'matching' of background levels between mosaiced frames. I have a trial of this function working, which fits gradients in overlapping fields with a very simple linear function and iteratively adjusts the fits in each field until everything matches. From some early tests on images taken near the N and S poles, this results in quite good matching, allowing even the very faint polar IFN to be seen. A potential third stage would be restoration of large-scale structures using very wide field images. As gradient removal by design flattens out large-scale structures (e.g. glows from the Milky Way, large nebulae, SMC/LMC), this stage would aim to put them back, probably by mapping out the difference in background levels between the high-resolution mosaic and the low-resolution wide field images, fitting this difference with a smooth function similar to that used in the original removal step, and then adding this smooth result back to the original images. This step I haven't tested yet! I'm pretty much done with the restacking of my field images, so can tackle these steps next.
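The stage-2 'matching' idea can be sketched with a 1-D toy model. This is my own simplification under stated assumptions (three frames, purely additive offsets, 50-pixel overlaps), not James's actual code: each frame sees the same sky strip shifted by an unknown constant, and we iteratively split the disagreement in each overlap until all overlaps agree.

```python
import numpy as np

# 1-D toy of iterative background matching between overlapping frames
# (illustrative simplification, not the actual mosaic code): frames share
# the same sky with unknown additive offsets; nudge per-frame corrections
# until the overlap regions agree.
sky = np.sin(np.linspace(0.0, 3.0, 350))          # "true" background strip
offsets_true = [0.0, 0.8, -0.5]                   # injected frame offsets
frames = [sky[i * 100:i * 100 + 150] + off        # 150-px frames, 50-px overlaps
          for i, off in enumerate(offsets_true)]

corr = np.zeros(3)                                # per-frame corrections
for _ in range(50):                               # simple fixed-point loop
    for i in range(2):                            # each neighbouring pair
        a = frames[i][100:] - corr[i]             # overlap as seen by frame i
        b = frames[i + 1][:50] - corr[i + 1]      # same sky seen by frame i+1
        step = (b - a).mean() / 2.0               # split the disagreement
        corr[i] -= step
        corr[i + 1] += step

# Up to one global constant (which overlaps alone cannot determine), the
# recovered corrections reproduce the injected offsets:
relative = corr - corr[0]
print(np.allclose(relative, np.array(offsets_true) - offsets_true[0]))  # True
```

The "one global constant" the overlaps cannot pin down is exactly the degree of freedom that James's proposed third stage (restoring large-scale structure from wide-field images) would constrain.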