LRGB Combination in PixInsight · Johannes Möslein

Joo_Astro 1.91
I'm imaging in LRGB, and as my images get better and overall integration times get longer (the RGB channels getting better), I've noticed that one of my biggest problems is combining L and RGB in PixInsight.

In acquisition, I use the same sub exposure time for all 4 channels, and roughly the same overall exposure time on all 4 as well (1:1:1:1).

I usually process L and RGB separately, then, as the last step before combination, stretch both with GHS. Both look good on their own.
I often have trouble fitting them together: the image either looks washed out, loses detail, shows some not-so-nice-looking "smudges", or the channels even seem to have different details/brightness in some places.
After playing with the Lightness/Saturation sliders and reducing the L-channel weight, I can make it work, but it feels like I lose a lot of the detail in the L channel.

So my thoughts of possible solutions are:
  • Stretching: just get better at it? Stretch more or less aggressively? Stretch the two more similarly?
  • Only do minimal processing before LRGBCombination? Combine L, R, G, B at once in LRGBCombination instead of L + RGB?
  • Reduce sub exposure time for L compared to RGB?
  • Increase overall exposure time for L channel (3:1:1:1)?
  • As a last resort, get different filters?


As I don't have the time to test everything myself, I hope you can help me improve my images. Thanks for reading!



Setup:
  • ZWO ASI533MM Pro
  • ZWO EFW
  • Antlia V-Pro LRGB 1.25" mounted


Rough Processing:
  • WBPP to create masters
  • StarAlignment (usually with L as the reference)
  • DynamicCrop
  • Background Extraction separately on L, R, G, B
  • Either ChannelCombination RGB, then BXT (BlurXTerminator) & NXT (NoiseXTerminator) on L and RGB, or BXT & NXT on L, R, G, B, then ChannelCombination RGB
  • SPCC on RGB
  • Stretch L, RGB separately with GHS
  • Sharpen L a bit (not always)
  • LRGB Combination L+RGB
  • ...


Edit: I worded it badly; I do SPCC before BXT when I do L + RGB, but I realized that BXT on R, G, B separately is a bad idea, so thank you for the input.
bdm201170 2.11
Hi,

I highly recommend doing star alignment as the first step, with the luminance as the reference, then cropping. Clear skies!
Joo_Astro 1.91
Brian Diaz:
Hi,

I highly recommend doing star alignment as the first step, with the luminance as the reference, then cropping. Clear skies!

Hi, I'm of course doing that; I just forgot to add it. Thank you!
tboyd1802 3.34
Sorry, maybe not helpful as I'm an OSC guy, but...

After background extraction, wouldn't you want to combine LRGB and color calibrate the combined image?
bdm201170 2.11
I prefer to combine all the channels before stretching; I only separate the image and the stars in some cases, not all the time. Also, check the median values of all the master lights (R, G, B).
Philippe49 0.00
RGB (1:1:1) is equivalent to L(1), i.e. the same number of photons, so you do not get much by doing LRGB other than by stretching L some more. I would add a lot more RGB data.
Sorry, I meant L data.
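
To put rough numbers on this (a back-of-the-envelope sketch, not anyone's exact method: it assumes the L filter passes roughly the combined R, G and B bands, that the subs are sky-background limited so SNR scales with the square root of the collected signal, and the photon rates are made-up illustrative values):

```python
# Illustrative only: compare a dedicated L master against a synthetic L built by
# summing equal-time R, G and B masters, assuming L collects ~3x the photon rate
# of any single colour filter (made-up numbers, background-limited case).
import numpy as np

rate_per_colour = 1.0            # relative photon rate through R, G or B (assumption)
rate_l = 3.0 * rate_per_colour   # L passes roughly the whole band (assumption)
t = 1.0                          # one unit of time per filter, i.e. the 1:1:1:1 scheme

signal_synth_l = 3 * rate_per_colour * t   # R+G+B masters summed into a synthetic L
signal_l_1x = rate_l * t                   # dedicated L at 1:1:1:1
signal_l_3x = rate_l * 3 * t               # dedicated L at 3:1:1:1

print(np.sqrt(signal_l_1x / signal_synth_l))   # ~1.0 -> no SNR gain over a synthetic L
print(np.sqrt(signal_l_3x / signal_synth_l))   # ~1.7 -> tripling the L time does help
```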
DevonRob 0.00
Have you checked out the Adam Block videos, etc.? I noticed them referenced in this PI thread:

https://pixinsight.com/forum/index.php?threads/applying-luminance-destroys-my-rgb-images.12196/

I haven't hit this one yet as I've mostly been doing NB, but I'd made a mental note to check that thread and the videos when I do, to avoid it. It seems to be a fairly common issue (until you know the tricks).
Kjpc85 0.00
My usual workflow is to stack and align everything in Astro Pixel Processor with normalization off.
Then bring those into PixInsight.

The first thing I do is nuke/stretch each filter.

Then I save each of the stretched files as a .tif (I've forgotten this step before, and it made my ChannelCombination look wacky).

I open the newly saved .tif files, combine them with ChannelCombination, and go on from there.
morefield 11.07
A few thoughts:

1) I'd shoot a bit more Lum, at least 1/3 of my time.

2) Once you have your masters, if the R, G, and B masters are as sharp as the L master, create a SuperLuminance by combining the masters in ImageIntegration with no rejection. Definitely do this if you're spending only 1/4 of your time on L. You get a nice boost in SNR, and there is no downside if your RGB masters are good.

3) With LRGBCombination, the slider settings I use are 0.5 for Lightness and 0.4 for Saturation. I consider 0.5 the neutral setting, so 0.4 is a saturation boost.

4) The stretch you do before the LRGB combine should be relatively conservative. More can be done with Curves after combining. Try to match the background levels and highlight levels of the RGB and L masters. Probe around with your mouse to see if the ADU counts post-stretch are similar between the L and the RGB (see the sketch at the end of this post).

5) It doesn't hurt to boost saturation in the RGB master a little with the curves tool prior to LRGB combine.

If you have the settings for the LRGBCombination tool correct, it's mostly trial and error getting the stretches right in the RGB and L masters prior to running the combination.
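
A rough way to sanity-check point 4 outside PixInsight (a sketch only; the file names, the FITS axis order, and the percentile choices are assumptions, not part of the workflow above):

```python
# Compare background/highlight levels of the stretched L and the stretched RGB
# (via a simple mean-of-channels luma). File names and the (3, h, w) axis order
# are placeholders for your own stretched masters.
import numpy as np
from astropy.io import fits

lum = fits.getdata("L_stretched.fits").astype(np.float64)
rgb = fits.getdata("RGB_stretched.fits").astype(np.float64)   # assumed shape (3, h, w)
rgb_luma = rgb.mean(axis=0)

for name, img in (("L", lum), ("RGB luma", rgb_luma)):
    bg, hi = np.percentile(img, [10, 99.9])
    print(f"{name}: background ~{bg:.3f}  median {np.median(img):.3f}  highlights ~{hi:.3f}")
# The two background figures and the two highlight figures should land close to
# each other before running LRGBCombination.
```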

Kevin
Gunshy61 10.10
Hi Johannes, Here is what I do with LRGB...

I use somewhat shorter exposure times for L (it is gathering more photons per second), but I collect plenty more of them.

In processing, I use the luminance filter data for detail and brightness, so I process the Lum with this focus. For the RGB, I concentrate on getting the colour right. This goes for linear processing, stretching, and non-linear processing, and I generally only do final noise reduction and touch-up after combination.

A couple of things that might help....

a) Do a photometric/spectrophotometric calibration of your RGB data to get the star colour right (at least the hue). You will want to ensure white balance in your background and nebulosity too; there are several ways of doing this.
b) When stretching the RGB, you can use the "colour" option in GHS to prevent/reduce washout during stretching, but don't overdo it; I prefer to undershoot saturation at this stage.
c) Don't make the highlights too bright (neither in the RGB nor in the L you will be combining), as this will mathematically prevent the hue/saturation from being maintained.
d) Certain non-linear processes (HDRMT, LocalHistogramEqualization) will create colour artifacts while they bring out detail, which is why these processes are typically done on the Lum only. If you have already combined L and RGB, make sure you conduct the operations on "luminance only"; this will extract the L, do the operation, and then do an L*a*b* combination again afterwards.
e) With GHS again, gently stretch (negative b recommended) using the saturation option. The peak saturation should move to the right, but don't push the histogram past the centre point. This saturation stretch can be done on the non-linear RGB before combination and/or on the LRGB afterwards. Some use Curves, but I find I have less control that way.
f) Before combination, use the Mark 1 eyeball to ensure the overall brightness of L and RGB is about the same. One trick is to extract the lum from the RGB, linearly fit it to the filter Lum, and then combine the extracted, linear-fitted lum with the RGB first, making sure that the colours are still good. Then do the true Lum combination.

Your result should have all of the brightness detail of the luminance and the colours of the RGB.

Hope this helps,
Dave
View_into_Space 7.16
SPCC has to be done before BXT
aabosarah 6.96
You definitely need to significantly increase your L integration time; typically it should be 3:1:1:1. If you are doing 1:1:1:1, then you are really not adding any new data to your RGB image set, because the amount of data contained is roughly equivalent. Of course, this is subject-dependent.

This video has a nice explanation for using Luminance: 

https://www.youtube.com/watch?v=F-VUsKF7Q28

Adam Block and Russell Croman had a very nice video about BXT. It needs to be done after SPCC.

https://www.youtube.com/watch?v=6hkVBnYYlss

Finally, I believe it was Adam Block in his PixInsight Fundamentals who said you don't need to do background extraction on each color channel separately. You can just do it once after you combine the RGB channels, while still in the linear state, unless you have very complex gradients and flat-field errors that are difficult to isolate when combined.
Staring 4.40
By only using the L data, you're losing the luminance information from the other exposures. I usually do a "superlum" combination, a weighted integration of all channel masters without pixel rejection. I do this as the first processing step. It ensures your luminance channel has the least noise possible and contains all information from the other channels. Then do your usual processing workflow for the combined rgb image and the superlum and after stretching combine them with LRGBCombination. I usually work with starless masters here. Afterwards, fine-tune the stretch, curves, etc. Adding the stars back in is my last processing step. If necessary, final tweaks with Lightroom follow (I'm no Photoshop guru at all).
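
A minimal sketch of that superlum idea outside PixInsight (normally you would use ImageIntegration with no rejection and noise-based weights; here inverse-variance weights estimated from a background patch stand in, and the file names and patch location are assumptions):

```python
# Noise-weighted average of the four registered masters with no rejection,
# i.e. a "superlum". Weights are inverse-variance estimates from a star-free
# corner of each master (adjust the slice); file names are placeholders.
import numpy as np
from astropy.io import fits

masters = np.stack([fits.getdata(f).astype(np.float64)
                    for f in ("L.fits", "R.fits", "G.fits", "B.fits")])

weights = np.array([1.0 / np.var(m[:200, :200]) for m in masters])
weights /= weights.sum()

superlum = np.tensordot(weights, masters, axes=1)   # weighted sum over the 4 masters
fits.writeto("SuperLum.fits", superlum.astype(np.float32), overwrite=True)
```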
jewzaam 3.01
I scanned the replies, so sorry if this is duplicated. First, as others say: more L! Second, stretch both L and RGB less to reduce the washed-out effect in the combination. Bright areas should be less than 0.8 in PI; higher will result in washed-out colors.
waynec 0.00
Check out this YouTube video. He does a good job explaining why you need more L than R, G, or B. There is also a good comparison between results with L, synthetic L (in PI), and just RGB:
https://www.youtube.com/watch?v=F-VUsKF7Q28
aabosarah 6.96
Sascha Wyss:
SPCC has to be done before BXT

Hey Sascha! I quoted your video without realizing you were replying here. Great video, by the way.
TomekG 1.43
Take more luminance exposures; this is what gives your picture its detail. You don't need that much chrominance (color) information. Have a look at this picture to see how luminance and chrominance work in an image:
https://en.wikipedia.org/wiki/Chrominance#/media/File:Luma_Chroma_both.png

When combining L+RGB, it is important that the luminance you are going to add has similar levels to the luminance that already exists in the RGB image, and both images need to be non-linear at this point. I get them to similar levels by first extracting the L from the RGB image (let's call it SynthL) and then running LinearFit, with SynthL as the reference image, on the L I'm going to add.
Once the L is linearly fitted to SynthL, I use ChannelCombination (CIE L*a*b*) to replace the RGB's luminance with my own L.
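
A rough numpy/scikit-image sketch of those steps, assuming stretched images scaled to 0..1 and placeholder file names (a simple least-squares slope/offset fit stands in for PixInsight's LinearFit, and rgb2lab/lab2rgb stand in for the CIE Lab channel combination):

```python
# Extract a synthetic luminance (SynthL) from the stretched RGB, fit the stretched
# L to it, then swap the fitted L back in via CIE L*a*b*. Sketch only; file names
# and the (3, h, w) axis order are assumptions.
import numpy as np
from astropy.io import fits
from skimage.color import rgb2lab, lab2rgb

rgb = fits.getdata("RGB_stretched.fits").astype(np.float64)   # assumed (3, h, w), 0..1
lum = fits.getdata("L_stretched.fits").astype(np.float64)     # 0..1

lab = rgb2lab(np.moveaxis(rgb, 0, -1))      # (h, w, 3); L* is channel 0, range 0..100
synth_l = lab[..., 0] / 100.0               # "SynthL", back on a 0..1 scale

# Linear fit: slope a and offset b so that a*lum + b matches SynthL (least squares)
a, b = np.polyfit(lum.ravel(), synth_l.ravel(), 1)
lum_fit = np.clip(a * lum + b, 0.0, 1.0)

lab[..., 0] = lum_fit * 100.0               # replace the luminance, keep a*/b* (the colour)
out = np.clip(lab2rgb(lab), 0.0, 1.0)
fits.writeto("LRGB_labswap.fits", np.moveaxis(out, -1, 0).astype(np.float32), overwrite=True)
```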
Gunshy61 10.10
Torben van Hees:
By only using the L data, you're losing the luminance information from the other exposures. I usually do a "superlum" combination, a weighted integration of all channel masters without pixel rejection. […]

Great tip.   I forgot to mention the super-lum.  I don't trust myself with Lightroom though - am I doing more harm than good? 
apennine104 3.61
I have seen some examples, and have had success myself, processing RGB + L separately using the techniques above but, prior to the LRGB combination, using Convolution to blur the RGB data, effectively reducing all the noise in it. Then, when you do the LRGB combination (with chrominance noise reduction and Saturation set to ~0.42), it snaps everything sharp again based on the L data, which has had its own BlurX/NoiseX, etc. This seems to use the RGB data to "paint" the L. Does anyone have any comments on whether this is or isn't a good technique?

Thanks!
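
A sketch of that idea done by hand, with a Gaussian blur standing in for PixInsight's Convolution and a CIE L*a*b* swap standing in for LRGBCombination; the blur radius, file names, and axis order are assumptions:

```python
# Blur the colour data to suppress chroma noise, then let the sharp (BlurX/NoiseX
# processed) L carry the detail by replacing the luminance in L*a*b* space.
import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2lab, lab2rgb

rgb = fits.getdata("RGB_stretched.fits").astype(np.float64)   # assumed (3, h, w), 0..1
lum = fits.getdata("L_stretched.fits").astype(np.float64)     # sharp, denoised L, 0..1

rgb_blur = np.stack([gaussian_filter(c, sigma=2.0) for c in rgb])  # arbitrary radius

lab = rgb2lab(np.moveaxis(rgb_blur, 0, -1))
lab[..., 0] = np.clip(lum, 0.0, 1.0) * 100.0    # sharp luminance, blurred chrominance
out = np.clip(lab2rgb(lab), 0.0, 1.0)
fits.writeto("LRGB_blurred_chroma.fits", np.moveaxis(out, -1, 0).astype(np.float32),
             overwrite=True)
```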
aabosarah 6.96
Take more luminance exposures; this is what gives your picture its detail. […] Once the L is linearly fitted to SynthL, I use ChannelCombination (CIE L*a*b*) to replace the RGB's luminance with my own L.

Interesting strategy. I have never seen linear fit being used on non-linear images. Thanks for sharing. Might give it a try.
PhotonPhanatic 4.53
Boy, some great info in this thread. I just went through the same ordeal with my M27 image. Ended up tweaking the LRGBCombination settings several times before I got something I liked. Wish I'd read this thread first. Might have to revisit the image.
AnaTa 0.00
Johannes Möslein:
I'm imaging in LRGB, and as my images get better and overall integration times get longer (the RGB channels getting better), I've noticed that one of my biggest problems is combining L and RGB in PixInsight. […]

Hi,

Your processing is fine. I use a regular stretch, no GHS.

I think data acquisition is the key.
I believe the final SNR for Lum should be at least 5-8 times higher than for R, G, or B, so acquire Lum at least 5 times longer. I also use longer sub-frames for Lum. For example, when taking pictures of the Iris Nebula: 160 frames of 3 min for Lum, and 120 frames of 30 sec for each of R, G, and B.

All the best!

AnaTa
Joo_Astro 1.91
I didn't expect that many answers; I'm sorry that I can't reply to every single one individually.

What I'm taking away from this so far is that I'm going to practice editing for now.
I'll definitely give the SuperLum a try, too.

Thanks for all the input!
hollo 0.00
I have seen some examples, and have had success myself, processing RGB + L separately using the techniques above but, prior to the LRGB combination, using Convolution to blur the RGB data, effectively reducing all the noise in it. […]

I have often used that and got good results from it. More recently I saw an Adam Block video where he referenced it (calling it an "old trick" or something to that effect, so not disparaging it) and suggested that a better option than Convolution is to run MultiscaleMedianTransform on the RGB with the first couple of detail layers disabled. This also removes colour noise, at the expense of colour detail, but does so a bit more subtly than Convolution.

There is a logic to it: our eyes perceive the sharpness of an image largely in the luminance, but are more sensitive to noise in the colour of the image. As an experiment, try the reverse (Convolution on the L, then combine with the sharp RGB); the results are dreadful.
AnaTa 0.00
Johannes Möslein:
I didn't expect that many answers; I'm sorry that I can't reply to every single one individually. What I'm taking away from this so far is that I'm going to practice editing for now. […]

* I would focus more on data accumulation, monitoring the SNR, FWHM, and eccentricity of the integrated Lum and RGB.