Luminance: Still important for a CMOS mono camera? Moravian Instruments C3-61000 PRO · Rafael Sampaio

jrista 11.42
Wei-Hao Wang:
Hi Jon, Arun,

Below is what I observed in PS and what I use for LRGB combination. I never tried to figure out what it actually is mathematically. All I know is that it works beautifully. So maybe you can try it and let us know what you think.

First, luminance. In PS's layer window, one can choose from many different blending modes. One of them is "luminance," while the default is "normal." You can first create a layer under the RGB astro image (linear or not) and fill that layer with any gray. Then change the blending mode of the RGB astro image to luminance. If you flatten the two layers, you get an image that is the luminance of the RGB.

One interesting thing to know is that this luminance image will look different from the grayscale version of the RGB. You can convert the RGB image to grayscale, copy/paste it as a new layer on the luminance image created previously, and turn that new layer on and off to blink between the grayscale and the luminance. You will see that they are different, but the difference isn't huge.

Back to the luminance. You can copy the luminance image and paste it onto the original RGB. Change the blending mode of the luminance layer to luminance. You will see that the RGB image does not change at all. You can confirm this by turning the luminance layer on and off to see if it makes a difference. It shouldn't. This shouldn't be surprising: since we are using the luminance of the RGB as its own luminance, nothing should change.

You can also use the grayscale image as the top layer on RGB instead. Change the blending mode to luminance and turn the layer on and off. You will see that using the grayscale as luminance changes the look of the RGB.

If you are able to conduct the above simple experiment in PS and make the observations I suggested, you are 100% ready to do LRGB composition in PS (linear or not). All you need to do is copy/paste the L image onto the RGB and change the blending mode to luminance. If the L has a matched brightness/contrast to the luminance of the RGB, then you are done. It should not change the overall look of the RGB image, as the match demands that the L image look very close to the luminance of the RGB. So pasting such an L image on the RGB as luminance should not change its overall look (including color). All it changes is detail, since at the detail level (S/N and sharpness), the L image should not look similar to the luminance of the RGB at all. (Once again, after the L is blended with the RGB, it's no longer trivial to talk about S/N. But for the L itself and for the luminance of the RGB, since both are grayscale images, S/N has a rather simple definition, at least for linear ones.)
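For readers who want to see the same idea outside of PS, here is a minimal sketch in Python of what "pasting an L as luminance" amounts to: rescale each pixel's RGB so its luminance equals the new L. The images are assumed to be registered linear float arrays, and the Rec.709 weights below are only an illustrative choice, not necessarily what PS uses for its luminance blend mode.

```python
import numpy as np

# Illustrative luminance weights (Rec.709); PS's luminance blend mode may
# weight the channels differently, so treat this as a sketch of the idea.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def rgb_to_luminance(rgb):
    """Weighted sum of the channels; rgb has shape (H, W, 3)."""
    return rgb @ LUMA_WEIGHTS

def replace_luminance(rgb, new_lum, eps=1e-8):
    """Rescale each pixel's RGB so that its luminance becomes new_lum."""
    old_lum = rgb_to_luminance(rgb)
    scale = new_lum / np.maximum(old_lum, eps)
    return rgb * scale[..., None]

# Sanity check mirroring the PS experiment: replacing the luminance of an
# image with its own luminance should change nothing.
rgb = 0.1 + 0.9 * np.random.rand(16, 16, 3)
assert np.allclose(replace_luminance(rgb, rgb_to_luminance(rgb)), rgb)
```

Feeding a different grayscale image (a converted-to-grayscale copy, or a real L stack) in as `new_lum` changes the look, exactly as described above.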

The real challenge is how to match the L to the luminance of the RGB. In my opinion, this is most easily done in linear space. I tried PI's linear fit (on the linear images), but somehow it works very poorly. In the end, I found that, with a little practice, this can be done quickly by hand in PS using layers.
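The global matching step can likewise be sketched as a simple least-squares straight-line fit of L against the luminance of the RGB. This is only a rough stand-in for what a linear fit does (PI's LinearFit also applies rejection limits, omitted here), and it assumes the same registered linear arrays as the sketch above.

```python
import numpy as np

def match_global(L, ref_lum):
    """Find a, b minimizing ||a*L + b - ref_lum||^2 and return the matched L."""
    a, b = np.polyfit(L.ravel(), ref_lum.ravel(), 1)
    return a * L + b
```

With the matched L in hand, `replace_luminance(rgb, match_global(L, rgb_to_luminance(rgb)))` gives a crude LRGB combination; the shadows are exactly where a single global fit like this tends to fall apart, as discussed below.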

I am looking forward to giving it a try. I hit my dark site a couple of times in spring, but each time I had light cloud cover. That's been a problem this year... "clear" nights are not really clear... they are mostly clear, but with just enough clouds to make the night a bust for imaging. :\

Thanks for sharing your thoughts on the mechanism. I'm not surprised that matching the L to the real luminance is the most challenging aspect. Linear fit in PI is a global operation, IIRC, when in fact what you probably really need is more of a pixel-level, or perhaps kernel-averaged pixel-level, fitting algorithm... Well, if I can capture any data this year, I intend to give it a try. The only time I've had good LRGB combinations in PI, it seems to be mostly luck that the L was a good match to the intrinsic luminance of the RGB. If that is not the case, then no matter what method I have tried in PI, I get washed-out colors. Sometimes quite severely.

I have mostly done this on galaxies, but I don't get much galaxy data (and the galaxies are usually quite small given my focal lengths), so I don't do a lot of LRGB imaging. For nebulae, it's mostly been narrowband, but by using my dark site this year that was something I was hoping to change. The plan was to get a lot of dark-site RGB data and just do RGB imaging of everything (and maybe some additional NB data for nebula targets).
whwang 11.80
Hi Jon,

The match I use for L and the luminance of RGB is a global match. And even for that, linear fit in PI doesn't work as well as my manual PS match, especially in the shadows. I suspect that PI's linear fit either fits too much into the shadows (low S/N areas), or it handles black poorly, or it's not really linear (unlikely). Anyway, I tried it several times on different types of images, and I was never happy with the results.

I thought about a local match, something like local normalization in PI. The size of the sub-area for matching should still be many pixels, as it's exactly the pixel-level detail that we want to preserve in the L. But if the match can be done at scales of roughly 100 pixels, that may solve (at least partially) the fundamental mismatch between L = R+G+B and the luminance of RGB that I mentioned earlier. In some of my galaxy images, I can see that a spiral arm in L is brighter than the same arm in the luminance, no matter how I match them. I suspect this is because luminance gives a higher weight to G, while L is R+G+B and B dominates the signal in spiral-arm regions. There are very often such subtle yet noticeable differences between my L and the luminance of RGB. The difference is not large enough to make me uncomfortable about the LRGB outcome, but nevertheless I hope it can go away. I think a local match may help. I haven't identified an easy way to do this, though.
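One way to picture such a local match: compare L and the luminance only at large scales (on the order of 100 pixels) and rescale L by the smoothed ratio, so the pixel-level detail in L is untouched while its large-scale brightness is pulled toward the RGB luminance. A hypothetical sketch, using a Gaussian blur in place of a true blockwise fit and assuming registered, background-subtracted linear images:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_local(L, ref_lum, scale_px=100, eps=1e-6):
    """Rescale L toward ref_lum at scales around scale_px, preserving fine detail."""
    L_coarse = gaussian_filter(L, sigma=scale_px / 2.0)
    ref_coarse = gaussian_filter(ref_lum, sigma=scale_px / 2.0)
    gain = ref_coarse / np.maximum(L_coarse, eps)   # smooth, large-scale correction
    return L * gain
```

A gain-only correction like this ignores any additive offset, so it would only make sense after background extraction; a blockwise linear fit (gain and offset per block, interpolated between blocks) would be closer in spirit to local normalization.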
jrista 11.42
Wei-Hao Wang:
I thought about a local match, something like local normalization in PI.

I've been wondering about a localized matching algorithm for a while. It's too bad that the local normalization in PI is so tied into the ImageIntegration functionality. It would be nice to be able to use its mapping data to do an L -> luminance match... It's a challenging subject, for sure.

I guess it's also one I've largely foregone. I (in what limited time I have to image these days) do either just NB (and I never use an L channel with my NB data, of course) or just RGB. Can't wait to get out to my dark site on an actual clear night here...
Alexandre_Fagundes 0.00
Rafael Sampaio:
Some people believe that capturing with a Luminance filter does not make sense anymore, and that it would be better to expose RGB for longer and make a synthetic luminance. But it seems that most people still use luminance filters. What is your opinion, considering the use of a CMOS sensor like the Sony IMX455 in my Moravian C3?

Rafa, my friend... I still believe that capturing data with Luminance is very important for gathering more fine detail in RGB. Data such as the IFN in the background of galaxy photos, for example, is more easily captured using Luminance. Another important detail that I won't give up is the fact that the stars get a much more acceptable, realistic, and consequently beautiful brightness definition. The color dispersion in diffraction spikes also benefits a lot from Luminance.
Furthermore, a greater amount of data gathered will always combat noise more efficiently. Today, some of my friends (astrophoto mentors) no longer use Luminance in their RGB data; however, I continue to do so.
These days, I had a very interesting experience. In my capture of NGC 6604 (https://www.astrobin.com/2gdnsz/), I used H-alpha data as luminance because it contained many fine details of the nebulosity around the cluster. And look... I thought a lot of fine detail was added to my RGB. So, after this, I still used my Luminance data as well, since the stars needed it!
HegAstro 13.08
Jon Rista:
This is where I wonder if RGBWorkingSpaces could help. I have actually never tried to extract the RGB weights from an extracted Y


I don't think RGBWorkingSpaces will help in the calculation of XYZ, since those settings are used to determine the weightings in the calculation of L*, not in the conversion of RGB to XYZ. So I think it will impact LRGB combination the way PI does it (non-linear), but not an attempt to replace a linear Y from an RGB-to-XYZ conversion.

What does RGB working space actually do? | PixInsight Forum


Now there may be other linear color spaces that may be more appropriate for replacing Y (or whatever the luminance is called) with L, but I think you'd have to research what the 3x3 matrix entries would be for such a color space.
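For reference, the linear Y being discussed is just one row of the 3x3 RGB-to-XYZ matrix for whatever working space the RGB lives in. A small sketch, using the sRGB/D65 matrix purely as an example (a different working space would have different entries):

```python
import numpy as np

# Linear sRGB -> XYZ under D65, used here only as an example matrix.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],   # X
    [0.2126, 0.7152, 0.0722],   # Y: the linear luminance row
    [0.0193, 0.1192, 0.9505],   # Z
])

def linear_Y(rgb):
    """Linear luminance of each pixel (rgb is a linear H x W x 3 array)."""
    return rgb @ RGB_TO_XYZ[1]

def replace_Y(rgb, new_Y, eps=1e-8):
    """Naive linear 'LRGB': rescale RGB so its Y becomes new_Y (e.g. a matched L)."""
    return rgb * (new_Y / np.maximum(linear_Y(rgb), eps))[..., None]
```

This swaps only the linear Y; the non-linear path referred to above goes through L*, which is a non-linear function of Y.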
jrista 11.42
Arun H:
I don't think RGBWorkingSpaces will help in the calculation of XYZ, since those settings are used to determine the weightings in the calculation of L*, not in the conversion of RGB to XYZ.

Ironically... all this complexity is why I'm dropping L and switching to just RGB, investing the time I would have spent on L in RGB data (probably mostly more G data, unless I find one particular channel needs even more, say blue). It just ends up simpler to get the RGB than to invest all this effort trying to find ways to combine L with RGB without some kind of consequence.
Overcast_Observatory 20.21
Jon Rista:
Ironically... all this complexity is why I'm dropping L and switching to just RGB.



The biggest problem that most of us have is that we have limited sky time and limitless processing time. Since a smoother, more detailed image can be achieved using L in less sky time, it's hard to justify dropping it entirely unless you have an abundance of clear skies. We can spend as much time as we want coercing lum to play well when added to RGB. In principle, I agree with you; it's just not a practical solution for infrequent clear nights. That's why I'm going remote.
HegAstro 13.08
Chris White- Overcast Observatory:
In principle, I agree with you; it's just not a practical solution for infrequent clear nights. That's why I'm going remote.


Chris - the nice thing you have about going remote is abundant clear-sky time to test and prove out theories! I wish I had that luxury. I am left doing theoretical exercises!
aabosarah 9.04
Jon Rista:
Ironically... all this complexity is why I'm dropping L and switching to just RGB.

Frankly, if I am not using a luminance filter for broadband targets, then I would just use an OSC. I don't see the appeal or advantages of doing just RGB imaging.
frankszabo75 1.20
Jon Rista:
Ironically... all this complexity is why I'm dropping L and switching to just RGB.

If there is one thing that the L makes worse, it has to be the stars. If there is a very dense star field around the target, L may actually end up ruining the stars, especially with a doublet scope that has a bloating problem. BTW, triplets and Petzvals can have a bloating problem too.
So I see a lot of people doing all sorts of things to get around these problems, like completely dumping the stars from their LRGB and re-shooting the RGB stars with very short exposures. If you ask me, the whole thing becomes a chore or a pain in the rear, just to come up with 1 image with up to 7 different stacks now to combine.
What I've done in the past is remove the stars from the L, because of the bloating issue, and then combine the L.
In PI, the amount of L to blend still has to be adjusted via curves adjustments, because it can "dilute" the colors and leave the DSO looking washed out.
So there are some annoying extra headaches to deal with when doing LRGB.
Just my 2 cents.
HegAstro 13.08
Ashraf AbuSara:
Frankly, if I am not using a luminance filter for broadband targets, then I would just use an OSC. I don't see the appeal or advantages of doing just RGB imaging.


At light-polluted sites, the RGB filters give you better exclusion of LP gradients. Because of the overlap between filters in an OSC, light pollution gradients will be much more complex than with RGB filters and a mono.
Overcast_Observatory 20.21
Arun H:
Chris - the nice thing you have about going remote is abundant clear-sky time to test and prove out theories! I wish I had that luxury. I am left doing theoretical exercises!



Arun,

I'll go get 100 hours on a dusty target and give it to you to test your theories. I just make pretty pictures, and while I am fascinated by this topic, the testing and theory (especially with PS) is not my forte.
CCDnOES 7.05
Frank Szabo:
the whole thing becomes a chore or a pain in the rear, just to come up with 1 image with up to 7 different stacks now to combine.



IMHO, that is usually the price for excellence.
Overcast_Observatory 20.21
Bill McLaughlin:
IMHO, that is usually the price for excellence.



You wouldn't even need 7 stacks. Just use the stars from your primary RGB integrations. With tools like StarNet and StarX, you just remove the stars before adding the lum. It's easy as pie nowadays. No need to take another set of images just for stars... unless, of course, you clip your stars using longer exposures.
jrista 11.42
Chris White- Overcast Observatory:
The biggest problem that most of us have is that we have limited sky time and limitless processing time. Since a smoother, more detailed image can be achieved using L in less sky time, it's hard to justify dropping it entirely unless you have an abundance of clear skies.

Yeah, I guess this is true.

Even if you still get L, you don't necessarily need to push the ratio as high as people do these days. I've come across images with over a 10:1:1:1 ratio of L to RGB, which is just crazy. You could easily enough create a super-luminance by integrating all the channels together, and stick with a classic 3:1:1:1 ratio, and I think that would do well to allow deeper color while still providing the potential benefits of L for smoothing the results.
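The super-luminance mentioned here is typically just a noise-weighted integration of all the masters, L and RGB alike. A rough, hypothetical sketch, assuming registered, background-matched linear stacks and a deliberately crude inverse-variance weighting:

```python
import numpy as np

def super_luminance(stacks):
    """Combine per-filter master stacks (e.g. L, R, G, B) into one superlum.

    Inverse-variance weighting over the whole frame is a crude stand-in for
    a proper noise estimate taken from signal-free background regions.
    """
    weights = [1.0 / np.var(s) for s in stacks]
    return sum(w * s for w, s in zip(weights, stacks)) / sum(weights)
```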

I wish I had the funds to go remote. It would be so nice to be able to just program sequences and let em rip whenever there was a clear night, and not have to drive anywhere, set up, tear down at dawn, etc.
jrista 11.42
Ashraf AbuSara:
Frankly, if I am not using a luminance filter for broadband targets, then I would just use an OSC. I don't see the appeal or advantages of doing just RGB imaging.

Well, it wouldn't be just RGB; it would also be narrowband a lot of the time. Further, there are image-quality characteristics that differ between OSC and RGB. Maybe not everyone can pick them out... it is very easy for me to. In fact, Wei-Hao posted an OSC vs. RGB comparison some time back and asked people to figure out which was which. There are key aesthetic qualities to mono data that I really love, and they stood out very well in his example. Most people got it wrong, interestingly... but the differences between the two types of cameras are pretty stark to me. If you get a TON of subs and are able to use Bayer drizzle to integrate, you can improve the quality of OSC data such that it is more like mono data, but it takes a lot of subs, and a lot of data overall, to really get a good result doing that.

In any case, while you kinda-sorta can do NB with OSC, it's just not the same, and it doesn't deliver the kind of total isolation of these signals, nor the very high-contrast results, that imaging narrowband with a monochrome camera can. Even on galactic targets, or areas of space that seem largely empty (or just dark), I would always want to have that narrowband capability... look at what people have been doing the last few years, capturing incredibly faint but extensive regions of Ha and even OIII around galaxies, or in regions that were previously thought not to contain narrowband emissions. Mono cameras offer a lot of flexibility here. If that's your thing, then there are definitely advantages to imaging with a monochrome sensor (not necessarily just RGB with mono).
jrista 11.42
Frank Szabo:
If there is one thing that the L makes worse, it has to be the stars. If there is a very dense star field around the target, L may actually end up ruining the stars, especially with a doublet scope that has a bloating problem.

Aye, some good points here. The bloating issue can be pretty non-trivial. I guess that may be one of the reasons I've had trouble with my system... it's an FSQ106 EDX IV, and while that does control dispersion better, it still gets fairly bad in the blues... Blue scattering in general is a bit of a problem with that scope (certainly more so than the Canon 600mm f/4 L II lens I originally used, which has the lowest scattering of any scope I've used myself, or in data that I've processed from others... so I may also be a bit spoiled in that area).

The interesting thing to me is, we don't really have a lot of information about deep RGB imaging. There are a TON of ASSUMPTIONS that L improves things... But since hardly anyone does just RGB with mono, and among those that do, those that go really deep are even fewer... it's hard to really know how much L is improving things. Way back when LRGB was first... "invented" (?), it was to resolve limitations of the cameras. They had very high read noise, but they could be hardware binned. The general idea was to acquire L to get high-SNR data (which was often not even remotely close to the kind of SNR people get far more easily today), then to bin 2x2 or even 3x3 to acquire the RGB. The RGB was then upsampled, usually blurred or otherwise heavily denoised, and then the LRGB combination was made. This "painted" the L data with some color. The lower resolution of the RGB data is often egregiously obvious in a lot of those older LRGB images. Back when I first started imaging, I thought some of those images were just mind-blowing... when I look at them today, the lack of resolution in the RGB has become rather obvious and sometimes overtly detrimental to the quality of the image.

Since we have so little data about pure RGB imaging with monochrome sensors... I honestly wonder if all of our assumptions are truly correct. There hasn't really been any concrete, apples-to-apples comparison of deep data (deep enough RGB to compare directly to a comparable LRGB). If you use a 3:1:1:1 ratio and get, say, 6 hours of L and 2 hours each of R, G, and B, that's 12 hours of data. How would a solid 12 hours of RGB-only data compare? I honestly don't know that I've ever seen such a comparison...
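As a back-of-the-envelope illustration of why that comparison would be interesting, here is a purely hypothetical photon-count sketch. It assumes the L passband collects roughly as much flux as R, G, and B combined, ignores read noise and color processing entirely, and only compares shot-noise-limited luminance SNR for the two 12-hour plans; it is not a substitute for the real side-by-side test.

```python
import math

# Hypothetical per-filter flux in e-/hour; values are illustrative only.
flux = {"R": 100.0, "G": 100.0, "B": 100.0}
flux["L"] = sum(flux.values())   # assume L passband ~ R + G + B combined

def luminance_snr(hours):
    """Shot-noise-only SNR of a luminance formed by summing every channel."""
    signal = sum(flux[f] * t for f, t in hours.items())
    return signal / math.sqrt(signal)

lrgb_plan = {"L": 6, "R": 2, "G": 2, "B": 2}   # 3:1:1:1, 12 h total
rgb_plan = {"R": 4, "G": 4, "B": 4}            # 12 h of pure RGB

print(luminance_snr(lrgb_plan), luminance_snr(rgb_plan))
# Under these toy numbers the LRGB plan collects ~2400 e- of luminance signal
# versus ~1200 e- for the RGB plan, while the RGB plan has twice the color
# signal - which is exactly the trade-off in question.
```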