Luminance: Still important for a CMOS mono camera? Moravian Instruments C3-61000 PRO · Rafael Sampaio

cratervanawesome:
Wei-Hao Wang:
Mark Petersen:
Wei-Hao Wang:
Hi Arun,

Because of the above, personally I don't rely on PI for LRGB composition, and I don't do nonlinear stretching before LRGB composition.  I do LRGB composition while both the L and RGB images are linear.  That makes the match much easier.  I do this in PS as layers, so I get a real-time preview when I adjust the contrast/brightness of L to match the luminance of the RGB.  It's not easy, and not a one-click task, but it works very well for me.  All my LRGB images from the past two years were made this way.

Hi Chris,  

You raise some really good points about the processing side of the LRGB equation.  This is an area I've struggled with in the past, as far as doing the LRGB combining in the linear state in PI vs. later in PS after a nonlinear stretch.  I'm curious how you do the LRGB composition in PS while both are linear, since PS needs to work off of 16-bit files. I usually think of the conversion from 32-bit format to the 16-bit TIF files needed for PS as where the bulk of the nonlinear stretching happens.  How are you getting the images into PS in a linear state?  Do you do something like a "pre-stretch" in PI to get the data converted and then do the final stretching in PS?  Or am I missing something?

Thanks,
Mark

I guess this is a question for me, so I will try to answer it.  I apologize if it was indeed meant for Chris.

I do nearly all of my post-processing in PS right after stacking in PI.  I am more familiar with PS than PI, which is why I chose to do the linear LRGB composition in PS.  And the layer function of PS is really fantastic: once you know the math behind the layer blending modes, it can be as transparent as PixelMath in PI, and it offers an outstanding real-time preview.

Converting a 32-bit linear image in PI to a 16-bit TIFF for PS processing sacrifices dynamic range and digital resolution. If one sees discontinuities (gaps) in the histogram in PS after a strong contrast stretch (linear or not), that implies the 16-bit TIFF had insufficient bit depth for that particular stacked data. To prevent this, I always apply a 4x to 10x linear brightness stretch in PI before exporting the 32-bit image to a 16-bit TIFF. In nearly all my cases this solves the bit-depth problem, and the PS histogram remains gapless even after a contrast stretch that's way too aggressive.
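A quick numpy sketch of the bit-depth effect, with entirely made-up numbers (the gamma-distributed fake stack and the 8x gain are illustrative only): a purely linear gain applied before quantization spreads the faint signal across far more of the 65,536 16-bit levels, which is why the histogram stays gapless under later stretching.

```python
import numpy as np

# Hypothetical 32-bit linear stack, normalized to [0, 1], with the signal
# crowded into the faint end, as is typical for deep-sky data.
rng = np.random.default_rng(0)
img32 = rng.gamma(shape=2.0, scale=0.005, size=(1024, 1024)).astype(np.float32)

def to_16bit(img, gain=1.0):
    """Apply a purely linear gain, then quantize to 16 bits."""
    return np.round(np.clip(img * gain, 0.0, 1.0) * 65535).astype(np.uint16)

# Without the pre-stretch the faint signal lands on only a few thousand of
# the 65,536 available levels; a strong contrast stretch then pulls those
# levels apart and opens visible gaps in the histogram.
print("distinct levels, no pre-stretch:", np.unique(to_16bit(img32)).size)
print("distinct levels, 8x pre-stretch:", np.unique(to_16bit(img32, gain=8.0)).size)
```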

The downside of the above 4x to 10x brightness stretch is that the cores of very bright stars become saturated, or closer to saturation. To make these stars look better, I may apply some masking during the stretching stage to prevent them from being completely blown out. If that's not enough (this is rare), I go back to the original 32-bit RGB (not LRGB) image and export it to a 16-bit TIFF directly, without the 4x to 10x brightness stretch. Such an image can be used to "salvage" the cores of a handful of very bright stars. Since these stars are bright, they don't need the S/N of LRGB, so just using the RGB version is fine.

So in short, you are right: I do pre-stretching in PI before exporting, but that pre-stretching is strictly linear.

A newer plugin for PI called ImageBlend does a great job of removing the need to jump outside PixInsight for this type of feature. It allows you to process and stretch your L and RGB separately, then pick a blending mode, and it even has parameters to control the blend based on highlights, midtones, and blacks.
whwang:
Michael Gorman:
A newer plugin for PI called ImageBlend does a great job of removing the need to jump outside PixInsight for this type of feature. It allows you to process and stretch your L and RGB separately, then pick a blending mode, and it even has parameters to control the blend based on highlights, midtones, and blacks.

I do not "jump outside PI for some features."  I jump outside PS for PI stacking and go back.  I started using PS almost 30 years ago and I have used it throughout my photographic life. PS is my bread and butter. PI is not.
cratervanawesome:
Wei-Hao Wang:
Michael Gorman:
A newer plugin for PI called ImageBlend does a great job of removing the need to jump outside PixInsight for this type of feature. It allows you to process and stretch your L and RGB separately, then pick a blending mode, and it even has parameters to control the blend based on highlights, midtones, and blacks.

I do not "jump outside PI for some features."  I jump outside PS for PI stacking and go back.  I started using PS almost 30 years ago and I have used it throughout my photographic life. PS is my bread and butter. PI is not.

Totally fine. I prefer to stay in PI, so new tools that make that possible are exciting.
jrista:
Rafael Sampaio:
Some people believe that capturing with a Luminance filter does not make sense anymore, and that it would be better to expose RGB for longer and make a synthetic luminance. But it seems that most people still use luminance filters. What is your opinion, considering the use of a CMOS sensor like the Sony IMX455 in my Moravian C3?

I don't think it's "anymore"... It is more that, I think, people are realizing that combining an artificial luminance channel with RGB is problematic. The luminance you acquire is not the same as, and can never really be the same as, the intrinsic luminance inherent to the RGB channels themselves when combined. This is why, without extensive efforts to counteract it, combining L with RGB washes out color.

This debate has raged on a long time, as long as I've been in the hobby; it occurred during the CCD era too. I think it is just that, with CMOS sensors as they are now (exceptionally sensitive, with incredibly high efficiency), the benefits that LRGB combination once provided have diminished greatly (and remember, this goes back 20 years to when it was first introduced, when CCD read noise levels were 15-20 e- or more and sensitivities were in the 40% range). The consequences of LRGB combination, however, have not changed...

SNR is a tough thing to define in the context of an LRGB image. I think what most people see is a potential improvement in image smoothness, but that does not necessarily mean that LRGB combination increases SNR. If one were to skip L, invest ALL of that acquisition time into the RGB data, and further invest it in the most effective distribution of R, G, and B time (they would not necessarily be equal: the weakest channel could be weighted with more time, and some argue G should get the most, as it contributes the most to the intrinsic luminance of the RGB combination), one would in fact produce a higher REAL signal-to-noise ratio. Not just in luminance, but also in color, i.e. greater color accuracy, especially on fainter details (galaxy arms, extra-galactic structures like tidal arms, etc.).
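A back-of-the-envelope shot-noise sketch of this trade-off (all numbers invented; sky background and read noise ignored): assuming an L filter passes about 3x the flux of a single color filter, a 5:1:1:1 split of 8 hours gives a very deep L but starves the color channels, while folding the L time back into RGB raises each color channel's SNR by sqrt(8/3), about 1.6x.

```python
import math

flux_rgb = 1000.0        # photons/hour through one color filter (made up)
flux_L = 3.0 * flux_rgb  # an L filter passes roughly the combined RGB band

def snr(flux, hours):
    # Pure shot noise: SNR = S / sqrt(S) = sqrt(photons collected)
    return math.sqrt(flux * hours)

total = 8.0  # hours of clear sky, split two different ways
# 5:1:1:1 L:R:G:B split
print("LRGB:     L SNR %5.1f, per-color SNR %4.1f"
      % (snr(flux_L, total * 5 / 8), snr(flux_rgb, total / 8)))
# All 8 hours split evenly across R, G, B
print("RGB-only:            per-color SNR %4.1f" % snr(flux_rgb, total / 3))
```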

This is a common debate, and I've been longing to get some data myself to demonstrate some real-world differences with SIGNIFICANT data (i.e. not just an hour or so, but more like tens of hours of RGB, as well as of L, so that a more realistic comparison can be made). I haven't had the chance to get out to my dark site on a clear night yet, but my plan is to do some deep RGB imaging once I do, particularly on dark and dusty areas or galaxies.

In any case, based on my prior and more limited experiments, I believe that the SNR of RGB-only imaging will be higher, even if the image may not be as "smooth" as a comparable LRGB image (and I only really consider dark sites here; light pollution introduces too many problems and makes any useful evaluation difficult, not to mention that with noise as high as it is under LP, the differences probably don't matter). SNR and smoothness are not necessarily the same thing. A very high SNR will often lead to smoother data, but it really depends on what the data is and what the signal represents. Galactic arms, for example, are not like nebulae; they are massive streams of stars. So with enough resolution and high enough SNR, they shouldn't actually be smooth!! They would look noisy, which would in fact be correct. Distant galaxy-dense backgrounds in some regions of the sky could look the same...
HegAstro:
Wei-Hao Wang:
Because of the above, personally I don't rely on PI for LRGB composition, and I don't do nonlinear stretching before LRGB composition.  I do LRGB composition while both the L and RGB images are linear.  That makes the match much easier.  I do this in PS as layers, so I get a real-time preview when I adjust the contrast/brightness of L to match the luminance of the RGB.  It's not easy, and not a one-click task, but it works very well for me.  All my LRGB images from the past two years were made this way.


Hi Wei-Hao -

This makes a lot of sense. If I understand correctly, you are making nonlinear adjustments to both images to get the result you want, with lots of visual feedback in the process. This would be a lot more involved, and require more finesse, than the simple matching of white and black points that LRGB combination does in PixInsight. I guess my overall take would be that, to maximize the benefits of LRGB imaging, the following have to apply:
  1. An adequate base of RGB data to support the image. The overall integration time and L:RGB ratio are different for each image; there is no magic ratio.
  2. Finesse in the processing of each image before the combination.

So the main advantages are:
  1. The possibility of splitting the time so that good seeing is used to get "L", poorer seeing for "RGB"
  2. The possibility of getting increased SNR in low-signal areas, for a given total time, compared to RGB alone.


It wouldn't be unreasonable for those with limited access to clear sky time, and average to poor seeing, to simply focus on RGB rather than attempt the increased complexity of LRGB for a return that may not be worth the effort.
whwang:
Arun H:
If I understand correctly, you are making nonlinear adjustments to both images to get the result you want, with lots of visual feedback in the process.

Hi Arun,

No. I strictly keep the RGB and L images linear before and while I do the LRGB composition. I try to make as few adjustments as possible. Pretty much all I do is color balance the RGB and match L to the luminance of the RGB, all linear.

Nonlinear processes only enter after the LRGB composition is done.
roofkid:
Similar to Wei-Hao, I also tried creating an apples-to-apples comparison. You can check out the conversation here: https://www.astrobin.com/forum/c/astrophotography/deep-sky/lrgb-vs-rgb-trying-to-create-an-apples-to-apples-comparison/

I also agree that it only makes sense for continuum targets. After this experiment, my own conclusion was that shooting L is definitely worth it for me on specific targets.
bdm201170:
Hi,

Yes, luminance, when you apply it correctly and stretch it according to the image, brings more detail, accents, and balance.

   CS, Brian
ngc1977:
I was under the impression that L was only helpful in gaining detail when people still binned CCD cameras to gather RGB and used that data to "paint" the higher-resolution unbinned L image.  Since (again, as I understand it) there is no advantage to binning with CMOS cameras, this technique is somewhat obsolete.

Very open to being educated more on this.
HegAstro:
Adam Drake:
I was under the impression that L was only helpful in gaining detail when people still binned CCD cameras to gather RGB and used that data to "paint" the higher-resolution unbinned L image.  Since (again, as I understand it) there is no advantage to binning with CMOS cameras, this technique is somewhat obsolete.

Very open to being educated more on this.

Yes, binning color in-camera with CCDs was the original reason for luminance. However, with CMOS, it has morphed into considerations around more efficient SNR gathering. After all, you are gathering roughly three times as many photons per unit time with L as with any single RGB filter. The problem is that I don't think it is a simple matter of adding the photons gathered. The RGB image is converted to L*a*b* and the L* is replaced with the luminance-filter version. The SNR of the composite image is then determined not by simple photon counting but by much more complex math. Incidentally, this replacement illustrates why there are so many color problems with LRGB combination: if the luminances are not close to each other, the colors will be washed out and the image will look "off". Pure RGB images will not have this problem, as the luminance is, obviously, matched, since it is derived from the RGB information.
HegAstro:
Wei-Hao Wang:
Arun H:
If I understand correctly, you are making nonlinear adjustments to both images to get the result you want, with lots of visual feedback in the process.

Hi Arun,

No. I strictly keep the RGB and L images linear before and while I do the LRGB composition. I try to make as few adjustments as possible. Pretty much all I do is color balance the RGB and match L to the luminance of the RGB, all linear.

Nonlinear processes only enter after the LRGB composition is done.

Wei-Hao, this is where my confusion lies. To match the luminance of the RGB to the Luminance image, several things would need to happen:
  1. You would need to extract luminance from the RGB image. The relationship between L and RGB is nonlinear.
  2. If you then attempt to match the histogram of the Luminance to the extracted luminance, you are inevitably making contrast adjustments to the Luminance, which are nonlinear processes.
  3. The reverse transform back to RGB is again nonlinear.


My point is that it is not possible to think of the resulting image as truly linear. Would it be more accurate to say that the combination is being done when the images are lightly stretched and color balanced?
whwang:
Arun H:
  1. You would need to extract luminance from the RGB image. The relationship between L and RGB is nonlinear.
  2. If you then attempt to match the histogram of the Luminance to the extracted luminance, you are inevitably making contrast adjustments to the Luminance, which are nonlinear processes.
  3. The reverse transform back to RGB is again nonlinear.

Hi Arun,

The three points you listed are all incorrect.

#1 Luminance is not directly R+G+B; this is correct. Because our eyes are more sensitive to G, G has a higher weight in luminance. But nevertheless, luminance is a linear combination of R, G, and B, so the luminance image is still a linear image.

#2 If the luminance of the RGB is linear, then matching L to the luminance is a linear process. All you need to do is match the white and black points and leave the rest of the curve linear (i.e., no midpoint adjustment). This is super easy in PS with Curves. No fancy histogram matching is involved.

#3 There is no reverse transform. The luminance of the RGB can easily be created with a layer, and then L is matched to that. The matched L layer is merged into the RGB as its new luminance to form an LRGB image. Everything remains linear.
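A small numpy illustration of the kind of linear match #2 describes (the percentile-based black/white points are an assumption for illustration, not Wei-Hao's actual picks): a single y = a·x + b rescale, with no midtone adjustment, keeps the L strictly linear.

```python
import numpy as np

def match_linear(L, target, lo_pct=0.1, hi_pct=99.9):
    """Rescale L so its black/white points land on those of target.
    The percentile endpoints stand in for the manual black/white point
    picks described above."""
    lo_s, hi_s = np.percentile(L, [lo_pct, hi_pct])
    lo_t, hi_t = np.percentile(target, [lo_pct, hi_pct])
    a = (hi_t - lo_t) / (hi_s - lo_s)  # slope only: no gamma, no midpoint
    return a * (L - lo_s) + lo_t       # still a strictly linear transform
```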

The only issue here is that, as I mentioned in a previous reply in this thread, L is not strictly the luminance of the RGB. Imagine the RGB filter transmission curves all have perfectly square shapes and the three curves have neither gaps nor overlaps between them. Also, let's say the blue cutoff of B and the red cutoff of R match those of L. In such a simplified situation, L is strictly R+G+B (with R, G, B pre-color-balance). But the actual RGB image is color balanced, plus luminance is a weighted linear combination of R, G, and B rather than simply R+G+B. This makes the luminance of the RGB never look exactly the same as L, no matter how you do the matching. This problem cannot be solved even with nonlinear matching. In my opinion, this is the true weak point of LRGB composition in terms of color fidelity. But as I have also said, this is subtle and is not the cause of what most people encounter when they think there is color washout.
HegAstro:
Wei-Hao Wang:
#1 Luminance is not directly R+G+B; this is correct. Because our eyes are more sensitive to G, G has a higher weight in luminance. But nevertheless, luminance is a linear combination of R, G, and B, so the luminance image is still a linear image.


Hi Wei-Hao,


You are correct that luminance is a linear combination of R, G, and B, so my statement above was incorrect in reference to L. I was thinking of L*, which has a nonlinear relationship to RGB. I think PS and PI may do LRGB combination in different ways.


This is what Juan says about PixInsight:

"You cannot perform a LRGB combination with linear images. The reason is that CIE L*a*b* and CIE L*c*h*, which are used to perform the LRGB combination, are nonlinear color spaces.

So you must process your RGB and L images separately. Apply all required processes to the linear images before LRGBCombination (for example, color calibration and deconvolution should be applied to the linear RGB and L images). Then apply the initial nonlinear histogram transformations to both RGB and L, perform the LRGB combination, and continue working on the combined image. Most PixInsight tools allow you to apply processes only to the luminance of a color image (actually, to the lightness component of CIE L*a*b*), without requiring a separate L image."

Converting RGB to L*a*b* requires an initial conversion to XYZ space, then nonlinear functions to create the L*a*b* values.
jrista:
Arun H:
Adam Drake:
I was under the impression that L was only helpful in gaining detail when people still binned CCD cameras to gather RGB and used that data to "paint" the higher-resolution unbinned L image.  Since (again, as I understand it) there is no advantage to binning with CMOS cameras, this technique is somewhat obsolete.

Very open to being educated more on this.

Yes, binning color in-camera with CCDs was the original reason for luminance. However, with CMOS, it has morphed into considerations around more efficient SNR gathering. After all, you are gathering roughly three times as many photons per unit time with L as with any single RGB filter. The problem is that I don't think it is a simple matter of adding the photons gathered. The RGB image is converted to L*a*b* and the L* is replaced with the luminance-filter version. The SNR of the composite image is then determined not by simple photon counting but by much more complex math. Incidentally, this replacement illustrates why there are so many color problems with LRGB combination: if the luminances are not close to each other, the colors will be washed out and the image will look "off". Pure RGB images will not have this problem, as the luminance is, obviously, matched, since it is derived from the RGB information.

This is where I think the discussion should really lie. Is it an improvement in SNR? If you do a straight LRGB combination with limited RGB data, without some kind of significant noise reduction applied to the RGB, then you do not actually gain much. The noise of the RGB shows through the L very well, especially once the RGB is stretched and otherwise adjusted to avoid washout (Wei-Hao seems to have a technique that limits this; I have not had much experience with it; direct L* -> L replacements will usually wash out RGB fairly significantly without significant RGB stretching and even enhancement, i.e. saturation or chromaticity, etc.). The L channel itself may be very smooth, but if the RGB channels are noisy, then the combination with L won't actually smooth things out as much as most people think (experiments have borne this out in the past).

In order for an acquired L channel to improve image smoothness (not SNR, really; how do you even define SNR in the context of combining L with RGB?), you are basically still following the same old tried-and-true technique: perform significant denoising on the RGB, and effectively "paint" the L with it.

So, is it a gain in SNR, really? Or are we just talking about an improvement in an aesthetic quality, signal smoothness? Signal smoothness is not necessarily the same as an increase in SNR. A true increase in SNR will usually result in a smoother signal (if the original is indeed smooth!!), but other things can smooth signals out as well, and considering the kind of NR/artificial smoothing that usually has to be done to RGB data to combine L with it, you are then losing real SNR in the RGB (i.e. losing color contrasts and micro-details in color).

The question then becomes: in the pursuit of SNR (vs. just improved image smoothness), if you invest all the time that would have gone into L, could you make your RGB-only image a truly higher-SNR image, with improvements in ALL details, even faint colored ones, not just monochromatic "details"? If you would have, say, pursued a 5:1:1:1 L:R:G:B ratio, and instead went for a 2.67:2.67:2.67 RGB-only ratio, how would things look? What about a 2:3:2 RGB ratio? The same total amount of time invested, just trading your L time back into the RGB channels?

For a long time now, I've thought that a lot of images covering areas of the sky with very faint details (largely galaxies, dark dusty regions, IFN, dark nebula structures that might be reflecting light, frequently blue and white, etc.) usually look gray, or grayish, in those fainter details. However, whenever you look at images of those same regions where the imager acquired SIGNIFICANT and very DEEP RGB data, those same details are rarely gray or grayish; they usually have fairly rich color. IFN, for example, in images with very deep RGB, usually takes on more of a brownish color, often with notable colors around closer star reflections (yellows, deep blues, sometimes whitish, sometimes orangish).

So I always wonder: by putting so much time into L (which seems to have increased lately; the once 3:1:1:1 ratio has morphed into a 5:1:1:1 ratio a lot of the time, and I've even come across images with more of a 10:1:1:1 L:R:G:B ratio) and so little time into RGB, what are our images losing? I think there is more color out there than we often capture. There are these amazing collaborative images these days, often with hundreds of hours of total integration, including significant amounts of RGB data within the total. Those images show off a lot of these fainter, deeper colors: reflections that are often missed, color in faint outer galactic structures (often captured, but usually showing up mostly gray in most images), etc. They are often a good reference point for what kind of colors these faint structures might have. These collabs still often have more L than anything else, and I still wonder: if they had invested that L time into RGB and narrowband channels instead, what would things look like?

Even ignoring the fainter details and colors for a bit: how realistic are the colors of most LRGB images? It's not often that anyone really goes DEEP on RGB, so the color we see most of the time is usually fairly limited... How has that skewed our collective understanding of the colors of the objects we commonly image? Have we, as a collective or community, become "normalized" to colors that may in fact be tainted by LRGB combination, and the washout, to one degree or another, that usually accompanies it?

Anyway, just food for thought. Is LRGB ACTUALLY increasing SNR? Or is it just making our images smoother, while at the same time tainting our color, and even tainting our understanding of the color of the universe?
whwang:
Hi Arun,

What I do isn't that complex.  I just use L to replace the luminance of the RGB in PS.  Because of the matching, the L looks just like the luminance of the RGB if you look at it from a distance.  This improves the sharpness and reduces noise.  It's very simple, and it's not magic at all.

The CIE Lab space may be nonlinear, but this has nothing to do with whether LRGB composition has to be done in a linear or nonlinear manner.  The color images that we are used to seeing are all nonlinear, but that fact does not prevent us from composing RGB in linear space (and then doing the nonlinear transform).  We all do RGB composition in linear space, right?  The same goes for LRGB composition.
HegAstro:
Wei-Hao Wang:
The CIE Lab space may be nonlinear, but this has nothing to do with whether LRGB composition has to be done in a linear or nonlinear manner.  The color images that we are used to seeing are all nonlinear, but that fact does not prevent us from composing RGB in linear space (and then doing the nonlinear transform).  We all do RGB composition in linear space, right?  The same goes for LRGB composition.


Hi Wei-Hao,

I don't want to belabor the point too much, and in the end, it probably does not matter whether it is truly linear or nonlinear in a mathematical sense as long as it works.

When we calculate extracted luminance from R, G, B we are using:

Y = x·R + y·G + z·B, where x, y, and z are constants. This is obviously a linear transformation. This is where I was mistaken in saying the relationship between L and RGB is nonlinear; it obviously is not.

To "incorporate" an acquired luminance into our RGB file, my understanding is we do the following.
  • Convert R, G, B into L*, a*, b*. This is a nonlinear process, as described here:

[screenshot of the XYZ-to-L*a*b* conversion formulas; the standard definitions are reproduced after this list]
  • Replace the L* so calculated with the L* calculated from the acquired luminance
  • Once this is done, the math can be reversed to convert back to RGB space for display or other processing
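For reference, the standard CIE XYZ-to-L*a*b* definitions (presumably what the lost screenshot showed; $X_n, Y_n, Z_n$ are the reference white point, and the cube root is what makes the transform nonlinear):

$$f(t) = \begin{cases} t^{1/3} & t > (6/29)^3 \\ \dfrac{t}{3(6/29)^2} + \dfrac{4}{29} & \text{otherwise} \end{cases}$$

$$L^* = 116\,f(Y/Y_n) - 16, \qquad a^* = 500\,[f(X/X_n) - f(Y/Y_n)], \qquad b^* = 200\,[f(Y/Y_n) - f(Z/Z_n)]$$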


Of course, it does not matter from a math standpoint whether we are working with linear or nonlinear data; it is a matrix of numbers, after all. The point about nonlinearity is that the resulting LRGB-transformed image no longer has a linear relationship with the photons collected, either in the L data or in the RGB data. That is, LRGB is inherently nonlinear because of the way it has to be done.


Incidentally, I think this is why it is not very simple to say that LRGB captures X times more photons than pure RGB, therefore the SNR must be SQRT(X) better. Very clearly, this is not the case. And this math also shows why it is important to match the Ls: if you don't, the calculated LRGB image can have data that looks off and odd.
whwang:
I think the whole point of preventing color washout in LRGB composition is to make the L image look as close as possible to the luminance of the RGB.  PI chose to do it in nonlinear space, which I think is not very wise; this can be done much more easily in linear space.  Unfortunately, without actually doing it in front of you in PS, I probably cannot easily demonstrate this to those who are not familiar with PS.

As for the S/N, I never implied that the S/N of an LRGB image can be easily calculated.  But no matter what the math behind it is, getting more photons to start with implies a smoother image. Also, if you have higher S/N in the individual images, no matter whether you do a linear or nonlinear combination, error propagation will lead to a higher-S/N end result.  The issue here is just that the S/N in an LRGB image is ill-defined.  That's why I showed the comparison images in a previous reply instead of doing math here.  If one can't calculate a meaningful S/N, at least one can look with one's own eyes to decide if the end result is really better.
HegAstro:
Wei-Hao Wang:
As for the S/N, I never implied that the S/N of an LRGB image can be easily calculated.  But no matter what the math behind it is, getting more photons to start with implies a smoother image. Also, if you have higher S/N in the individual images, no matter whether you do a linear or nonlinear combination, error propagation will lead to a higher-S/N end result.  The issue here is just that the S/N in an LRGB image is ill-defined.  That's why I showed the comparison images in a previous reply instead of doing math here.  If one can't calculate a meaningful S/N, at least one can look with one's own eyes to decide if the end result is really better.


Hi Wei-Hao,

Yes, I agree. You never suggested that the SNR of an LRGB image is easily calculated, and I apologize if my post suggested you did. The purpose of that statement was simply to amplify an earlier point that Jon and I made, which was that it is not obvious, in a quantitative sense, how LRGB improves SNR. It had nothing at all to do with anything you said or implied. Your examples have been immensely helpful, far more so than any math, since they show how this works in the real world. It is possible to use LRGB combination to get an improved image, even if one cannot (easily) calculate how much the SNR improved. But, as you and others with experience have shown, it requires skill and finesse, and knowledge of the subject and the objective you are going after.
HegAstro:
Wei-Hao Wang:
PI chose to do it in nonlinear space, which I think is not very wise; this can be done much more easily in linear space.


I wish I knew mathematically how PS implements it in linear space (probably XYZ space). There has to be a way of preserving color, and it is not obvious how PS would do that in linear space.

From a PI standpoint, this is what Juan says:

"For LRGB to provide a meaningful result the RGB and L input images must be nonlinear (stretched). LRGB may 'seem to work' with linear images in some cases, but these are casual results because the implemented algorithms expect nonlinear data...

... The main problem of LRGB is achieving a good balance between the nonlinear L and RGB components, which must be compatible to produce a reasonable result. But this is a completely different (and complex!) topic."

and 

"The LRGB combination process generates an RGB color image. To describe it in simple terms, the LRGB combination technique consists of replacing the lightness component of a color image with a new lightness component that usually has a higher resolution (for example, binned RGB and unbinned L) and often more signal. To achieve this the algorithm works in the CIE Lab color space, which separates lightness (L) from chroma (ab), and some parts also in the Lch space in our implementation (where chroma is interpreted as colorfulness (c) and hue (h)). Both color spaces are nonlinear and hence require nonlinear data. Theoretically we could perform an LRGB combination with linear data in the CIE XYZ space, where Y is luminance and XZ represent the chroma component, but this is difficult in practice (we investigated this option years ago) because mutual adaptation between luminance and chroma is difficult to understand and achieve when working with linear data."


So it appears Juan tried to implement it in linear space, but did not achieve acceptable results.

This is just fyi.
jrista:
Wei-Hao Wang:
I think the whole point of preventing color washout in LRGB composition is to make the L image look as close as possible to the luminance of the RGB.  PI chose to do it in nonlinear space, which I think is not very wise; this can be done much more easily in linear space.  Unfortunately, without actually doing it in front of you in PS, I probably cannot easily demonstrate this to those who are not familiar with PS.

As for the S/N, I never implied that the S/N of an LRGB image can be easily calculated.  But no matter what the math behind it is, getting more photons to start with implies a smoother image. Also, if you have higher S/N in the individual images, no matter whether you do a linear or nonlinear combination, error propagation will lead to a higher-S/N end result.  The issue here is just that the S/N in an LRGB image is ill-defined.  That's why I showed the comparison images in a previous reply instead of doing math here.  If one can't calculate a meaningful S/N, at least one can look with one's own eyes to decide if the end result is really better.

FWIW, I wasn't trying to say you ever implied that about SNR.

I am curious: do you know how PS combines the luminance with the color in linear space? As Arun shared above, Juan C. indicated it was a difficult task to do with linear data. Your images do look very nice, and I wonder what PS is actually doing that makes it work...
jrista:
Arun H:
Wei-Hao Wang:
PI chose to do it in nonlinear space, which I think is not very wise; this can be done much more easily in linear space.


I wish I knew mathematically how PS implements it in linear space (probably XYZ space). There has to be a way of preserving color, and it is not obvious how PS would do that in linear space.

From a PI standpoint, this is what Juan says:

"For LRGB to provide a meaningful result the RGB and L input images must be nonlinear (stretched). LRGB may 'seem to work' with linear images in some cases, but these are casual results because the implemented algorithms expect nonlinear data...

... The main problem of LRGB is achieving a good balance between the nonlinear L and RGB components, which must be compatible to produce a reasonable result. But this is a completely different (and complex!) topic."

and 

"The LRGB combination process generates an RGB color image. To describe it in simple terms, the LRGB combination technique consists of replacing the lightness component of a color image with a new lightness component that usually has a higher resolution (for example, binned RGB and unbinned L) and often more signal. To achieve this the algorithm works in the CIE Lab color space, which separates lightness (L) from chroma (ab), and some parts also in the Lch space in our implementation (where chroma is interpreted as colorfulness (c) and hue (h)). Both color spaces are nonlinear and hence require nonlinear data. Theoretically we could perform an LRGB combination with linear data in the CIE XYZ space, where Y is luminance and XZ represent the chroma component, but this is difficult in practice (we investigated this option years ago) because mutual adaptation between luminance and chroma is difficult to understand and achieve when working with linear data."


So it appears Juan tried to implement it in linear space, but did not achieve acceptable results.

This is just fyi.

FWIW, if you try to replace the natural Y component of an RGB image in PI with an acquired L, the results are usually not that much different from LRGBCombination. I've tried this a number of times, and I still ended up with washed-out colors.

The proclamation is that LRGB combination being a nonlinear operation is what causes this, but in practice that hasn't been my experience. In practice, if the RGB isn't stretched to the same degree as the L (which, with weak RGB, will greatly exaggerate the apparent noise), then the combination, regardless of how it's done, washes out the color. If you DO stretch the RGB enough, then the combination won't wash out as much, but certain colors will usually still shift (which reduces accuracy, even if the result remains pretty). Stretching weak RGB that much greatly increases the apparent noise, though, and the combination still ends up very noisy.

Even with a linear Y replacement with L, stretching the combination usually still ends up quite noisy, and it is rarely neutral to the color.

PI does have a tool called RGBWorkingSpaces. It is supposed to weight a monochrome L image in such a manner that when it combines with RGB, the weights with which it combines are different for each channel. I've never been able to get it to work as I thought it was supposed to, but I wonder if it might be a way to get a linear Y->L swap to work without changing the color at all. Thus far I haven't been able to get it to work, but I am not really sure how to figure out what weights to give R, G, and B in the L before combination such that it would work, or even how to determine those weights.
HegAstro:
Jon Rista:
FWIW, if you try to replace the natural Y component of an RGB image in PI with an acquired L, the results are usually not that much different from LRGBCombination. I've tried this a number of times, and I still ended up with washed-out colors.


Even with a linear Y replacement with L, stretching the combination usually still ends up quite noisy, and it is rarely neutral to the color.


Hi Jon -

I suspect this is exactly what PS is doing, and I expect Wei-Hao's method (and the quality of data he uses) may be what overcomes the issues Juan was pointing out. In the XYZ space, Y is the luminance. One issue you run into with a straight replacement of Y with L is that Y has a greater component of G in it, whereas the captured L is equally weighted in R, G, and B by design, due to the transmission characteristics of the L filter. So, when you invert the matrix to calculate the new R, G, B values, you will have lost color information in G. Effectively, you can see from the matrix coefficients that X and Z don't have strong weighting in G (they are weighted more toward R and B). Throwing away the Y from the RGB data effectively throws away a bunch of G data (for the benefit of a better Y), which results in washed-out colors but better SNR. Using very good R, G, B data may overcome this. I expect the method PI uses tries to avoid this problem through conversion to other color spaces. But in either case, a good image will require good data and skill in matching the L's.


[screenshot of the RGB-to-XYZ conversion matrix; a standard version follows]
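A standard version of that matrix (linear sRGB to XYZ with a D65 white point; the lost screenshot presumably showed something equivalent). Note the dominant G weight (0.7152) in the Y row versus the modest G weights in the X and especially Z rows, which is exactly the G-information loss described above:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}$$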
whwang:
Hi Jon, Arun,

Below is what I observed in PS and what I make use of for LRGB combination. I never tried to figure out what it mathematically actually is. All I know is that it works beautifully. So maybe you can try it and let us know what you think.

First, luminance. In PS's layers window, one can choose among many different blending modes; one of them is "luminance" (labeled "Luminosity" in English versions of PS), while the default is "normal." You can first create a layer under the RGB astro image (linear or not) and paint that layer with any gray. Then you can change the blending mode of the RGB astro image to luminance. If you flatten the two layers, you get an image that's the luminance of the RGB.

One interesting thing to know is that this luminance image will look different from a grayscale conversion of the RGB. You can convert the RGB image to grayscale, copy/paste it as a new layer on the luminance image created previously, and turn that new layer on and off to blink between the grayscale and the luminance. You will see that they are different, but the difference isn't huge.

Back to the luminance. You can copy the luminance image and paste it onto the original RGB. Change the blending mode of the luminance layer to luminance. You will see that the RGB image does not change at all. You can confirm this by turning the luminance layer on and off to see if it makes a difference; it shouldn't. This shouldn't be surprising: since we are using the luminance of the RGB as its own luminance, nothing should change.

You can also use the grayscale image as the top layer on the RGB instead. Change the blending mode to luminance and turn the layer on and off. You will see that using the grayscale as luminance changes the look of the RGB.

If you are able to conduct the above simple experiment in PS and make the observations I suggested, you are 100% ready to do LRGB composition in PS (linear or not). All you need to do is copy/paste the L image onto the RGB and change the blending mode to luminance. If the L has a brightness/contrast matched to the luminance of the RGB, then you are done. It should not change the overall look of the RGB image, as the match demands that the L image look very close to the luminance of the RGB, so pasting such an L image on the RGB as luminance should not change its overall look (including color). All it changes is detail, since at the detail level (S/N and sharpness) the L image should not look similar to the luminance of the RGB at all. (Once again, after the L is blended with the RGB, it's no longer trivial to talk about S/N. But for the L itself and for the luminance of the RGB, since both are grayscale images, S/N has a rather simple definition, at least for the linear ones.)

The real challenge is how to match L to the luminance of the RGB. In my opinion, this is most easily done in linear space. I tried PI's LinearFit (on the linear images), but somehow it works very poorly. In the end, I found that, with a little practice, this can be done quickly and manually in PS using layers.
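For those wanting to replicate this outside PS: a minimal numpy sketch of one way to do a linear luminance swap (not necessarily the math behind PS's Luminosity blend, just a chromaticity-preserving stand-in). Scaling all three channels by L/Y leaves every pixel's R:G:B ratios, and hence its color, untouched:

```python
import numpy as np

# Assumed Rec.709 luminance weights; any fixed linear combination would do.
WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def lrgb_linear(rgb, L, eps=1e-6):
    """rgb: (H, W, 3) linear image; L: (H, W) luminance frame already
    matched to the RGB's black/white points. Rescales each pixel so its
    luminance equals L while the R:G:B ratios (the color) are untouched."""
    Y = rgb @ WEIGHTS                  # linear luminance of the RGB
    ratio = L / np.maximum(Y, eps)     # avoid division by zero in shadows
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```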
HegAstro:
Thank you, Wei-Hao. I am not very good at Photoshop, and I generally try to avoid using it, but I think this experiment is simple enough to try.

I was also thinking that the following experiment in PI may be worth doing:
  • Perform all linear processes on the RGB image, including color calibration
  • Convert the resulting image to XYZ
  • Now replace Y with L, making sure to match the black and white points as you do in Photoshop
  • Convert the result back to RGB
  • Now perform color calibration on the result the old-fashioned way, since it is still linear.

I wonder how this would work?
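In numpy terms, the proposed experiment might look like the sketch below (the matrix is the standard linear-sRGB-to-XYZ one shown earlier; whether PI's working space uses the same coefficients is an assumption):

```python
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],   # linear sRGB -> XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def y_swap(rgb, L):
    """Convert to XYZ, replace Y with the (already matched) L, and convert
    back. Every step is a linear matrix operation, so the result is still
    linear. Unlike the ratio method above, X and Z are left untouched,
    which is where the G-information loss discussed earlier comes from."""
    xyz = rgb @ M.T
    xyz[..., 1] = L                     # swap in the acquired luminance
    return xyz @ np.linalg.inv(M).T
```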
jrista:
Arun H:
Jon Rista:
FWIW, if you try to replace the natural Y component of an RGB image in PI with an acquired L, the results are usually not that much different from LRGBCombination. I've tried this a number of times, and I still ended up with washed-out colors.


Even with a linear Y replacement with L, stretching the combination usually still ends up quite noisy, and it is rarely neutral to the color.


Hi Jon -

I suspect this is exactly what PS is doing, and I expect Wei-Hao's method (and the quality of data he uses) may be what overcomes the issues Juan was pointing out. In the XYZ space, Y is the luminance. One issue you run into with a straight replacement of Y with L is that Y has a greater component of G in it, whereas the captured L is equally weighted in R, G, and B by design, due to the transmission characteristics of the L filter. So, when you invert the matrix to calculate the new R, G, B values, you will have lost color information in G. Effectively, you can see from the matrix coefficients that X and Z don't have strong weighting in G (they are weighted more toward R and B). Throwing away the Y from the RGB data effectively throws away a bunch of G data (for the benefit of a better Y), which results in washed-out colors but better SNR. Using very good R, G, B data may overcome this. I expect the method PI uses tries to avoid this problem through conversion to other color spaces. But in either case, a good image will require good data and skill in matching the L's.


[screenshot of the RGB-to-XYZ conversion matrix]

Right, the lack of weighting in the L vs. Y is a problem. This is where I wonder if RGBWorkingSpaces could help. I have actually never tried to extract the RGB weights from an extracted Y...I wonder if that would allow you to properly weight the L before replacing the RGB's intrinsic Y with L...

In any case, I always find that weak RGB, no matter how good your L is, still results in a noisy image. Especially with, say, 5:1:1:1 or even 10:1:1:1 exposure ratios across the filters, the RGB tends to be very weak and noisy. No matter how good your L combination is, the chroma noise will be determined by the RGB data. So you can get pretty noisy results with a Y->L swap (which can be done linearly). This is why I question what we call an improvement in SNR with L combination.

I have been waiting for a good window of opportunity to get a ton of RGB on a couple of targets at my dark site. I will also get L (although probably not more than a 3:1:1:1 ratio) so comparisons can be made. I don't expect pure RGB data to be smoother than an LRGB combination, although I think pure RGB data can have perfectly good SNR if you skip acquiring L. I think too much emphasis is put on getting immense amounts of L data (leaving the RGB really weak), and no one seems to care much about RGB SNR (which frequently starts to desaturate in the middle tones and fainter, resulting in less and less color as structures disperse or fade into the background, when my guess is that, say, brown dust should be brown throughout, even into its faintest wisps, instead of desaturating to gray).

If your RGB signal is weak, then this will be a problem no matter how good your L is. If you never capture RGB photons, or capture too few on faint structures, then your L could be 100x better and you would still be missing color information. Even if you still prefer to capture L, I think the ratios often used today (I just came across an image with ~7:1:1:1 L:R:G:B ratios) are really hurting color in a lot of areas of many images. A 3:1:1:1 LRGB ratio could still lead to an effective 6:1:1:1 ratio if a synthetic L were created by integrating all four channels, if you still want a really strong L for processing purposes; that should at least minimize the loss of color fidelity in fainter signals.
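A sketch of that synthetic-L idea (the weighting scheme is an assumption for illustration; integration-time or inverse-variance weights are common choices, not something this thread prescribes):

```python
import numpy as np

def synthetic_L(masters, weights):
    """Weighted average of registered L, R, G, B master frames.
    'weights' might be each master's integration time or its inverse
    variance; either choice is an assumption made here for illustration."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so flux scale is kept
    return np.tensordot(w, np.stack(masters), axes=1)

# e.g. 3:1:1:1 hours of L:R:G:B -> weights [3, 1, 1, 1]:
# synth = synthetic_L([L, R, G, B], [3, 1, 1, 1])
```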