Luminance Subframes Degrade Color Images' Resolution (A demo and explanation of why) [Deep Sky] Processing techniques · Alex Woronow

Alex_Woronow
·  4 likes
Luminance Subframes Degrade Color Images' Resolution
Alex Woronow, 2024



Introduction
Luminance subs were collected in ancient times to improve signal-to-noise and to "provide" detail when color (RGB) subs were collected with 2x2 binning to save imaging time while sacrificing color resolution. Adding the luminance (L) data could (theoretically) restore at least some image resolution. If done well, both objectives, resolution recovery and noise suppression, might have been achieved by capturing luminance (L) subframes. But we now seldom collect binned data to the extent that it reduces our potential optical resolution, and we now have highly effective AI denoise methods, which, theoretically, easily outperform the benefits of capturing L. Both of these advances come from CMOS technology. So here we re-examine whether any actual benefit accrues from capturing L data over just RGB data with modern imaging technology and methods.

The conclusion documented below is that not only is no benefit accrued from using L subs, but L actually degrades the potential resolution of color images.

A Controlled Comparison
For this first examination of the effects of adding L to color images, I processed an image of Messier 98 using Telescope Live data. The data were processed in three ways: 1) simple RGB, 2) LRGB as done classically, by introducing the nonlinear luminance into the nonlinear RGB, and 3) making an l_RGB image, where the linear luminance is introduced into the linear rgb image. All three were processed identically in preprocessing and post-processing, except obviously when introducing the luminance data.

Before going further, let me describe my method for introducing linear luminance into the linear rgb. The steps are
1.    Extract the luminance from the rgb image (call it "extracted_l").
2.    Make a super luminance by the equation super_l = l + r + b + g. This sums up all the photons that have been collected.
3.    Linear fit super_l to the extracted_l.
4.    Replace the luminance in the rgb linear image with this linear super_l, creating the l_rgb image.
These manipulations and all preprocessing image manipulations were done in PixInsight.
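
For readers who want the steps spelled out, here is a minimal sketch in Python/numpy; the array names are illustrative placeholders for the registered linear masters, and np.polyfit stands in for PixInsight's LinearFit:

import numpy as np

# Minimal sketch of steps 1-4 above (illustrative only; not actual PixInsight code).
def make_linear_super_lum(l, r, g, b, extracted_l):
    super_l = l + r + g + b                               # step 2: sum every captured photon
    # step 3: fit super_l to extracted_l (extracted_l ~ a*super_l + k), then rescale it
    a, k = np.polyfit(super_l.ravel(), extracted_l.ravel(), 1)
    return a * super_l + k                                # ready to replace the rgb luminance (step 4)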

We stretch the three images: the rgb image becomes the nonlinear RGB image, stretching the linear l_rgb image yields l_RGB, and the stretched RGB receives the stretched L to become the L_RGB image. The simple RGB image has less data support than the other two. In practice, we could rectify that at the scope by spending the time used to collect the luminance subs on collecting more color subs. However, we cannot remediate these data at this point, and we will simply recognize that the straight RGB image is somewhat disadvantaged.

We post-process our three stretched images identically in Topaz Studio2 according to the workflow shown in Fig 1. However, the first AI-noise-removal step is customized for each image. The processing intensity targeted revealing the images' full detail, including color and brightness details and full color and brightness contrasts. With this approach, we can best see the differences among the images.

Fig 2 shows the results for the RGB, l_RGB, and L_RGB images, from left to right. The first noticeable difference among the images is their mildly varying brightness and contrast. Those differences are ignored here. More to the point, each image's details and clarity differ. The RGB image's detail and clarity exceed those of both luminance-hosting images. Between the two images with luminance, the l_RGB has noticeably more detail than the L_RGB image.

Even with less data support, the simple RGB image appears better. Equalizing the support of RGB data with more subframes would most likely make that distinction even more evident. In either case, adding luminance to the RGB data has decreased the image's detail.

image.png
Fig 1: These are the post-processing steps used in this study. The order of application of the tools is from the bottom up. The first DeNoise step was adjusted to suit each input image, and the subsequent steps were applied identically to all three images. The last denoise step operates identically on each image.

image.png
Fig 2: The three images, left to right, are RGB without any luminance added, l_RGB, and L_RGB. The three images have been processed identically. This image is available for download and closer inspection at
https://www.dropbox.com/scl/fo/hog44sku0wipxd4i6y367/ANvEYEspY6Aa-OYX-H0sh9E?rlkey=cpj8jers37crr8yw7992moo3f&dl=0
or you can copy the image and paste it into a program that allows zooming.

At this point, we may wonder: can we do anything to improve the l_RGB and/or the L_RGB image? What if we used the same post-processing steps but optimized each step's parameters for each image?

Glad you asked…

A Free-Processing Comparison
Suppose we allow each image to have identical processing steps, but each step can have different values set for its parameters. Will we reach the same relative image-detail levels found above? (The processing train is the one I repeat for virtually every image I process. But I may follow those steps with additional processing in various other programs.)
After the customized post-processing, following the workflow in Fig 1, the histograms were adjusted visually using the HistogramTransformation tool to match the "shadows" and "midtones." A better image-matching tool would be Histogram Matching (https://en.wikipedia.org/wiki/Histogram_matching), but my request for that went unacknowledged by the PI team. Fig 3 shows the results from this make-do manual matching.
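
For reference, the histogram-matching idea itself is simple to sketch; this is an illustrative numpy version of the method described at the Wikipedia link above, not a PixInsight tool:

import numpy as np

# Illustrative sketch of histogram matching: remap the source pixels so their
# cumulative distribution follows the reference image's distribution.
def match_histogram(source, reference):
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)    # reference value at each source quantile
    return np.interp(source.ravel(), s_vals, matched_vals).reshape(source.shape)
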
Once again, the RGB shows more detail than the l_RGB, which shows more detail than the L_RGB.
Could more processing steps be added to the L-containing images to bring them to the level of detail shown in the plain RGB image? Maybe in some cases, but in general, my experience suggests otherwise. Again, spending the time needed to capture L on capturing more RGB instead is probably the better route to better images.

image.png
Fig 3. The three images, left to right, are RGB without any luminance added, l_RGB, and L_RGB. The three images were processed with the same steps, but with each step's parameters optimized for each image. This image is available for download and closer inspection at
https://www.dropbox.com/scl/fo/hog44sku0wipxd4i6y367/ANvEYEspY6Aa-OYX-H0sh9E?rlkey=cpj8jers37crr8yw7992moo3f&dl=0
or you can copy the image and paste it into a program that allows zooming.

Why Does Luminance Degrade RGB Detail?
The examples (and other experiences I have had) show that L diminishes the detail in RGB images. It appears to do this by introducing a softness or blur. The following is a logical explanation for that behavior.
Most of us agree that, for relevant targets, the Ha filter reveals more detail than the red filter. Why is that? The red filter captures photons identical to those the Ha filter captures, and even more photons from a broader wavelength range. That "and even more" wavelength acceptance is what degrades the red-filter detail relative to the Ha filter. Those extra photons come predominantly from the continuum radiation and are largely featureless, diffuse, and only weakly correlated with the local details. That is, the red filter passes both the photons carrying structural detail and a largely featureless background, all blended together, thereby degrading the sharpness and contrast of the R image relative to the Ha image.
The relationship between the R, G, and B images and the L image is the same as the Ha - R relationship just described. The L has a pervasive featureless, diffuse component that originates with the continuum radiation and is largely uncorrelated with the structural detail in the RGB image, which originates from more localized radiation, such as SII, Ha, and OIII. Therefore, when L is added to an RGB image of a nebula, the RGB image's detail resolution and contrast are lessened.
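
A toy numerical illustration of that argument (made-up numbers, not a simulation of real optics): blend a structured signal with a featureless continuum and the relative contrast drops.

import numpy as np

# Toy illustration only: a structured "line emission" signal blended with a
# featureless "continuum" shows lower relative contrast than the signal alone.
rng = np.random.default_rng(0)
structure = rng.random(100_000)                  # stands in for Ha-like structural detail
continuum = np.full(100_000, 0.5)                # diffuse, featureless background
broadband = 0.5 * structure + 0.5 * continuum    # broadband channel = blend of the two

contrast = lambda img: img.std() / img.mean()    # simple relative-contrast metric
print(contrast(structure), contrast(broadband))  # the blended channel has half the contrast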

Summary
To begin, assume the following statements are factual:
•    We have RGB data with either the maximum pixel resolution afforded by the camera or a pixel resolution reaching the Dawes limit.
•    The data set has sufficient RGB subs to reach the sky limit or the desired/acceptable limiting magnitude.
We can conclude from the discussions and the above assumptions:
   1.    Luminance data added to RGB data will not improve the resolution of the RGB data; it will degrade it.
       a.    Adding L in the nonlinear stage is more detrimental than adding it in the linear stage.
       b.    Adding L may improve the limiting magnitude, but that has yet to be demonstrated.
       c.    Luminance data added to narrowband data will be even more damaging.
   2.    Reassigning the capture and processing time spent on L data to the capture of sufficient RGB is the better strategy.
   3.    One-shot color images do not suffer from being unable to take luminance subframes.
   4.    If you do not push your images hard for detail, you may see scant, if any, differences between RGB and LRGB images. In that case too, L offers no added value.
   5.    The conclusions reached here extend beyond the galaxy example to include emission, reflection, and dark nebulae. Stars and star structures are probably less affected.
   6.    If your data have very limited RGB subs (maybe <5 or so, depending on the exposure times), then L may be necessary to attain reasonable image quality. But it would have been better if the L had been omitted and more RGB had been acquired.
messierman3000 4.02
·  7 likes
I like turtles
whwang 11.64
·  12 likes
In my experience, when done correctly, LRGB is always better than RGB (same total integration time) on continuum objects (galaxies and reflection nebulas). So I disagree with everything you said. Unfortunately I can't confidently tell you why. I tried to find out why you reached such conclusions by looking at your example images, and they all look terribly over-processed to me. Maybe your over-processing hides all the relevant details and misled you.
Rustyd100 4.26
I understood the second paragraph. Luminance is bad.
WhooptieDo 9.82
·  10 likes
Alex Woronow:
We post-process our three stretched images identically in Topaz Studio2 according to the workflow shown in Fig 1.
[... the quote continues with Fig 1 and the full Summary section, reproduced in the original post above ...]

First of all, don't use Topaz. Topaz does not properly sharpen anything; it makes assumptions and creates false details. It's no wonder you're seeing degradation when you add luminance: the luminance data will look nothing like your RGB image because it's all fake. The astro community abandoned Topaz ages ago because of this. There are a few here and there that still use it, but extreme caution/care must be used if you're even going to consider it. Judging by your provided examples, you're using it carelessly on a high power setting.


Your summary is all we need to see to understand that you don't comprehend how luminance works. See my response in bold.

We can conclude from the discussions and the above assumptions:
   1.    Luminance data added to RGB data will not improve the resolution of the RGB data; it will degrade it.

a.    Adding L in the nonlinear stage is more detrimental than adding it in the linear stage.     

     Luminance was never meant to be added during the linear stage.  This process is incorrect.  Luminance and RGB data must be stretched to properly combine.   

b.    Adding L may improve the limiting magnitude, but that has yet to be demonstrated.

     It's been demonstrated many times before.

c.    Luminance data added to narrowband data will be even more damaging.
  This is obvious.  The signals are not the same, period.  Luminance is a broadband image, not a narrowband image.   Narrowband data must be added after luminance addition.


In order for your theory to be respected, you should understand how luminance data works in the first place.
HegAstro 11.99
·  3 likes
Alex Woronow:
Fig 2: The three images, left to right, are RGB without any luminance added, l_RGB, and L_RGB. The three images have been processed identically. This image is available for download and closer inspection at
https://www.dropbox.com/scl/fo/hog44sku0wipxd4i6y367/ANvEYEspY6Aa-OYX-H0sh9E?rlkey=cpj8jers37crr8yw7992moo3f&dl=0
or you can copy the image and paste it into a program that allows zooming.


I actually went to dropbox and viewed the supplied comparisons. I have to agree with Wei-Hao. It is hard to see how to arrive at meaningful conclusions from such badly de-noised images.
whwang 11.64
·  3 likes
Brian Puhl:
a.    Adding L in the nonlinear stage is more detrimental than adding it in the linear stage.     
     Luminance was never meant to be added during the linear stage.  This process is incorrect.  Luminance and RGB data must be stretched to properly combine.   

Allow me to add a note here. Actually, LRGB composition CAN be done during the linear stage. In most (or all?) current image processing programs, like PI, LRGB is done during the nonlinear stage. I suspect several reasons for this (I am not convinced by the official reason most people would give), but this is off topic. Let me just say that I think linear LRGB can actually have several advantages (but with its own challenges). All my recent LRGB images were composed in the linear stage. I think my pictures demonstrate not only that linear LRGB is possible, but also that the results can be very good once the challenges are overcome.
WhooptieDo 9.82
Wei-Hao Wang:
Brian Puhl:
a.    Adding L in the nonlinear stage is more detrimental than adding it in the linear stage.     
     Luminance was never meant to be added during the linear stage.  This process is incorrect.  Luminance and RGB data must be stretched to properly combine.   

Allow me to add a note here. Actually, LRGB composition CAN be done during the linear stage. In most (or all?) current image processing programs, like PI, LRGB is done during the nonlinear stage. I suspect several reasons for this (I am not convinced by the official reason most people would give), but this is off topic. Let me just say that I think linear LRGB can actually have several advantages (but with its own challenges). All my recent LRGB images were composed in the linear stage. I think my pictures demonstrate not only that linear LRGB is possible, but also that the results can be very good once the challenges are overcome.



I presume you're linear fitting an extracted luminance from the RGB to the actual luminance?   Only way I can see this one working.
aaronh 3.21
As this post is about resolution, I think it would benefit from some 100% crops, entirely unprocessed, with a simple MTF stretch (e.g. STF) applied. Additionally, some FWHM measurements would certainly help tell the story!

This isn't a new topic, the LRGB vs RGB debate comes up on CloudyNights semi-regularly, and there is never any clear consensus.

Personally, I think the choice depends on the target.

If the object lacks colour variation, and the primary challenge is in capturing faint detail with acceptable SNR, then every photon counts. Any minor improvement in resolution within the RGB channels will be more than offset by additional noise.

On the other hand, some objects have significant colour variation which benefits from being captured in fine detail. I imaged the Sculptor Galaxy a while back, and it was only during processing that I realised just how much fine colour detail there is within the galaxy. Putting all the effort into capturing the cleanest possible RGB data would be worthwhile - it's a bright enough target that the Lum data doesn't really add much.
jhayes_tucson 22.64
·  12 likes
Alex Woronow:
2.    Make a super luminance by the equation super_l = l + r + b + g. This sums up all the photons that have been collected.


Doing this kind of comparison is admirable, but before you jump into this kind of analysis it's a good idea to first have a good fundamental understanding of the theory behind what you want to do. And you are in trouble right from the beginning at step 2. You do not create a synthetic luminance channel by simply adding L+R+B+G without properly weighting the values. That's easy to understand with an example that considers only the contribution of photon noise. If you take, say, 100 hours of Lum data and then combine it with one hour each of RGB data, the Lum data will have an SNR|photon more than ten times the SNR|photon of each of the RGB channels. When you simply sum the four signals, the signals add arithmetically and the noise adds in quadrature. That means that, at best, the RGB data doesn't do much to improve the overall SNR, and depending on how the data are normalized, you may even end up with a much lower SNR in the result because the low-SNR RGB data add more noise to the result than they should.
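
A quick back-of-the-envelope illustration of that point (made-up numbers, photon noise only, treated as Gaussian):

import numpy as np

# Illustrative only: a deep Lum stack summed straight with shallow RGB stacks
# ends up noisier than the Lum alone, because the noise adds in quadrature.
rng = np.random.default_rng(1)
signal = 100.0
lum = signal + rng.normal(0.0, 1.0, 100_000)                       # SNR ~ 100
rgb = [signal + rng.normal(0.0, 10.0, 100_000) for _ in range(3)]  # SNR ~ 10 each

plain_sum = lum + sum(rgb)               # signals add arithmetically, noise in quadrature
snr = lambda x: x.mean() / x.std()
print(snr(lum), snr(plain_sum))          # roughly 100 vs. 23: the unweighted sum lost SNR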

Many years ago I discussed how to do simple noise weighted averaging using the Integration tool but those discussions are probably lost to history so I'll quickly review the idea here.  First compute an integrated image using just the Lum data.  Second, create an RGB image with the correct color calibration and use the ColorExtraction tool to extract the synthetic Lum from the RGB image.  Now make a copy of both the "real" Lum image and the synthetic Lum image so that you have a total of four images.  Load those four images into the Integration tool.  You've got to duplicate the two images simply because the Integration tool requires a minimum of 3 images.  Set up the tool to produce an average with additive+scaling normalization using either SNR or PSF signal weight.  The result should have a higher SNR than either of the two input images and that's how you can test if it did the right thing.
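
As a stripped-down sketch of what that noise-weighted average amounts to (inverse-variance weights; a stand-in for the Integration tool's behavior, not its actual implementation):

# Stand-in for the Integration-tool workflow above, not its actual implementation:
# average the real Lum and the synthetic Lum with inverse-variance (SNR^2) weights,
# assuming both have been normalized to the same signal level.
def weighted_lum(real_lum, synth_lum, snr_real, snr_synth):
    w_real, w_synth = snr_real ** 2, snr_synth ** 2
    return (w_real * real_lum + w_synth * synth_lum) / (w_real + w_synth)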

I should also mention that while it is common among processors to create a synthetic Lum channel by summing the RGB signals, that's not how it's done in RGB color space theory.  Lum = Max{R, G, B}, where the Max{} function takes the maximum value of R, G, or B. That is the correct way to compute a proper Lum channel, and it eliminates the need to renormalize the result to stay in range.  It is easiest to think about the effect of the Lum channel in HSL color space.  In that case, the Lum channel simply controls the brightness of what you see in the image, and it is scaled from 0 to 1.  It is what controls the image sharpness and the spatial noise, and it can be easily non-linearly scaled for things like gamma correction.  The RGB data contribute solely to the hue and saturation in the image.  Noise in the RGB data converts directly into color and "saturation" noise.  In HSL color space, you create an LRGB image by simply replacing the RGB Lum channel with the Lum data that you took with the telescope.  You can then convert that back to a final RGB image.
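
As a per-pixel sketch of that replace-the-luminance idea (using Python's colorsys module, with HSV's value channel as a rough stand-in for the lightness channel described above):

import colorsys

# Per-pixel sketch only: keep hue and saturation from the RGB data and drive
# the brightness with the captured Lum data.
def lrgb_pixel(r, g, b, lum):
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, lum)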

As for your "test":  Yes, using synthetic Lum data can be a valuable tool for improving SNR in certain cases; however, if you want to look at the advantages of LRGB imaging, you should start with the simplest case.  Take 4 times as much Lum data as each of the RGB channels.  Then combine a pure RGB image and compute the proper Lum channel.  Toss that out and replace it with the stacked Lum data.   You can do that in PixelMath or you can simply use the LRGB combination tool to combine the 4 channels.  If you do this with linear data, the resulting image will have extremely low saturation.  That is simply because in its linear form, the Lum data will dominate the "brightness" and drive the RGB values way down.  You'll then have to use the curves tool to restore the saturation.  I've recently been doing the Lum combination on stretched data to avoid this issue and, if done properly, it works really well.

I have gobs of images that could serve as counterexamples to your processed images.  Properly exposing and combining Lum data enhances detail mostly by greatly improving SNR.  And yes, if you use enough exposure you can produce an equally clean RGB image, but that always requires more total exposure to reach the same SNR.  Remember that for any given exposure time, the Lum channel will always have a higher SNR than any of the RGB channels simply because of the higher total signal.  In general, it is possible to achieve a similar SNR with LRGB imaging with roughly half the total exposure time relative to a pure RGB image with the same SNR.

I suggested that you read through chapter 20, "Building Color Images" in the "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, 2nd Ed, 2011, Willmann-Bell, Inc.  This chapter reviews different color space models and goes over the advantages of LRGB vs RGB imaging.  This reference is probably the best anywhere for understanding the mathematics of image processing.  Unfortunately, the publisher is out of business and it is becoming very difficult to find.  You'll pay a lot for a used copy, but if you can find it, snatch it up as fast as you can.  It will be worth every penny.  I wish that the authors would shop it around to find a new publisher.


John



PS. My phone rang while I was composing this post and as a result I pressed send long before I had it properly proof read.  Consequently I've made numerous edits to clean up some of the sloppy stuff that I wrote in the original version.   My apologies to any of you who read through it all before I had it "right".
AstroLux 7.33
·  6 likes
Alex, your processing techniques are degrading your images far more than your claim that "Luminance subframes degrade color images resolution".
gnnyman 4.52
·  3 likes
Alex,

I am in disagreement with your statement that L degrades the overall quality and details of the final image. Here is why:

1. Using Topaz for astro images is a no-go. I have made several comparisons between Topaz and other apps, including the really good ones for astrophotography, and Topaz is more or less the worst of all for this specific application. 

2. Do not mix L into the other channels at the linear stage - that must be done after stretching - unless you keep in mind that adding L at the linear stage needs special and different treatment. Yes, it can be done, but one must be aware that this mode needs a different approach in the subsequent steps.

3. Luminance data are more or less "continuous spectrum" data, compared to either RGB or, even worse, HSO data. The narrower the wavelength range, the later one should add L data. And for NB data it is, in my opinion, more or less mandatory to mix L in only after everything else has been stretched and combined properly.

This is just my point of view and opinion, you of course can come to a totally different conclusion. 

CS
Georg
HegAstro 11.99
·  1 like
John Hayes:
In general, it is possible to achieve a similar SNR with LRGB imaging with roughly the total exposure time relative to a pure RGB image with the same SNR.


John - I think there is a typo here. You probably meant to say " it is possible to achieve a similar SNR with LRGB imaging with roughly half the total exposure time relative to a pure RGB image with the same SNR." 

Anyhow, I'd be interested in knowing if there is a mathematically rigorous way to compute the SNR increase through LRGB imaging. Clearly, in HSL space, the color noise remains. So, when converted back to RGB space, what would be the improvement in noise from the pure luminance? I think it will be somewhat less than would be predicted by simple accounting for total number of photons captured.
jhayes_tucson 22.64
·  2 likes
Arun H:
John Hayes:
In general, it is possible to achieve a similar SNR with LRGB imaging with roughly the total exposure time relative to a pure RGB image with the same SNR.


John - I think there is a typo here. You probably meant to say " it is possible to achieve a similar SNR with LRGB imaging with roughly half the total exposure time relative to a pure RGB image with the same SNR." 

Anyhow, I'd be interested in knowing if there is a mathematically rigorous way to compute the SNR increase through LRGB imaging. Clearly, in HSL space, the color noise remains. So, when converted back to RGB space, what would be the improvement in noise from the pure luminance? I think it will be somewhat less than would be predicted by simple accounting for total number of photons captured.

Arun,
Thank you for that catch.  Yes, indeed it was supposed to say "half" and I've fixed it in my post.  (I'm terrible at proofreading my own writing!)

In general, the visual perception of noise is much stronger in luminance than in color.  I still consider color noise to be an issue, but it is almost always easier to deal with than noise in the luminance channel.  There may be a way to compute this stuff but I've never tried.  I also know that at least part of the argument to use LRGB to reduce the total exposure time relies on how the final result is perceived.  I just know that when I process my images, the Lum channel always shows less photon noise than the RGB result (which I always process independently).  I often reach a result with just the RGB data that looks very impressive--until I add the Lum channel on top of it.  That almost always takes the image to another level in terms of noise control and image detail.

John
smcx 3.01
·  1 like
I’ve pretty much given up on lum from my light polluted location. IMO it hurts the final image because it undoes everything I gain from narrowband.
whwang 11.64
·  4 likes
To further respond to this post, I created my own RGB vs LRGB comparison. For a fair comparison, the RGB and LRGB images have identical data quality, total integration time, and minimal to no post-processing (just a screen stretch).

The comparison image can be found here:
https://www.astrobin.com/aqlpa2/B/
The page contains a detailed description of how the comparison images were made (the Rev.B part of the image description).  You may read it there.
aabosarah 7.12
·  3 likes
Sean Mc:
I’ve pretty much given up on lum from my light polluted location. IMO it hurts the final image because it undoes everything I gain from narrowband.

Sean, I never used Lum for narrowband targets / emission nebulae. A luminance filter for narrowband defeats the purpose of narrowband imaging. Depending on the target, I'd either use Ha or sometimes a synthetic Lum or SHO. I just test it out and see what I like more. 

Luminance can still be used in my B6/7 backyard for broadband targets, and it is still very helpful. 
HegAstro 11.99
·  2 likes
Wei-Hao Wang:
To further respond to this post, I created my own RGB vs LRGB comparison. For a fair comparison, the RGB and LRGB images have identical data quality, total integration time, and minimal to no post-processing (just a screen stretch).

The comparison image can be found here:
https://www.astrobin.com/aqlpa2/B/
The page contains a detailed description of how the comparison images were made (the Rev.B part of the image description).  You may read it there.

Close examination shows better separation of features against both the background and other parts of the galaxy in the LRGB image versus RGB only. Of course, this is the benefit of L, since the greater SNR allows for greater contrast. Thanks for sharing this.
AstroLux 7.33
Wei-Hao Wang:
To further respond to this post, I created my own RGB vs LRGB comparison. For a fair comparison, the RGB and LRGB images have identical data quality, total integration time, and minimal to no post-processing (just a screen stretch).

The comparison image can be found here:
https://www.astrobin.com/aqlpa2/B/
The page contains a detailed description of how the comparison images were made (the Rev.B part of the image description).  You may read it there.

I think the comparison would have had a much different outcome if you used the "classic standard" of 4:1 or 3:1 ratio in terms of LRGB imaging. 

Because from your comparison you are at ~1.64:1:1:1 (or 105 min of Lum vs 64 min per other filter), which of course then turns the outcome into the RGB & LRGB images being basically the same.  
The only way to benefit from shooting LRGB is to shoot at least 3x or 4x more Lum than RGB per channel.
HegAstro 11.99
·  2 likes
Luka Poropat:

I think the comparison would have had a much different outcome if you used the "classic standard" of 4:1 or 3:1 ratio in terms of LRGB imaging. 

Because from your comparison you are at ~1.64:1:1:1 (or 105min of Lum for 64min per other filters) which ofcourse then turns the outcome to basically that  RGB & LRGB images are the same.  
The only way to have benefit from shooting LRGB is to shoot from above at least 3x or 4x more Lum then RGB per channel.

A different way of saying this is to state that you absolutely need to have adequate color support for an image. Once you have that, time spent on lum is better than time spent on RGB. I believe that is what Wei-Hao’s image illustrates. Formulaic ratios are not meaningful since this is image dependent.
whwang 11.64
·  2 likes
Luka Poropat:

I think the comparison would have had a much different outcome if you used the "classic standard" of 4:1 or 3:1 ratio in terms of LRGB imaging. 

Because from your comparison you are at ~1.64:1:1:1 (or 105min of Lum for 64min per other filters) which ofcourse then turns the outcome to basically that  RGB & LRGB images are the same.  
The only way to have benefit from shooting LRGB is to shoot from above at least 3x or 4x more Lum then RGB per channel.

Please read my description.  For the LRGB case, L : (R+G+B) = 1:1.  So L vs each individual R, G, or B channel is roughly 3:1 (R and G slightly less, B slightly more).
Alex_Woronow
·  1 like
It has been very informative to follow the discussions here. It is amazing how difficult a topic this (RGB versus LRGB) is to understand and how much misunderstanding there is about LRGB. I have continued my years-long research into this and have, amazingly, just today turned up an important perspective contributed by none other than Juan Conejero, which should set things a little straighter, at least:

"As for the LRGB vs RGB thing, just to state my opinion clear:

- LRGB: Good to save time. This is true as long as RGB is shoot binned; when shooting unbinned L and RGB, the savings are marginal IMO.

- LRGB: Bad for quality. Assuming unbinned data, an independent L does not provide more resolution. At the contrary, it may provide less resolution since it has been acquired through a much wider band pass filter.

- LRGB: Problems to achieve a good match between luminance and chrominance.

- LRGB: More limitations to work with linear data. LRGB combinations are usually performed in the CIE L*a*b* and CIE L*c*h*, which are nonlinear. It is true that a linear LRGB combination is doable in PixInsight, though, working in the CIE XYZ space.

- RGB: Perfect match between luminance and chrominance, by nature. No worries about luminance structures without chrominance support, and vice-versa.

- RGB: A synthetic luminance has the important advantage that we can choose an arbitrary set of weights for the calculation of the luminance (with RGB working spaces in PixInsight). We can define a set of luminance weights that maximize information representation on the luminance, understanding information as data that supports significant object structures)."

You can read more discussion on page 3, here:
https://pixinsight.com/forum.old/index.php?topic=1636.msg9297;topicseen#msg9297

I agree with these statements 100%, and will avoid L except in the circumstances Juan enumerates.
andreatax 7.90
·  1 like
Sic Dixit...
HegAstro 11.99
·  4 likes
Alex Woronow:
I agree with these statements 100%, and will avoid L except in the circumstances Juan enumerates.


So in essence, the evidence for not using "L" is an opinion Juan expressed in a forum post 14 years ago, in a very different time technologically speaking... and we must take that as gospel over the actual examples that have been provided by Wei-Hao and John Hayes, who between them have images in the hundreds? Not to speak of many others who consistently produce broadband images of excellent resolution, contrast, and depth using LRGB imaging. Certainly, in the much better processed comparison that Wei-Hao supplied, there is no evidence of this supposed loss of resolution; on the contrary, there is evidence of better contrast, which is expected. Of course, one is free to image in whatever manner one wishes. My biggest issue here is that the evidence of the images you generated does not support your claim, while the evidence supplied by others supports their points of view.
Jbis29 1.20
John Hayes:
I suggested that you read through chapter 20, "Building Color Images" in the "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, 2nd Ed, 2011, Willmann-Bell, Inc.


Are there any copies of this in a reasonable price range for purchase? I can only find a few of the second editions, and they are hundreds of dollars.