Alex Woronow, 2024
Introduction
Luminance subs were collected in ancient times to improve signal-to-noise and to "provide" detail when color (RGB) subs were captured with 2x2 binning, which saved imaging time at the cost of color resolution. Adding the luminance (L) data could, in theory, restore at least some of that lost resolution. If done well, capturing luminance (L) subframes might have achieved both objectives: resolution recovery and noise suppression. But we now seldom bin our data to the point that it reduces our potential optical resolution, and we now have highly effective AI denoise methods that, at least in theory, easily outperform the noise-reduction benefit of capturing L. Both of these advances come with CMOS technology. So here we re-examine whether any actual benefit accrues from capturing L data rather than just RGB data, given modern imaging technology and methods.
The conclusion documented below is that not only is no benefit accrued from using L subs, but L actually degrades the potential resolution of color images.
A Controlled Comparison
For this first examination of the effects of adding L to color images, I processed an image of Messier 98 using Telescope Live data. The data were processed in three ways: 1) simple RGB; 2) L_RGB, as done classically, by introducing the nonlinear (stretched) luminance into the nonlinear RGB; and 3) l_RGB, where the linear luminance is introduced into the linear rgb image. All three were processed identically in preprocessing and post-processing, except, obviously, in how the luminance data were introduced.
Before going further, let me describe my method for introducing linear luminance into the linear rgb. The steps are
1. Extract the luminance from the rgb image (call it "extracted_l").
2. Make a super luminance by the equation super_l = l + r + b + g. This sums up all the photons that have been collected.
3. Linear fit super_l to the extracted_l.
4. Replace the luminance in the rgb linear image with this linear super_l, creating the l_rgb image.
These manipulations and all preprocessing image manipulations were done in PixInsight.
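For readers who want to see the arithmetic behind those four steps, here is a minimal NumPy sketch. It only approximates the PixInsight operations actually used: the luminance extraction is stood in for by a simple channel mean, the linear fit by a least-squares slope and offset, and the luminance replacement by a per-pixel rescaling that preserves each pixel's color ratios. The function name and these shortcuts are illustrative assumptions, not the exact PixInsight tools.

```python
import numpy as np

def make_l_rgb(l, r, g, b):
    """Sketch of the linear super-luminance injection (steps 1-4 above).
    l, r, g, b are 2-D linear master frames from the L, R, G, and B filters."""
    rgb = np.stack([r, g, b], axis=-1)              # linear rgb image

    # Step 1: extract a luminance estimate from the linear rgb image
    # (a plain channel mean stands in for PixInsight's luminance extraction).
    extracted_l = rgb.mean(axis=-1)

    # Step 2: super luminance -- the sum of all collected photons.
    super_l = l + r + g + b

    # Step 3: linear fit of super_l to extracted_l (slope and offset),
    # analogous to PixInsight's LinearFit process.
    slope, offset = np.polyfit(super_l.ravel(), extracted_l.ravel(), 1)
    fitted_l = slope * super_l + offset

    # Step 4: replace the rgb luminance with the fitted super luminance by
    # rescaling each pixel so its channel mean equals fitted_l (color ratios kept).
    scale = fitted_l / np.maximum(extracted_l, 1e-8)
    return rgb * scale[..., None]
```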
We stretch the three images: the rgb image becomes the nonlinear RGB image; the linear l_rgb image, once stretched, yields l_RGB; and the stretched RGB receives the stretched L to become the L_RGB image. The simple RGB image has less data support than the other two. In practice, we could rectify that at the scope by spending the time used to collect the luminance subs on collecting more color subs. However, we cannot remediate these data at this point, so we will simply recognize that the straight RGB image is somewhat disadvantaged.
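As an aside, the stretch step itself can be reproduced outside PixInsight. The sketch below uses the standard midtones transfer function (the curve behind PixInsight's HistogramTransformation); the target background level and the median-based choice of midtones balance are illustrative assumptions, not the settings used for these images.

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps [0, 1] -> [0, 1] with midtones balance m."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(img, target_bg=0.25):
    """Illustrative nonlinear stretch: pick the midtones balance that sends the
    image median to target_bg, then apply that same curve to every pixel."""
    med = np.median(img)
    t = target_bg
    m = med * (t - 1.0) / ((2.0 * t - 1.0) * med - t)  # solve mtf(med, m) = t
    return np.clip(mtf(img, m), 0.0, 1.0)
```

Applying the same kind of stretch to the rgb, l_rgb, and luminance data gives nonlinear versions that can be compared on a reasonably equal footing.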
We post-process our three stretched images identically in Topaz Studio 2 according to the workflow shown in Fig 1. However, the first AI-noise-removal step is customized for each image. The processing was pushed hard enough to reveal each image's full detail, including color and brightness detail and full color and brightness contrast. With this approach, we can best see the differences among the images.
Fig 2 shows the results for the RGB, l_RGB, and L_RGB images, from left to right. The first noticeable difference among the images is their mildly varying brightness and contrast; those differences are ignored here. More to the point, each image's detail and clarity differ. The detail and clarity of the RGB image exceed those of both luminance-hosting images. Between the two images with luminance, the l_RGB has noticeably more detail than the L_RGB image.
Even with less data support, the simple RGB image appears better. Equalizing the support of RGB data with more subframes would most likely make that distinction even more evident. In either case, adding luminance to the RGB data has decreased the image's detail.

Fig 1: These are the post-processing steps used in this study. The order of application of the tools is from the bottom up. The first DeNoise steps were adjusted to suit each input image, and the subsequent steps were applied identically to all three images. The last denoise step operates identically on each image.

Fig 2: The three images, left to right, are RGB without any luminance added, l_RGB, and L_RGB. The three images have been processed identically. This image is available for download and closer inspection at
https://www.dropbox.com/scl/fo/hog44sku0wipxd4i6y367/ANvEYEspY6Aa-OYX-H0sh9E?rlkey=cpj8jers37crr8yw7992moo3f&dl=0
or you can copy the image and paste it into a program that allows zooming.
At this point, we may wonder: can we do anything to improve the l_RGB and/or the L_RGB image? What if we kept the same post-processing steps but optimized each step's parameters for each image?
Glad you asked…
A Free-Processing Comparison
Suppose we give each image the same sequence of processing steps but allow each step's parameters to take different values for each image. Will we reach the same relative image-detail levels found above? (The processing train is the one I repeat for virtually every image I process, though I may follow those steps with additional processing in various other programs.)
After the customized post-processing, following the workflow in Fig 1, the histograms were adjusted visually using the Histogram Transformation tool to match the "shadows" and "midtones." A better image-matching tool would be histogram matching (https://en.wikipedia.org/wiki/Histogram_matching), but my request for that feature went unacknowledged by the PixInsight team. Fig 3 shows the results from this make-do manual matching.
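For anyone who prefers automated matching to the manual adjustment used here, histogram matching is available in common Python imaging libraries. Below is a minimal sketch with scikit-image; the file names are placeholders, and this step was not part of the processing in this study.

```python
import numpy as np
from skimage.exposure import match_histograms
from skimage.io import imread

# Placeholder file names -- substitute your own stretched images.
reference = imread("RGB.tif").astype(np.float64)   # image whose tonal distribution we want
image = imread("L_RGB.tif").astype(np.float64)     # image to be adjusted

# Match each channel of `image` to the corresponding channel of `reference`
# (channel_axis requires a reasonably recent scikit-image).
matched = match_histograms(image, reference, channel_axis=-1)
```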
Once again, the RGB shows more detail than the l_RGB, which shows more detail than the L_RGB.
Could more processing steps be added to the L-containing images to bring them to the level of detail shown in the plain RGB image? Maybe in some cases, but in general, my experience suggests otherwise. Again, spending the time that would go into capturing L on capturing more RGB subs is probably the better route to a better image.

Fig 3: The three images, left to right, are RGB without any luminance added, l_RGB, and L_RGB. The three images were processed with the same steps, but each step's parameters were optimized for each image. This image is available for download and closer inspection at
https://www.dropbox.com/scl/fo/hog44sku0wipxd4i6y367/ANvEYEspY6Aa-OYX-H0sh9E?rlkey=cpj8jers37crr8yw7992moo3f&dl=0
or you can copy the image and paste it into a program that allows zooming.
Why Does Luminance Degrade RGB Detail?
The examples (and other experiences I have had) show that L diminishes the detail in RGB images. It appears to do this by introducing a softness or blur. The following is a logical explanation for that behavior.
Most of us agree that, for relevant targets, the Ha filter reveals more detail than the red filter. Why is that? The red filter captures the same photons the Ha filter does, plus many more photons from a broader wavelength range. That "and even more" wavelength acceptance is what degrades the red-filter detail relative to the Ha filter. Those extra photons come predominantly from continuum radiation and are largely featureless, diffuse, and only weakly correlated with the local details. That is, the red filter passes both the photons carrying structural detail and a largely featureless background, all blended together, thereby degrading the sharpness and contrast of the R image relative to the Ha image.
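A simplified way to quantify this (an illustrative sketch that treats the continuum as a uniform additive pedestal $C$): if a feature of intensity $F$ sits on a local background $B$, the relative contrast before and after the pedestal is added is

$$\frac{F-B}{B} \;\longrightarrow\; \frac{(F+C)-(B+C)}{B+C} \;=\; \frac{F-B}{B+C} \;<\; \frac{F-B}{B},$$

so the broader passband records the same absolute signal difference but at a lower relative contrast, which reads as softness once the image is stretched.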
The relationship between the R, G, and B images and the L image is the same as the Ha - R relationship just described. The L has a pervasive featureless, diffuse component that originates with the continuum radiation and is largely uncorrelated with the structural detail in the RGB image, which originates from more localized radiation, such as SII, Ha, and OIII. Therefore, when L is added to an RGB image of a nebula, the RGB image's detail resolution and contrast are lessened.
Summary
To begin, assume the following statements are factual:
• We have RGB data with either the maximum pixel resolution afforded by the camera or a pixel resolution reaching the Dawes limit (see the worked example after this list).
• The data set has sufficient RGB subs to reach the sky limit or the desired/acceptable limiting magnitude.
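For a concrete sense of the first assumption, the Dawes limit for an aperture of diameter $D$ (in mm) is approximately

$$\theta_{\mathrm{Dawes}} \approx \frac{116}{D_{\mathrm{mm}}}\ \text{arcseconds}, \qquad \text{e.g. } D = 200\ \mathrm{mm} \;\Rightarrow\; \theta \approx 0.58'',$$

and, as a common rule of thumb (an assumption here, not part of the argument), a pixel scale of roughly half that angle or finer samples the optics adequately.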
We can conclude from the discussions and the above assumptions:
1. Luminance data added to RGB data will not improve the resolution of the RGB data; it will degrade it.
a. Adding L in the nonlinear stage is more detrimental than adding it in the linear stage.
b. Adding L may improve the limiting magnitude, but that has yet to be demonstrated.
c. Luminance data added to narrowband data will be even more damaging.
2. Reassigning the capture and processing time spent on L data to the capture of sufficient RGB is the better strategy.
3. One-shot-color cameras do not suffer from being unable to take luminance subframes.
4. If you do not push your images hard for detail, you may see scant, if any, differences between RGB and LRGB images. In that case too, L offers no added value.
5. The conclusions reached here extend beyond the galaxy example to include emission, reflection, and dark nebulae. Stars and star structures are probably less affected.
6. If your data have very limited RGB subs (maybe <5 or so, depending on the exposure times), then L may be necessary to attain reasonable image quality. But it would have been better if the L had been omitted and more RGB had been acquired.