Luminance: Still important for a CMOS mono camera? (Moravian Instruments C3-61000 PRO) · Rafael Sampaio

rafaelss123
Some people believe that capturing with a luminance filter no longer makes sense, and that it would be better to expose RGB for longer and make a synthetic luminance. But it seems that most people still use luminance filters. What is your opinion, considering a CMOS sensor like the Sony IMX455 in my Moravian C3?

cratervanawesome
I've tried both, and I tend to get better results capturing just enough RGB for clean color stacks and then tons of luminance. I usually shoot 4:1:1:1 (L:R:G:B), with the color subs at half the exposure time of the luminance (often 60 seconds for L, 30 for each of R, G, and B). I find my SNR and observable resolution are much better with this approach. The details all come from the L; the RGB is just there to screen color on top.
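
To make the arithmetic concrete, here is a toy Python sketch of how that ratio splits a night (the night length and sub lengths are assumptions, not a rule):

```python
# Toy planner for the 4:1:1:1 (L:R:G:B) time split described above.
# The total hours and sub lengths are assumptions, not a recommendation.

total_hours = 7.0
ratio = {"L": 4, "R": 1, "G": 1, "B": 1}
sub_len = {"L": 60, "R": 30, "G": 30, "B": 30}   # seconds per sub

total_parts = sum(ratio.values())
for f, parts in ratio.items():
    seconds = total_hours * 3600 * parts / total_parts
    print(f"{f}: {seconds / 3600:.1f} h -> {round(seconds / sub_len[f])} subs")
```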

skybob727
I'm on the same path as Michael, only my subs are typically 10 to 15 minutes for L, 5 to 10 minutes for RGB, and 15 to 20 minutes for NB, depending on what I'm going to be imaging. This is with the C3-61000 Pro, and there's a C3-26000 Pro on the way.

gnnyman
I have tried both, with and without L. The RGB results are quite fine, but L is the "topping on the cake": added to RGB, it brings out fine details and creates a much better image than RGB alone. That means I primarily expose for L, with as many subs as possible, and then add RGB for color, with fewer subs than for L. The usual ratio for me is 3:1:1:1 (L:R:G:B).

CS
Georg

ashastry
I have found that there isn't a one-size-fits-all solution here. I have also found that it depends on your configuration and how quickly you are able to gather light.

For dimmer targets, in particular many galaxies, L makes a significant difference, so I capture 2x L for every RGB. For brighter targets, I shoot straight RGB, as I haven't found acquired L to make any significant difference. And for targets with narrowband components (e.g. HaRGB or HaOiiiRGB), I similarly tend to skip acquired L altogether.

skybob727
Ani Shastry:
And for targets with narrowband components (e.g. HaRGB or HaOiiiRGB), I similarly tend to skip acquired L altogether.

Totally agree. Just in the past few years I've started adding RGB for stars only in narrowband images. Some may add it, but I think it's pointless to add L to NB images.

cratervanawesome
Bob Lockwood:
Ani Shastry:
And for targets with narrowband components (e.g. HaRGB or HaOiiiRGB), I similarly tend to skip acquired L altogether.

Totally agree. Just in the past few years I've started adding RGB for stars only in narrowband images. Some may add it, but I think it's pointless to add L to NB images.

I've never actually seen adding L to narrowband discussed. It would seem to defeat the point of narrowband; it's narrow for a reason.

zermelo
Please take the following with a pound of salt; I've never shot monochrome yet:

Am I right in thinking that R, G, and B filters don't cover the whole UV/IR-cut spectrum of a luminance filter?

My intuition tells me that shooting luminance might get you a frequency response otherwise unavailable via R, G, and B. On the other hand, every minute shot through luminance delivers many more photons than a minute shot through R, G, or B.

AstroLux
Oh, here we go again with the luminance topic. It's like watching someone try to reinvent the wheel for the nth time.
Everyone's trying to figure out the perfect formula, but let's be real, there's no one-size-fits-all solution here. People need to do their own testing and figure out what works best for their images.
If you're capturing a bright emission nebula under pristine skies (SQM 21.5+), why bother with LRGB when RGB might just do the trick?
And don't even get me started on faint nebulae; if it's all about shades of gray or blackish-brown dust, Lum might be your lifeline where there isn't much color information in the first place.
Imaging narrowband is imaging narrowband; I don't see a point in shooting narrowband (in general SHO, more rarely NIR/He) + Lum. Combining narrowband (one or more filters) and broadband (LRGB/RGB) is a totally different thing.
I think shooting Lum in an ideal world with infinite time for broadband imaging would not be worth it. However, we don't live in an ideal world, and our time per object is finite.

CCDnOES
One thing that people often ignore is this:

For high-resolution images, you tend to have only parts of some nights where the seeing conditions are optimal. In RGB-only imaging you must get all or most of your RGB subs under such good conditions to get ideal results. If you are doing LRGB, the resolution comes almost entirely from the luminance, so you only need to get the luminance under such conditions, which is roughly 3x more likely; plus you can take those subs when the seeing is expected, or has been demonstrated, to be good.
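
To put toy numbers on that (both values below are assumed purely for illustration):

```python
# Toy arithmetic for the scheduling argument above. Assume a fraction F of
# imaging time has optimal seeing, and the detail layer needs N hours of it.
# With RGB-only, R, G, and B all need good-seeing time; with LRGB, only L does.

F = 0.25   # assumed fraction of time with optimal seeing
N = 4.0    # assumed hours of good-seeing data needed for the detail layer

print(f"LRGB: {N / F:.0f} h of clear time to luck into enough good-seeing L")
print(f"RGB : {3 * N / F:.0f} h, since all three color channels need it")
```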

Narrowband with RGB stars is another story. Since stars seldom need to be high-res in those images (they are not the subject and can be resized in processing), there would be no point in an L image for the stars alone, and the object detail usually comes from the narrowband exposures anyway.

andreatax
Contrary to some belief, there is such a thing as luminance for NB. And in general, RGB filters deliver better resolution than luminance; NB filters, better still.

Overcast_Observatory
Luka Poropat:
Oh, here we go again with the luminance topic. It's like watching someone try to reinvent the wheel for the nth time.
Everyone's trying to figure out the perfect formula, but let's be real, there's no one-size-fits-all solution here. People need to do their own testing and figure out what works best for their images.
If you're capturing a bright emission nebula under pristine skies (SQM 21.5+), why bother with LRGB when RGB might just do the trick?
And don't even get me started on faint nebulae; if it's all about shades of gray or blackish-brown dust, Lum might be your lifeline where there isn't much color information in the first place.
Imaging narrowband is imaging narrowband; I don't see a point in shooting narrowband (in general SHO, more rarely NIR/He) + Lum. Combining narrowband (one or more filters) and broadband (LRGB/RGB) is a totally different thing.
I think shooting Lum in an ideal world with infinite time for broadband imaging would not be worth it. However, we don't live in an ideal world, and our time per object is finite.

Also, experience with processing might dictate how much color information to record. Some people struggle to keep color when applying luminance, and it washes out. But the more experience you gain, the less color info you need.

In general, I like to split my time between color and lum, since I'm a lazy processor; I don't like to push the data too hard. I also like to create a superlum using all the color and lum data. That helps bring out faint details even better.

whwang
Not too long ago there was a thread on this exact topic on CN. (Such threads appear almost yearly.) There I presented the following RGB vs. LRGB comparison.

[Image: compare2.jpg, the RGB vs. LRGB comparison]

You may just look at the top row. The RGB and LRGB images both contain 208 (209) minutes of total integration. The post-processing is identical, except that the LRGB one of course includes the additional LRGB composition steps. The subs have comparable quality and were taken under comparable sky conditions.

It is clear (at least to me) that the LRGB image has better detail on faint objects. The LRGB image contains stronger color noise, obviously because its RGB component has a shorter total integration than the pure RGB image. The top-right panel includes an additional pass of noise reduction on the RGB part of the LRGB image. This suppresses the color noise without compromising any details, which are protected by the L (it did not undergo any noise reduction). (Of course, one can do the same trick on the pure RGB image, NR on color but not on luminance, but it won't be as effective. In the pure RGB image the color and luminance are correlated: you may wipe out noise spikes or dark holes in the color, but the same spikes and holes still exist in the luminance, which comes from the same set of RGB data rather than from independent L exposures.)
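
To illustrate the trick of applying NR to the color while the L protects the detail, here is a minimal numpy/scipy sketch (not the processing used for the comparison above; the array names are hypothetical, a Gaussian blur stands in for a real NR tool, and a simple channel mean stands in for luminance):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lrgb_with_chroma_nr(L, rgb, sigma=2.0):
    """Smooth the linear RGB (chroma NR), then re-impose the untouched L.

    L   : 2-D float array, the luminance master (linear)
    rgb : 3-D float array of shape (3, H, W), the linear RGB master
    """
    rgb_smooth = np.stack([gaussian_filter(c, sigma) for c in rgb])
    lum = rgb_smooth.mean(axis=0) + 1e-8   # luminance proxy of the smoothed RGB
    return rgb_smooth * (L / lum)          # color from smoothed RGB, detail from L
```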

If you compare the top-left (RGB) and top-right (LRGB with RGB NR) panels, it is quite arguable that the top-right is better in terms of faint details. The difference is noticeable, but not dramatic (it's not magic, after all).

Bill mentioned seeing. He is absolutely right. Here I controlled all subs to have similar seeing (because I had a much bigger pool of subs to select from). In reality, it's very hard to guarantee that all your subs have similar seeing. If you reserve good-seeing time for L and the rest for RGB, the difference in an RGB vs. LRGB comparison will be much more dramatic.

One caveat is that the LRGB composition has to be conducted very carefully, otherwise there is a good chance the color will be washed out. Color washout is one of the most common complaints about LRGB. I personally don't suffer from this problem (as you can see above, there is no color washout), but I can understand why it bothers so many people. If you are not skillful enough to prevent color washout in LRGB composition, pure RGB imaging isn't a terribly bad idea. You perhaps just need to double (or even less) the total integration time, and then you can get an RGB image that's as good, without color washout and without struggling with LRGB post-processing. I would say that can be worthwhile.

Finally, as many people have said already, LRGB is only effective on continuum subjects. For pure emission-line nebulae, L offers relatively little advantage.

Avjunky
Rafael Sampaio:
Some people believe that capturing with a luminance filter no longer makes sense, and that it would be better to expose RGB for longer and make a synthetic luminance. But it seems that most people still use luminance filters. What is your opinion, considering a CMOS sensor like the Sony IMX455 in my Moravian C3?

I think one of the reasons RGB with a synthetic lum is popular is the popularity of OSC cameras.

There is a lot of good info in this thread about LRGB imaging. I'd only add that a person should use caution when choosing the sub-exposure length for the 'L' filter. It captures a lot more photons in the same interval than the individual RGB channels, and it's easy to saturate individual pixels and blow out the details. So if anything, going with half the exposure time on the L channel relative to the others is better than the other way around. At a minimum, I'd recommend checking the first exposure of each channel and ensuring that the total number of saturated pixels is low. The image statistics window in NINA is useful for this.
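
As a sketch of that kind of check outside NINA (astropy assumed; the saturation threshold and the file names are placeholders for your own camera and session):

```python
import numpy as np
from astropy.io import fits

# Assumption: set this from your camera's actual full-well / ADU ceiling.
SATURATION_ADU = 65000

def saturated_fraction(path):
    """Fraction of pixels at or above the saturation threshold."""
    data = fits.getdata(path).astype(np.float64)
    return float(np.mean(data >= SATURATION_ADU))

for f in ("L", "R", "G", "B"):
    frac = saturated_fraction(f"first_{f}.fits")   # hypothetical file names
    print(f"{f}: {frac:.4%} of pixels saturated")
```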

Stefek
Again? 😀 Fundamentally, the question of recording luminance or not is a question of visual appearance versus scientific correctness (color fidelity). The human eye is more sensitive to changes in intensity than to changes in color, so images with luminance may be more appealing, or, call it, look nicer. However, luminance does not add any new information to the data collected by the RGB filters (within their bandpass curves); that is why, in scientific imaging, luminance is not combined with RGB. Theoretically, then, applying luminance to data collected with the usual RGB filters either paints the luminance wrongly or assigns wrong intensities to the RGB data. Good processing can cover this up nicely, and the images look nice. Perhaps the most correct approach would be to publish the L data as it is, as a monochrome image. I personally do not collect L data with my IMX571 sensor, but I would not advise anyone either way. It is a matter of personal preference.

HegAstro
Both Wei-Hao and Chris White brought up the topic of color washout. I would like to ask: what techniques are used to avoid this in LRGB combination? I have seen this as well. I did pure RGB imaging for a couple of my recent images, and I have to say that with GHS, color was much easier to get than in my previous LRGB images. Please note, I am not offering an opinion on which is better or preferred, but asking for advice on how to do LRGB combination effectively and still preserve good color.

Overcast_Observatory
Arun,

I think the best way is to balance your combination and not push the data too hard. PI offers sliders you can fiddle with in the LRGB combination tool. It also depends on the target. I'm drawn to dusty dark targets, and there you can't get away with a small amount of color information. When there are very few photons per minute hitting the sensor, you need more minutes! For brighter targets, like what you see above, it is less important to spend as much time on color.

Like anything in post-processing, the harder you push the data, the more undesirable side effects you get. Stretch too much and you get noise. Solution: collect more data. Tools like NoiseX help, but in the end you will get a better result with more time on target and less heavy processing. In my B3/4 skies I found that the magic number is between 15 and 40 hours of data, depending on the target and the scope. I won't even bother processing data if I have less than 10 hours, as I can't produce what is in my mind's eye.

Some people are more clever and can do more with the data than I can. I'm really not that good a post-processor. I rely on integration time to get the result I want and subjectively assess every processing step to determine whether I've done something that "worsens" my result. Not always, but in general this means my post-processing is not very aggressive.

My skies suck, and getting 40 hours on a target is not practical. I've done it a few times over the years, but more projects die on my hard drive because the seasons change. I've lost all interest in imaging from home; the weather is just getting worse and worse. I've decided to try my hand at remote imaging at a dark site as a last-ditch effort to see if this hobby is for me. If I don't enjoy collecting data remotely, then I'll move on to a different hobby. I just know that, even as I have progressed in my acquisition and processing skills and put together some really nice equipment, I can't reliably get results I am happy with from my backyard.

Time, time, time: spend more time on getting data, and the post-processing becomes easier and easier. That, my friend, is easier said than done (for most of us who image from home in cloudy climates or light-polluted skies).

HegAstro
Thank you for the comment, Chris. Yes, I agree that time on target is critical. My best images are narrowband, but that's purely because they are less affected by light pollution and it is a lot easier to get more integration time. I would love to do more broadband; I really like those images better, but it is impossible to get meaningful results given how short a window we tend to have with the weather. And that window has to intersect with the free time you actually have from other commitments. It is not very motivating when it takes months to get data for a single LRGB image. At some point, I may also do remote imaging; it is the only way to get the type of images I want. But the tactile association with the equipment is a big part of the hobby for me, so we will see.

Overcast_Observatory
Arun H:
But the tactile association with the equipment is a big part of the hobby for me, so we will see.

This is my main concern with remote as well. I might lose interest if I don't have the equipment to tinker with. On the other hand, I tinker a lot because I don't get to use it very often, so maybe a satisfactory compromise will be mountains of dark-site data.

whwang
Hi Arun,

In my opinion, a lot of the color washout observed by people can be explained by a mismatch between the L and the luminance of the RGB. If there is such a mismatch, the color of the LRGB image will look different from that of the RGB image. It can actually go either way: washout or oversaturation. However, I believe people tend to complain more about washed-out color than about oversaturated color.

One piece of common wisdom (which I do not agree with) is to stretch the L and RGB first, before conducting the LRGB composition. If you do that, there is a good chance that the luminance of the RGB does not match the L. PI's LRGB composition routine helps to reduce this problem by trying to match the white, black, and middle points of the luminance of the RGB and of the L. This reduces the mismatch, but cannot completely solve it. Once images become highly nonlinear, it is almost impossible to get an exact match across the entire dynamic range. Matching just the white, black, and middle points isn't going to be enough.

Because of the above, I personally don't rely on PI for LRGB composition, and I don't do nonlinear stretching before LRGB composition. I do the LRGB composition while both the L and RGB images are still linear. That makes the match much easier. I do this in PS as layers, so I get a real-time preview as I adjust the contrast/brightness of the L to match the luminance of the RGB. It's not easy, and not a one-click task, but it works very well for me. All my LRGB images of the past two years were made this way.
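
For those who prefer to script it, the matching step can be approximated in numpy: fit a single gain/offset so the linear L tracks the luminance of the linear RGB, then impose the matched L back as the image's luminance. This is only a sketch of the idea (with a simple channel mean standing in for luminance), not the PS layer workflow itself:

```python
import numpy as np

def linear_lrgb_compose(L, rgb):
    """Match linear L to the luminance of linear rgb (shape (3, H, W)),
    then transfer the matched L back as the image's luminance."""
    lum = rgb.mean(axis=0)                            # simple luminance proxy
    A = np.stack([L.ravel(), np.ones(L.size)], axis=1)
    gain, offset = np.linalg.lstsq(A, lum.ravel(), rcond=None)[0]
    L_matched = gain * L + offset                     # "contrast/brightness" match
    return rgb * (L_matched / (lum + 1e-8))           # luminance transfer
```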

Of course, there is an even more fundamental issue. Because the sum of the filter transmission curves of R+G+B is not equal to that of L, there can never be a 100% exact match between the luminance of the RGB and the L. This is fundamentally impossible. So there is always some loss of color fidelity in LRGB composition. But this effect is subtle to most people, and I don't believe most of the complaints about color washout are caused by it.

Overcast_Observatory
Wei-Hao Wang:
Of course, there is an even more fundamental issue. Because the sum of the filter transmission curves of R+G+B is not equal to that of L, there can never be a 100% exact match between the luminance of the RGB and the L. This is fundamentally impossible.

Wei-Hao,

This is exactly why I create a superlum using all of my lum and RGB data. The fit between luminance and color is better and results in a better blend, with the additional benefits of more detail and lower noise in the resulting lum.
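
Roughly, the idea in numpy (a minimal sketch only; the inverse-variance weighting and the background patch used for the noise estimate are assumptions, not my exact recipe):

```python
import numpy as np

def superlum(masters, bg=np.s_[0:200, 0:200]):
    """Inverse-variance weighted average of aligned, linear masters
    (e.g. [L, R, G, B]); noise is estimated from a background patch."""
    weights = np.array([1.0 / np.var(m[bg]) for m in masters])
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(masters), axes=1)

# usage: super_L = superlum([master_L, master_R, master_G, master_B])
```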

Avjunky
Wei-Hao Wang:
Hi Arun,

Because of the above, I personally don't rely on PI for LRGB composition, and I don't do nonlinear stretching before LRGB composition. I do the LRGB composition while both the L and RGB images are still linear. That makes the match much easier. I do this in PS as layers, so I get a real-time preview as I adjust the contrast/brightness of the L to match the luminance of the RGB. It's not easy, and not a one-click task, but it works very well for me. All my LRGB images of the past two years were made this way.

Hi Chris,  

You raise some really good points about the processing side of the LRGB equation. This is an area I've struggled with in the past, as far as doing the LRGB combination in the linear state in PI versus later in PS after a nonlinear stretch. I'm curious how you do the LRGB composition in PS while both are linear, since PS needs to work off 16-bit files. I usually think of the conversion from 32-bit format to the 16-bit TIFF files needed for PS as where the bulk of the nonlinear stretching happens. How are you getting the images into PS in a linear state? Do you do something like a "pre-stretch" in PI to get the data converted, and then the final stretching in PS? Or am I missing something?

Thanks,
Mark

aabosarah
Chris White- Overcast Observatory:
Wei-Hao Wang:
Of course, there is an even more fundamental issue. Because the sum of the filter transmission curves of R+G+B is not equal to that of L, there can never be a 100% exact match between the luminance of the RGB and the L. This is fundamentally impossible.

Wei-Hao,
This is exactly why I create a superlum using all of my lum and RGB data. The fit between luminance and color is better and results in a better blend, with the additional benefits of more detail and lower noise in the resulting lum.

I started doing this recently and it definitely makes a big difference.

whwang
Chris White- Overcast Observatory:
Wei-Hao Wang:
Of course, there is an even more fundamental issue. Because the sum of the filter transmission curves of R+G+B is not equal to that of L, there can never be a 100% exact match between the luminance of the RGB and the L. This is fundamentally impossible.

Wei-Hao,
This is exactly why I create a superlum using all of my lum and RGB data. The fit between luminance and color is better and results in a better blend, with the additional benefits of more detail and lower noise in the resulting lum.

I only do a super-L if the RGB integration is long enough compared to the L (substantially longer than my typical R+G+B:L = 1:1 ratio) and if the seeing for the RGB is as good as the seeing for the L. Otherwise, mixing RGB into L doesn't gain me much.

whwang
Mark Petersen:
Wei-Hao Wang:
Hi Arun,

Because of the above, I personally don't rely on PI for LRGB composition, and I don't do nonlinear stretching before LRGB composition. I do the LRGB composition while both the L and RGB images are still linear. That makes the match much easier. I do this in PS as layers, so I get a real-time preview as I adjust the contrast/brightness of the L to match the luminance of the RGB. It's not easy, and not a one-click task, but it works very well for me. All my LRGB images of the past two years were made this way.

Hi Chris,

You raise some really good points about the processing side of the LRGB equation. This is an area I've struggled with in the past, as far as doing the LRGB combination in the linear state in PI versus later in PS after a nonlinear stretch. I'm curious how you do the LRGB composition in PS while both are linear, since PS needs to work off 16-bit files. I usually think of the conversion from 32-bit format to the 16-bit TIFF files needed for PS as where the bulk of the nonlinear stretching happens. How are you getting the images into PS in a linear state? Do you do something like a "pre-stretch" in PI to get the data converted, and then the final stretching in PS? Or am I missing something?

Thanks,
Mark

I guess this is a question for me, so I will try to answer it. I apologize if it was indeed meant for Chris.

I do nearly all of my post-processing in PS, right after stacking in PI. I am more familiar with PS than with PI; that is why I chose to do the linear LRGB composition in PS. And the layer function of PS is really fantastic. Once you know the math behind the layer blending modes, it can be as transparent as PixelMath in PI, and it offers an outstanding real-time preview.

Converting a 32-bit linear image in PI to a 16-bit TIFF for PS processing sacrifices dynamic range and digital resolution. If you see discontinuities (gaps) in the histogram in PS after strong contrast stretching (linear or not), that implies the 16-bit TIFF had insufficient bit depth for that particular stack. To prevent this, I always apply a 4x to 10x linear brightness stretch in PI before exporting the 32-bit image to a 16-bit TIFF. In nearly all my cases this solves the bit-depth problem, and the PS histogram remains gapless even after contrast stretching that is far too aggressive.
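
For illustration, the export step might look like this in Python (a sketch only; the 6x factor, the file names, and the use of the tifffile library are assumptions, and a low count of populated levels predicts histogram gaps after further stretching in PS):

```python
import numpy as np
import tifffile

img32 = np.load("stack_linear.npy")            # hypothetical 32-bit linear stack in [0, 1]
stretched = np.clip(img32 * 6.0, 0.0, 1.0)     # strictly linear brightness stretch
img16 = (stretched * 65535).astype(np.uint16)  # quantize to 16 bits for PS

levels = np.unique(img16).size                 # distinct 16-bit values actually used
print(f"{levels} of 65536 levels populated")   # few levels -> expect gaps later

tifffile.imwrite("stack_for_PS.tif", img16)
```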

The downside of the 4x to 10x brightness stretch is that the cores of very bright stars become saturated, or closer to saturation. To make these stars look better, I may apply some masking during the stretching stage to prevent them from being completely blown out. If that's not enough (which is rare), I go back to the original 32-bit RGB (not LRGB) image and export it to a 16-bit TIFF directly, without the 4x to 10x brightness stretch. Such an image can be used to "salvage" the cores of a handful of very bright stars. Since these stars are bright, they don't need the S/N of LRGB, so using the RGB version is fine.

So, in short, you are right: I do a pre-stretch in PI before exporting, but that pre-stretch is strictly linear.