How "pure" is the OIII signal of dual narrowband filters? [Deep Sky] Acquisition techniques · Michael Bauer · ... · 45 · 2943 · 10

mib 0.90
When compiling HOO images from data collected with an OSC camera and my Optolong L-eXtreme filter, I noticed that very often a lot of the Ha signal is repeated in the OIII image (obtained by splitting the color channels and assuming red is Ha and green is OIII). I used to think that this is just the nature of things, as many regions emit Ha and OIII at the same time. However, recently I have had growing doubts about whether this really is the full story. Here is some data I shot on SH2-129/OU4, the "Flying Bat and the Blue Squid". There are many great images of this object here on AB showing the red outer (bat) nebula and, with good contrast, the blue inner nebula OU4 (the squid).


Red channel (Ha):
Ha.jpg
Green channel (OIII ???):
Gv2.jpg
My green channel is showing the squid all right, but it is showing a lot of the bat at high intensity as well. The individual OIII data I have seen from others seems to mostly show the squid without the bat. So what is going on here? Is the filter curve not exactly as advertised, e.g. including some H-beta? Is this normal, and is a narrower filter required to bring out OU4 individually? Is something else going wrong that I am missing? The follow-on question would be: how best to mitigate this? I have tried to subtract the Ha signal or use it as a mask to reduce those parts. Apparently this helps, but the result is still far from perfect.

Rafal_Szwejkowski 7.84
This is the relative response graph for the 294MC. If you look at the H-alpha line at 656 nm, it feeds into green at 0.18 and blue at 0.05. Together that is quite a bit, considering how strong Ha is relative to the OIII signal in many objects.

It's not the fault of the filter; Bayer masks overlap by design. This is essentially a limitation of doing narrowband with OSC that cannot be overcome.

What you can do to mitigate the issue is to subtract a percentage of the Ha from your G/B channels to make them more pure. It's a must with a target like the Squid.
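
As a rough sketch of what that subtraction can look like (Python with numpy, operating on the linear, debayered channels; the 0.18 and 0.05 weights are the 294MC responses quoted above and are only a starting point for other sensors):

    import numpy as np

    def remove_ha_spill(R, G, B, g_frac=0.18, b_frac=0.05):
        # g_frac/b_frac: roughly the green/blue response at Ha relative to
        # red; nudge them a few percent up or down for your own camera.
        G_clean = np.clip(G - g_frac * R, 0.0, None)
        B_clean = np.clip(B - b_frac * R, 0.0, None)
        return G_clean, B_clean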

mib 0.90
Thanks - that is really useful. Somehow I completely missed that angle of the issue. 
Apparently I need to experiment more with the subtraction of the Ha. So far the results showed improvement, but remained quite suboptimal and noisy. Perhaps I need to try the subtraction in the linear state and use the percentages you mentioned above.

andreatax 7.90
You should add the B channel to the OIII, since it is less contaminated with the Ha spill-over, as well as remove around X% of the R signal from the G channel, X being the fraction of R spilled into the G channel by your camera.

Rafal_Szwejkowski 7.84
Michael Bauer:
Thanks - that is really useful. Somehow I completely missed that angle of the issue. 
Apparently I need to experiment more with the subtraction of the Ha. So far the results showed improvement, but remained quite suboptimal and noisy. Perhaps I need to try the subtraction in the linear state and use the percentages you mentioned above.

I will add that the Squid is a very tough target for OSC for this very reason.  Mono imagers have the advantage of shooting pure OIII for hours upon hours without the strong Ha signal from the Bat encroaching.  But that's one of the worst case scenarios and OSC narrowband can normally produce results people couldn't dream of just a few years ago.

mib 0.90
Very good points, thank you all. I really had a bit of a "blind spot" on that issue. It feels so natural, and so many seem to just use the green channel as an approximation for OIII, that I didn't give it a second thought.

In the meantime I did some additional research. There is further discussion e.g.  here and here that appear to point in a similar direction. I probably need to spend some more time experimenting with this approach to see how far I can get. 

Another consideration may be to shoot some additional frames with just an OIII filter and the OSC. This may be comparatively inefficient, but perhaps a cleaner approach for such an object. The difference between this OIII signal and the result of applying clever math to the dual narrowband data may give me an idea of how much I want an additional monochrome camera after all.

patrice_so 3.61
This is a very interesting discussion. I have had decent results with an L-Ultimate and an ASI294MC Pro so far, but indeed I suffer from the same issue. Whereas there are no worries for the Veil, for instance (very strong OIII signal), my WR134 could be much better.

@Rafał Szwejkowski: thanks for the tip of subtracting a fraction of the Ha from the OIII signal. I will look into this and reprocess my WR134.

@Michael Bauer If you are going into subtraction techniques, you may be interested in this wonderful guide:
https://www.nightphotons.com/guides/advanced-narrowband-combination

This guide does not address the issue discussed here, but it provides very strong tools for integrating NB signal into RGB images.

CS

Patrice

mib 0.90
OK, here is how far I have got so far: based on our discussion and using the concepts laid out in the video I linked above, I was able to pretty much remove the "bat" from the "squid".

O_combo1_edt2a.jpg

This is far from perfect, but much closer to what I was expecting to see as the OIII signal from this object. There remains a bit of a "glow" or background around the squid. I am not sure what is causing this, as these are areas that are not particularly strong in Ha (the star halos in the squid may be a bit distracting, but they are a different story).

I used the following formula for the OIII:
(2*SOG*(1/(SOG-((SOR*SHG/SHR)))*((G)-(SHG/SHR)*R))+SOB*(1/(SOB-((SOR*SHB/SHR)))*((B)-(SHB/SHR)*R)))/(2*SOG+SOB)
The Sxy values are the relative sensitivities of the individual R, G, B channels at the respective (H or O) wavelength. The values for SHG and SHB need a bit of tweaking, a few percentage points up or down, to make sure the right amount of Ha is removed. The formula may still be more complicated than really required and could probably be simplified, but the result was much better than any of the more "brute force" approaches I have used so far.
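
For reference, here is the same formula written out in Python with the sensitivities kept as variables (the SH* values are the 294MC numbers quoted earlier in the thread; the SO* values are purely illustrative placeholders, so substitute the response figures for your own camera):

    # Relative sensitivities of the R, G, B pixels at the Ha and OIII lines.
    SHR, SHG, SHB = 1.00, 0.18, 0.05   # at Ha (656 nm), the 294MC values above
    SOR, SOG, SOB = 0.05, 0.80, 0.35   # at OIII (501 nm), placeholders only

    def extract_oiii(R, G, B):
        # R, G, B are the linear channel arrays. Remove the Ha spill-over
        # from G and B, rescale each by its effective OIII sensitivity, and
        # average with the green term double-weighted, as in the formula.
        g_term = (G - (SHG / SHR) * R) / (SOG - SOR * SHG / SHR)
        b_term = (B - (SHB / SHR) * R) / (SOB - SOR * SHB / SHR)
        return (2 * SOG * g_term + SOB * b_term) / (2 * SOG + SOB)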

patrice_so 3.61
That is a wonderful result. I will restart, again, my WR 134 even though what I started yesterday evening was already much better than the one that I posted here... 

Very useful discussion indeed !

jml79 3.87
I have seen lots of YouTubers use SHO filters on OSC cameras for several reasons. This would be one; another is to get some SII signal; another is to shoot during the bright moon phases. It is inefficient, but time near the full moon is time an OSC imager can't use very well anyway, so it's still a plus. A set of used ZWO SHO filters is often very reasonable, and they are pretty good. No show-stopper halos or other nasties yet, and I have had 2 sets on 2 cameras since April. I bought a 31mm LRGB+SHO set and a 36mm SHO set for about what I paid for my IDAS NBZ filter new.

SteveInColorado 1.43
There is also a good discussion of this issue on Cloudy Nights.

mib 0.90
Thanks for the link. Apparently on CN they are coming to conclusions similar to what we have reached here so far.

davidelmore 1.51
I recently acquired both of the Antlia dual-band filters, Ha/OIII and SII/Hb. I wanted to extract 'pure' Ha, OIII, SII, and Hb, so I applied some matrix math to the problem.
As shown above, vendors supply curves of the response of the R, G, and B pixels for your camera.
One can write a 2-column by 3-row matrix that contains the three responses (R, G, B) of the sensor at the wavelengths of the filter: Ha and OIII, or SII and Hb.
That matrix can be inverted to obtain the reverse transformation from R, G, and B back to the two emission lines. That is a 3-column by 2-row matrix that is multiplied by the RGB vector detected at each pixel.

Sorry, there is mathematics involved. You will need to drive Python, Matlab, IDL, or some other language with data-processing capabilities to create the pseudo-inverse needed for your sensor.
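
For anyone curious, a minimal numpy sketch of the idea could look like this (the response numbers are placeholders, not measured values for any particular sensor):

    import numpy as np

    # 3 rows (R, G, B response) x 2 columns (Ha, OIII); placeholder values.
    A = np.array([[0.95, 0.05],
                  [0.18, 0.80],
                  [0.05, 0.35]])

    A_pinv = np.linalg.pinv(A)   # 2x3 pseudo-inverse: RGB -> (Ha, OIII)

    def recover_lines(rgb):
        # rgb: (..., 3) array of linear pixel values; returns a (..., 2)
        # array with the estimated Ha and OIII intensity at each pixel.
        return rgb @ A_pinv.T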

The procedure works really well. Here is a file, and if you have an ASI 2400 camera you have the answer. These values will also likely work for any camera with a newer Sony CMOS color sensor.

MatrixMathtoRecoverLines.pdf

mib 0.90
That is a neat way to formulate the problem, as the relationship between the (R, G, B) and (Ha, OIII) vectors. Conceptually it _should_ be the same as solving the system of linear equations. I am not sure whether the numbers actually come out identical.
How does applying this actually work for you? I assume you are doing this to the raw, linear images before any denoising etc.?

davidelmore 1.51
Yes, the matrix math is a tool for working with linear equations. The results should be the same.
Yes, one must work on linear images.

Qualitatively I really like the results.  The emission lines look cleaner.

On my page there are examples of four line images of North America, Heart, and Sadr.  
Once one has the four lines, recombining them in full color is the issue. HOO is easy: put H-alpha at red and OIII at cyan. For four lines I space them 80° apart on the color wheel: H-alpha, SII, OIII, and Hb. One could do 90°, but I like to tone down the magenta.
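
As an illustration of that kind of hue placement (assuming H-alpha sits at 0°/red and the other lines follow in the order given; this is only a sketch of the mapping, not the exact workflow described above):

    import numpy as np
    from colorsys import hsv_to_rgb

    def combine_lines(lines, hues_deg):
        # lines: stretched 2-D line images in [0, 1]; hues_deg: the
        # color-wheel angle assigned to each emission line.
        rgb = np.zeros(lines[0].shape + (3,))
        for img, hue in zip(lines, hues_deg):
            rgb += img[..., None] * np.array(hsv_to_rgb(hue / 360.0, 1.0, 1.0))
        return np.clip(rgb, 0.0, 1.0)

    # e.g. combine_lines([ha, oiii], [0, 180]) for HOO, or
    #      combine_lines([ha, sii, oiii, hb], [0, 80, 160, 240]) for four lines.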

mib 0.90
These are great results. Congratulations!
One small difference I noticed in our approaches is that I was weighting the green channel twice, which could perhaps be described as assuming the vector is (R, G, G, B). I am not sure there is a strong theoretical foundation for doing it that way; it may just be opportunistic, as the green signal is stronger, and when trying both variants I liked the results of this one more.
Based on my initial experience I would have thought that the general approach has its limits when the individual signal is comparatively weak and the noise introduced by the subtraction becomes an issue. However, you seem to be able to apply it very successfully even to SII with moderate exposure times.
Did you have to tweak the individual sensitivity numbers (like me)? Did you apply any processing (e.g. background extraction) to the individual linear channels prior to applying the matrix?

davidelmore 1.51
As for the 2x green, I think that issue goes away after debayering.

The weights for the 'off-band' color channels are small, so the amount of photon noise added to the final result should be small too.

Yes, the usual BlurXTerminator, DBE, NoiseXTerminator in PixInsight. Then I play fast and loose in macOS Photos to get the final color balance I like.

I am wrestling with whether to apply DBE before the channel extraction and the linear algebra to get the emission lines, or whether it is equivalent to apply it later. Since it is additive I think it makes no difference, but I suppose some computer time could resolve the question.

rveregin 6.76
This version has been edited to correct some errors; it should be correct now.

I think perhaps something was missed in many of the responses that I can see, as you were asking about using an OSC with the L-eXtreme filter?

I use a ZWO 2600MC camera with the L-eXtreme in many of my deep-sky images. I did a review of this filter for my local RASC group, which I am sharing here.

So if you put the filter on top of the OSC camera's Bayer matrix, you get the result shown below. The OIII and Ha boxes show the width of their bandpasses right down to the baseline. The actual FWHM is only 7 nm, so narrower than what I am showing.

You can see that the L-eXtreme with the Bayer matrix has very little Ha contamination from green (linked to OIII), only 2%. No calibration or subtraction is needed for such an insignificant effect; you will just increase your noise level and gain nothing at all.

OIII is slightly contaminated by red-channel Ha, a total of 15% compared to the 95% OIII signal. Again, this contamination is often small enough that trying to correct it is likely worse than leaving it alone, unless the Ha is much stronger than the OIII in some area of your image, that is, when you want to bring out a weak OIII signal in an area dominated by Ha. Note that H-beta is rejected down to the baseline by the OIII bandpass of the L-eXtreme (H-beta is not shown), so one doesn't need to worry about that.

Most targets have a lot of overlap between RGB, or SII/Ha/OIII. So for many targets you need to use masks, selective stretches, vibrance, saturation, etc., to pull out a weak signal from the strong ones. Unfortunately nature doesn't make it easy for us; you have a lot of work to do to match the best imagers, who know all the tricks. But do not apply a correction where none is needed, as it will only hurt your end result.

Hope this helps
Rick



image.png

mib 0.90
Hi Rick, thanks for the detailed response. Just to make sure I correctly got your point - you are saying that the contamination of the Ha signal in the red channel by the OIII signal is minimal and can be neglected?

That sounds reasonable, particularly as OIII is usually much weaker than Ha. The incremental noise may not be that strong if added at 2-3%, but it is still unnecessary in most cases. I currently can't think of an object where you would want to show a faint Ha signal on top of a strong OIII, so it is probably safe to continue using the red response as a proxy for Ha in most practical cases.

For the OIII, on the other hand, I guess the argument is somewhat the reverse. The contamination is relatively high (at around 15%) and the strong Ha can easily contaminate the OIII signal. The example is the image in the first post, where the contamination is just as strong as, and partially stronger than, the OIII signal.

I agree that the absolute best images of such an object may have used all kinds of nifty processing that many of us have not even begun mastering yet. However, I would wager that all of them started with a clean and good OIII signal, or clean and good individual signals in general.

Michael

andreatax 7.90
How much is too much? The only way to know for sure is to test it, which I'm going to do ASAP using a 12nm Ha filter (nothing tighter at the time of this writing) to check the "performance" of both the IMX533 and the IMX294 (the Bayer-matrix variety) and see how much of the Ha signal the G and B channels pick up. I will report back as soon as the clouds lift and I am able to take the measurements.

mib 0.90
That is good. Though the published sensitivity curves are probably relatively accurate, and my results are consistent with them, it is still interesting to check them independently with measurements.
How much is too much probably depends on the object. For many objects with strong OIII or good separation of features it may not matter at all. However, my current feeling is that a systematic error of 15% or more is always worth correcting if it can be done relatively easily. Otherwise you never know whether the image would have looked better with the correction.

rveregin 6.76
Michael Bauer:
Hi Rick, thanks for the detailed response. Just to make sure I correctly got your point - you are saying that the contamination of the Ha signal in the red channel by the OIII signal is minimal and can be neglected?

That sounds reasonable, particularly as OIII is usually much weaker than Ha. The incremental noise may not be that strong if added at 2-3%, but it is still unnecessary in most cases. I currently can't think of an object where you would want to show a faint Ha signal on top of a strong OIII, so it is probably safe to continue using the red response as a proxy for Ha in most practical cases.

For the OIII, on the other hand, I guess the argument is somewhat the reverse. The contamination is relatively high (at around 15%) and the strong Ha can easily contaminate the OIII signal. The example is the image in the first post, where the contamination is just as strong as, and partially stronger than, the OIII signal.

I agree that the absolute best images of such an object may have used all kinds of nifty processing that many of us have not even begun mastering yet. However, I would wager that all of them started with a clean and good OIII signal, or clean and good individual signals in general.

Michael

The Ha would have to be extremely strong relative to OIII to contaminate that signal when it is only at 2 or 3% sensitivity. And remember there is noise in the image, so this low-level signal may just be mixed in with the LP background, so not really above the current noise threshold in your image. If the Ha signal sensitivity is only 3% of the OIII, then most of what you are subtracting may only be noise, which is not helpful. I agree it may be a small effect on noise, but think about it: if the 3% you subtract is as much noise as signal, you will not be improving anything. Agreed, OIII into Ha might make sense to correct for, if the OIII is actually in the Ha areas. But if the Ha and OIII areas in the image have little overlap, you will be adding significant noise for nothing. So do this with care.

The main point I wanted to make was that with NB filters the contamination is not that large in general, so correcting it may not be necessary for most images. The best thing to do is to try it both ways with your image and see what works best. Personally, I'm rarely happy with an image without trying at least 3 or 4 full processing workflows until I believe I have done the best with the data.

Finally, another trick is to subtract the red continuum from Ha (and SII), and the blue or green continuum from OIII. This can bring out more detail in the Ha and OIII. So one can do a lot of subtracting. This needs a lot of work, because you need low-noise RGB images as well as low-noise Ha (SII) and OIII, and your noise will take at least some hit.
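
A very rough sketch of that kind of continuum subtraction (the scale factor q is not a fixed number; one common approach is to increase it until the stars, which are mostly continuum, just vanish from the difference):

    import numpy as np

    def continuum_subtract(narrowband, broadband, q):
        # narrowband: linear Ha (or OIII) frame; broadband: matching linear
        # red (or green/blue) frame; q scales the continuum contribution.
        return np.clip(narrowband - q * broadband, 0.0, None)

    # e.g. ha_emission = continuum_subtract(ha, red, q=0.1), with q tuned by eye.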

Good luck
Rick

drmdvl 0.00
@Michael Bauer, your OIII extraction looks great. Here is the formula I used for my 2600MC with L-Enhance when shooting WR134. Hope it helps. 
https://youtu.be/Ie5tzJ8EtZQ?si=ODty3ePDVMPLkfl3

mib 0.90
Rick Veregin:
The Ha would have to be extremely strong relative to OIII to contaminate that signal when it is only at 2 or 3% sensitivity. And remember there is noise in the image, so this low-level signal may just be mixed in with the LP background, so not really above the current noise threshold in your image. If the Ha signal sensitivity is only 3% of the OIII, then most of what you are subtracting may only be noise, which is not helpful.

Yes, I guess conceptually we are saying the same thing. However, we seem to be reading the sensitivity charts differently. The Ha signal contributes to the green channel (which may be used as a proxy for OIII) at about 15%, which is quite a lot and can completely drown out the OIII signal, as in my Squid image above. The 2% is the contribution of OIII to the red/Ha channel, which probably should be ignored in most cases.

As you say, results will vary per object and it is worth trying it and checking what it looks like. For peace of mind I will certainly go back to a number of my images and reprocess them with this approach to see if some of them can be improved.

mib 0.90
Scott Horton:
@Michael Bauer, your OIII extraction looks great. Here is the formula I used for my 2600MC with L-Enhance when shooting WR134. Hope it helps. 
https://youtu.be/Ie5tzJ8EtZQ?si=ODty3ePDVMPLkfl3

Yes, that is a great video (it is the same one I linked above). The formula I used was derived from this video; I just tried keeping the sensitivity parameters as variables to make it easier to adjust for my different cameras.