WBPP not giving me my total integration time or subs used for my file, when splitting my DSLR OSC data into RGB channels to integrate · Pleiades Astrophoto PixInsight · Anthony (Tony) Johnson · 31 · 1886 · 5

starry_night_observer 2.71
Michael Broyles:
Anthony,

Let me see if I can explain:

1) WBPP will split your RAW RGB images into separate R, G, and B files.  If you have 400 RAW files, you will have 400 R, 400 G, and 400 B files, for a total of 1200 files.  If none of these files are rejected by WBPP and they are each 40 sec, your total integration time will be 400 * 40s = 16000s, or about 4.44 hours.

2) If WBPP's Image Registration fails to align some frames (for example, because it cannot match enough stars), registered files will not be generated for them, and those images need to be subtracted from the image count.  So if R rejected 10 files, G rejected 10 files, and B rejected 20 files, then (treating each rejected channel file as a lost frame) the total integration time will be 400 - 40 = 360 frames, 360 * 40s = 14400s, or 4 hours.

3) Image Integration now only has 360 files to integrate.  However, Image Integration may also reject files if the calculated weight of a file is lower than the Minimum weight value.  Let's say that once again 10 R files, 10 G files, and 20 B files are rejected.  This means that the total integration time is now 360 - 40 = 320 frames, 320 * 40s = 12800s, or about 3.56 hours.

As Adam indicated, you can determine the number of images rejected by image registration by counting the registered XISF files that WBPP created, or by noting the number of active images WBPP reports for each channel during the next step in the pipeline.

Images rejected during integration based on their weight are a little harder to determine.  But the process logs contain detailed information stating the number of files integrated, so check the logs.  You can also count the number of XDRZ files that were updated by image integration.
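The file-counting check above can be sketched in a few lines of Python. This is not a WBPP feature, just a helper script; the `registered/R`, `registered/G`, `registered/B` directory layout is an assumption you would adjust to match your actual WBPP output folder.

```python
from pathlib import Path

def count_registered(out_dir, channels=("R", "G", "B")):
    """Return {channel: number of registered .xisf files found}.

    Comparing these counts against the number of frames you captured
    tells you how many frames registration dropped per channel.
    The directory layout is hypothetical -- adjust to your setup.
    """
    out_dir = Path(out_dir)
    return {ch: len(list(out_dir.glob(f"registered/{ch}/*.xisf")))
            for ch in channels}
```

For example, if you captured 400 frames and `count_registered(...)` reports 390 R files, registration rejected 10 R frames.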

For RAW color data, the total integration count is the total number of split files divided by 3.  In other words, if you start with 400 files, after splitting you have 1200 R, G, and B files, but the total integration count is 1200 / 3 = 400 frames.  This is because the RAW color data was captured in a single exposure, unlike mono imaging, which acquires the data for each filter in a separate exposure.
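The arithmetic above can be captured in a small sketch (a hypothetical helper, not part of PixInsight): divide the surviving channel-file count by 3 before multiplying by the exposure length, because the split R, G, and B files all came from the same shutter openings.

```python
def total_integration(n_channel_files, exposure_s, channels=3):
    """Return (total seconds, total hours) for OSC data split into channels.

    n_channel_files: count of surviving split files (R + G + B together)
    exposure_s: length of each original exposure in seconds
    """
    frames = n_channel_files / channels   # split files -> original exposures
    seconds = frames * exposure_s
    return seconds, seconds / 3600.0

print(total_integration(1200, 40))  # 400 frames * 40 s = 16000 s, about 4.44 h
```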

Hope this answers your question.

Michael

Definitely did, thank you Michael. Kinda what I was thinking, but I wasn’t sure, never having done this procedure before. I’ve stacked in WBPP, but didn’t like the amount of rejection I was getting, and when it totally failed on my M51 shot and Siril processed it, I said enough was enough. But I’m watching a vid on PI’s YouTube channel saying that you need to do this if you shoot with a DSLR, which I do, so I thought maybe I was doing it all wrong. I mean, it was from the horse’s mouth. And it totally worked: lots of finished files, and zooming in, the noise did seem to be a little better handled, so I’m not bashing it; I just didn’t know how to figure out what I ended up with.

It’s just that when you’re used to Siril writing the frames used and the total time into the FITS header, I kinda thought an app as sophisticated as PI would do the same. I did see those rejections, and I did save the log file, so I’ll check there. I usually delete the calibrated files after processing to free up hard drive space, so those are gone. So, totally got it now.
CWTauri 6.72
Anthony Johnson:
[quoted exchange above]

That isn't how it works, in my way of thinking. See... you used a program that reported a value to you. How do you know the meaning of that value? On a more general note, just because a program outputs something (and another does not), how do you know the output is optimal? It just isn't that simple.

So let us play this game. If you have a program like PixInsight that applies weights to images (using various different schemes), how would you determine a total exposure time? You stack 20 images that have 300-second exposures, and 15 of them contribute little of significance to the integrated result. Would you say (or some other program says) this is a 100-minute exposure? So the other program reports that value *even though* it is weighting images? Do you see how what you think is a benefit could actually be misleading?
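The 20 × 300 s example can be made concrete with a toy calculation. The weights here are invented purely for illustration (no program assigns exactly these numbers): when 15 of 20 subs carry very little weight, the shutter-open total and the weight-scaled "effective" total diverge sharply.

```python
def shutter_open_s(exposures):
    """Total time the shutter was open, in seconds."""
    return sum(exposures)

def effective_exposure_s(exposures, weights):
    """Weight-scaled exposure: weight 1.0 means the sub counts in full.

    A rough illustration only -- real integration weights do not map
    this directly onto 'equivalent seconds of exposure'.
    """
    return sum(t * w for t, w in zip(exposures, weights))

exposures = [300.0] * 20                # twenty 300 s subs
weights = [1.0] * 5 + [0.1] * 15        # 15 subs barely contribute (invented)

print(shutter_open_s(exposures) / 60)               # 100.0 minutes
print(effective_exposure_s(exposures, weights) / 60)  # 32.5 minutes
```

Both numbers describe the same stack, which is why reporting only "100 minutes" can mislead.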

And let me continue... weighting images by FWHM is specious in many cases. "Sophisticated" programs often use a combination of metrics to assign an image quality, in order to offset the downfalls of any particular one. I happen to prefer SNR measures attained through photometry. In my opinion, resolution doesn't matter if there isn't any signal there to begin with (I am not talking about lucky imaging).

-adam
starry_night_observer 2.71
Adam Block:
[quoted exchange above]

I think I see what you are saying, Adam. Then truthfully, what you are saying is that the time of integration is of no consequence. You could have, let’s say, 2 images: image 1 is 40 secs and image 2 is 40 secs, but image 1 is only half as good as image 2. Then image 1 is actually bringing down the quality of the total, so it would not be an 80-sec integration, but maybe only a 60-sec integration because of the quality issue of image 1. So integration time means nothing unless you know how much each image actually contributed to the whole, if I follow your reasoning correctly.

I’m not sure that knowledge comforts me any. How can we ever know how long our image integration time really is if we don’t know the amount of contribution of each individual sub? Nobody is gonna look through the processing log to see the weights and values of each individual frame, and even looking there, I still don’t see how you would ever know the percentage of contribution of each sub frame. So it seems to be an arbitrary number, if I understand your train of thought. To coin a phrase from Scrooge in A Christmas Carol: speak comfort to me, Adam. Lol
CWTauri 6.72
Anthony Johnson:
[quoted exchange above]

What I am saying is that the total time you leave your shutter open is not always representative of the total number of photons you observed (or used). You are attributing more importance to one over the other (or not quite recognizing the difference). So the total integration time of the frames used is a good estimate of total exposure time in terms of counting photons, but it is not a true accounting of the amount of light you detected or used (weighting). This is why PixInsight isn't going to give you a number (as other software might). What you originally characterized as something lacking in a "sophisticated" piece of software is actually the result of a deeper understanding of this distinction. What is customary is to state the total shutter-open time of the integrated images, with the understanding that there is still a wide range of variability in one person's results compared to another's (even with the same equipment and shutter-open time).

-adam
Alan_Brunelle
Adam Block:
[quoted exchange above]

Anthony,

Only caught this thread late.  It appears you may have your answer.  I am curious about your comment several posts earlier that stated that for DSLR images one must do the integration with a color channel separation/recombination method.  I do not use a DSLR, but I do use OSC cameras exclusively, and I wonder what the difference might be in that recommendation for OSC.  I have never processed that way, and it is my understanding and experience that the deBayering algorithms are actually quite good at dealing with color, etc.  Nor have I felt that my images would improve by moving to that method.

I also struggle to understand how, why, or if PI would drop certain frames from a specific channel but not others for the very same single image (original sub), though I can imagine that such a sub's channel may have been on the ragged edge of the acceptance criteria because of S/N, or if there was some color fringing from poor optics, etc.  But still, that would seem to be a real outlier and likely to cause little loss of frames.

Which gets to my final point: unless this is a detail that will substantially affect your understanding or practice of this process, does the precision of the data you're seeking really matter?  I will assume that you are in this for the art, not trying to get infallible photometric data from your images.  A forest-from-the-trees point...
starry_night_observer 2.71
Alan Brunelle:
Adam Block:
Anthony Johnson:
Adam Block:
Anthony Johnson:
Michael Broyles:
Anthony,

Let me see if I can explain:

1) WBPP will split your RAW RGB images into separate R, G, and B file.  If you have 400 RAW file, you will have 400 R, 400 G, and 400 B files for a total of 1200 files.  If none of these files are rejected by WBPP and they are each 40 sec, your total integration time will be 400 * 40s = 16000s or 4.45 hours.

2) If WBPP Image Registration fail to being unable to file align some stars, registration files will not be generated by WBPP and these images will need to be subtracted from the image count.  So if R rejected 10 files, G reject 10 files, and B rejected 20 files the total integration time will be 400 - 40 = 360 * 40s = 14400s or 4 hours. 

3) Image Integration now only has 360 files to integrate.  However, Image Integration may also reject files if the calculated weight of the file is lower than Minimum weigh value.  Let say that once again 10 R files are rejected, 10 G files are rejected, and 20 B files are rejected.  This means that the total integration time is not 360 - 40 = 320 * 40s = 12800s or 3.55 hours.

As Adam indicated, you can determine the number of images reject by image registration by counting registration XISF files that were created by WBPP of the pipeline will so the number of active images for each channel during the next step in the pipeline.  

Images rejected during integration base of the weight is a little harder to determine.  But the process logs contain detail information stating the number of files integrated, so check the logs.  You can also count the number of XDRZ files that were update by the image integration.  

The total integration time for RAW Color data the total number of split files divided by 3.  In other words, if you with 400 files, after splitting you have 1200 R, B, G files but the total integration is 1200 / 3 = 400 files.  This is because the RAW color data was capture is a single exposure unlike Mono images which acquire data in three separate exposure for each filter.

Hope this answers your question.

Michael

Definitely did, thank you Michael. Kinda what I was thinking but I wasn’t sure never doing this procedure before. I’ve stacked in WBPP, but didn’t like the amount of rejection I was getting and when it totally failed on my M51 shot and Siril processed it, I said enough was enough, but I’m watching a vid on PI’s YouTube channel saying that you need to do this if you shoot with a DSLR which I do. I thought maybe I was doing it all wrong. I mean it was from the horses mouth. I mean it totally worked, lots of finished files, but it worked and in zooming in the noise did seem to be a little bit better handled so I’m not bashing it, just didn’t know how to figure what I ended up with. It’s just when you’re used to Siril writing the frames used and the total time in the fits header, I kinda thought an app as sophisticated as PI would do the same. I did see those rejections and I did save the log file. So I’ll check there. I usually delete the calibrated files after processing to free up hard drive space so those are gone. So total got it now.

 That isn't how it works in my way of thinking. See..you used  program that reported a value to you. How do you know the meaning of that value? On a more general note, just because a program outputs something (and another does not)- how do you know the output is optimal? It just isn't that simple.

So let us play this game. If you have a program like PixInsight that applies weights to images (using various different schemes)- how would you determine a total exposure time? You stack 20 images that have 300 second exposures and 15 of them contribute little in terms of significance to the integrated result. So you would say (or some other program says) this is a 100 minute exposure? So the other program reports a value *even though* it is weighting images? Do you see how what you think is a benefit could actually be misleading? 

And let me continue... weighing images by FWHM is specious in many cases. "Sophisticated" programs often use a combination of metrics to assign an image quality in order to offset the downfalls of any particular one. I happen to prefer SNR measures attained through photometry. In my opinion, the resolution doesn't matter if there isn't any signal there to begin with (I am not talking about lucky imaging).

-adam

I think I see what you are saying, Adam. Then truthfully, what you are saying is that the time of integration is of no consequence. Say you have 2 images, each 40 seconds, but image 1 is only half as good as image 2; then image 1 is actually bringing down the quality of the combined result, so it would not be an 80-second integration, but maybe only a 60-second integration, because of the quality issue with image 1. So integration time means nothing unless you know how much each image actually contributed to the whole, if I follow your reasoning correctly. I'm not sure that knowledge comforts me any. How can we ever know how long our image integration time really is if we don't know the amount of contribution of each individual sub? Nobody is going to look through the processing log to see the weights and values of each individual frame, and even there I don't see how you would ever know the percentage of contribution of each subframe. So it seems to be an arbitrary number, if I understand your train of thought. To coin a phrase from Scrooge in A Christmas Carol: speak comfort to me, Adam. Lol

What I am saying is that the total time you leave your shutter open is not always representative of the total number of photons you observed (or used). I am saying that you are attributing more importance to one over the other (or not quite recognizing the difference). So, the total integration time of the frames used is a good estimate of total exposure time in terms of counting photons, but it is not a true accounting of the amount of light you detected or used (weighting). This is why PixInsight isn't going to give you a number (as other software might). What you originally described as a lack in "sophisticated software" is actually the result of a deeper understanding of this distinction. What is customary is to state the total shutter-open time of the integrated images, with the understanding that there is still a wide range of variability between one person's results and another's (even with the same equipment and shutter-open time).

-adam
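To make the distinction concrete, here is a small illustrative sketch in Python. The weights are hypothetical and this is not PixInsight's actual weighting formula; it just contrasts shutter-open time with one possible "weight-effective" time, using the two-sub example from earlier in the thread:

```python
# Illustrative only: NOT PixInsight's weighting scheme.
exposures = [40.0, 40.0]   # shutter-open seconds per sub
weights = [0.5, 1.0]       # hypothetical relative quality weights

# The customary reported number: total shutter-open time.
shutter_open_s = sum(exposures)

# One possible "effective" time: weight-scaled relative to the best sub.
effective_s = sum(w * t for w, t in zip(weights, exposures)) / max(weights)

print(shutter_open_s)  # 80.0
print(effective_s)     # 60.0
```

The customary figure (80 s) counts photons-worth of shutter time; the weight-scaled figure (60 s) is only one of many defensible definitions, which is exactly why a single "true" number is hard to report.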

Anthony,

Only caught this thread late.  It appears you may have your answer.  I am curious about your comment several posts earlier stating that for DSLR images, one must do the integration with a color channel separation/recombination method.  I do not use a DSLR, but I do use OSC cameras exclusively, and wonder what the difference might be in that recommendation for OSC.  I have never processed that way, and it is my understanding and experience that the deBayering algorithms are actually quite good at dealing with color, etc.  Nor have I felt that my images would improve by moving to that method.  I also struggle to understand how, why, or if PI would drop certain frames from a specific channel but not others for the very same single image (original sub), though I can imagine that such a sub-channel may have been on the ragged edge of the acceptance criteria because of S/N, or if there was some color fringing from poor optics, etc.  Still, it would seem to be a real outlier experience and likely to cause little loss of frames.  Which gets to my final point: unless this is a detail that will substantially affect your understanding or practice of this process, does the precision of the data you're seeking really matter?  I will assume that you are in this for the art, not trying to get infallible photometric data from your images.  A forest-from-the-trees point...

My original question has actually gotten lost in the thread. My only question was how to figure total integration time when it seems WBPP was rejecting frames from one channel of an image and not the others.

The technique I mentioned was from a video I watched on PixInsight's YouTube channel about WBPP. The narrator said that with DSLR data you should be splitting the channels and then doing a final integration, also with drizzle. I know absolutely nothing about this process past what I've watched on YouTube. Adam Block is the main guy I listen to because I feel he gives it to me straight, but since this new info came from PixInsight itself I thought it was legit, and to some degree it is, just not for the reasons I was thinking. Like I said, I put my camera on the back of my scope, pop the shutter, and hope for the best. I'm a total newbie even after a year.

I was just trying to get a number for the total number of frames used when WBPP was splitting channels but rejecting different numbers of frames for different channels, then recombining those frames into a single image. What Adam said in his last post makes perfect sense to me: there are so many factors that figure into your final exposure that it's difficult to put a number on it. And no, I'm far from looking for pristine data. I'm just trying my best to understand a hobby where the deeper I get, the more complicated it becomes. With that said, I do try to understand how this works on all levels, so that when something goes wrong I have a basic understanding of what might have caused it. Like I said, my original question is in the title of my post: how do I find out how many frames PI stacked, and what's the total integration time of the final photo? I only got into a long description of what I did because, to answer my question, you needed to know what I did. I guess I was too descriptive.
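For what it's worth, Michael's suggestion of counting the registered XISF files can be scripted. A minimal sketch, where the output folder path and the 40-second sub length are placeholders for your own setup:

```python
from pathlib import Path

def estimated_integration_hours(registered_dir: str, exposure_s: float) -> float:
    """Estimate total integration time from WBPP's registered frames.

    Counts the registered .xisf files under registered_dir; for split
    OSC data the R, G, and B files all came from a single shutter
    opening, so the file count is divided by 3.
    """
    n_files = sum(1 for _ in Path(registered_dir).rglob("*.xisf"))
    return (n_files / 3) * exposure_s / 3600.0

# Example (hypothetical folder name, adjust to where WBPP wrote your files):
# estimated_integration_hours("wbpp_output/registered", 40.0)
```

Dividing by 3 follows Michael's point that the split R, G, and B files all came from one exposure, so they count once toward integration time.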
Alan_Brunelle
Anthony,

Thanks for the clear reply.  I'll certainly look into this splitting of channels during preprocessing.  As an OSC person, I have tried with some success to create separate luminance data from a select set of subs, toward the goal of generating sharper images with better resolution.  But it is not clear to me what the benefit of splitting color channels is, so I'll have to look into that.  That said, I have seen a fair amount of bad advice on the internet, but you alluded to that in trusting good sources such as Adam.  As you learn from various sources, be critical as you do so.  Ask why, and whether something is really necessary.  And test your hypothesis; prove to yourself that any action is worth it.  I find that the internet is full of bragging contests: cooling a camera excessively to gain only that extra percent of shot noise reduction, yet complaining about why the camera sensor keeps frosting over; taking 50 hours of subs on a subject yet not spending the time in post to make that time worth it.  Etc., etc.