Celestial hemisphere:  Northern  ·  Constellation: Perseus (Per)  ·  Contains:  B205  ·  LBN 740  ·  LBN 741  ·  LDN 1450  ·  NGC 1333
NGC 1333.  Closeup.  Added Data from Two Cameras, Alan Brunelle

Description

This was an attempt to improve on earlier images, especially my first attempt at NGC 1333, "NGC 1333 and environment with OSC," which was more of a closeup than the view of NGC 1333 in my "The Perseus Molecular Cloud, Omi Per to NGC 1333 and Beyond. A Mosaic." (Note: As of 2-15-2023, that image was updated with a revision using the newer processing tools.)  I hoped that the increased integration time applied to this project would yield an improvement in S/N and resolution.  So, as a simple exercise, I sought to combine the data from the 300 sec subs of my earliest attempt with the 45 sec sub data that yielded the image in the Mosaic.  I had never combined data from two different cameras with differing pixel dimensions, and what I describe in the following may not be the best way to do this, so please be critical.
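The hoped-for S/N improvement can be sanity-checked with a back-of-the-envelope calculation. Assuming sky-limited imaging, S/N grows roughly as the square root of total integration time, so stacking both data sets predicts this gain over the mosaic data alone (the hours are taken from the description; the square-root scaling is the standard assumption, not a measured result):

```python
import math

# Integration times from the description
t_mosaic = 2.65   # hours of 45 sec subs (QHY 268C panels)
t_first  = 2.6    # hours of 300 sec subs (first attempt)

# Sky-limited assumption: S/N scales with sqrt(total integration time)
gain = math.sqrt((t_mosaic + t_first) / t_mosaic)
print(f"expected S/N gain over mosaic data alone: {gain:.2f}x")  # ~1.41x
```

So even doubling the data only buys about a 40% S/N improvement, which is consistent with the "less than dramatic" results described below.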

Given the restrictive overlap between the two data sets, I ended up with a much closer view of the target.  To me that is unfortunate, but I guess it is nice having a close-in view.

The process first involved assembling the part of the mosaic needed from the QHY 268C camera, since NGC 1333 was close to the stitch region of the two independent panels.  This was 2.65 hrs of integration using 45 sec subs.  Then the data (2.6 hrs of 300 sec subs) from the ASI071MC camera was brought into the process.  Because the pixels of this camera are larger, the images were incompatible for the purpose of star alignment, so I used Resample in PI to scale the image to match that of the QHY images.  Once done, I corrected for gradients and color calibration of the independent images.  I did a star alignment and finally summed the images with PixelMath.

OK, I actually took a few steps back and asked myself if I could use NXT to denoise prior to summing.  I also decided to do just star correction on the denoised images prior to summing.  I assumed that the denoised and star-corrected images would align poorly.  The reason is that I have noticed that when BXT does its thing with star correction, one can see a clear shift in the positioning of stars and structures in the new image.  The data from these two panels overlap poorly, are orthogonally oriented, and come from two different cameras, so I assumed a poor chance that the corrected images would still be alignable.  But they aligned very well!  I would not say the result was any different from summing the two panels before denoising or star correction.  In any case, I chose to move forward with the image summed from the pre-summed denoised and star-corrected data.

The two images yielded very different signal levels, so when I summed them, I chose to weight them differently.  I tried different sums until I got what seemed most favorable.
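The resample-and-weighted-sum steps above can be sketched as follows. This is a toy illustration of the idea, not the PixInsight implementation: the integer scale ratio, the nearest-neighbour upsample, and the 50/50 weights are all placeholder assumptions (PixInsight's Resample uses proper interpolation, and StarAlignment handles registration before any pixel math):

```python
import numpy as np

def combine_two_cameras(img_fine, img_coarse, scale_ratio, w_fine=0.5, w_coarse=0.5):
    """Upsample the coarser-pixel image onto the finer pixel grid,
    then take a weighted sum of the common overlap.

    scale_ratio: coarse pixel size / fine pixel size, assumed an
    integer here so a simple nearest-neighbour block replication works.
    """
    # Replicate each coarse pixel into a scale_ratio x scale_ratio block
    up = np.repeat(np.repeat(img_coarse, scale_ratio, axis=0),
                   scale_ratio, axis=1)
    # Crop both to the common overlap (a real workflow would register
    # the images with StarAlignment first)
    h = min(img_fine.shape[0], up.shape[0])
    w = min(img_fine.shape[1], up.shape[1])
    # Weighted sum, analogous to a PixelMath expression like
    # 0.5*imageA + 0.5*imageB with per-image weights
    return w_fine * img_fine[:h, :w] + w_coarse * up[:h, :w]
```

Weighting the two stacks differently, as described above, just means choosing `w_fine` and `w_coarse` by eye until the blend looks most favorable.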
The rest of the processing involved a heavy crop to maximize the area presented; beyond that it was typical.  All the previous steps were done pre-stretch.

The results were less than dramatic.  Well, I did find a dramatic improvement over my first effort, but the resulting image is hard to distinguish from the NGC 1333 presented in the Mosaic.  And the subtle differences achieved with this version did not necessarily depend on the added data, at least I do not think so.  I think I have better color balance, and the stars are a bit sharper.  The use of the NXT suite and Bill Blanshan's star reduction (very slight here) were certainly a confounding factor in this test.  I will say that, as a severe crop of my RASA field of view, the image certainly seems to stand up to the task.

So there you go.  Please offer any suggestions for combining data from different cameras.  But at the very least, I guess I figured out a bootstrap method of doing it.
