How do you rate your recent processed image? Anything goes · Padraig Farrell · 47 · 2692 · 0

padraig 1.20
·  3 likes
For me, if it’s good enough and like the result, I’ll save it for my screen background, if I can live with it there…it’s good enough to share. 😉
Die_Launische_Diva 11.14
·  10 likes
I store the candidate image and revisit it after a few days, as a single jpeg image outside of my processing software. If I'm satisfied with the image, I post it. This approach helps me to be a more objective judge of my work by breaking the emotional link I have with the image (an image isn't automatically good just because a lot of effort was put into capturing and post-processing it). Even looking at the image after a night's sleep can help with this process.

When I have the time, I prepare several candidate images and choose the best one according to my opinion. However, when the work involves a series of images with different subjects, this process becomes more complex and might be beyond the scope of your question.
originalshilajit 0.90
·  3 likes
Storing and revisiting the candidate image after a few days, detached from my processing software, allows me to be an objective judge of my work, breaking the emotional link and ensuring quality before posting.
jmenart 0.90
·  3 likes
I wanted to ask a related question: how do you rate it in the sense of which applications/monitors/etc. you are using, especially if you use multiple?

I notice a big difference between my two main monitors and my phone screen (I often lost a lot of detail and wasn't even aware of it, because my main monitor didn't show the hidden details in a previous version). So now I export to PNG and send it over WhatsApp and Viber to my friend (for additional feedback) and my wife.

I then check these two versions on my phone and second display as well. That often reveals some issues, and I can iterate again. Let's hope my friend doesn't get bored with all the versions he receives.
WhooptieDo 8.78
·  2 likes
Jure Menart:
I wanted to ask a related question: how do you rate it in the sense of which applications/monitors/etc. you are using, especially if you use multiple?

I notice a big difference between my two main monitors and my phone screen (I often lost a lot of detail and wasn't even aware of it, because my main monitor didn't show the hidden details in a previous version). So now I export to PNG and send it over WhatsApp and Viber to my friend (for additional feedback) and my wife.

I then check these two versions on my phone and second display as well. That often reveals some issues, and I can iterate again. Let's hope my friend doesn't get bored with all the versions he receives.

You're doing exactly what I do. My monitors aren't calibrated, nor are they accurate. My main doesn't expose background issues very well; one is very dark, and the other is very bright. Images get cycled between the three. From there I'll usually sit on them for a couple of days and look at them on my phone as well. If I like them, they get posted.
Turix 0.90
·  3 likes
Jure Menart:
I wanted to ask a related question: how do you rate it in the sense of which applications/monitors/etc. you are using, especially if you use multiple?

I think this is a really good question; personally, I've noticed significant differences between my two desktop monitors. The actual underlying display technology can make a big difference (e.g. TN vs VA vs IPS vs OLED); for example, IPS and OLED are generally regarded as having better colour representation (assuming equally "average" calibration).

I've found that my phone and secondary (IPS) display largely render images similarly, so I generally aim for a pleasing result on those. My main monitor, unfortunately, is a poor example of a VA panel and gives a fairly washed-out look; as such, I generally restrict it to linear-state processing only.

Back on topic, however: whilst building up integration time for an image I'll often do multiple test "edits" of the data, producing near fully processed images in a variety of styles to help me decide how I want to process the final image. During that process I'll often pick up on a fainter detail in the FOV that I like, and then aim to continue adding integration until I can bring it out in a sufficiently noise-free fashion.

I generally know I'm onto a winning "style" when I keep revisiting one of my test edits for a "peek" constantly before I've finished adding integration.

For example, on my recent image of the Tulip Nebula, I had my eye on the OIII arc in the top right.
TimH
·  5 likes
I have to like the image myself, or at least think that there is some point of general interest in it, or sometimes in the methodology used to produce it, before posting it up. The funny thing is though -- and it happens every time, no matter how careful I think I have been -- almost the second I press the button to submit to the public folder, I spot some glaring flaw and go through a couple of revisions before being finally happy with it.
jml79 3.87
·  4 likes
I edit on a 32" IPS panel that is calibrated. It's not super high end, but it displays well beyond 100% sRGB and well into the high 90s for Adobe RGB. I check the images on my less-than-stellar VA panel and also put them on the desktop background at home and work. If I can stand them then, they get posted. I can't control other people's monitors, so I have to accept that many won't see the image the way I do, especially if they are phone surfers, but I edit for print for myself, so that's my goal with the decent monitor. Nothing beats a large, high-res print if you have the data to support it.
DarkSky7 3.81
·  5 likes
Well, I've got three different contexts:
1) compared to "then"..."wow! I can't even believe I am getting any of this right now! So amazing compared to my alt-az drive and an unmodified DSLR and film from 40 years ago!"

2) compared to now... "Each one is getting a little bit better as I learn. I'm very happy with what I can get using what I have!"

3) compared to everyone else..."meh..."😁
jhayes_tucson 22.40
·  5 likes
I’ve been in a slump lately. My level of processing has remained fairly static while the collective skill of other imagers here on AB has continued on a steady growth path. That makes it easy to feel like I’m in a slow backward slide. Part of my challenge is that I’ve been struggling to get acceptable raw data, and in spite of my best efforts to figure out why, I haven’t found anything I can fix. The other challenge is that I’m just too busy with another project right now to focus on improving my processing; but that’s a temporary setback. It’s a normal cycle of ups and downs, and it will come around again one of these days. I’ve seen other imagers go through a similar thing, so I don’t stress about it. After all, it’s supposed to be fun… or why do it?

John
DarkSky7 3.81
I agree with you there, John. And to clarify, I am very happy with what I am getting. But technology is passing me by so rapidly that I feel like I've pushed my equipment (almost) as far as I can, and the only next step is to keep spending $$ to keep up with cameras and software.

Truth be told, I am living the dream I had when I was a kid and first got these mirrors. So for me every picture is like Christmas, lol!

Tom
HegAstro 11.91
·  1 like
It is nice to have goals for each image - as an example, get the dark dust clouds around M42, or the H-alpha regions and detail around Thor’s Helmet. For me, as long as I have achieved that, I rate myself as successful. There is always something I wish were better - usually, for me, it is color vibrancy.
astropilch 1.20
·  1 like
I don't think I'm happy with any image I've ever processed. I can always find something I've not done, done too much of, or not done enough. My processing has got better, but there are always ways of improving: new software, new techniques, etc.
padraig 1.20
Yes, I can feel I could do more. I've pulled back a bit, or maybe I'm getting impatient and just want to see a half-decent result. Since my hard drive crashed two weeks ago, just getting up and running again will be enough. I'm 99% there and learned a few things along the way.
chrisbeere 1.43
Tim Hawkes:
I have to like the image myself, or at least think that there is some point of general interest in it, or sometimes in the methodology used to produce it, before posting it up. The funny thing is though -- and it happens every time, no matter how careful I think I have been -- almost the second I press the button to submit to the public folder, I spot some glaring flaw and go through a couple of revisions before being finally happy with it.

Yeah, I go through a bunch of revisions over months to arrive at a render that I can look at and not find something that triggers me to edit.
dmsummers 6.80
I'm with Alan Hancox... all my images have shown improvement since I started, but I always find things I don't like about all my images after I've posted them. This is a tough and challenging hobby, and that's a big part of the allure that makes it interesting and fun. I chalk up my dissatisfaction with my own images to gaining a better eye over time. There's always something to work on and improve.
CCDnOES 5.21
John Hayes:
I’ve been in a slump lately. My level of processing has remained fairly static while the collective skill of other imagers here on AB has continued on a steady growth path. That makes it easy to feel like I’m in a slow backward slide. Part of my challenge is that I’ve been struggling to get acceptable raw data, and in spite of my best efforts to figure out why, I haven’t found anything I can fix. The other challenge is that I’m just too busy with another project right now to focus on improving my processing; but that’s a temporary setback. It’s a normal cycle of ups and downs, and it will come around again one of these days. I’ve seen other imagers go through a similar thing, so I don’t stress about it. After all, it’s supposed to be fun… or why do it?

John

As the saying goes, "I feel your pain." More sophisticated processing has been a double-edged sword, for sure. I find myself doing more research on new processing methods and procedures these days, only to find a few months later that something even better has been released. Clearly this is all a result of the explosive growth in the number of imagers - some of whom have the imagination and skills to develop these new processes. A far cry from the early 1990s, when the number of imagers was probably less than a few hundred and progress was as slow as the PCs.
CCDnOES 5.21
·  1 like
I agree that a calibrated monitor is essential, and to be honest I don't care much about cell phone viewers. Although phone viewing is common, it will never be optimal because of the limited screen size; it is just not my audience. I process on a 43-inch calibrated monitor that covers 99.5% of the sRGB space - really all that is needed, given the nature of astro-images, most viewers' monitors, and the fact that I never print my images.

People ask why I don't print my images, and the answer is simple. Prints are expensive, wall space is scarce, and these days the rapid pace of new processes will make that expensive print look sad within a year. My walls are filled with travel photos, and those don't change that much.

As far as standards, I look at Astrobin images of my proposed object and sort them by date, and sometimes by award, to find the best examples. I try to do as well as the best of those, assuming they were taken with mostly similar equipment.

By the latter, I mean that I do not try to compare my 14-inch CDK images with something from a 3-inch refractor, nor do I try to compare them to a one-meter-plus professional scope on a dark mountaintop (don't get me started on how frustratingly unfair those images are to us mere mortals).

If I can come reasonably close to the best, then I post the image to Astrobin; if not, I only post to my personal (Jalbum) page.
HegAstro 11.91
Now that the IOTD stats on your image are accessible to you, you can get an objective measure of your image’s quality by checking how many submitters promoted or dismissed it.
dmsummers 6.80
·  7 likes
·  7 likes
Arun H:
Now that the IOTD stats on your image are accessible to you, you can get an objective measure of your image’s quality by checking how many submitters promoted or dismissed it.

Definitely not trying to provoke a flame war here, but I'd respectfully disagree. There are plenty of built-in biases in the IOTD process, and looking to IOTD submission feedback as a metric of quality is probably a poor idea. There are much better technical metrics to judge quality by. I'd recommend folks find their own style and then work to improve their approach until happy. There are plenty of folks who don't even bother to submit to IOTD, as the process is seen by some (myself among them) as flawed. Popularity does not necessarily equal quality! CS Doug
jayhov 5.73
·  3 likes
I can't resist chiming in here, and could write a book, but will - hopefully - offer my thoughts in one chapter.... It is an intriguing question/topic.

1. Just this morning - and it happens fairly often - I commented on an image I would have been awfully pleased to have captured and processed. It was/is awesome... but the photographer had doubts. I find that each of us is our own most discriminating critic. In some cases, there are strong areas of subjectivity....

2. I too am my toughest critic... even objectively. Unless someone is just starting out, most of us know what we are hoping to achieve... and I am only (truly) satisfied with a few of my images. Two to three clear nights a month just doesn't afford me (and likely many others) the opportunity to commit ten, fifteen, twenty or more hours to any one target. And then there is travel to a dark(er) site... and just life.... So, ultimately, I am good with it AND love what we do.

3. Earlier today, I also replied to a friend's comment regarding IOTD. Though lately some common folk have garnered an IOTD, it does seem that most of the time IOTDs are awarded (perhaps rightfully so) to the imagers with rock'em sock'em equipment shooting under dark, dark skies in permanent, automated, remote installations. For me, personally, I am satisfied with the "likes" I receive from those whose imaging I like.

4. Finally, let's all continue to grow in this wonderful passion we share. It's a wonderful community to be part of.
CCDnOES 5.21
·  1 like
Doug Summers:
Arun H:
Now that the IOTD stats on your image are accessible to you, you can get an objective measure of your image’s quality by checking how many submitters promoted or dismissed it.

Definitely not trying to provoke a flame war here, but I'd respectfully disagree. There are plenty of built-in biases in the IOTD process, and looking to IOTD submission feedback as a metric of quality is probably a poor idea. There are much better technical metrics to judge quality by. I'd recommend folks find their own style and then work to improve their approach until happy. There are plenty of folks who don't even bother to submit to IOTD, as the process is seen by some (myself among them) as flawed. Popularity does not necessarily equal quality! CS Doug

I think it is just one metric, and not always reliable, but better than the number of likes, which is all but worthless as a quality rating. I am sure we have all seen images with dozens or hundreds of likes that are not especially great. The "likes" metric seems to have more to do with followers and viewers than it does with quality.

As I mentioned, my metric for images (my own and those of others) has always been to look at fairly recent images of the same object taken with generally similar equipment and find the best few of those to compare to. Those do often have some kind of award, somewhat validating the awards system, but not always.

One thing I do wonder about the awards ratings is whether the people doing the rating are looking at recent similar images of the same object before rating. Ideally they should be, but I suspect they are not (at least not always), since that would take a lot of time and there are few raters and many images.

Maybe we should have AI rate images? (OK, bad idea - )

A bit of a controversial subject in any case - if for no other reason than that one can apply many criteria and tastes differ.
dmsummers 6.80
·  4 likes
·  4 likes
I personally tend to rate images by technical quality more than artistic quality. I rarely (if ever) measure up in my own images, but what I'm searching for is technical excellence relative to the capture gear. What I mean by that is that the capture's subs have good FWHM and eccentricity, and the stack has good to excellent SNR. The post-processing must achieve good calibration and gradient control, a smooth (but not completely noiseless) background with a good black point, and a stretch that sufficiently highlights/maximizes detail without going overboard. Color calibration is important, as it's an indicator of the underlying chemistry/physics. A reasonable amount of deconvolution and saturation (you know reasonable when you see it) goes a long way to achieving the goal. Artifacts need to be controlled. All these things fall into the category of "attention to detail". In the end, I want the target's chemistry and physics to take center stage without distraction. I prefer that the chemistry/physics not be overwhelmed by artistic license.
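Some of those stack-level metrics can actually be computed rather than eyeballed. As a rough illustration (not any poster's actual workflow, and the function names here are made up for the example), background level and noise can be estimated with sigma clipping, and a crude SNR derived from them, using nothing but numpy:

```python
import numpy as np

def sigma_clipped_stats(data, sigma=3.0, iters=5):
    """Estimate background level and noise by iteratively
    discarding pixels more than `sigma` standard deviations
    from the median (rejects stars/nebulosity outliers)."""
    clipped = np.asarray(data, dtype=float).ravel()
    for _ in range(iters):
        med = np.median(clipped)
        std = np.std(clipped)
        keep = np.abs(clipped - med) < sigma * std
        if keep.all():
            break
        clipped = clipped[keep]
    return np.median(clipped), np.std(clipped)

def snr_estimate(image, signal_mask):
    """Rough SNR: (mean signal above background) / background noise.
    `signal_mask` is a boolean array marking target pixels."""
    bg, noise = sigma_clipped_stats(image[~signal_mask])
    signal = image[signal_mask].mean() - bg
    return signal / noise if noise > 0 else float("inf")

# Synthetic test frame: flat noisy background plus a bright "nebula" patch.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (64, 64))
img[28:36, 28:36] += 50.0
mask = np.zeros(img.shape, dtype=bool)
mask[28:36, 28:36] = True
print(f"SNR ~ {snr_estimate(img, mask):.1f}")
```

FWHM and eccentricity need star detection and PSF fitting on top of this, which is why most people read them out of their capture/processing software rather than computing them by hand.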

Lately, it feels like a majority in our hobby are leaning heavily into very high levels of saturation and false color palettes. These preferences can grow "sticky", like fake diffraction spikes: once they catch on, everyone wants to do it (regardless of whether it's a good thing or not). To each their own. For me, capturing data from 400nm to 700nm, there's good reason not to change colors, add generative AI (not to be confused with AI-based deconvolution), or get too artistic. A beautiful fake will never be as nice to me as the real deal. When I see an image and feel the natural inspiration of the subject teased out (without distraction), I typically rate the quality as high. But this is just my own rating criteria!

As I originally suggested, I think everyone should find their own "groove" and seek to improve while in it. Preferences change and even cycle/recycle over time, so exploration and enjoyment are key. Our social media mentality of popularity and likes shouldn't drive how we rate quality. There are just too many variables and non-aligning goals to suggest what the "right" rating approach should be. That said, it's always worth paying attention to improving on technical detail. CS Doug
HegAstro 11.91
·  3 likes
·  3 likes
Bill McLaughlin:
One thing I do wonder about the awards ratings is whether the people doing the rating are looking at recent similar images of the same object before rating. Ideally they should be, but I suspect they are not (at least not always), since that would take a lot of time and there are few raters and many images.

Having served on the IOTD staff, I can say that, almost certainly, very few (if any) are comparing your image of an object against the top few other images of the same. Rather, since a submitter or reviewer has a limited number of slots and typically more images than slots, whether or not your image is advanced largely depends on what other images are in their queue, assuming your image meets some minimum standard.

Your image can be dismissed if the submitter or reviewer feels it doesn't meet a minimum standard, but I know that, at least initially, there was a lot of misunderstanding about what dismissal was meant for. The spirit of it was to avoid advancing very bad images; however, I know good images were also dismissed, sometimes accidentally and other times not.

All in all, I would tend to agree that this is a highly imperfect way of judging the merits of your image, since there is a huge amount of subjectivity and bias involved (bias in the sense of interpretation of the IOTD guidelines, which are broad). I would look at it more as trying to select a sample of very good images at a certain point in time rather than judging each individual image.
tom62e 1.51
·  5 likes
My suggestion for Astrobin is to mandate that judges provide feedback on all images they view. So, if they reject an image at any point in the process, they provide one or two sentences of feedback, such as: "a strong image, but I found the sharpening to be a little overdone". Likewise, if a judge pushes an image along, then why: "I am really impressed by the framing you chose for this target. A unique perspective." And please don't give the lame excuse that this would be too time-consuming. Writing one or two sentences would take less time than viewing the image in the first place.

Moreover, this would be so beneficial for everyone on Astrobin! I for sure am in dire need of constructive criticism. I am convinced my M51 image blew away the last IOTD winner (on the same target), yet not one judge nominated my image. I would love to know why.

Not only would this make us all better astrophotographers, but it would also provide insight into the judges' thoughts. The entire process would become more transparent, and therefore more legitimate. As a result, we would undoubtedly see a lot less complaining about how "unfair" the IOTD process is.