TOP PICK clarification request

RRBBarbosa
01 Dec, 2019 20:36
Hi everybody,

I have a question for AB submitters and reviewers.

For the sake of transparency of the image selection criteria (Top Picks / IOTD), I would like a short clarification.

Recently I posted 2 images, NGC 7000 (https://www.astrobin.com/kqgqyl) and The Cone Nebula (https://www.astrobin.com/euetw9/), both with almost 200 likes and several comments in which members praised the processing, the detail, the color, the resolution, etc.

As I share the sentiment of the members who commented on the images, yet find that the AB jury had a different opinion, I would like clarification on the following questions:

1. Did submitters vote for these images? If not, I would like to know why the two images do not deserve to be TPs.

2. Did reviewers vote for these images? If not, I would like to know why the two images do not deserve to be TPs.

I hope no one is offended by this request for clarification, but I often see TP images that do not deserve it, as well as images that deserve to be TPs going unvoted.

Regards,
Ruben Barbosa
Allinthehead
01 Dec, 2019 21:28
Hi Ruben. Both beautiful images. Maybe they're a little sparse on data acquisition detail.

Best regards,
Richard.
Chris-PA
02 Dec, 2019 02:52
I have to agree with Ruben here. I've seen a number of images lately that are IOTD-worthy that haven't even been Top Picks. I already posted in the forums here about Sergey Trudolyubov's Cassiopeia A (Cas A) Supernova Remnant last month.

Today I went through my feed and saw a few more: Christopher Gomez' IC1848 The Soul Nebula SHO 12 Panel Mosaic is absolutely jaw-dropping in full resolution - the best I've ever seen of that target, bar none. Gary Imm's Abell 72 - shot with a 5" refractor - is (in my humble opinion) better than the Doug Matthews/Adam Block/NOAO/AURA/NSF version shot with a 20" RC. And probably the most robbed of all is Mark Stiles' Dr36 & DrSti1 - a PAIR OF NEWLY DISCOVERED PLANETARY NEBULAE in one image. This is some ground-breaking work, folks. That image was published nine days ago and has 63 likes right now; it should have close to 63 bookmarks at this stage.

Ground-breaking images, top-notch renditions of rare targets, and images that are *the best* that amateurs with amateur equipment have done on a target deserve special recognition on AstroBin. So why aren't they getting it?
crazygame12345
02 Dec, 2019 06:51
In fact, the Top Pick and IOTD selections seem to prefer EXAGGERATED pictures rather than BALANCED ones.
siovene
02 Dec, 2019 09:27
As a reminder, the IOTD staff members are not screened before acceptance. Everyone who volunteered has been accepted. Unfortunately there are not enough volunteers for me to accept only "the best ones" (whatever that means… and besides, if I did the choosing, the resulting IOTD panel would be affected by my preferences).
Magellen
02 Dec, 2019 10:02
Chris, thank you for pointing to some extraordinarily good pictures. Mark Stiles' picture went completely unnoticed by me. From my point of view, it represents the top level of what can be achieved by amateurs today. Chapeau to the discoverers and to a picture that truly stands out from the crowd.

IOTD and Top Picks are volunteer work. When I was a submitter a long time ago, it was impossible to agree on standards to be observed when choosing a picture. I got so frustrated over this that I quit. I do not know if there are standards today, but if so, they should be made public. If not, a clear statement that there are no guidelines would make the process more transparent as well.

I can imagine that it is not easy to find volunteers for a job that comes with a considerable workload, demands a lot of time, exposes one to criticism, and is hardly appreciated.
Andys_Astropix
02 Dec, 2019 10:32
Clearly this has been a difficult year for AstroBin staff, as evidenced by the above (and I could easily cite other examples, including a couple of other one-off, one-of-its-kind, world-first images posted here).
Judging photos well requires certain skills & abilities - as well as being available and willing to do the job!
I and others have previously attempted to write assessment guidelines for staff, but for unknown reasons these seem to have been ineffective.
(Maybe not seen or read?)
Instead of just complaining, though, I'm willing to assist a chosen expert team (perhaps chosen by Salvatore?) to develop methods & procedures so AB can move forward with IOTD/TP selections that meet documented criteria.
Edited 02 Dec, 2019 11:21
Barry-Wilson
02 Dec, 2019 12:43
Every year I've been a member of AB, I think, someone has raised comments about worthy images being overlooked as evidence of a defective system, so I'm not sure that these comments are any different from those raised previously, or a reliable indicator that the process is ineffective. After all, no system is perfect, especially one that relies on volunteers all working in different time zones.

A suggestion as an aid for Submitters - my view of the Moderators Submission Queue doesn't display the image Title in the two columns of image thumbnails (I have a Mac, so I don't know if Windows PCs display differently). The additional detail provided by the Title of Mark Stiles' image of Dr36 and DrSti1, shown on the landing page of the Submission Queue, would certainly have helped attract my eye to click on the image, read its technical details and written description, and thus understand its context. Otherwise a busy volunteer relies upon just the visual image displayed in the stream of thumbnails to arrive at a first pass of which images to appraise in detail.
Snjór
02 Dec, 2019 13:40
Barry,

Windows and Linux also just display the thumbnail with no other info. Also, depending on image framing and orientation, the thumbnail does not show some images to best advantage.

Best,
Sigga
Chris-PA
02 Dec, 2019 14:51
I need to get to work, but two things I'd like to say: 1. I have offered to volunteer to be a submitter and was told to wait until next year. Perhaps if volunteers are quitting during the year, new ones can be brought on? 2. Not having the image title displayed seems like a defect to me. For example, if you don't know that Christopher Gomez' Soul Nebula was 16 panels, you might take a look at it and say 'hey, wow, that's a nice Soul!' but you might not spend the time to go and view it in full resolution (because you wouldn't even realize the scale of it just looking at its resized version).

At any rate, is there a time limit to these things? I know Mark Stiles' image is 10 days old now, but it still deserves to be a Top Pick. It seems like everyone who's taken part in this discussion agrees with that at least. Can someone who has the ability to submit go ahead and right that wrong?
whwang
02 Dec, 2019 15:04
There seems to be a systematic issue to be addressed. Based on the feedback here, it appears that many images were overlooked because the submitters/reviewers did not bother to click into the image to look at the full-resolution version. This may also partially explain what Ke said: the system tends to select exaggerated images, since those more easily attract clicks.

On the other hand, all volunteers have limited time. I do not believe it is practical to ask all submitters/reviewers to click into every image. To be honest, when I served as a reviewer/judge, I did not click into every single image that was promoted to my review pools. I imagine it is even less feasible for submitters to click into every image; the workload of doing so would be huge.

So we have a dilemma here.  It is probably hard to solve this within the current system.
a028964
02 Dec, 2019 15:43
Hmmm. My honest opinion: do away with it altogether. What's the point? Recognition? I mean, really? I appreciate Salvatore and all the others who work on AB to provide a great platform to share and see an overwhelming amount of fantastic images - TP, IOTD or otherwise.

This is a completely subjective exercise and as such can never be perfect. There isn't an algorithm. I can't even imagine beginning to try to 1) put the time into analyzing 5, 10, 20 or more (probably more) images a day to be as fair as possible, 2) have some sort of basis on which to judge something worthy of special recognition, and 3) do this for free.

There seems to be a lot of energy and frustration (i.e. venting, complaining, etc.). Life's too short; enjoy the moment and what we have here at our disposal.

Shut it down, that's my vote … for what it's worth …

-Brad
McComas
02 Dec, 2019 15:58
Questions: Are there qualifications to be a reviewer or a judge? I see that these people are volunteers, but are they picked for these positions for their skills in imaging and post-processing?
Edited 02 Dec, 2019 18:33
PeterGoodhew
02 Dec, 2019 16:01
I vote for keeping it, despite its limitations. My reasons:

1. It's massively motivating and encouraging when an image gets recognised as a TP or IOTD.

2. If I want to take a look at other good images of a target I will select the TPs or IOTDs and use them as a benchmark against which I can assess my own images. This often results in my seeing how I need to improve my images before completing them.

Yes, the system isn't perfect, and like Brad and others I can't see how, practically, it can be made perfect (although, as Barry points out, there may be some ways of improving it). But for many of us it is still very valuable despite its imperfections.

I think the problem stems from an expectation that it should, or could, be perfect. I think it would help if we could avoid taking it too seriously, and stop getting upset by anomalies or "injustices" when specific images don't get the recognition that they deserve. These are inevitable in a system like this and we need to just learn to live with them imho.
KuriousGeorge
02 Dec, 2019 16:01
When I look at images, I first compare them to the best professional and best amateur images of the same object. I then look at how the subject image is better or unique. IMHO, doing both of these takes significant time and experience, as even many of the best amateur and professional images have significant flaws.

To help our volunteers, I suggest dedicating each week to a specific type of object, condition or imaging device (TBD) similar to how you might search images. Then they're comparing similar images, rather than getting overwhelmed with the huge range of images we see on this site. The database can also help narrow the weekly selections and possibly "mark" images that don't meet some minimum criteria. I also recommend doing a keyword search on the description to help mark "special" images (TBD) that are easy to miss. Finally, I recommend considering older images, not just those posted in the last week or two.
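To illustrate the keyword-marking idea, here is a minimal sketch of what such a pre-pass might look like. All field names and the keyword list are invented for illustration; this is not AstroBin's actual data model or API.

```python
# Hypothetical sketch of the keyword-marking suggestion above.
# Field names ("description", "id") and keywords are assumptions.
SPECIAL_KEYWORDS = {"discovery", "newly discovered", "mosaic", "first light"}

def mark_special(images):
    """Flag images whose description mentions a 'special' keyword, so a
    busy volunteer scanning thumbnails is less likely to miss them."""
    flagged = []
    for image in images:
        text = (image.get("description") or "").lower()
        if any(keyword in text for keyword in SPECIAL_KEYWORDS):
            flagged.append(image["id"])
    return flagged
```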
gnomus
02 Dec, 2019 17:02
Peter Goodhew
….

I think the problem stems from an expectation that it should, or could, be perfect. I think it would help if we could avoid taking it too seriously, and stop getting upset by anomalies or "injustices" when specific images don't get the recognition that they deserve. These are inevitable in a system like this and we need to just learn to live with them imho.

What Peter said….

For me the hobby itself is its own reward.  Of course, the occasional 'bauble' is nice, but taste is a highly subjective thing.  Some folks are excited by certain images, others find them too exaggerated.  I don't pretend to know why one of my images gets a TP/IOTD whilst another does not.  Certainly some of my personal favourites have not been selected.  So what?  Does it really matter what other folks think?  I think it would be a mistake if I allowed my enjoyment of this great hobby to be dependent upon such things.

I am sure that no matter what system is put in place, there will always be folks who are dissatisfied.

The TP/IOTD system is run entirely by volunteers.  Salvatore struggled (and I believe failed) to get a full complement last time around.  There is little or no incentive for volunteering - indeed the regular criticism is, I would suggest, quite a powerful disincentive.
Edited 02 Dec, 2019 17:13
Snjór
02 Dec, 2019 17:14
Steve Milne
Peter Goodhew
I vote for keeping it, despite its limitations…

What Peter said…. For me the hobby itself is its own reward… The TP/IOTD system is run entirely by volunteers…

Well stated, Peter and Steve.
Edited 02 Dec, 2019 17:15
a028964
02 Dec, 2019 17:45
Sigga
Steve Milne
Peter Goodhew
I vote for keeping it, despite its limitations…

What Peter said…. The TP/IOTD system is run entirely by volunteers…

Well stated, Peter and Steve.

Agreed.

So, as an outsider, what is the pain point from a volume perspective (I am not attempting to solve any subjective challenges :-)), if any? It would appear that at the first level the Submitter has the most demanding task? What parameters do they face? How big is their daily queue on average? How long do images stay in their queue? Do they have to do something to de-queue everything, or do images expire if they don't act? Is this activity difficult to get through from a numbers perspective? Do all submitters see all images, or are they split among submitters?

Also, is there any value to the community in providing data metrics (not naming the individuals doing the evaluations) around the life cycle of their images, i.e. submitted, reviewed, etc.? I read the Google doc (Nicco's, I think?). If some level of guidelines is established and followed, could the data points that are measurable by Submitters, Reviewers, etc. be recorded? Again, this would have to be manageable for the evaluators. This could be given as feedback to imagers in order to improve.
Barry-Wilson
02 Dec, 2019 17:46
As a case in point, I examine the Submission Queue at roughly the same time every evening. This afternoon after work I log in to see 100+ images in the Submission Queue. Anything other than a simple visual assessment of thumbnails as a first pass to help decide which images to allocate time to assess in detail will take nearly 1 hour at 30s per image . . . and you do this every day, seven days per week, 365 days per year . . . I trust that everyone can easily see that it will not be a perfect system, even with a determinedly conscientious approach.
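For scale, the arithmetic above works out roughly like this (a quick sketch; the figures come from this post and are illustrative, not official):

```python
# Back-of-the-envelope estimate of the submitter workload described above.
images_per_day = 100     # approximate queue size reported above
seconds_per_image = 30   # a quick look, well short of a full-size inspection

daily_minutes = images_per_day * seconds_per_image / 60   # 50.0
yearly_hours = daily_minutes * 365 / 60                   # ~304.2

print(f"~{daily_minutes:.0f} minutes per day, ~{yearly_hours:.0f} hours per year")
```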

To encourage a thriving volunteer community, I hope that everyone can enjoy what is achieved by us all and the splendour of our hobby. :-)
a028964
02 Dec, 2019 17:52
Barry Wilson
As a case in point, I examine the Submission Queue at roughly the same time every evening… I trust that everyone can easily see that it will not be a perfect system, even with a determinedly conscientious approach.

It makes sense, and it's not surprising at all :-) So of the, say, 100+ per day average, how many would you say you 'submit'? I'm asking because the obvious (management) question is: let's throw more bodies at it :-). The next level then would be the reviewers … are they overwhelmed?
a028964
02 Dec, 2019 17:53

And how many current Submitters are there?
gnomus
02 Dec, 2019 18:00
Brad
So as an outsider, what is the pain point from a volume perspective

Yes - it's as Barry  has described.  I've done one year as a submitter and two years as a reviewer.  For me, I found the submitter task far more onerous: each day you get all newly submitted images added to the list (unless the user has excluded the image from competitions).  You cannot possibly go through the 100+ images, viewing them full-size, and then analyse them according to certain criteria - on a daily basis.

As a reviewer, I have a little more than 60 images in my 'Review Queue' today.  Of course, I don't get 60 new images a day, but these are nevertheless the images from which I'm expected to pick.

I cannot speak for the judging task, having never done it.  But I have seen them subject to some criticism for their choices.  I don't necessarily like every image they pick (obviously), but I'm not going to complain about it.  As I've said before, if you don't like a particular IOTD, relax - there will be another one along tomorrow.  There are a number of genuine tragedies that occur on a daily basis.  The fact that the occasional worthy image doesn't get a TP or IOTD is really irrelevant.
a028964
02 Dec, 2019 18:03
Steve Milne
Yes - it's as Barry has described… The fact that the occasional worthy image doesn't get a TP or IOTD is really irrelevant.

Again, that makes sense, and the volume is not surprising. If you do nothing on a given day, what happens to the 100+? Do they stay, or does it all reset the next day?
gnomus
02 Dec, 2019 18:14
Brad
Again, makes sense and not surprising about the volume. If you do nothing on a given day, what happens to the 100+? Do they stay or does it all reset the next day?

In both queues, images 'fall off the list' after a certain period of time.  I think it's two weeks, but don't quote me on that.
Edited 02 Dec, 2019 18:14
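If the two-week figure is right (and it is explicitly uncertain above), the expiry rule would amount to something like this sketch; the function name, field, and lifetime are assumptions, not AstroBin's actual code:

```python
from datetime import datetime, timedelta, timezone

QUEUE_LIFETIME = timedelta(weeks=2)  # assumed value; "don't quote me on that"

def still_in_queue(submitted_at: datetime, now: datetime | None = None) -> bool:
    """Return True while an image is recent enough to stay in a queue."""
    now = now or datetime.now(timezone.utc)
    return now - submitted_at < QUEUE_LIFETIME
```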
jtrezzo
02 Dec, 2019 18:23
First, to answer Ruben's original questions for transparency. I am currently a reviewer and previously a submitter.

NGC 7000:
1. The image has no acquisition information entered. This is pretty much a disqualification right from the get-go for most. Take the time to fill out all the capture details, or at least enter them in the description.
2. Bright and common targets like this have hundreds of versions by different people every year. They get tiring, and it's really hard to come across one where you say "wow, that is so much better than all the others." I think a lot of people just get sick of looking at them. In this sense a rare, difficult, or otherwise more interesting target will probably have a better chance at being selected.

Cone Nebula:
1. Again, lacking full information - the exposure detail would be nice. I also think big remote scopes, Hubble data, etc. receive less interest (is this the free public data set they released?).
2. Same as above #2.
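In code terms, the "no acquisition information" rule described in point 1 above could be thought of as a simple pre-check like the following sketch. The required fields are hypothetical; AstroBin does not necessarily implement anything like this.

```python
# Toy pre-check mirroring the informal rule above: an image with no capture
# details entered is effectively out of the running before aesthetics are
# even judged. Field names are hypothetical.
REQUIRED_FIELDS = ("telescope", "camera", "integration_time")

def has_acquisition_details(image: dict) -> bool:
    """Return True only if every required capture field is filled in."""
    return all(image.get(field) for field in REQUIRED_FIELDS)
```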

Brad
And how many current Submitters are there?
Of the original 20 submitters, 10 still submit. I'm not sure how many reviewers remain; many have just given up or quit in frustration, I believe.

Brad
The next level then would be the reviewers … are they overwhelmed.
Absolutely not - we are underwhelmed if anything. Sometimes it's hard to pick from the review queue on a given day either because the submitted images are not that great or there aren't enough to pick from, or a combination thereof.
Edited 02 Dec, 2019 18:28
 