Guidelines for Submitters/Reviewers/Judges

sixburg
24 Feb, 2018 22:51
Carole Pope
Thank you Roberto for the acknowledgement. I will get off your thread now, which I know was not about this, but was just following on from another person's comments.
I'm not trying to hijack this thread, but it has remained on the topic of segregation.  Unless there's something new, there are no objective analyses supporting the segregation of DSW.  I think DSW is being used as a catch-all for remote imaging and for those involved in data subscription models and the like, aka "downloaders".  To be clear, DSW houses both traditional remote imagers and subscription models.

In terms of subjective rationale, even I could agree that it feels better to imagine a scenario wherein users in difficult situations or facing some other hardship are compared to users in the same situation.  Segregated from them would be imagers considered to be in more favorable conditions.

One could also consider subdividing the "home" imagers by Bortle scale, perhaps by those with a permanent setup, or create a category for those who have to drive to sites and set up each time.  One might also consider whether the system is completely automated by an executive program, or semi-automated.  For the remote imagers, one might segregate by equipment sophistication or cost.  One shouldn't forget about average seeing and SQM measures for the remote imagers either.  Finally, there could be a category for just processors.  Maybe even separate the processors depending on where their data came from, or on the total integration of the data set, given the relative ease with which one can process deep data versus short integrations.

The above assumes that there is disparity, unfairness, and disproportionate recognition and rewards (which has so far been debunked).  It also assumes the judges cannot tell the difference.  In fact, the judges are the reason there is balance in the selections today.

-Lloyd
Edited 25 Feb, 2018 01:19
tolgagumus
25 Feb, 2018 11:15
Before we demand that AB should be segregated into different categories because it's unfair, we need to demonstrate that one category is being selected unfairly. I used to be in this "unfair" camp. I realized this was just a gut feeling. I was falling for my own confirmation bias. I would see an image from a remote system and say "aha, here we go again", but not count the 5 images from home systems.

We saw from actual data that judges are doing a good job selecting images. They should scrutinize images made with better equipment, under better skies, etc.
rob77
25 Feb, 2018 12:39
Well, actually I ran a very simple statistic a few days ago over the last 360 IOTDs.
Around 7.5% of these IOTDs are images created with DSW data. The incidence of DSW images over all the Astrobin images is around 2.3%.

We can trivially and roughly say that if DSW grew to 10%, we would see a DSW IOTD every 3 days. Is that fair? I don't know.
Personally, I am not too worried about this topic (I process many kinds of data: my own data, pro data, DSW, public data, etc…), but I can understand that someone is raising their hand.
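
[Editorial sketch] A rough illustration of the arithmetic behind that extrapolation, using the numbers quoted above; the assumption that the over-representation factor stays constant as DSW's share grows is mine, and the variable names are illustrative only:

# Rough extrapolation of DSW representation among IOTDs (hypothetical, not an official statistic).
iotd_share = 0.075      # fraction of the last 360 IOTDs using DSW data (from the post above)
site_share = 0.023      # fraction of all AstroBin images using DSW data (from the post above)

factor = iotd_share / site_share                        # over-representation factor, ~3.3x

projected_site_share = 0.10                             # "if DSW grew up to 10%"
projected_iotd_share = factor * projected_site_share    # ~0.33, assuming the factor holds

days_between_dsw_iotds = 1 / projected_iotd_share       # ~3 days between DSW IOTDs

print(f"over-representation factor: {factor:.1f}x")
print(f"projected IOTD share: {projected_iotd_share:.0%}")
print(f"roughly one DSW IOTD every {days_between_dsw_iotds:.1f} days")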

Cheers
Edited 25 Feb, 2018 12:39
tolgagumus
25 Feb, 2018 13:08
Roberto Colombari
Well, actually I ran a very simple statistic a few days ago over the last 360 IOTDs. Around 7.5% of these IOTDs are images created with DSW data. The incidence of DSW images over all the Astrobin images is around 2.3%.
Actually, not every image that says DSW on it means the data is from DSW's member systems. There are 20+ systems there and only 6 of them are open for membership. I have a system there that I set up myself, and it has nothing to do with them. There is no way for you to know which is which unless you know what equipment is available for members. So the number is probably not accurate.

I have two IOTDs in October, for instance, that are marked DSW but are not DSW data. This is not a shared system, and it was not set up by them.

Here is another one from another imager in the same situation.
rob77
25 Feb, 2018 13:55
Not too difficult to do.
I know which 27 IOTDs have the DSW pattern in them.
I will open them one by one and exclude these cases.
Jooshs
25 Feb, 2018 14:29
I was thinking about this a lot last night, as this topic has veered back towards the old segregating-data discussion that always seems to happen…  An idea that struck me that might be interesting is to have the 3 kinds of judges be given a selection of images and write their feedback on them.  I think my main confusion isn't necessarily how some images are selected to be IOTD, but maybe how some images get judged vs other images.  For example, an interesting exercise might be to pick 6 images from the last 6 months, 2 that weren't Top Picks, 2 that were, and 2 that were IOTDs, and just have the current sitting judges for each level give their feedback on those images and describe whether they would consider them a submission, a TP, or an IOTD…  I think it could be quite instructive to the community to see what a range of judges sees when they look at certain images.
Barry-Wilson
25 Feb, 2018 14:33
I have read this thread (and the IOTD/TP Manifesto) with great interest, seeing constructive, collaborative idea-building but also, with some surprise, non-inclusive views that seem, perhaps inadvertently, to be wringing the joy and celebration of wonder out of our astronomical hobby.  Astrobin has breadth of diversity and is stronger for it.

Taking a step back to analyse the data in order to help define and better understand whether there is an issue is entirely appropriate, I believe.  It is also, I hope, instructive for the whole community.

As a DSO imager, I take great delight in viewing planetary, lunar and solar images, as I am not familiar with these techniques but fully admire the resulting images.  I have no idea whether stunning images from this section of the AP community are proportionately represented within TPs and IOTDs.  BTW, I'm not suggesting a glut of data analysis here  smile . . . . just a comment: as the data is helping demystify opinions about "downloadable" data, it may also help as a broader indicator of fairness.

I would hope that any analysis carried out will help inform the selection process and guidelines for submitters/reviewers/judges: having guidelines seems sensible and will not stifle an individual's judgement as they go about their voluntary role for the benefit of all.  Bravo I say smile .

Barry Wilson
rob77
25 Feb, 2018 14:37
I reviewed the numbers, as Tolga suggested. I am posting here: https://www.astrobin.com/forum/c/astrobin/annoucements/iotd-and-top-picks-manifesto/?page=1#post-9584

Cheers
joelkuiper
25 Feb, 2018 16:46
I basically added two collections to my profile, one for my urban Hyperstar setup and one for the remote observatory that I own at AstroCamp but share with two others. The AstroCamp one is interesting, since I bought it from one of the people who in time wants to leave. But we/I do maintain it properly (in fact, we'll fly out to fix the collimation and replace the imaging train soon). I can tell you, it's not trivial to keep a remote observatory running even if there is on-site support. Drilling holes in carbon fiber requires careful planning, even more so because it's in the middle of nowhere. It's a different kind of struggle. And having both, I can say processing either is not easy, since I haven't gotten an IOTD (and might never!).

Proper astronomers rarely use their own telescope, btw; they all use grant-funded expensive stuff. Is that fair? Maybe every scientist should use their own equipment! Hubble is just unfair to all those struggling backyard star gazers! But you can kind of see the flaw in this reasoning. Astronomers don't care, because the output is science. The method of obtaining the data is secondary to the analysis done on it and the findings that get published.

Similarly, I feel that for astrophotography the goal is art (or pretty pictures if you will), and that comes down to processing. So why doesn't everyone just chip in to maintain a good set-up and get creative? Well, herein lies the important distinction between art and science: a feeling of ownership from the author. I'm sure a lot of highly technical painters feel similarly about Dada art: "it's not fair this is in MoMA and my thing that is much harder to create is not". But that's obviously not true. And this discussion in the art world has been raging since, well, forever. What is art? Most will postulate that there is at least a subjective part to it and a cultural (inter-subjective) part. The rules change; visit any modern art museum and people will disagree, even the "professional" critics. And they will disagree over time.
Edited 25 Feb, 2018 16:50
sixburg
25 Feb, 2018 19:17
Joel Kuiper
I can tell you, it's not trivial to keep a remote observatory running even if there is on-site support.

Joel, good point.  We live this every day at DSW.  I'm going to extricate myself from the IOTD debate.  Maybe I will start a new thread about good remote imaging protocols.

There's a pervasive sense that once you're remote you have it made…life is easy.  This is not the case at all.  There's no good reason to get into the back and forth about whose suffering is greatest, but I agree that remote imaging has its own set of challenges that may be under-appreciated.  By the way, I live about 3,200 km from DSW.  We have local support personnel/employees who are critical when you can't put your hands on your system.  All that being said, I wouldn't trade it for my backyard again.

What do you do when the power fails?  Lost scope?  Poor weather?  WAN goes down?  LAN goes down?  UPS runs out?  Pier crash?  Bad collimation?  PA is off?  Focuser slipping?  Rotator stuck?  Dead PC?  Dead power supply?  Dead anything?  Stuff happens remote or not.

What we've found is that many imagers have a "do it yourself" mentality and like to "tinker".  Those who go remote lose this, sometimes to the point of being unhappy with the remote decision.  Sometimes a remote imager is just not prepared: their systems work well out in the field or the backyard, but are not designed for night-after-night operation.  On the other end of the spectrum you have those who've done the proper preparations, install their systems, fire and forget, and just let their systems run and run (this is an oversimplification of reality).  Eventually everyone gets to the point of near hands-off operation.

Is this not a feat?  Does this not compare favorably with the person who travels and sets up every night and sits with their system into the wee hours of the morning?

The technology and innovations will continue to advance regardless of the decisions made here.  Not too long ago DSW would not have been practical.  Some will be able to take advantage of those advancements and others will not.  The whole point of the DSW subscription model is to make data available to those who otherwise cannot get it.  The majority of the DSW operation is to provide access to skies that are quickly disappearing.  Neither is concerned with gaining an unfair advantage on AB IOTD, and at present we have no such advantage.  DSW will continue regardless of the decisions made here.  We appreciate the opportunity for our members to display their images in this focused community.

Best to All,
-Lloyd
rob77
25 Feb, 2018 19:37
Lloyd, we are on the same page.
I do not think that the suffering behind an image should be a relevant criterion for IOTDs/TPs.

But at the same time, I am quite convinced that we should try, at least for a few weeks as a test, to have 2-3 IOTD types. Like I said: "astrophotographers" (backyard or own remote equipment, and itinerant imagers) on one side, and "processors" (people just processing data: downloaded data, public data, pro data) on the other.
I don't believe that breaking things into more specific categories will help. It will probably just generate more confusion.

Cheers
Edited 25 Feb, 2018 19:38
patrickgilliland
26 Feb, 2018 08:53
I have said this elsewhere but it is worth clarifying here.
As per Jan's concerns - I do not think it is a rulebook Roberto is aiming for, or that we should have one. It is two things.
1. An IoTD criteria - the staff have to accept the boundaries of what they are judging.  E.g. pro data, backyard, minimum gear listed, dates, etc.
2. A list of guiding principles.  E.g. is it cropped, is the NR good, is the object better than your average version (a text addition would be good potentially and would require the judge to comment on why they selected this one*), is the target rare/unique, has something new been tried, is there an additional level of technicality (e.g. mosaic, filter combo), is it a target with low SNR, etc.  All valid criteria (and there would be more, but not too many).
3. Yes, I know I said there would be only 2, but by virtue of 1 and 2 being in place the judge is then free to exercise their taste - they will be doing so within the bounds of 1 and 2 (this may seem to limit, change or even expand what people had previously considered), but it means the boundaries are known by all.  No one knows what these are now, so they simply assume.

I don't agree 100% with limiting targets per season.  This could actually stop the best image of the year being picked if someone has already taken the accolade for that target.  Normalising the frequency of IoTDs, though, could be considered.  The image tags (subject to plate solving) the items in the frame: if you have just, say, an M42/NGC 1976 frame, then these appear as tags.  If other images are of similar frame size and content they can be matched.  If the IoTD system finds a win within the last x days it will still allow the IoTD, but will push it out y days from the last time that target won.  If it is a larger frame with more content (level to be worked out) it will have more tags and thus not match, so it is allowed without the normalisation.
This same tag approach could be used by judges if a review tool/button was added.  It would simply review IoTD winner tags in the last 30 or 365 days and show any matches, maybe with links to the matches.  30 is useful to show what has been seen this year and allow the judge to compare; 365 is useful if it is a rarer target and you need to drill back (e.g. LBN 777).  Providing judges with easy-to-locate comparisons would be very useful.
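
[Editorial sketch] A minimal illustration of how such tag matching with a cool-down could work. The data model, the equality-based matching, and the 30-day value are assumptions for the sake of illustration, not anything AstroBin actually implements; treating the cool-down as a temporary "not eligible" flag is a simplification of the "push out y days" idea.

from datetime import date, timedelta

# Hypothetical sketch of the tag-based normalisation idea above.
# "tags" stands in for the plate-solved objects in the frame, e.g. {"M42", "NGC 1977"}.

COOLDOWN_DAYS = 30   # the "y days" push-out; the actual value would need to be worked out

def matches(candidate_tags: set[str], winner_tags: set[str]) -> bool:
    # A wider mosaic carries more tags, so it does not count as the same framing.
    return candidate_tags == winner_tags

def eligible(candidate_tags: set[str],
             recent_winners: list[tuple[date, set[str]]],
             today: date) -> bool:
    """Hold back a similar framing while a recent winner's cool-down window is active."""
    for won_on, winner_tags in recent_winners:
        if matches(candidate_tags, winner_tags) and \
           today - won_on < timedelta(days=COOLDOWN_DAYS):
            return False
    return True

# Example: an M42 close-up won 10 days ago, so another M42 close-up waits,
# while a larger mosaic containing M42 plus more content is still eligible.
winners = [(date(2018, 2, 16), {"M42", "NGC 1977"})]
print(eligible({"M42", "NGC 1977"}, winners, date(2018, 2, 26)))                    # False
print(eligible({"M42", "NGC 1977", "IC 434", "B33"}, winners, date(2018, 2, 26)))   # True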

An exceptional image should get IOTD even if it is of M42 - but a very good M42 is just one of a high number of M42s and should not.  All of the above, with the judges' discretion, should, in theory, control this.

*Asking a judge to justify a selection after the event is not nice; they are volunteers, and even the best of us may pick an image some others don't like. The text line (which could just show, with no name, as a rollover when you move to the IoTD symbol) is in effect asking the judge to share their thoughts and how they justified their selection in advance.

Cheers

Paddy
Jedi2014
26 Feb, 2018 12:14
I am following the current discussion on the IOTD with great interest and, after a long period of hesitation, I would like to contribute my point of view to the process of opinion-formation. I hope you're not bored by my more detailed statement, because so much has already been written… :-)

Last year I was a Submitter, this year I'm a Reviewer, and I've had many IOTDs myself. So I know a little bit about the current selection process.
Astrobin is a public community for amateur astrophotographers, where we show our homemade Pretty Pictures. I consider the IOTD award to be a great honour and motivation for the intensive work I personally invest in my hobby.

The process that a picture has to go through before it becomes IOTD seems to me to be relatively grassroots-democratic. It is clear that not all decisions are made by everyone - even I would never have chosen some of the last IOTDs myself. But think about it: it's a three-stage process. A future IOTD must first be approved by a submitter, then also by a reviewer - and last but not least, it requires a majority of the jury's votes. The picture could not have been quite so ugly in the end if so many people voted for it. That's what democracy is like.

I am in favour of setting minimum technical requirements and recommending them to all those involved. Since we are all volunteers, it can only be a code of honour. I do remember, however, that Salvatore, as a Submitter, pointed me to some of the things that have been written here. In that case, a lot of this may already be in people's minds here. Then it would at least be possible, through this discussion, to clarify the code of honour and write it down again.
Personally, I am a fan of pragmatic solutions. Anything that complicates things unnecessarily, I reject. It's still about fun, not about winning anything! Unfortunately, I have the feeling that not everyone feels this way.

So how do I decide as a reviewer (or prior to that as a submitter) which image to click on and which not?

1. Since I only have three votes per day, I first scroll through the list in a quick walkthrough to see if there is a picture somewhere that I immediately notice. If so, I continue with 3.
2. I then go through the list from bottom to top slowly, picture by picture, to see whether an image in the list arouses my interest.
3. I look at it in full view and evaluate the overall image quality (noise, stars, color, artifacts, overall processing).
4. I look at the overview with the image data (is it more or less complete?).
5. Now I decide whether I like it better than the other pictures in the list. If so, I click on it.

This can also be a standard object such as M 106 or M 42. Either the picture has something or it doesn't. This is where "taste" comes into play. Exclusion criteria for me are:

- All images whose data was not acquired by the user (Hubble, downloaded raw data)
- Pictures without the most important information (location, equipment, exposure data)
- Overprocessed images with unclean edges, denoise artefacts, etc.

In my opinion, there is no need to make it more complicated. smile

Greetings
Jens
Astroholic
26 Feb, 2018 14:25
I'm sceptical about the "minimum of filters" criterion.
With a DSLR, or simply with B&W images, we don't have any filters (OK, DSLRs have 3 permanent filters built in, but I think that's not the point).
rob77
26 Feb, 2018 14:36
Jens Zippel
- All images whose data was not acquired by the user (Hubble, downloaded raw data)

This is why we should implement some categories - just a few of them, IMHO.

Jens Zippel
it requires a majority of the jury's votes.
Nope, it doesn't work that way. Judges are lone wolves. The IOTDs are not selected based on votes.

Cheers
2ghouls
26 Feb, 2018 14:49
Gernot Semmer
I'm sceptical about the "minimum of filters" criterion. With a DSLR, or simply with B&W images, we don't have any filters (OK, DSLRs have 3 permanent filters built in, but I think that's not the point).
I remember being confused by that statement at first too, but I think what was meant was that filters (and integration time per filter) should be listed as a "minimum" requirement for receiving a Top Pick, not that one should strive to reduce the number of filters used.
Jedi2014
26 Feb, 2018 15:30
Roberto Colombari
Nope, it doesn't work that way. Judges are lone wolves. The IOTDs are not selected based on votes.
Hi Roberto, it does not? But how is the decision made then? We have several jury members and several images sent to the jury by the reviewers.
As I am not a jury member, I only assumed it must work like this. If it does not, this should be the FIRST thing to be changed!!!
Edited 26 Feb, 2018 15:31
rob77
26 Feb, 2018 16:08
Each judge can select one of the TP images per day as IOTD.
Edited 26 Feb, 2018 16:10
Jooshs
26 Feb, 2018 18:29
So, I mentioned above that some comparisons of TPs vs non-TPs vs IOTDs may be helpful, and I didn't hear a response. (I've found the IOTDs to be nearly universally very good, so I am limiting this to TPs.)  It was encouraging to see some of the judging guidelines that people laid out above, but they seem to be applied inconsistently, or maybe they aren't universal.  While the criteria mentioned in this thread seem straightforward (the non-classification criteria), I'm still confused by the output of such criteria.  It'd be great to see how those criteria were applied to some of these comparisons from this year.  I think it may help the community that wonders how they can achieve an IOTD or Top Pick.  To simplify things, I made sure to show images that were all from individuals' own equipment, and ones that weren't selected as Top Picks that had enough exposure to be considered.

This is in no way meant as an indictment of the images that were selected, but just a few minutes of searching came up with all of these examples from this year that seem to be inconsistent with the guidelines discussed above. It'd be awesome if some current judges who pushed, or didn't push, these along could weigh in.   (Also, this isn't about finding more TPs for the images below; I'm pretty sure the imagers whose images weren't selected have plenty of TPs and IOTDs and don't care that much about getting more.)…

I also now see that Paddy isn't in favor of describing why a selection was made afterwards, but I guess I respectfully disagree, even though I have agreed with most of his other points here.  If it was just that something was missed or overlooked, that's fine too!  We are all human, and I totally get that it's just volunteers' time on this.  Just some real-life comparisons may be helpful in this discussion.  If anyone thinks comparing these images is out of line, please let me know and I can remove them, but it seems like this has been an open and friendly conversation, so I assume it can continue this way.

Maybe just state below why the non-selected ones weren't picked and why the selected ones were?

Here are some recent examples… (the images were attached under each label)

Not Top Pick
Top Pick

Not Top Pick
Top Pick

Not Top Pick
Top Pick
2ghouls
26 Feb, 2018 19:09
Josh Smith
It was encouraging to see some of the judging guidelines that people laid out above, but they seem to be applied inconsistently, or maybe they aren't universal.  While the criteria mentioned in this thread seem straightforward (the non-classification criteria), I'm still confused by the output of such criteria.  It'd be great to see how those criteria were applied to some of these comparisons from this year.

I think this thread is about coming up with some general criteria or "guidelines" that new submitters/reviewers could be given when they start the job as "IOTD Staff" to help them choose Top Picks. I don't think there are any established criteria currently. Jean-Baptiste Auroux shared how he personally evaluates images for Top Picks, but I don't think we know how many of the other submitters/reviewers make their choices. (If I missed some in this thread, I apologize). I think Jean-Baptiste's rationale was very well thought out, but I don't think we know that his thinking on the subject is universal as you say. I think it might be more accurate if you said: It'd be great to see how the criteria Jean-Baptiste shared "would" be applied to some of these comparisons… since we don't know that all submitters/reviewers follow the same guidelines now.

On another note: I would be curious about your analysis of these comparisons for my own edification. Understanding what people see in images helps me improve. If you don't want to share publicly, I would appreciate a private message.

Cheers, Nico
sixburg
26 Feb, 2018 19:27
2ghouls
I would be curious about your analysis of these comparisons for my own edification. Understanding what people see in images helps me improve. If you don't want to share publicly, I would appreciate a private message.

Nico, if we received balanced, qualified constructive criticism it would be fantastic.  This would significantly differentiate AB from basic photo sharing sites.  Currently and unfortunately there is no mechanism to do this.  I'm not even certain there is mass appeal for such feedback.
patrickgilliland
26 Feb, 2018 19:34
Josh Smith
I also now see that Paddy isn't in favor of describing why a selection was made afterwards
Sorry - I think I misrepresented myself then  smile
Asking why a bad image was included is a fair question.
What I was trying to say was: ask for the comment in advance, and it turns into a pro-active, positive step (and will add some thinking, rather than flippant clicking, into the process), rather than a reactionary reprimand after the event, which will always have people on the back foot.  (Note to self: no comments in the morning until after caffeine dose 3!)

A TP, as I understand it, is a submitted image that has been reviewed and passed on to the judging queue.  Personal taste may be an issue; it is also possible that not every image is examined, or that the thumbnail is viewed rather than the full-size image and details.  Some of it might be down purely to timing.

Maybe Salvatore will know, but I wonder how many images receive 2 submits (this could be set as a minimum) - are there enough reviewers to do the same for the judging queue?  It might not be perfect, but it will help rationalise those in the queues; not sure how those being missed can be dealt with, though.

Paddy
Edited 26 Feb, 2018 19:36
nekitmm
26 Feb, 2018 19:58
The idea with 2 submitters and 2 reviewers is good, I like it a lot.

I also have another idea: what if the option to submit/review an image was available to a user only after they had opened the image in both small and full sizes? Right now you can submit it just by looking at its preview, and it's sometimes too tempting to click without thinking.

This is a bit harder to implement, but may work well. It will take more time for volunteers, but after all, if someone volunteered as a submitter/reviewer, I think it is fair to ask them to spend a few seconds to see the image in all sizes.
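
[Editorial sketch] A minimal illustration of how such a gate could work. The SubmissionGate class, the record_view/can_submit names, and the "regular"/"full" rendition labels are all hypothetical, not AstroBin's real API; the idea is simply that the submit action stays disabled until both renditions have been opened.

# Hypothetical sketch: gate the submit/review action on having opened both image sizes.

REQUIRED_VIEWS = {"regular", "full"}

class SubmissionGate:
    def __init__(self):
        # (user_id, image_id) -> set of rendition sizes the user has opened
        self._views: dict[tuple[int, int], set[str]] = {}

    def record_view(self, user_id: int, image_id: int, size: str) -> None:
        self._views.setdefault((user_id, image_id), set()).add(size)

    def can_submit(self, user_id: int, image_id: int) -> bool:
        # True only once every required rendition has been viewed.
        return REQUIRED_VIEWS <= self._views.get((user_id, image_id), set())

# Usage: the submit button stays disabled until both sizes were opened.
gate = SubmissionGate()
gate.record_view(user_id=7, image_id=42, size="regular")
print(gate.can_submit(7, 42))   # False
gate.record_view(user_id=7, image_id=42, size="full")
print(gate.can_submit(7, 42))   # True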

I think that submission is the crucial step, and we need to put more thought in here. The quality of submissions will mostly determine the overall quality of TPs and IOTDs, because if an image was not submitted, there is nothing the other stages can do.
Edited 26 Feb, 2018 19:58
2ghouls
26 Feb, 2018 20:01
Deep Sky West (Lloyd)
Nico, if we received balanced, qualified constructive criticism it would be fantastic.  This would significantly differentiate AB from basic photo sharing sites.  Currently and unfortunately there is no mechanism to do this.  I'm not even certain there is mass appeal for such feedback.

There is one mechanism on AB that I am aware of, and have used myself: https://www.astrobin.com/forum/c/astrophotography/critique-requests/
When I used it, I did find the feedback very helpful.

It would be nice if it were more widely known about and used. Personally, I feel I am just getting to the point where my constructive criticism on an image might be valuable. It can take a fair amount of experience to recognize what could be improved in an image.

And sorry - I didn't mean to derail the thread in any way, as this is a bit off-topic now.
Jooshs
26 Feb, 2018 20:15
Nikita Misiura
I also have another idea: what if the option to submit/review an image was available to a user only after they had opened the image in both small and full sizes? Right now you can submit it just by looking at its preview, and it's sometimes too tempting to click without thinking.

Man, I think that is a huge one personally.  So many images look good until they are opened, and the difference in processing effort and skill is not revealed unless an image is fully opened.  Making a thumbnail or small preview look good is easy; making a full image stand out and stand up to scrutiny deserves recognition.

2ghouls
There is one mechanism on AB that I am aware of, and have used myself: https://www.astrobin.com/forum/c/astrophotography/critique-requests/
When I used it, I did find the feedback very helpful.
Thanks for sharing!  I forgot about that.

Paddy Gilliland
What I was trying to say was: ask for the comment in advance, and it turns into a pro-active, positive step (and will add some thinking, rather than flippant clicking, into the process),
Completely agree.
 