IOTD and Top Picks Manifesto

Jean-Baptiste_Paris
19 Feb, 2018 21:29
Paddy Gilliland
Personally I feel categorising devalues the final IOTD

I totally agree with that; same feeling for me.

KuriousGeorge
We just need to be sure we mix it up so no one category is exposed more than another. For IOTD, that might be something like this, depending on # of submissions…
Sunday - INDIVIDUAL
Monday - GROUP/HOSTED
Tuesday - PROFESSIONAL
Wednesday - TERRESTRIAL
Thursday - INDIVIDUAL
Friday - GROUP/HOSTED
Saturday - PROFESSIONAL

IMO, this kind of categorization is the worst possible option:

1/ It cuts the link with current events;

2/ It ensures a fixed quota for some categories (and a quota can be unfair one way or the other… for example, only one day per week for "terrestrial" even if there's no great pic in that category in a given week; or just one even if there are five admirable terrestrial pictures that week?);

3/ This change is presented as positive for all users, but is it fair to "guarantee" nearly 30% of the IOTDs to "Professional data"? How many users does this concern? Is there any reliable estimate of the users currently working with professional data on AB?
I seriously doubt it's 30%… and even with only 1 IOTD per week, so about 15%, it would still be over-represented…

jb
Edited 19 Feb, 2018 23:08
rob77
19 Feb, 2018 21:44
As far as the pro data issue is concerned, we have 2 scenarios:

A - the first 2 tiers (submitters and reviewers) don't promote the image to the judges AND/OR
B - the judges believe that the image must not be IOTD because it's pro

In both cases, there is a clear categorization happening in the minds of the staff: pro data compositions must not be awarded.
Well, I know pro data imagers are few  smile but it would be nice to try to solve this issue.

I'm not speaking so much for my own case (as a judge my images can't be IOTD), but rather to encourage the growth of this branch of our hobby, which I find very interesting and challenging!

Cheers
Edited 19 Feb, 2018 21:45
astrophotons34
19 Feb, 2018 21:58
AtmosFearIC
I’ll rewrite more of what I did yesterday before the crash when I have some more time, but for now I’ll just mention that, with regards to Roberto and the pro data, if it doesn’t get a Top Pick then it isn’t making it to the judges in the first place. My understanding is that a Top Pick is any image that is referred to the Judges but doesn’t make IOTD.
Hi, I have a question: yesterday my image was in the Top Pick list and today, after the crash, it disappeared. Do you think someone could put it back?
Thank you for your answer! smile

https://www.astrobin.com/334015/?nc=user
Edited 19 Feb, 2018 21:59
sixburg
19 Feb, 2018 22:20
Paddy Gilliland
I see this might be hard to manage on an ongoing basis; I would be delighted to be proved wrong though.  It would also be a shame to lose the peer element of the current set-up - but having them define the scope of IOTD would be a great start; this guide would serve all judges, experienced or otherwise, well.  It would also start the definition of what 'it' actually is.

I believe getting an "expert panel" might be difficult, for sure.  And they too would have inherent biases.  However, I think it could be superior to peer review, especially since they would be recognized experts.  How to get them, how to compensate them, etc. could all be worked out if this direction were chosen.  I think to kill the idea due to perceived difficulties would be premature, and we would miss out on the opportunity for some serious feedback and for clearly differentiating AB and making it a go-to place for true critique.

Let's take it a step above basic "likes" and all that.  What we do is highly specialized, but the feedback we get is not.  Do we not want or deserve more?  This isn't just a run-of-the-mill social media setting.  We spend serious resources (time, skill, money, etc.), and to have our work subjected to the vagaries of classification problems, perceived unfairness, unclear criteria, etc. seems out of step with what we do.
Edited 19 Feb, 2018 22:24
patrickgilliland
19 Feb, 2018 22:44
Roberto Colombari
the growth of this branch of our hobby
Another good point - promoting all areas is important.  I have no issue with pro data if something good has been done with it: a massive mosaic, or a detailed look at a rare area.  I don't want to see another M16 HST image though; I feel I am more than capable of deciding that for myself  smile  When I was a judge I simply looked for excellence and removed all bias from my process. I did use my knowledge and experience to make decisions and weight images, but you can't set criteria for all the knowledge we all have.  I would afford this same privilege to the judges.

If pro-data is too often overlooked then the 'guide' should state pro data should be considered on par with am data (although the actual review criteria might be different) - key thing is to consider it.

If the approach was simply to ensure pro data gets an IoTD now and then, once a week might be too often (as JB points out).  No idea on the ratio (of amateur data to pro data), but I suspect it is closer to 50+:1 than 6:1.  But then giving it 1 slot every 50 days is not promoting it either!

I wonder if people are trying to solve wider issues by changing IoTD when in fact it might not even be the right vehicle for the change?

Here's another 'idea'  - ready for this  smile  - IoTD stays exactly the same.  But:
1. Have the guide in place to ensure all qualifying images are considered (if you get 3 pro-data images in a row because they are the best images, so be it; it will balance out in time).
2. An optional category such as pro/remote/owned/backyard/deepsky/nebula/galaxy/lunar/solar/planetary/satellite/aurora/landscape/comet etc., or whatever anyone feels is necessary (I'm staying out of that area as I am really not concerned by it, but once demonstrated as valid I would not object either). I do have an issue in that the categories listed are not of the same type (object type vs data type - hence why I have trouble seeing how it would work if this granular).
But…
If this were added and an image is selected as TP, then the top image from each category shows on the homepage on a rotating banner or similar.  There are still weighting issues there, but specialised areas would get onto the home page and be promoted.  You could also have a 'top picks by [category]' menu.
You could divide the submitted screen into categories, with x picks per category, to ensure a selection in each is encouraged.
3.  Start a new thread on areas of the site that could better promote specialised and niche areas - unlike some, I want to see them, not exclude them.  I'm just not sure it is the same as this topic.  There is no silver bullet; IoTD changes will not on their own encourage more users of niche x or y.  I am starting some spectroscopy soon, very niche here, but if there is a place for it…

But - these are just more ideas, and again it shows the need to define the scope first, then analyse - not the other way around (I know I am sounding boring now).  Ensuring good exposure for all kinds of data is a good new 'requirement' to add to the analysis though, not commonly mentioned previously.

PS: While it is hard work smile, this is a far more productive (and polite smile, excepting the earlier 'cheat' statement) return to this topic for many of us. For those discussing it for the first time, please note many of us have done it before - the point?  Don't take anything I say too seriously; I will challenge everything I see as lacking a basis until I see that basis. This is only in the interest of what I have previously learnt (right or wrong) and of trying to consider all points openly.  But unless there is demonstrable, unequivocal evidence that something is right or needs to change, it should not be implemented on a 'see how it goes' basis - not for my sake but for all users, and for Salvatore to a degree; there's no point him designing something that does not work or costs him users.
Edited 19 Feb, 2018 22:47
patrickgilliland
19 Feb, 2018 22:57
Deep Sky West (Ll...
I think to kill the idea due to perceived difficulties
As I said, I do perceive difficulties but [really] would love to be proven wrong, as if it could work it could be great.
Question: what is an expert? Would Roberto be one for pro data; you or I, Sara W and others for deep sky? Or do you mean going more professional than that?  If so, who?  Are professionals the right people to judge amateur work?  Selecting the wrong pros could be just as contentious as the wrong peers.
I like the idea, don't get me wrong - just trying to get aligned with your thinking on the idea.
Edited 19 Feb, 2018 22:58
AtmosFearIC
19 Feb, 2018 23:13
I’ve mentioned this a few times around the place, but I think one of the biggest improvements for fixing the “issues” is to change the judging system so that the judges work as a panel, as opposed to the current “lone wolf” situation.
If judges were able to vote, rank or pick images from the Judging Queue, this could also be incorporated with the Judging Comments feature that has been discussed.

I did ask the question (but it, like three pages of other stuff, was lost): along with Paddy, what makes an Expert Judge?
I’ve done some judging with general photography and given talks on astrophotography but I’ve only been doing serious AP for the past 2.5 years.

As for the “hosted” data situation with DSW or that ilk, my personal bias against that data has nothing to do with who owns the equipment, whether it’s remote, or where it’s located.
For me personally it is to do with the number of images I see from the same data. The Rosette Nebula has been a recent example where I have seen a number of IOTD contenders, all very well processed, all slightly different due to colour mapping, sharpening, masking and potential deconvolution.

And this is my point: a handful of very well processed images with the exact same data that all look similar, despite some processing differences. Do you pick one of those because they’re all very good, or pick a non-M42/Rosette/M45?
sixburg
19 Feb, 2018 23:29
Paddy Gilliland
I like the idea don't get me wrong - just trying to get aligned with your thinking on the idea.
Hi Paddy,
My thinking on "experts" is some subset of those who have history and a proven track record in the hobby.  Perhaps they're authors, teachers, or widely recognized in the "field".  I think experts are absolutely the right people to judge amateur work.  Amateurs can judge (and have), but it seems to me an expert could offer more.  I consider myself to be an amateur, and I can and have helped others.  But could I do it to the degree of those far better than me?  Probably not.  I'm not talking about processing help here… I'm talking about assessment of the final product.

There are clearly a set of difficulties not limited to:

  1. Who are the experts? Can we even agree?  Do "we" need to, or can Salvatore decide?
  2. Why would they participate?  What's in it for them?  Would they need to be compensated?  If so, how, how much and by whom?
  3. Will they provide critique?  If so, at what level and how?  By some sort of criteria checklist?  Will it include free-form prose?
  4. How much time will it take on their part?  Can we limit that time by the way we ask them to provide feedback?
  5. And so on…not insurmountable in my opinion, but I have a hard time with "no" and "can't" ;-)
Those who, in my view, could provide the most valuable feedback don't participate in AB to my knowledge.  And while several of us could likely add substantively to the critique, my vision is that it would come from "the outside"… imagers not normally active on AB, which would disqualify some candidates.  For example (in no particular order, not exhaustive, and with every other qualifier a listing like this might need):

  1. Gendler
  2. Gabany
  3. RBA
  4. Block
  5. Walker
  6. J. Davis
  7. Moore
  8. Crawford
  9. Goldman
  10. Cannistra
What would make these folks participate?  Heck if I know today, but I hope this answers your basic question.  I've no answers (yet) to the larger questions I posed above.
keithlt
19 Feb, 2018 23:37
It's easy peasy for now. IOTD as it is, is fun, inspiring and quite new every day or so. If you want an image of a category, make a group. Love all your passion and ideas, but it's time to feed the dogs and do some chores.
Edited 19 Feb, 2018 23:39
patrickgilliland
19 Feb, 2018 23:40
AtmosFearIC
the number of images I see from the same data
If it is winter you will see a lot of M42, for example.  So for every hosted-data image there will be n x non-hosted-data versions, far outweighing the hosted ones.  It's also rare that everyone will publish a shared dataset at the same time; in fact, I try to avoid that scenario.  It will happen, but I guarantee there will be many more versions of the same object that week.  I see the point, but ultimately we are limited by what's up above, and there is always overlap, shared or backyard.

Likewise, hosted data does not make your output a certain success, you still have to process it well.
For me, it was about the best image in the pool, without bias but with knowledge and experience applied.  I never found I had an issue with the same image from the same source (to be honest, I can generally pick the best-processed version quite easily as well); if it was the best image it won, and if it was not, something else did.

AtmosFearIC
judges work as a panel as opposed to the current “lone wolf” situation.
This would be great; the implications are quite onerous though.  All judges would need to communicate with each other constantly, rather than as it is now where they can do it at their leisure with minimal overheads.  I logged in when I had time and reviewed; I simply would not have had time to spend hours discussing, debating and being part of a team in that process.

Another option is not to award the image with a single click as per the current set-up, but to have images in the queue rated on a simple 1-5 radio button.  Set a minimum number of voters, e.g. 2, and a max of say 5.  A judge applies scores to those they want; not all are mandatory.
15-18 points: auto IoTD (set the queue so it's working say 7 days in advance - there will thus be a week with no IoTD).  But if on day 7 no image has scored 15-18, it takes the next highest scoring image and puts it in the queue.  All details other than your own score are hidden from other judges - you will not know whether someone else has voted or what score they gave.  This removes the lone wolf; at the minimum you have 2 people voting, excellence will push through with higher scores quicker, and there's a safety net for when those numbers aren't being hit.
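For concreteness, the scoring scheme described above could be sketched roughly like this (a minimal illustration only; the function name, data shapes and tie-breaking are assumptions for the example, not an actual AstroBin feature):

```python
# Hypothetical sketch of the panel-scoring idea: judges rate images 1-5,
# a total inside the auto-award band wins, otherwise the highest total
# with enough voters is taken. Names and shapes are illustrative.

def pick_iotd(scores, min_voters=2, max_voters=5, band=(15, 18)):
    """Pick an IOTD from a queue of {image: [judge scores, each 1-5]}."""
    lo, hi = band
    # Only images with an acceptable number of voters are eligible.
    eligible = {img: sum(v) for img, v in scores.items()
                if min_voters <= len(v) <= max_voters}
    if not eligible:
        return None  # day 7 with no qualifying votes at all
    # Outright winners: total score lands inside the auto-award band.
    winners = [img for img, total in eligible.items() if lo <= total <= hi]
    if winners:
        return max(winners, key=eligible.get)
    # Fallback: next highest-scoring eligible image.
    return max(eligible, key=eligible.get)

queue = {
    "m42_narrowband": [5, 5, 5],   # 15 points: inside the auto-IoTD band
    "rosette_hoo": [4, 4],         # 8 points
    "lone_wolf_pick": [5],         # only one voter: not eligible
}
print(pick_iotd(queue))  # -> m42_narrowband
```

Hiding other judges' scores, as proposed, would simply mean each judge submits their rating without seeing the running totals; the selection logic is unchanged.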
sixburg
19 Feb, 2018 23:43
AtmosFearIC
And this is my point, a handful of very well processed images with the exact same data that all look similar, although having some processing differences. Do you pick one of those because they’re all very good or pick a non-M42/Rosette/M45.

You choose the image that you like the best regardless, as long as you are clear on its provenance… my opinion.  I respect your opinion, but fundamentally and respectfully disagree with the premise.  I'm not trying to start an argument on this point.  I'd much rather create some support for real analysis of the problem that needs to be solved, and for an expert panel, than defend what we do at DSW. I'm glad to have that discussion in another setting, but my continuing engagement in this conversation is not going to be a defense of our approach.
Edited 19 Feb, 2018 23:47
Andys_Astropix
20 Feb, 2018 00:04
Paddy Gilliland
Another option is not to award the image with a single click as per current set up. But images in the queue get rated on a simple basis radio button 1-5.  Set a minimum number of voters EG 2 and max of say 5.  The judge applies scores to those they want, not all are mandatory.
15-18 points - auto IoTD (set queue so its working say 7 days in advance - there will thus be a week with no IoTD) - but if on day 7 no image has scored 15-18 it will take the next highest scoring and put in the queue.  All details other than your score hidden from other judges - you will not know if someone else has voted and what score they gave.  Removes lone wolf, at the minimum you have 2 people voting, excellence will push through with higher scores quicker - but safety for when those number not being hit.
Interesting thought Paddy, and similar to some online judging comps (e.g. Landscape Photographer of the Year, World Photography Cup) where a panel of judges around the world score individual images from 50-100, and the highest-averaging images make the finals etc.

The same concept is done in person at Pro photography awards judging with 5 judges scoring each print independently, the average being the score (unless someone chooses to debate that final score up or down, which is where the fun begins, hence also having a panel chair & procedures to moderate the debate.)

Then Colin's concerns about "Lone Wolf" judging are addressed, and it's likely that any bias to the image source would be removed.
Please also see Robert's new AB thread regarding judging.
patrickgilliland
20 Feb, 2018 00:11
Deep Sky West (Ll...
My thinking on "experts"
1.  Commonly accepted names - no bones to pick with that list.  As a team, it would be solid.  Various issues I am sure people could raise with each, hence I think it would still need to be a team or rotation to maintain the diversity the peer option provides.
2. I haven't got a clue - it's a challenge for sure; to do this every day, 365 days a year, is I think (initially anyway) just too much.  Getting the volume of people for a monthly team would be a challenge, let alone daily. I think it is a good idea but too much overhead for an IoTD.
3. Time is the issue - again, thinking out loud, maybe for IoTD this is too much. For an IoTW/IoTM I could see it being far more achievable.  It then becomes a new layer above IoTD though.  But it would be great to achieve the accolade - though it would likely start yet another thread on what people perceive as its issues  smile
4. See 2, 3
5. I use 'no' and 'can't' most frequently at the gym!  But I like to set achievable targets and build from there - external panels judging every day: if you have the powers of persuasion to make it happen, great; it would not fall into my definition of achievable though.

Gotta be honest - lots of ideas, but I am yet to be convinced what is broken and what this future utopia is.  You will know this all too well: whatever changes, there will still be a minority that will not like them.  You don't implement a solution to cover every exception; you target an 80-95% hit on the key requirements (with each 1% above 80% typically requiring the same effort, time or money as the first 80!)

I would not be disappointed to see it continue as is and for everyone to focus on the enjoyment smile - If it changes, so be it, but base this on some actual facts.  Lots of ideas, comments, opinions, views etc.  We have not got a list of facts, so I struggle to see how it can proceed until that list is made.  Define 'it' - job 1.  Then move on. (Sorry, two threads in one post, but it's late and I need sleep now!)
patrickgilliland
20 Feb, 2018 00:18
Andy
Please also see Robert's new AB thread regarding judging
Would love to but can't find it - do you have a link - or is it just for judges?
Andys_Astropix
20 Feb, 2018 00:26
Oh, sorry, it must be just for the judges.
patrickgilliland
20 Feb, 2018 07:46
Andy
it must be just for the judges
OK - I thought we were all discussing openly here; if separate closed chats are happening elsewhere, the process is going to become more disjointed/messy, and the open-chat approach is somewhat negated by that.

I'll step out now and wait to see what happens.
siovene
20 Feb, 2018 09:48
Hi all,
Andy had a page of the forum still open in a browser tab from before the data loss, so these messages were recovered:

Deep Sky West
@AtmosFearIC, I've no objection to your stance with respect to DSW and similar.  I would ask that you consider a couple of other things so that no one is unduly excluded from your consideration.  This might not be easy or even possible because…

- Not everyone at DSW participates in a "data pool".  We have 6 "shared systems" and 18 other systems owned and operated by individuals.  Maybe other multiple-scope observatories are similar.
- Not all of our members participate on Astrobin, but I ask that you take a close look from time to time when you see DSW as the location.  It's hard to distinguish between our members, I admit, and I don't have a good way for you to tell the difference.  Our shared systems are RCOS, FSQ, Rokinon, AP175, and 2 RH305s.  If you see an image from something other than these, then it's probably not from a "data pool".  I don't expect you to remember this, of course, but it will matter for some if they believe they're being looked at differently and getting "caught up", as it were.
- We have several FSQs and RC Optical Systems scopes and will soon have another RH305 - all owned and operated by individuals.  It will become more and more difficult to segregate our membership.
- Not every imager / processor is able to field equipment.  Physical disabilities, for one example, make it difficult or impossible for some imagers to participate in the hobby in the traditional ways.  This isn't easy to decipher from the data or the location either.  It would be disappointing if those who want to be considered are skipped over due to their particular circumstance.
Perhaps what I've explained harms the case of those using so-called "data pools".  I hope not.  I don't think data are unique in the way I understand you to mean that.  My point is that some classifications and some criteria don't work well. All that being said I understand where you're coming from and respect your opinion.

KuriousGeorge
AstroBin is considering how we might improve our Image of the Day (IOTD) to appeal to more of our members. We need your help with a simple 3-question poll…
1. Which of the following do you MOST agree with?
a. IOTD is fine as is. Please don't change it.
b. I'd like to see better judging to help ensure IOTD is varied, high-quality, has complete data, and is a relevant subject.
c. I'd like to see IOTD for more than one category (e.g., Backyard, Hosted and Professional). AstroBin will clearly define the exact categories and will ensure proper judging.
d. I don't like having IOTD. Please remove it.
2. If we decide to have IOTD for multiple categories, please select one or more of the following categories you like the best…
a. "Backyard". You did it all. This includes equipment setup, capture and processing.
b. "Hosted". You paid someone to set up the equipment and/or capture the data. You processed the image yourself.
c. "Professional". You obtained data from equipment that's not normally available to an amateur (e.g., Hubble, professional observatory, etc). You processed the data yourself.
d. "Terrestrial". The image is related to the earth or people (e.g., landscape, satellites, aurora). You captured the image yourself and processed it.
e. Other? Please tell us __________________
3. Regarding knowing how dark the sky was for the image…
a. It's very important for me to know the SQM and/or Bortle scale for the person's sky (backyard or hosted).
b. This is interesting, but not a big deal.
c. I don't care how dark the person's sky is.
KuriousGeorge
21 Feb, 2018 02:55
A few more recovered posts for your reading enjoyment…

dakloifarwa
Andreas Dietz
# today, 10:11
KuriousGeorge
We may need to resolve that dilemma if we see very high-end equipment taking over the DIY category
Another strong argument for the proposal of categorizing by equipment effort instead of remote/local/pro data. I don't care about the owner or the site of certain equipment if it's compared to a similar kit elsewhere…
CS, Andreas
gnomus
Steve Milne
# today, 10:17
Would one way forward be to keep IOTD as it is (or virtually as it is), and run alongside it less frequent ‘competitions’ that folks could choose to enter?  These could be (I don’t know) weekly or monthly.  The categories could either be along the lines of those suggested, or they could be based more on Salvatore’s (or the judging panel’s) mood - for example, a ‘heavyweight’ contest where the likes of Paddy, Roberto & Lloyd (and Sara - go on Sara!) duke it out alongside anyone who dares go up against them?
sixburg
Deep Sky West (Ll…
# today, 10:26
Roberto Colombari
Deep Sky West (Lloyd)
Roberto Colombari
Yeah, it could have some flaws but I think it would be a potentially good improvement.
If we have no categories, as it is now, then for instance my pro data mosaics should be evaluated with the same criteria as the other images around. But actually they aren't.
My mosaic over NGC1055 (Subaru) is probably the most detailed view of the entire galaxy, yet it didn't even run as a Top Pick smile
I mean, it is quite clear that the current system must be enhanced. It was a really huge improvement 1.5 years ago, but now it is mandatory to go some steps further!
In a "fair fight", this image would only be compared to other space-based images with imagers at your same level.  In the current system, it gets passed over because a judge decided it was an unfair fight based on whatever (unknown) criteria.
I would not be at your level, for example, nor would I work with space-based data, so we would never compete.
The idea of imager levels, and types of data within each level, can be made to work.  Aspiring imagers would try to get to the next level.  Or we could just get IOTD for reasons unspoken and assume: yeah, I'm that good!
I chose NGC1055 as an example exactly because it is not space data smile
It is ESO and Subaru, and it took more or less a couple of months to put together, combining my work and Gendler's.
Cheers
My mistake…I did see the Subaru reference.  It would definitely be pro-data.  I still wouldn't be in that competition because I don't even know how to download it much less stitch together hundreds of panels.  It's a tour de force.  You would compete at the highest level of pro-Earth-based processors.  I would one day be content to be at the lowest level of pro-Earth-based…maybe one day.
sixburg
Deep Sky West (Ll…
# today, 10:33
Steve Milne
a ‘heavyweight’ contest where the likes of Paddy, Roberto & Lloyd (and Sara - go on Sara!)
Uh, yeah.  Yikes!  Not a bad idea all in all.  This could be encompassed in a skill level to data/image type schema.
Skill Levels 1-5 or 1-10
Extraterrestrial data types:  backyard through space-based
Terrestrial, but astro
and so on.
If Sara and I were both level 3 (as determined by some as-yet-undefined set of tests/certifications, à la Six Sigma certification), let's say, and we both processed extraterrestrial targets with data acquired on our own systems at a dark site, then it would be fair to compare our entries.
Edited today, 10:33
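The "skill level x data type" schema sketched above amounts to grouping entries by a composite key and only comparing within a group. A minimal sketch (level numbers, type names and the tuple shape are assumptions for illustration, not a proposed AstroBin schema):

```python
# Illustrative grouping for the fair-comparison idea: only entries with
# the same (skill level, data type) compete against each other.

from collections import defaultdict

def fair_groups(entries):
    """Group entries so only same-level, same-data-type images compete.

    `entries` is a list of (imager, skill_level, data_type) tuples;
    the return value maps (skill_level, data_type) -> list of imagers.
    """
    groups = defaultdict(list)
    for imager, level, data_type in entries:
        groups[(level, data_type)].append(imager)
    return dict(groups)

entries = [
    ("Sara", 3, "backyard extraterrestrial"),
    ("Lloyd", 3, "backyard extraterrestrial"),  # a fair match with Sara
    ("Roberto", 5, "pro Earth-based"),          # competes separately
]
print(fair_groups(entries))
```

The open question in the thread, of course, is how the level would be assigned in the first place (the "test/certification" part), which this sketch deliberately leaves out.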
patrickgilliland
Paddy Gilliland
# today, 10:35
Steve Milne
and run alongside it less frequent
This could be via the groups, as I said earlier, with the winner of the group area being promoted to a TP and to the IOTD via a unique path (it could also get there via the current process). It would push more items to TP and ensure specialist areas get into the IOTD queues etc.
rob77
Roberto Colombari
# today, 10:35
Deep Sky West (Ll…
My mistake…I did see the Subaru reference.  It would definitely be pro-data.  I still wouldn't be in that competition because I don't even know how to download it much less stitch together hundreds of panels.  It's a tour de force.  You would compete at the highest level of pro-Earth-based processors.  I would one day be content to be at the lowest level of pro-Earth-based…maybe one day.
Your RCOS data are outstanding! Not so far from these ones smile
________________________
Anyway, I just mentioned this example to say that if no categories are implemented, my NGC1055 should honestly have run as IOTD, but it didn't even make the Top Picks!
No categories, as far as I'm concerned, means that the IOTD is just selected based on aesthetic criteria.
If the judges discarded this image because it's pro data (or, even worse, because they thought it was from HST), they are implicitly reasoning based on some "categories" that they have in their minds.
At this stage, IMHO, it is far better to try to define them clearly a priori smile and transparently for everyone.
Edited today, 10:40
swag72
Sara Wager
# today, 10:37
Deep Sky West (Ll…
If Sara and I were both level 3 (as determined by some as yet undefined set of test/certification a'la Six Sigma certification), lets say, and we both processed extraterrestrial targets with data acquired on our own systems at a dark site, then it would be fair to compare our entries.
Thankfully I don't have a dark site (about bortle 7) so count me out   smile
Jooshs
Josh Smith
# today, 10:39
Just out of curiosity, for those who are proponents of multiple images of the day, how does that end up being different than top picks?  Surely nearly every above average or excellent image gets selected as a top pick, correct?  Does there really need to be resolution between the top picks (5-6 a day?) and 3-4 categories of images of the day?
Again, just my opinion and curiosity about the very strong motives for creating so many competitions. I’m interested in seeing others’ thoughts on what is driving this.
If anything, in my opinion, kind of in the vein of Steve’s suggestion, I think maybe an image of the week for each category could be cool. Maybe something along the lines of having top picks categories and letting the community vote only once each in each category from the previous week on the next image of the week in each category.
Just ideas on categories might be:
up to 1 degree radius field
up to 10 degree radius field
10+ degree field
solar system
specialty (nightscape, satellites, northern lights, etc… )
Creating categories for 1-10 degree fields of view and wider helps level the playing field. Narrowband from a small scope with a wide field can be excellent on a cheap lens or doublet, from your backyard or from space. It’s a great equalizer.
Edited today, 10:43
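Since a later post notes that field of view can be derived from a plate solve with no user input, the category mapping proposed above is mechanical. A sketch (the function, thresholds and subject labels are illustrative assumptions based on Josh's list, not an existing AstroBin feature):

```python
# Hypothetical mapping from a plate-solved field radius (in degrees)
# to one of the contest categories proposed in the thread.

def fov_category(field_radius_deg, subject="deep sky"):
    """Map a field radius to a category; solar-system and specialty
    subjects are flagged explicitly since FOV alone can't identify them."""
    if subject == "solar system":
        return "solar system"
    if subject == "specialty":  # nightscape, satellites, northern lights…
        return "specialty"
    if field_radius_deg <= 1:
        return "up to 1 degree radius field"
    if field_radius_deg <= 10:
        return "up to 10 degree radius field"
    return "10+ degree field"

print(fov_category(0.4))   # narrow-field galaxy shot
print(fov_category(3.5))   # typical small-refractor nebula field
print(fov_category(25, "specialty"))
```

The deep-sky buckets need no user input at all, which addresses the self-classification concern raised elsewhere in the thread; only the solar-system and specialty flags would still rely on the uploader.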
sixburg
Deep Sky West (Ll…
# today, 10:39
Roberto Colombari
Deep Sky West (Ll…
My mistake…I did see the Subaru reference.  It would definitely be pro-data.  I still wouldn't be in that competition because I don't even know how to download it much less stitch together hundreds of panels.  It's a tour de force.  You would compete at the highest level of pro-Earth-based processors.  I would one day be content to be at the lowest level of pro-Earth-based…maybe one day.
Your RCOS data are outstanding! Not so far from these ones smile
________________________
Anyway, I just mentioned this example to say that if no categories will be implemented my NGC1055 should have honestly run as IOTD but neither went in the top picks!
No categories, for what concerns to me, means that the IOTD is just selected based on aesthetical criteria.
If the judges discarded this image because it's pro data (or, even worst, because they thought it was from HST) they implicity are reasoning based on some "categories".
At this stage, IMHO, is far better try to define them cleary a priori smile
Agreed on defined categories, to a point.  This business about DIY = remote doesn't hold up though.  One of the current judges mentioned their criteria, which are biased against DSW and similar.  I mentioned my own judging criteria (when I was a judge last year), biased against space-based.  Both are valid in the minds of the judges, but in retrospect my approach was not necessarily fair.
Edited today, 10:41
rob77
Roberto Colombari
# today, 10:48
Deep Sky West (Ll…
This business about DIY = remote doesn't hold up though.
Agreed, this must be discussed in a little more depth
patrickgilliland
Paddy Gilliland
# today, 10:57
Josh Smith
up to 1 degree radius field
up to 10 degree radius field
10+ degree field
solar system
specialty (nightscape, satellites, northern lights, etc… )
Another great way of classifying (and one that can be handled in part through plate solving, removing user input) - into the pot it goes - lots of categories building up now. With so many valid perspectives, limiting the number may prove a difficult task.
Deep Sky West (Lloyd)
Roberto Colombari
Deep Sky West (Ll…
Both are valid in the minds of the judges.
- having many judges with many tastes helps normalise this to a degree I think.
sixburg
Deep Sky West (Ll…
# today, 11:19
Paddy Gilliland
- having many judges with many tastes helps normalise this to a degree I think.
I would generally agree; however, the M95 situation slipped through. This is the only example I know about, but I assume that if an easy case gets through, then the more difficult cases may not be caught by the judges' panel either. I would think the law of large numbers would handle this, but either it just didn't, there aren't enough judges, or they are similarly biased. Who knows. Maybe it actually works and this is just a one-off.
Is there no one in support of a panel of professional judges?
KuriousGeorge
KuriousGeorge
# today, 13:36
Another version based on recent comments…
AstroBin is considering how we might improve our Image of the Day (IOTD) to appeal to more of our members. We need your help with a simple 3-question poll…
1. Which of the following do you MOST agree with?
a. IOTD is fine as is. I'm OK with an occasional controversial image. I know the judges are volunteers with families and other jobs and this is to be expected.
b. I'd like to see better judging to help ensure IOTD is varied, high-quality, has complete data, and is a relevant subject.
c. I'd like to see IOTD for more than one category (e.g., Individual, Group and Professional). AstroBin will clearly define the exact categories and will ensure reasonable judging.
d. I don't like having IOTD. Please remove it.
2. If we decide to have IOTD for multiple categories, please select one or more of the following categories you like the best…
a. "Individual". You did it all yourself (DIY). This includes equipment setup, capture and processing. This may be in your own backyard or at a remote site. You did not delegate any setup, capture or processing to another individual. If someone asked you "Who helped you?", you would answer "I did it all myself with no one helping me".  If you are disabled and direct someone to help overcome your disability (e.g., carry equipment, process data under your direction, etc), you are considered an individual. https://en.wikipedia.org/wiki/Do_it_yourself
b. "Group". You delegated one or more individuals to help set up the equipment and/or capture the data. This includes hosted facilities, unless no individual at the hosted facility helped you set up equipment or capture data. You processed the data yourself or had others help you. This includes processing data captured by another individual.
c. "Professional". You obtained data from equipment that's not normally available to an amateur (e.g., Hubble, professional observatory, etc). You processed the data yourself. Some individuals with very high-end equipment may be asked to submit under this category.
d. "Terrestrial". The image is related to the earth or people (e.g., landscape, satellites, aurora). You captured the image yourself and processed it.
e. "Other". Unusual situations that don't clearly fall into the above categories.
3. Regarding knowing how dark the sky was for the image…
a. It's very important for me to know the SQM and/or Bortle scale for the person's sky (DIY or Hosted).
b. This is interesting, but not a big deal.
c. I don't care how dark the person's sky is.
Edited today, 13:40
patrickgilliland
Paddy Gilliland
# today, 14:21
Deep Sky West (Ll…
Is there no one in support of a panel of professional judges?
Maybe a few more judges, and an image has to be selected at least twice to get IOTD. Panels are good, but the extra admin might make it harder to get commitment from people.
Jooshs
Josh Smith
# today, 14:37
Paddy Gilliland
Deep Sky West (Ll…
Is there no one in support of a panel of professional judges?
Maybe a few more judges, and an image has to be selected at least twice to get IOTD. Panels are good, but the extra admin might make it harder to get commitment from people.
There has been a ton of discussion, so it is entirely possible I missed it… Professional judges as in paid judges for the everyday IOTD, or professional judges to create a set of criteria that future judges would then adhere to?
patrickgilliland
Paddy Gilliland
# today, 15:07
Josh Smith
Professional judges
Not sure how I would define a pro judge. Me, you, Salva or someone else? We'd still have the issues of their taste and (potentially) limiting diversity. A case in point would be a recent APOD of a deep-sky object and a comet: picked by what many would consider pro judges, but it would not have made the cut here.
Parking the topic of categories for now - just looking at IOTD/TP or whatever we end up with.
I think a set of criteria would be a good starting place though - it would also be a good exercise that could bring many of the points here to a close if documented and understood. Ultimately it is Salvatore's site, but he could reach out to people he trusts, on and off-site, and ask them each to document the top x items that should be considered for IOTD. He can then rationalise the lists and distribute them to all staff as a guide. Potentially this could be supported by multi-pick logic, given enough staff and a high likelihood of getting double hits on images (I don't know current volumes).
One massive plus side of providing a set of criteria is that people with less experience could potentially serve as judges, since they would have a rulebook to follow. The resulting increase in judges then makes a multi-match more likely. It's late here and I haven't thought this through fully, but I can't help thinking that knowing the criteria 1. removes a lot of the topics here (backyard only, pro data, etc.), since everyone will know the criteria; 2. means those serving as judges will know what is expected of them, and not to apply if they hold a polarised opinion contrary to the criteria; 3. gives you criteria against which to validate choices - if someone or something is not working, it can then be clearly measured and addressed.
Currently this thread is trying to correct all ills, yet we have no baseline against which to qualify the 'ill'. Sometimes the best way to solve an issue is to define what the issue is. In this scenario, what makes an IOTD, and the criteria it is considered against, are purely what each judge thinks. I am not talking about creating robot judges that just follow rules, rather about helping all judges by setting a framework within which they can operate better. Judges with views and tastes create diversity, which I think is good. A playbook defines the team strategy - the players are part of the team, but they each do their individual thing, their way, while still keeping the team strategy in mind.
It's late and I'm waffling now, but start simple: define what it is we are trying to fix, then describe it with a little more granular detail. smile
Edited today, 15:10
blueangel
Jim Matzger
# today, 15:32
If I had my way, I would make sure that every Colombari/Wager image was accorded IOTD status.  smile
The current system has improved quite a bit with the recognition for “Top Picks” so that more participants are recognized for their hard work no matter what process they use to get there.  As far as science goes, I will cuddle up with “Annals of the Deep Sky” whenever I need a science fix.  If I want pretty pictures, I will log on to Astrobin.
Thirteen
Jason Guenzel
# today, 15:52
I read through this, and whew, there is a lot to digest. But as far as fixing the “problem”, I agree with Lloyd. The issue needs clear definition. As Sara stated, it seems to be about fairness.
For a while this whole thread degenerated into how to categorize all the images in an effort to level the field. I personally feel (and it is evidenced in all the dialog) that lines quickly become fuzzy. The issue runs right down to the core aspects of the hobby. There will never be an inherent fairness, and even from my own backyard some nights are more “fair” than others. I just feel like adding a human construct over top to try and level the field is a complex endeavor that does little to address the actual concern (as I understand it).
blueangel
Jim Matzger
# today, 16:06
On the surface it seems that those who use third party images have a production advantage in that they can produce many more images if they are not reliant only on their own acquisition capabilities.  OTOH, those who engage in both the acquisition and processing of their own images may have a quality advantage over their peers.  In a contest between quantity and quality, quality usually wins.  Any scheme that penalizes one method versus the other may itself be unfair.
AtmosFearIC
AtmosFearIC
# today, 16:20
Having played around with some “Professional Telescope” data, I do understand how difficult and time consuming it can be to deal with. When talking about 100+ panel mosaics, that alone is worth recognition, but if your images aren’t making it to the Top Picks then they’re not making it to the judges either.
There have been some quite nice mosaics recently that I haven’t picked because of registration errors or because they’ve been downsized to 2 MP.
I suppose I do have a bias towards people who can create an excellent image from modest equipment. I have seen images from an 8” GSO RC and an ASI1600 on a lightweight mount that are brilliant and look like they could have been taken with a 20” CDK and a KAF-16803. In cases like this it is not just about the battle of inferior equipment but also the extra processing required to get the same quality.
I have no issues with anyone who uses DSW; I don’t care whether a system is backyard or remote, or who owns it. For me it is entirely about the repetition of data. The perfect example is the Rosette Nebula, which has been doing the rounds recently. Very nice data, and there have been about half a dozen renditions that are very worthy of an IOTD… and therein lies the problem for me. It’s having several renditions pop up with the same data. Yes, there are differences in the processing, from deconvolution to sharpening and masking.
As for professional judges, what makes a professional judge? I’ve done judging in photography and have been asked to give a few talks on astrophotography, but this is the first time I’ve done anything like this in astrophotography.
As I mentioned in my previous post, in my opinion one of the biggest changes that could be made would be a slight change to the way the judging works: moving towards a panel decision on the IOTD as opposed to the current lone-wolf situation. Judging would work largely the same way that the Reviewers do: the judges each pick several images they believe are worthy, and the IOTD is then selected from the pool the group of judges has chosen. This would also allow an easy transition into judges’ comments, where we could comment when casting our “vote” on particular images.
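The panel-and-threshold idea described in the posts above could be sketched roughly like this. This is purely a hypothetical illustration, not AstroBin's actual code: the function name, the data layout, and the two-vote threshold are my assumptions.

```python
# Hypothetical sketch: each judge nominates several images, and only
# images nominated by at least `min_votes` judges (a "multi-pick")
# advance to the IOTD pool. Names and threshold are illustrative.
from collections import Counter

def iotd_pool(nominations, min_votes=2):
    """nominations: dict mapping judge name -> list of image ids."""
    votes = Counter(img for picks in nominations.values() for img in picks)
    # Keep images that reached the agreement threshold, most votes first.
    return [img for img, n in votes.most_common() if n >= min_votes]

nominations = {
    "judge_a": ["m95", "rosette", "ngc1055"],
    "judge_b": ["rosette", "m31"],
    "judge_c": ["rosette", "ngc1055"],
}
print(iotd_pool(nominations))  # → ['rosette', 'ngc1055']
```

With a panel like this, a single judge's bias (for or against pro data, remote rigs, etc.) can no longer decide the IOTD on its own, which is the point being argued above.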
Edited 21 Feb, 2018 02:59
RRBBarbosa
21 Feb, 2018 09:20
I'll take this discussion as an opportunity to bring up a point that I think would also be an improvement: when we upload a new revision, that image could re-enter the list of new images for IOTD purposes.
DavideCoverta
21 Feb, 2018 10:10
Hi Guys,

just wondering how the TPs are selected? With which criteria?
Many amazing pictures are not considered TP by the AstroBin folks, while others that are not so amazing are… why? smile
rob77
21 Feb, 2018 10:16
Yeah, there have been some cases lately.
There are many people involved in the selection process, so I think it's time to write down some guidelines for submitters and reviewers.

We'll do it in a while!

Cheers!
DavideCoverta
21 Feb, 2018 10:34
Roberto, how are those pictures selected at the moment? Is it just a submitter's opinion, feeling, or sympathy? How is it possible, for example, that an image with very bad denoising is marked as a top astroimage? What's wrong? Is TP something that folks should not rely on? Because I used to look at those images just to learn and get some references… But…. smile
Jean-Baptiste_Paris
21 Feb, 2018 10:58
Ruben Barbosa
I take this discussion to remember a point that I think will also be an improvement: when we send a new revision, this image could enter the list of new images for IOTD purposes.
Sorry Ruben, I do not agree (again smile ) with that. There can be so many revisions of a single image that this would considerably increase the number of images eligible each day. As a submitter, it's already difficult to choose only 3 fresh images per day… some days I would like to push 5 or 6 if I could.

This would also be a "cheap trick" for a user to push the same image again and again because he believes it deserves a TP or IOTD (even if that's not the opinion of the others… smile ).

Davide Coverta
Hi Guys,

just wondering how are the TP selected? With wich criteria?
Many amazing pictures are not considered TP from the Astrobin folks and other not so amazing instead yes… why? smile

An image has to be selected by at least one submitter and then by at least one reviewer to become a TP.
As a submitter, I notice that most of the images that (imo) deserve a TP are effectively awarded one. There are also a few "unfair oversights", but that's the game…

Roberto Colombari
Yeah, there have been some cases lately.
There are many people involved in the selection process, so I think it's time to write down some guidelines for submitters and reviewers.

We'll do it in a while!

Cheers!
Good idea. I presume what you have in mind is a collective work involving all the users, and not only the judges?

JB
DavideCoverta
21 Feb, 2018 11:02
Jean-Baptiste Auroux
I notice that most of the images that (imo) deserve a TP are effectively awarded one. There are also a few "unfair oversights", but that's the game…

Hi JBA,
Is this the answer to my doubts? Ok, good! Understood how it works!
Thanks for reply.
Best Regards
Davide
Edited 21 Feb, 2018 11:05
rob77
21 Feb, 2018 11:08
Jean-Baptiste Auroux
Good idea. I presume what you have in mind is a collective work involving all the users and not only the judges ?

Sure, a collective work
 