Thanks, Dan, for writing this post on your vacation.  We really
appreciate your keeping us informed.

On May 3, 9:22 pm, "Dan Morrill" <[EMAIL PROTECTED]> wrote:
> Ahh -- we've not been rigorous in consistently naming these various rounds
> and phases.  Let me try and adopt that terminology for this thread, and
> explain again.
> ADC 1 == this $5,000,000 prize event going on now.
> ADC 2 == the second $5,000,000 prize event that will begin later this year.
> ADC 1 Round 1 == open participation with the deadline of 14 April, with 50 winners.
> ADC 1 Round 2 == participation limited to the winners of ADC 1 Round 1, with 20 "final" winners.
> ADC 1 Round 1 Phase 1 == reducing the original set of 1,788 submissions to 100 finalists.
> ADC 1 Round 1 Phase 2 == picking the 50 ADC 1 Round 1 winners from the 100 finalists.
>
> Okay, phew. :)  With those definitions, here is where we are:
>
>    - We sent out the submissions to judging a few days after the submission
>    deadline of 14 April, and judging began.
>    - Our 100 or so judges received the judging guidelines we provided,
>    reviewed their assigned submissions, and reported data back to us.
>    - Late last week, we applied our outlier mitigation techniques,
>    identified the top 100 results, and sent them on to the final, separate
>    panel of 15 or so judges to score and produce the final 50 ADC 1 Round 1
>    award recipients.
>
> So in other words, we are currently in ADC 1 Round 1 Phase 2 as defined
> above.  Once data from the judges comes in, we will notify the 50 award
> recipients and ADC 1 Round 2 will begin.
>
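(A side note from me, not Dan: he hasn't said what the "outlier
mitigation" above actually involves, so the following is purely a guess
-- a minimal sketch of one common approach, where a judge's score is
dropped if it falls more than some number of standard deviations from
the mean score for the same submission, and the rest are averaged.  The
names OutlierMitigation, mitigatedMean, and maxZ are mine, not Google's.

    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical sketch only -- not Google's actual algorithm. */
    public class OutlierMitigation {

        /**
         * Averages one submission's scores after discarding any score
         * that lies more than maxZ standard deviations from the mean.
         */
        static double mitigatedMean(List<Double> scores, double maxZ) {
            double mean = 0;
            for (double s : scores) mean += s;
            mean /= scores.size();

            double variance = 0;
            for (double s : scores) variance += (s - mean) * (s - mean);
            variance /= scores.size();
            double stdDev = Math.sqrt(variance);

            // Keep only scores within maxZ standard deviations of the mean.
            List<Double> kept = new ArrayList<Double>();
            for (double s : scores) {
                if (stdDev == 0 || Math.abs(s - mean) <= maxZ * stdDev) {
                    kept.add(s);
                }
            }
            if (kept.isEmpty()) return mean;  // degenerate case

            double total = 0;
            for (double s : kept) total += s;
            return total / kept.size();
        }
    }

For example, mitigatedMean(Arrays.asList(85.0, 82.0, 20.0, 88.0), 1.5)
discards the 20.0 and returns 85.0.)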
> It has not escaped my notice even on vacation that there have been a number
> of discussions on server hits and so on.  Obviously we don't have access to
> everyone's server logs, and we can't monitor what the judges have actually
> been doing (nor would we snoop if we could, since that seems really
> sketchy.)  We've tried to automate everything we possibly can about the
> judging process, but the one thing we can't automate is the actual act of
> assigning scores, since that requires a human's brain.
>
> The judges were given fairly detailed guidance on how to calibrate their
> scores, and what to review. For instance, they are aware that they are
> supposed to read documentation and do their best to test all the features.
>  In the end, though, each judge is going to test to his or her own
> satisfaction.  I'm not sure how reliable it is to correlate judge reviews
> with observed server hits.  Some apps might have sporadic bugs that prevent
> network accesses.  Some judges may have decided they didn't need to see a
> particular feature.  And before you cry foul, know that some people who have
> inquired about "missing" server hits have actually done quite well. Judges
> are just as likely to say "this is cool, I don't need to see any more" as
> they are to say "this is so uncool, I don't need to see any more."  On the
> whole, our judges have been excited to participate, and I expect that they
> are being as conscientious as they can be.
>
> The one thing I can tell you with certainty is that I have answered quite a
> few private inquiries, and in all but one case the judges responded with
> legitimate scores, rather than scores that say something went wrong or the
> review was incomplete.  Our only data points are what the judges give us,
> because that's the only factor we can't automate.  Since the judges are
> telling us that they reviewed to their satisfaction, we can only take their
> word for it.
>
> We've tried really hard to make sure that the only thing that affects
> scoring is what you put in front of the judges.  But the entire goal of the
> ADC is to leverage plain old human judgment.
>
> - Dan
> P.S. - watch for gory details on the nuts & bolts of all this in the near
> future.
>
> On Sat, May 3, 2008 at 1:58 PM, Finn Kennedy <[EMAIL PROTECTED]> wrote:
> > Dan,
>
> > thank you for the responses.  A couple of follow-ups.
>
> > With Phase 2 I meant the 100 being winnowed down to 50.  From the ADC
> > Judging Process page:
>
> > "In Phase 2, the 100 highest-scoring submissions will be all be sent to a
> > new panel of judges (which may or may not include one or more of the judges
> > who participated in Phase I judging).
> > ...
> > The 50 entries with the highest scores in Phase 2 judging will move on to
> > Round 2 of the Challenge..."
>
> > Just for clarification, are there again groups of judges assigned to look
> > at a subset of the top 100 entries?  Or is the entire set of entries judged
> > by all the judges in Phase 2?
>
> > Is the "outlier" procedure still used for Phase 2?  By "outlier" I mean the
> > review of scores that don't match the rest of the scores for the application
> > (mentioned on the board).
>
> > I totally understand the desire for a level playing field.  It would not
> > be fair to extend advantages to winners who can make the trip to Google
> > I/O.
>
> > Finn
>
> > On Sat, May 3, 2008 at 3:27 PM, Dan Morrill <[EMAIL PROTECTED]> wrote:
>
> >> Well, for all intents and purposes, Phase II begins as soon as we announce
> >> the 50 Phase I winners.  It's not like we could stop the winners from
> >> starting right away, anyway. :)  It looks like we are still on track to
> >> announce those winners next week.
> >> A different set of judges is reviewing the 100 applications.  The top 100
> >> applications are "reset" and rejudged from scratch by a different group of
> >> judges, who have no knowledge of the previous judges' scores.
>
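(Again a guess from me, not Dan: "rejudged from scratch" plus the
random-distribution question above suggests something like shuffling the
100 finalists and dealing them out so each one is scored by several
fresh judges, with no carried-over scores.  Everything here -- the class
JudgeAssignment, the method assign, reviewsPerSubmission, and plain
Strings as IDs -- is my invention.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Hypothetical sketch only -- not Google's actual process. */
    public class JudgeAssignment {

        /**
         * Randomly deals submissions out so each one is reviewed by
         * reviewsPerSubmission distinct judges (assumes that count is
         * no larger than the number of judges).
         */
        static Map<String, List<String>> assign(List<String> submissions,
                                                List<String> judges,
                                                int reviewsPerSubmission) {
            Map<String, List<String>> byJudge =
                    new HashMap<String, List<String>>();
            for (String judge : judges) {
                byJudge.put(judge, new ArrayList<String>());
            }

            List<String> deck = new ArrayList<String>(submissions);
            Collections.shuffle(deck);  // fresh, random order each time

            int next = 0;
            for (String submission : deck) {
                // Hand the submission to the next few judges, round-robin.
                for (int r = 0; r < reviewsPerSubmission; r++) {
                    String judge = judges.get(next % judges.size());
                    byJudge.get(judge).add(submission);
                    next++;
                }
            }
            return byJudge;
        }
    }

With the "15 or so" Phase 2 judges Dan mentions and, say, 3 reviews per
submission, each judge would see about 20 of the 100 finalists.)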
> >> We're thinking about ways to work with the 50 Phase I winners, but that
> >> might not necessarily include anything formal at Google I/O.  (We don't
> >> want to require anyone to attend, and we don't want to give any of them
> >> an unfair advantage.)
>
> >> - Dan
>
> >> On Fri, May 2, 2008 at 6:58 PM, finnk <[EMAIL PROTECTED]> wrote:
>
> >>> Having reread the Judging Process
> >>> (http://code.google.com/android/adc_judging.html), I have a couple of
> >>> questions.
>
> >>> Since we are coming up on the week of May 5th, when is Phase II
> >>> starting?
>
> >>> Will the entire panel of judges review the whole set of 100
> >>> applications, or is the set of 100 split into groups and distributed
> >>> randomly again?
>
> >>> Are there any differences between Phase I and Phase II?
>
> >>> On a slightly related note, is there anything planned for the top 50
> >>> at Google I/O?
>
> >>> Also, for Google I/O:  If you are traveling from Austin, TX, there are
> >>> direct flights from Austin to San Jose International.  You can then
> >>> take Caltrain (http://www.caltrain.com) from near the airport to the
> >>> San Francisco stop.  It is on 4th, the same street as the Moscone
> >>> Center.
>
> >>> Of course I am not a travel agent/planner, so please double check
> >>> everything yourself.
>
> >>> Finn