On 09/29/2016 02:48 AM, Christofer Dutz wrote:
> Hi guys,
> 
> 
> I just wanted to also take the opportunity to give some feedback on the 
> modified review process:
> 
> 
> 1. Seeing that I would have to make 30,000 decisions sort of turned me off right 
> away (it's a bit like saying "yeah, let me help" and then getting a huge pile of 
> work dumped on my desk).
> 

Yes, I expect that the UI could be presented in a way that is not quite
so overwhelming. That said, the 30 reviewers did a HUGE amount of work.
Looking at the "winning" abstracts, and comparing them to various random
"losing" abstracts, it's clear that the well-written abstracts did, in
fact, bubble to the top, and the one-liners, the confusing, and the
illiterate did indeed sink to the bottom. And we do indeed see a lot of
new faces among the speakers, which was a specific goal.

But, yeah, seeing that note at the top that I only had 30,000 more
comparisons to go was very disheartening.

> 
> 2. With that huge amount of possible work, I could see only little progress 
> even after putting quite some time into it ... 30,000 decisions would require 
> reading 60,000 applications. If I assume 30 seconds per application, that's 
> about 500 hours, which is about 20 days without doing anything else. I sort 
> of quit at about 400 decisions.
> 

After 15 minutes or so, I began to recognize almost all of the
abstracts, and the comparisons sped up. But, yes, if we were indeed
expected to do 30,000 comparisons each, this would take years. So a goal
for the next time we use this tool (if we do in fact keep using this
tool) is to expand the reviewer pool a lot - perhaps reach out to
everyone who has attended past events?

The benefit of this system is that it can harness the time of 1,000
people who each have 5 minutes, rather than requiring 5 people to spend
1,000 minutes apiece. However, the Big Data event, in particular, had a
hard time attracting a reasonable number of reviewers. We need help with
that next time.

> 
> 3. I noticed for myself that at first you start reading the applications 
> carefully, but that accuracy goes down very fast as soon as you start getting 
> a lot of the talks you reviewed earlier ... unfortunately, even if you only 
> think you read them before. I caught myself not reading some similar-looking 
> applications and voting for one thinking it was the other. I don't know if 
> this is desirable.
> 
> 
> I liked the simple interface, however. So how about dropping the Deathmatch 
> approach and just displaying one application, and letting the user select how 
> much he likes it? (OK ... this is just the way the old version worked, but as 
> I said, I liked the UI ... just clicking once.) Perhaps the user could also 
> add tags to the application and suggest tracks.


My biggest frustration with the "rate this from 1 to 5" technique that
we are moving away from is that I would then be left with a pool of 200
talks, all of which were rated 4 (this is only a slight exaggeration),
that I then had to choose from, usually in complete ignorance of the
subject material. So it really ended up being myself and 2 or 3 other
people choosing a schedule blind, and deriving almost no benefit from
your reviews.

The DeathMatch approach (I like that name!) makes people more brutal,
and, the evidence suggests, more honest. So abstracts that were real
clunkers did indeed sink to the bottom, and ended up with large negative
scores.
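
To make the scoring a bit more concrete, here is a minimal sketch (in
Python) of how a pairwise tally like this can leave weak abstracts with
large negative scores. It assumes the simplest possible scheme - +1 to
the winner and -1 to the loser of each comparison - and the tally()
helper is hypothetical; the actual tool may well use something more
sophisticated, such as Elo-style ratings.

    # Illustration only (not the tool's actual scoring):
    # +1 for a win, -1 for a loss in each head-to-head comparison.
    from collections import defaultdict

    def tally(comparisons):
        """comparisons: iterable of (winner, loser) abstract-ID pairs."""
        scores = defaultdict(int)
        for winner, loser in comparisons:
            scores[winner] += 1
            scores[loser] -= 1
        # Sort best-to-worst; abstracts that keep losing go negative.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # "C" loses every matchup and sinks to the bottom:
    print(tally([("A", "C"), ("B", "C"), ("A", "B")]))
    # -> [('A', 2), ('B', 0), ('C', -2)]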

Thank you for your work. And thank you for your comments on the
interface (everyone!). There's a lot of change that I would also like to
see, once we've moved past the "O MY GOD I HATE CONFERENCE SCHEDULING"
phase. But, overall, this system was, for me anyways, a tiny fraction of
the stress that we go through every time we need to schedule one of
these things.


-- 
Rich Bowen - rbo...@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon
