Can you predict the degree of consistency of NIPS reviewers?  SciCast wants 
your opinion!

10% of the papers submitted to the 2014 Neural Information Processing Systems 
Conference were duplicated and reviewed by two independent groups of reviewers. 
What percentage of the duplicated papers will yield inconsistent decisions?

Give us your forecast at 
https://scicast.org/#!/questions/1083/trades/create/power

===========

BACKGROUND INFORMATION

The question concerns one of the top machine learning conferences, NIPS. This 
year, as an experiment, 10% of submitted papers were duplicated and reviewed by 
two independent groups of reviewers, with each copy decided by an independent 
program committee. As part of the experiment, we want you to estimate the 
percentage of inconsistent decisions in this duplicated set. 

So far only two people have seen both sets of decisions. They will announce the 
results in the conference opening remarks at 6:30pm on Monday, 8-DEC. (They 
will not forecast.)

ACCEPTING FORECASTS UNTIL:
12/08/2014

RESOLUTION SOURCE
Resolution will be based on the official announcement at the NIPS conference. 
It is currently scheduled for 6:30pm on 8-DEC.

FINE PRINT
An inconsistent decision is either accept/reject or reject/accept. Forecasts 
after 6:30pm ET on Monday 8-DEC will not count. 
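To make the fine-print definition concrete, here is a minimal sketch (with hypothetical data; the function name and sample pairs are our own, not part of the experiment) of how the inconsistency percentage could be computed from the paired committee decisions:

```python
def inconsistency_rate(decisions):
    """Percentage of duplicated papers whose two committees disagreed.

    `decisions` is a list of (decision_a, decision_b) pairs, each
    "accept" or "reject"; accept/reject or reject/accept counts as
    inconsistent per the fine print above.
    """
    inconsistent = sum(1 for a, b in decisions if a != b)
    return 100.0 * inconsistent / len(decisions)

# Hypothetical example: 4 duplicated papers, 1 disagreement.
pairs = [("accept", "accept"), ("reject", "reject"),
         ("accept", "reject"), ("reject", "reject")]
print(inconsistency_rate(pairs))  # 25.0
```

Your forecast is an estimate of what this number will be for the actual duplicated set announced at the conference.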

==============

SciCast is a combinatorial prediction market, enabling probabilistic
forecasts that can depend on assumptions about the outcomes of
relevant events. Market forecasts are updated in real-time using
state-of-the-art computational Bayesian inference techniques. You can
also participate in online experiments that compare different
forecasting and evidential accrual methodologies on the same forecast 
questions.
_______________________________________________
uai mailing list
[email protected]
https://secure.engr.oregonstate.edu/mailman/listinfo/uai
