Greetings Joe,

Thanks for designing this questionnaire. It looks useful.
On 1 June 2013 04:10, Joe Touch <to...@isi.edu> wrote:
> TPC meetings in person are much more effective in discussing papers than
> any alternative, for the same reasons as in-person conferences.

In-person conferences are useful because they promote fruitful unplanned conversations that can generate new ideas, and because they build relationships. TPC meetings are about having a conversation on a particular topic, which may involve careful re-reading and/or verifying facts. The latter is much better suited to a multi-day on-line discussion than the former is.

Another difference is that discussions at in-person conferences are between experts in the area. If all TPC members have read the paper, then I agree that an in-person discussion is the most effective option. However, in cases like INFOCOM, where the TPC meeting discussions deliberately involve only people who were *not* reviewers (in order to "review the reviews"), I think that the in-person meeting is less useful than a thorough on-line discussion between the reviewers.

A third difference is that most conferences last more than 10 hours, so the travel cost is amortized over a much more substantial event. Coming from Australia, that travel cost is typically ~50 hours round trip (more than the hours nominally worked in a week), and equivalent to driving an SUV ~100 km each day for an entire year. If that isn't daunting, I'll book you to give us a seminar sometime :)

I would strongly recommend that the criterion become: "Of the three media, (a) a long/active on-line discussion phase, (b) an in-person TPC meeting, and (c) a remote-access TPC meeting, the conference:
  E  Employs all three
  A  Employs two out of three
  D  Employs 0 or 1 of the three"

On 31 May 2013 06:16, Joe Touch <to...@isi.edu> wrote:
> On 5/30/2013 12:47 AM, Martin Gilje Jaatun wrote:
>
>> The problem with acceptance rates is that they are so easy to game - and
>> according to this, a conference that receives 100 great papers and
>> accepts 60 of them is worse than a conference that gets 1000 junk
>> submissions and accepts 400 of them...
>
> I don't agree that this can be 'gamed' on a persistent basis.
> Conferences that get 100 great papers will later get 1000. It's
> impossible to target a voluntary audience so directly that this happens
> without correction over several events.

That is true for conferences with a broad scope, such as the flagship conferences, but not for more specialized conferences, however high their quality. Conversely, a poor conference may continue to attract 1000 submissions because it is known to be easy to get into.

I agree that acceptance rate is a useful metric, provided it isn't given undue weight. For conferences that have existed for a few years, a more useful metric would be the average citations per paper over some time interval. If the IEEE could provide a script to scrape that from Google Scholar, that would be a great separate contribution (I sketch roughly what I have in mind in a postscript below). I'd love to be able to distinguish between the many conferences on a new topic (IoT, smart-grid, cloud, ...) without waiting for reputation to spread by word of mouth.

$0.02,
Lachlan

--
Lachlan Andrew
Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew>
Ph +61 3 9214 4837
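
P.S. Re the citations-per-paper metric: here is a very rough sketch of the sort of script I mean. It assumes the third-party Python package "scholarly" (a wrapper around Google Scholar queries); the dict fields ('num_citations') and the year_low/year_high keywords are my guesses at that package's interface rather than anything official, and Scholar throttles rapid queries, so a real script would need delays, retries, and some care with venue name matching.

# Rough sketch only: average citations per paper for one venue and year.
# Assumes the third-party "scholarly" package (pip install scholarly);
# the fields and keyword arguments below are assumptions, not a vetted
# API, and Google Scholar rate-limits aggressive scraping.
from scholarly import scholarly

def average_citations(venue, year, max_papers=200):
    # 'source:' restricts a Scholar query to a publication venue.
    query = 'source:"%s"' % venue
    counts = []
    for pub in scholarly.search_pubs(query, year_low=year, year_high=year):
        counts.append(pub.get('num_citations', 0))
        if len(counts) >= max_papers:  # cap how many results we fetch
            break
    return sum(counts) / float(len(counts)) if counts else 0.0

if __name__ == '__main__':
    # Hypothetical example: one flagship venue, one publication year.
    print(average_citations('IEEE INFOCOM', 2010))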