Hi, Anthony,

The point of this is to support a bureaucratic process: TCS and TC endorsement. Right now, endorsement requires naming two TC members who will oversee a meeting, but there is no suggestion as to what kinds of information they will report back to the TC.
This isn't intended as a fixed form: TC members can report anything they want, and TCs can ask for anything they want. This is just intended as a place to start.

I appreciate that others on this list have various views on whether this can or should be a TC or ComSoc doc. I'll leave that to the TC chairs or ComSoc to decide. Regardless of whether they decide to move forward with this or "scrap" it, I will be posting it on a site I maintain for conference best practices anyway, FWIW.

Joe

On 5/30/2013 2:09 PM, Anthony Ephremides wrote:
> I think this whole idea is another example of bureaucratic thinking within
> the Society. Any such ranking will be fraught with inaccuracies and will
> convey erroneous messages. We all know a good conference when we see one.
> Rejection rates as a metric of quality?? Holding a TPC meeting as a measure
> of quality? Who will measure the quality of the reviews and of the authors?
> My suggestion is to scrap the project.
>
> AE
>
> Anthony Ephremides
> Distinguished University Professor and
> Cynthia Kim Eminent Professor of Information Technology
> ECE dept and ISR
> University of Maryland
> College Park, MD 20742
> 301-405-3641
> etony(at)umd(dot)edu
>
> -----Original Message-----
> From: Henning Schulzrinne [mailto:h...@cs.columbia.edu]
> Sent: Thursday, May 30, 2013 2:33 PM
> To: Ken Calvert
> Cc: Tccc@lists.cs.columbia.edu; tc...@ri.uni-tuebingen.de; i...@comsoc.org; Joe Touch
> Subject: Re: [Tccc] ComSoc technical cosponsorship - rating the review process
>
> Also, "ranking" could be seen as meaning that a TPC member ranks the papers
> within their review portfolio (paper #7 is best, #8 second best). (Infocom
> tried this, I believe.) I don't think that works all that well, but "ranking"
> may well refer to the usual "definite accept" to "definite reject" scale or
> "in top 10% of papers". Given the tendency of the first kind of ranking to
> concentrate around the non-committal middle, the latter seems more helpful,
> but I'm not sure that's a "best" practice.
>
> On May 30, 2013, at 11:40 AM, Ken Calvert <calv...@netlab.uky.edu> wrote:
>
>> Hi Joe -
>>
>> Good idea, thanks for doing this. I think your proposal is pretty much on
>> target. Just a couple of thoughts on #6:
>>
>>> 6. paper review process                             E/A/D
>>>
>>>    E = considers average rank AND outlier info, discussion points
>>>        also based on natural 'gap' in evaluation
>>>    A = considers average rank based on natural gap in evaluation
>>>    D = considers rank only
>>
>> (i) I interpret these criteria as referring to the accept/reject decision
>> process rather than the "paper review process". Perhaps the title should
>> be "acceptance decision process" or something like that?
>>
>> (ii) What about considering the transparency of the decision process?
>> I.e., whether all (or almost all) decisions are made in full view of the
>> TPC and with the TPC's approval, or at least the opportunity to object.
>>
>> (iii) Can you please clarify what you mean by "natural gap in evaluation"?
>> I would probably interpret this to mean that the accept/reject line is
>> drawn, as far as possible, so that there is a clear gap between the
>> (average ratings of the) accepted papers and the rejected papers. But I
>> don't think that's realistic - especially in large/general conferences,
>> where there are papers from many areas, there will not be a bright line in
>> the ratings/rankings between rejected and accepted papers. This also seems
>> to conflict with "considers rank only" being Deficient.
>> So maybe I've just not understood what this means.
>>
>> Cheers,
>>
>> KC
>>
>> On 29 May 2013, at 14:05, Joe Touch <to...@isi.edu> wrote:
>>
>>> Hi, all,
>>>
>>> As part of the ComSoc technical cosponsorship (TCS) process, TCs are
>>> supposed to nominate at least two members of the TPC who will monitor
>>> the review process.
>>>
>>> However, there don't appear to be any guidelines for providing
>>> feedback on that process.
>>>
>>> I've drafted the following, which I hope will open a discussion on
>>> this issue. If it evolves into something useful, perhaps it can be
>>> posted on the TC websites for use by those appointed to monitor
>>> TC-endorsed TCS'd meetings.
>>>
>>> NB: I've cross-posted this to TCCC, ITC, and TCHSN, which are where I
>>> participate primarily; if any other TC has suggestions, please take
>>> the discussion to the TCCC list if possible.
>>>
>>> Thanks,
>>>
>>> Joe
>>>
>>> -----------------------------------------------------------------
>>>
>>> Rating system:
>>>    EXCELLENT    best practice to be aspired to
>>>    AVERAGE      acceptable practice
>>>    DEFICIENT    cause for concern for ComSoc involvement
>>>
>>> 1. TPC participation invitation                     E/A/D
>>>
>>>    E = before first Call for Papers (CFP) issued
>>>    A = before CFP submissions due
>>>    D = after CFP submissions due
>>>
>>> 2. involvement in CFP promotion                     E/A/D
>>>
>>>    E = invited to forward CFP and submit
>>>    A = invited to submit
>>>    D = neither
>>>
>>> 3. paper assignment for review                      E/A/D
>>>
>>>    E = invited to select papers based on expertise and abstracts/titles
>>>    A = invited to select based on topic area
>>>    D = not invited to select
>>>
>>>    NB: "everyone reviews all" = E
>>>
>>> 4. paper review format                              E/A/D
>>>
>>>    E = includes rank, feedback for author, and private
>>>        feedback for TPC discussion
>>>    A = includes rank and author feedback
>>>    D = includes only rank
>>>
>>> 5. TPC meeting                                      E/A/D
>>>
>>>    E = in-person meeting with support for remote participation
>>>    A = in-person with no remote support, or telecon/e-mail only
>>>    D = no meeting
>>>
>>> 6. paper review process                             E/A/D
>>>
>>>    E = considers average rank AND outlier info, discussion points
>>>        also based on natural 'gap' in evaluation
>>>    A = considers average rank based on natural gap in evaluation
>>>    D = considers rank only
>>>
>>> 7. paper reviews returned                           E/A/D
>>>
>>>    E = >=3 substantive reviews returned with rank and
>>>        comments for the authors
>>>    A = >=3 substantive reviews returned with rank and
>>>        at least a rationale for rejects
>>>    D = <3 reviews for some papers, reviews not returned at all,
>>>        or only rank provided
>>>
>>> 8. paper accept rate                                E/A/D
>>>
>>>    E = <=50%, based on natural gap in paper evaluation
>>>    A = <=50%, not based on 'gap'
>>>    D = >50%
>>>
>>> ------------------------------------------------------
>>
>> Ken Calvert
>> Professor and Chair, Computer Science Department
>> Acting Director, Vis Center
>> University of Kentucky
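
P.S. To make the "natural gap" idea that Ken asks about in (iii) a bit more concrete, here is a minimal sketch under one possible reading of items 6 and 8: sort papers by average score, cap the accept count at 50%, and cut the list at the largest drop between consecutive scores. The function name, the 1-5 score scale, and the 50% parameter are illustrative only; nothing below is part of the ComSoc process or any TC document.

    # Illustrative sketch only: cut the score-ordered list at the largest
    # "natural gap", while keeping the accept rate at or below 50% (item 8).

    def accept_by_natural_gap(avg_scores, max_accept_rate=0.5):
        """Return the paper ids to accept.

        avg_scores: dict mapping paper id -> average review score
                    (higher = better), e.g. on a 1-5 scale.
        """
        ranked = sorted(avg_scores, key=avg_scores.get, reverse=True)
        # Cap by accept rate; also keep at least one paper on the reject
        # side so there is a gap to measure.
        max_accepts = min(int(len(ranked) * max_accept_rate), len(ranked) - 1)
        if max_accepts <= 0:
            return []

        # Find the largest drop between consecutive scores among the cut
        # points allowed by the accept-rate cap, and cut the list there.
        best_cut, best_gap = max_accepts, -1.0
        for cut in range(1, max_accepts + 1):
            gap = avg_scores[ranked[cut - 1]] - avg_scores[ranked[cut]]
            if gap > best_gap:
                best_gap, best_cut = gap, cut
        return ranked[:best_cut]

    # Example: six papers scored on a 1-5 scale.
    scores = {"p1": 4.7, "p2": 4.6, "p3": 3.9, "p4": 3.8, "p5": 2.1, "p6": 1.9}
    print(accept_by_natural_gap(scores))  # -> ['p1', 'p2'] (gap between 4.6 and 3.9)

Ken's concern in (iii) still applies, of course: in a large, broad conference the "largest drop" may be tiny, in which case the gap gives little guidance and the decision falls back on average rank, outliers, and discussion as in item 6.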
_______________________________________________
IEEE Communications Society Tech. Committee on Computer Communications
(TCCC) - for discussions on computer networking and communication.
Tccc@lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/tccc