Thanks, Greg, for sending out the Cohen chapter. Cohen, and the data he presents, seem somewhat down on group-based code reviews, finding them largely inefficient at detecting true bugs (versus false positives) relative to having a single reviewer.
In my research group, we’ve been devoting a lab meeting each month to reviewing someone’s recent code. Each person takes a different line, tries to figure out what it is doing, and then we discuss how the code all fits together. Aside from finding bugs, I’d like to think that an added benefit of the group approach is the training aspect. A junior person can learn from a more senior person’s code and insights. Similarly, if a junior person can’t “get” the creativity of the senior person, then it might be time to simplify - it’s kind of like the arguments for pair programming. Of course, I’m in an academic setting where education is part of the mission, but I’d like to think that it also makes people more productive and easier to onboard.

I’m curious whether you or anyone else knows of any literature looking at the training aspect of group-based code reviews.

Thanks!
Pat

> On Nov 18, 2015, at 2:09 PM, Greg Wilson <[email protected]> wrote:
>
> Hi Lex,
>
> I think this is a great idea - we'd really like a blog post during/after
> about how it goes.
>
> 1. I've attached a copy of Jason Cohen's chapter from "Making Software" about
> recent (as of 2011) empirical results on code review - it's a quick read, and
> I think it does a good job of explaining why code review is more than just
> fashionable.
>
> 2. The complaint I always had from students when I was teaching code review
> in software engineering classes was that they didn't know what to look for,
> even when given checklists. I tried to address this by doing a review live
> in front of them (and then getting another programmer to come in and do a
> review on a different piece of code, thinking aloud). It seemed to make a
> big difference to their comfort levels.
>
> 3. An experiment I *didn't* try, but wanted to, was this:
>
> a) you pick the piece of code and review it, but don't share the review with
> the students
>
> b) they each review the same piece of code independently
>
> c) their score is based on how closely their review matches yours, i.e., 100%
> - (false positives) - (false negatives).
>
> I've seen this done in grading short essays, and again, it gives people an
> idea of what to look for so that they don't feel they've been thrown in at
> the deep end.
>
> Hope this is helpful - really look forward to hearing how it goes,
> Greg
>
> On 2015-11-18 4:02 PM, Lex Nederbragt wrote:
>> Hi,
>>
>> I have a 20% position at the informatics department here in Oslo (the rest
>> is at biology), where I (co)supervise Master and PhD students from the
>> Biomedical Informatics group. The group has about a dozen MSc students,
>> involved in a range of different projects, but most will have a coding
>> component. We recently discussed whether we could have the students review
>> each other’s code ('Code peer review'? 'Peer code review'?) as a way to
>> improve their coding skills, in a hopefully low-threshold way (rather than
>> have the professors look at it).
>>
>> About the format, we feel students should show each other small bits of code
>> (not all of it at once), e.g. one not-too-long function that makes for a
>> doable review exercise: understand the context and what the code is supposed
>> to do, perhaps even test it; it should not take too long for the reviewer.
>> The students can do the entire review process as a pair, or the reviewer
>> looks at it before they meet to prepare questions and suggestions for
>> improvement. Once in a while we could ask students to demonstrate an example
>> of what they found to the whole group, e.g. a piece of code before-and-after.
>>
>> We were wondering whether others have tried such an approach, or whether it
>> sounds hopelessly ambitious...
>>
>> Thanks in advance,
>>
>> Lex Nederbragt
>>
>> --
>> Lex Nederbragt
>> Centre for Ecological and Evolutionary Synthesis (CEES)
>> Dept. of Biosciences, University of Oslo
>> P.O. Box 1066 Blindern, 0316 Oslo, Norway
>> Ph. +47 22844132 / +47 48028722  Fax +47 22854001
>> Email: [email protected]
>> http://flxlex.flavors.me/
>>
>> _______________________________________________
>> Discuss mailing list
>> [email protected]
>> http://lists.software-carpentry.org/mailman/listinfo/discuss_lists.software-carpentry.org
>
> --
> Dr. Greg Wilson | [email protected]
> Software Carpentry | http://software-carpentry.org
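As a quick illustration of Greg's scoring idea in item (c) above, here is a minimal Python sketch. Note one assumption not in Greg's formula: since "100% - (false positives) - (false negatives)" leaves the denominator unspecified, both error counts are normalized here by the size of the instructor's reference review. The function and issue names are purely illustrative.

```python
# Minimal sketch of the review-matching score from item (c) above:
#   score = 100% - (false positives) - (false negatives)
# Assumption: both error counts are normalized by the number of issues
# in the instructor's reference review (the original formula does not
# specify a denominator). All names here are illustrative.

def review_score(instructor_issues, student_issues):
    """Percentage score for how closely a student's review
    matches the instructor's reference review."""
    instructor = set(instructor_issues)
    student = set(student_issues)
    false_positives = student - instructor   # flagged, but not in reference
    false_negatives = instructor - student   # in reference, but missed
    n = max(len(instructor), 1)              # avoid division by zero
    penalty = (len(false_positives) + len(false_negatives)) / n
    return max(0.0, 100.0 * (1.0 - penalty))

# A student who finds one of the two reference issues and flags
# nothing spurious scores 50%.
print(review_score(["off-by-one", "unclosed file handle"],
                   ["off-by-one"]))  # -> 50.0
```

One could argue about whether false positives and false negatives deserve equal weight; the sketch keeps them equal, as Greg's formula does.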
