On 04/23/2013 08:43 AM, Robin Berjon wrote:
On 22/04/2013 13:12 , James Graham wrote:
(as an aside, I note that Critic does a much better job here. It allows
reviewers to mark when they have completed reviewing each file in each
commit. It also records exactly how each issue raised was resolved,
either by the commit that fixed it or by the person who decided to mark
the issue as resolved)
You may wish to introduce Critic a bit more than that; I'm pretty sure
that many of the bystanders in this conversation aren't conversant with it.
Right, I lost track of which list this conversation was on :)
"Opera Critic", commonly known simply as "Critic", is a code review
system written by Jens Lindström to solve the major pain points we had
experienced with all the other code review systems we had tried at Opera.
Critic is written to work with git, and has the following workflow:
* Each set of changes to be reviewed is a single branch containing one
or more commits
* A set of possible reviewers for each changed file is assigned based on
preconfigured, path-based filters
* The reviewer(s) review the commits, raising issues in general, or
against specific lines, marking each file as "reviewed" once they have
finished commenting on it (irrespective of whether it has issues).
* The submitter (or anyone else) pushes additional commits to the same
branch in order to address the issues flagged by the reviewer.
* Critic automatically resolves issues where the new commits have
altered code that previously had an issue
* The reviewer reviews the new commit, adding new issues or reopening
old issues that were incorrectly marked as resolved.
* Where clarification is needed, the reviewer and author add
additional comments to the various issues. If it turns out that the
issue is not a problem, or that it was fixed in some way undetected by
Critic, it is manually marked as resolved.
* Finally, once all changes are marked as "reviewed" and no issues
remain, the review is automatically marked as "Accepted".
* Someone (typically the author, but it could also be the reviewer)
proceeds to integrate the review branch with master in the normal way.
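The auto-resolution step above is the interesting bit, so here is a
rough sketch of the idea (a minimal model with invented names and data
structures, not Critic's actual implementation, which tracks lines
across diffs far more carefully):

```python
# Sketch of issue auto-resolution: an issue raised against a range of
# lines in a file is considered addressed once a later commit modifies
# lines overlapping that range. Names and structures here are invented
# for illustration.

from dataclasses import dataclass


@dataclass
class Issue:
    path: str
    start: int          # first line the issue covers (1-based)
    end: int            # last line the issue covers
    resolved: bool = False


def auto_resolve(issues, changed_ranges):
    """Mark issues resolved when new commits touch their lines.

    changed_ranges maps a file path to the list of (start, end) line
    ranges modified by the newly pushed commits.
    """
    for issue in issues:
        for start, end in changed_ranges.get(issue.path, []):
            # Two ranges overlap if neither lies entirely before the other.
            if start <= issue.end and issue.start <= end:
                issue.resolved = True
                break
    return issues
```

So an issue against lines 10-12 of a file would be auto-resolved by a
follow-up commit touching lines 11-20 of that file, while issues in
untouched files stay open (and can be reopened by the reviewer if the
fix was wrong).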
As you can see, this has a great deal in common with the GitHub pull
request process. Pull requests are branches to which the author pushes
more commits in order to respond to issues. Compared to the GitHub
experience, Critic offers a number of significant advantages:
* Automatic assignment of reviewers to changes based on
reviewer-supplied filters (so, for example, an XHR test facilitator
could set themselves up to review only XHR-related tests without also
getting email about 2dcontext tests).
* The ability to review multiple commits squashed together into a single
diff.
* Tracking of which files in which commits have already received review
and which still require review.
* A record of which issues are still to be addressed and which have been
resolved, and how.
* A significantly better UI for doing the actual review, notably
including a side-by-side view of the before/after code (with changes
highlighted of course) so that one can read the final code without
having to mentally apply a diff.
* Less email (for some reason GitHub thinks it's OK to send one email
per review comment, whereas Critic allows comments to be sent in
batches).
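To make the first point concrete, reviewer assignment might work
roughly like this (the glob-style filter syntax and the names here are
assumptions for illustration, not Critic's actual filter format):

```python
# Hypothetical sketch of path-based reviewer assignment: each reviewer
# registers path filters, and each changed file is offered to every
# reviewer whose filter matches it. Glob patterns via fnmatch are an
# assumed syntax, chosen for illustration.

from fnmatch import fnmatch

# Invented example filters: reviewer name -> list of path patterns.
FILTERS = {
    "xhr-facilitator": ["XMLHttpRequest/*"],
    "canvas-reviewer": ["2dcontext/*"],
}


def assign_reviewers(changed_files, filters=FILTERS):
    """Return a mapping from each changed file to its matching reviewers."""
    assignments = {}
    for path in changed_files:
        assignments[path] = sorted(
            reviewer
            for reviewer, patterns in filters.items()
            if any(fnmatch(path, pattern) for pattern in patterns)
        )
    return assignments
```

With filters like these, a push touching only XMLHttpRequest/ files
generates review work (and email) for the XHR facilitator alone; the
2dcontext reviewer never hears about it.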
Noticing the similarity with the GitHub workflow, I have extended
Critic to integrate with GitHub. In particular, I added two features:
* Critic authentication against github using OAuth.
* Integration with pull requests so that new pull requests create new
review requests, pushes to pull requests are mirrored in the
corresponding review and closing pull requests closes the review.
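The core of the pull-request bridge is mapping GitHub's "pull_request"
webhook events onto review actions. The payload fields ("action",
"number") follow GitHub's webhook format; the review-side commands
returned here are invented for illustration:

```python
# Sketch of the pull-request bridge: translate a GitHub "pull_request"
# webhook payload into an action on the corresponding review. The
# "action" values below ("opened", "synchronize", "closed") are real
# GitHub webhook actions; the returned commands are invented.

def dispatch_pull_request_event(payload):
    """Map a pull_request webhook event to a (command, PR number) pair."""
    action = payload["action"]
    number = payload["number"]
    if action == "opened":
        # A new pull request creates a new review request.
        return ("create_review", number)
    if action == "synchronize":
        # New commits pushed to the PR branch are mirrored in the review.
        return ("update_review", number)
    if action == "closed":
        # Closing the pull request closes the review.
        return ("close_review", number)
    # Other actions (labels, assignments, ...) are ignored.
    return ("ignore", number)
```

The OAuth piece is separate: it only lets people sign in to the Critic
instance with their existing GitHub identity, so no new accounts are
needed.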
I have set up a Critic instance for the web-platform-tests repository
at [1] (you have to sign in using your GitHub credentials). So far my
experience is that it makes it possible to review changes that are
essentially unreviewable on GitHub, with a minimum of accidental
complexity (e.g. [2]; yes, that review still has a lot of work to be
done).
There is a certain amount of controversy around using Critic for
web-platform-tests, with some people taking the position that we should
exclusively use the undoubtedly weaker, but more familiar, GitHub tools
to make things easier for new contributors. My view is that review is
intrinsically difficult enough that we should explore all the options
we have for making it easier, especially given the particular dynamics
of test reviews. Testsuites are often developed internally and then
submitted to a standards body at the last moment, so they are commonly
large code dumps rather than small changes that can be reviewed in a
single sitting. This is the situation in which the GitHub tools become
almost useless. Also, even in an ideal situation with many contributors,
I would expect test submissions to follow something like the 80/20 rule
(80% of tests from 20% of submitters) and reviews to be even more
strongly biased to a minority. Therefore having tools that can make
those 20% more efficient is worthwhile, even if we can only use them 80%
of the time.
[1] https://critic.hoppipolla.co.uk/
[2] https://critic.hoppipolla.co.uk/r/5