Hi Ben,
It isn't just the tool itself which has to be maintained: we have commit hooks, integration with other bits of infrastructure and so forth which also need to be both implemented and maintained.
In the case of Gerrit, there is no need for custom hooks because they stay on git.kde.org, so I believe this point is not relevant to its adoption. The whole setup has been designed and implemented in a way suitable for long-term parallel operation alongside KDE's git.
As for the integration bits, they're done now. The tool just talks to LDAP, and maintenance of that connection is effectively zero, unless a dramatic revamp of our LDAP is planned. The repo mirroring was a matter of setting up a single user account and configuring proper ACLs, and that is also finished already.
I can understand the general reasons for limiting the number of services which we offer and support. However, I would appreciate it if we spelled out how big these costs actually are, as there's some room for misinterpretation otherwise.
The more custom work we have, the harder it is to upgrade things.
While true in general, I fail to see how it is relevant to Gerrit. What custom bits are involved here?
We'll confuse newcomers if projects A, B and C are reviewed on tool X while projects D, E and F are reviewed on tool Y.
I haven't received a single complaint from the Trojita GCI students about any difficulty with this. They did struggle with making an actual change to the code, with proper coding style, with commit message formatting, with git in general, they even failed to understand the written text about CCBUG and BUG keywords on our wiki, but no, I haven't seen them struggle with having to use Gerrit or with its differences from RB. YMMV, of course.
Because the majority of complaints actually came from people who are well-versed with ReviewBoard, my best guess is that there's muscle memory at play here. This is supported by an anecdote -- when I was demoing the RB interface to a colleague who maintains Gerrit at $differentOrg, we both struggled with finding buttons for managing a list of issues within RB. It's been some time since I worked with RB, and it showed.
I remember having a hard time grokking the relation between a "review" and "attaching/updating a file" on RB. I didn't read the docs, and it showed.
A single tool would be best here. Let me make it clear that it is not a case of Reviewboard vs. Gerrit here, as other options need to be evaluated too.
I understand that people would like to avoid migrating to Gerrit if a migration to a $better-tool was coming. Each migration hurts, and it makes a lot of sense to reduce the number of hops.
However, what I'm slightly worried about is postponing Gerrit indefinitely until all future replacements are evaluated. I don't see people putting significant time into any alternative for code review right now. Is there any chance of these people making themselves known in the near future? How long is a reasonable time to wait for a test deployment of alternative tools? When are we going to narrow the candidates down to just the contenders which have actually been deployed and tested by some projects?
Regarding the difficulty of Gerrit - I tend to agree with this argument (it took me at least a minute to find the comment button, and I didn't even get to reviewing a diff).
The documentation, however, explains the functionality in a pretty clear manner; see https://gerrit.vesnicky.cesnet.cz/r/Documentation/user-review-ui.html .
We also aren't the first project trying to work with Gerrit, so there's plenty of tooling available right now, not "to be written". There's a text-mode interface, the "gertty" project; there's integration in QtCreator; there are pure CLI tools for creating reviews, other web UIs in development, and there are even Android clients.
Plus there are major concerns with integration into our infrastructure, such as pulling SSH keys from LDAP for instance (no, you can't have the tool maintain them itself - as mainline Git and Subversion need the keys too).
Yes, SSH-keys-in-LDAP is a PITA, but given that one needs a patched OpenSSH to look up keys from LDAP anyway, I don't think this is a blocker issue. The situation is exactly the same with the Gitolite setup which currently runs on git.k.o though, as that doesn't talk to LDAP either. As you mentioned during our IRC chat, there's a Python daemon which polls for changes in LDAP, and propagates these into Gitolite's config backend in a non-public git repo. Why wouldn't this be enough for Gerrit, then?
Gerrit has both an SSH-authenticated API and a REST HTTPS API which let an admin account add and remove SSH keys. *If* this is needed, I'll be happy to make it work; it's simply a matter of calling two trivial scripts. Would you see any problems with hooking into the identity webapp or its backend, if it has one, for this? An edge trigger would be cool.
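To illustrate, here's a rough sketch of what one of those scripts could look like; the host and credentials are placeholders, and depending on the Gerrit auth setup the request may need digest instead of basic auth:

    # Rough sketch of one of the "two trivial scripts": push a public key
    # for a user via Gerrit's REST API (POST /a/accounts/<user>/sshkeys).
    # GERRIT_URL and AUTH are placeholders, not our actual setup.
    import requests

    GERRIT_URL = "https://gerrit.example.org/r"
    AUTH = ("admin", "http-password")  # basic or digest, per the auth config

    def push_ssh_key(username, pubkey):
        """Upload one SSH public key for the given Gerrit account."""
        r = requests.post(
            "%s/a/accounts/%s/sshkeys" % (GERRIT_URL, username),
            data=pubkey,
            headers={"Content-Type": "text/plain"},
            auth=AUTH)
        r.raise_for_status()

    # The LDAP side could feed this from the same daemon which already
    # propagates key changes into Gitolite's config repository, e.g.:
    # for user, key in keys_from_ldap():   # hypothetical helper
    #     push_ssh_key(user, key)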
Are there any other concerns?
Please note that any discussion of tools should be on the merits of the tools themselves. Things like CI integration are addons, which both Reviewboard and Gerrit are capable of.
With varying levels of ease, I should add. In the end, everything is achievable, and you can write tools which automatically pull patches sent through mailing lists and build them, but the question is who is going to do the work, and when it's going to be ready. I know that the CI+Gerrit thing is now done and solved, and I also know that I won't be spending my time redoing the same for ReviewBoard. Nobody has bothered to implement this pre-merge CI for RB in the past years. Do we have some volunteers now?
The only reason we don't have Reviewboard integration yet is a combination of technical issues (lack of SNI support in Java 6)
And nobody caring enough to do the work, I suppose. AFAIK Jenkins runs on Java 7 just fine, but apparently nobody found time for such an upgrade. There's nothing wrong with this, of course, but it doesn't suggest that these people will suddenly have time to set up pre-merge CI with RB.
and resource ones (some projects take a long time to complete, and I'm concerned we don't have the processing power).
This is not really an all-or-nothing question. If there's a project which exceeds our current HW capacity (you mentioned Calligra before, right?) and we cannot easily get more HW (did someone ask the foundation's treasurers for funds for HW rental, or approach some of the obvious candidates such as RedHat or SuSE asking for HW access?), perhaps that project can simply be omitted from these pre-merge CI runs.
We've chatted a couple of times about the limits of the current CI setup and its inability to perform checks in parallel. Is the existing architecture going to scale to these pre-merge CI runs without substantial changes?
Again, this is something which is solved now with the CI setup that is behind Gerrit. The changes were in the glue code, the code which schedules the builds, distributes the jobs and decides when to build what. It's still using the KDE CI's Python scripts for managing library deps and for actually launching the build (I sent the necessary patches your way).
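For the curious, the overall shape of that glue is roughly the following. This is a simplified sketch, not the deployed code; schedule_build() is a hypothetical stand-in for the part which hands the job over to the existing Python build scripts:

    # Simplified sketch of the glue's overall shape; schedule_build() is a
    # hypothetical stand-in for handing a job to the existing build scripts.
    import json
    import subprocess

    def schedule_build(project, ref):
        print("would build %s at %s" % (project, ref))

    def listen(gerrit_host):
        """Follow Gerrit's event stream and schedule a build per new patch set."""
        proc = subprocess.Popen(
            ["ssh", "-p", "29418", gerrit_host, "gerrit", "stream-events"],
            stdout=subprocess.PIPE)
        for line in iter(proc.stdout.readline, b""):
            event = json.loads(line.decode("utf-8"))
            if event.get("type") == "patchset-created":
                schedule_build(event["change"]["project"],
                               event["patchSet"]["ref"])

    if __name__ == "__main__":
        listen("gerrit.vesnicky.cesnet.cz")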
In terms of a modern and consistent project tool - I agree here. A long term todo item of sysadmin is to replace projects.kde.org. The question is of course - with what. Chiliproject is now unmaintained, so we do have to migrate off to another solution at some point. If the new tool happens to be more integrated in terms of code review, that is a bonus from my point of view (as it means the integration will be better, and there is one less piece of infrastructure to maintain).
See above for my view of mixing the quest for finding a decent project management tool with the quest for a good code review system. To go a bit to the meta side, IMHO a universal tool which does plenty of things in a not-excellent manner is worse than using a diverse set of tools which each do one thing, and do it well. That's why I see a Chiliproject replacement as an orthogonal topic to the choice of a code review tool.
@Jan: could you please outline what you consider to be the key advantages? At the moment I understand that you are after: 1) CI integration to pre-validate the change before it gets reviewed
- With the CI actually testing not just the change in isolation (i.e. on top of whatever was in the repo when the change was written), but the result of the change as applied to the current state of the repo at merge time, and with a user-visible possibility of retriggering a check job.
- Being able to do "trunk gating" for projects which care enough. That is, there's a tool which makes sure that there are no regressions, and which won't let in commits which break the build or cause tests to fail. (And yes, I know that there'll always be an option of direct pushes, I am not pushing against that, it appears to be a point which people require. OK with me.)
- Being able to do cross-project verification, i.e. "does this change of kio break plasma-framework?"
- Performing builds on various base OSes, against the Qt version provided by the system (Trojita aims to support Qt 4.6-4.8 and 5.2+), using ancient compilers (yes, C++11 and GCC 4.4 are so much fun when taken together) and different mixes of optional dependencies and features to be built. In short, testing in the environments people will actually be using, including Windows and Mac OS X (a toy sketch of such a matrix follows this list).
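To give a rough idea of how quickly these axes multiply, here's a toy enumeration; the concrete values below are illustrative, not the actual job list:

    # Toy enumeration of a build matrix; the axes and values below are
    # illustrative, not the actual job list.
    from itertools import product

    axes = {
        "os": ["Linux", "Windows", "OS X"],
        "qt": ["4.8", "5.2"],
        "compiler": ["gcc-4.4", "gcc-4.8", "clang", "msvc"],
        "webkit": ["on", "off"],
    }

    for combo in product(*axes.values()):
        config = dict(zip(axes, combo))
        if config["compiler"] == "msvc" and config["os"] != "Windows":
            continue  # skip combinations which make no sense
        print(config)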
2) Ability to directly "git pull" the patch (which Phabricator's arc tool would meet I believe?)
With Gerrit, one has access to the full history of each change, including offline access. This is opt-in, so people who don't care will not have their clones "polluted by this nonsense", while people who do care have them "enhanced by this valuable data". This happens with no extra tooling. I'm sure I could come up with e.g. local scripts that build these git refs from the (history of) patches on RB, Phabricator, GitHub, Gitorious or whatever, but native support trumps scripting each time.
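As a concrete example of what "no extra tooling" means: a plain `git fetch` is enough, since Gerrit publishes each patch set under refs/changes/<last two digits>/<change>/<patch set>. The change number and host below are made up; the tiny wrapper just spells out the ref naming:

    # Made-up change number and host; roughly how one gets the actual
    # commits of patch set 3 of change 1234 into a local clone.
    import subprocess

    def fetch_change(remote, change, patchset):
        ref = "refs/changes/%02d/%d/%d" % (change % 100, change, patchset)
        subprocess.check_call(["git", "fetch", remote, ref])
        subprocess.check_call(["git", "checkout", "FETCH_HEAD"])

    fetch_change("https://gerrit.example.org/r/trojita", 1234, 3)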
According to the docs, `arc` is something which just pushes patches around. Working with patches is different from having a git ref to work on.
Have you ever used OpenSUSE's Build Service and its CLI client, `osc`? That's what happens when one tries to reimplement an SCM inside a tool whose primary focus is something else. My favorite misfeature is the "expand and unexpand linked packages" thingy, and the associated hiding of merge conflicts when a source package changes. That's not fun to debug, and it won't ever happen with plain git, because git comes with excellent support for merges. That support is a result of many years of extremely heavy use of git by a ton of developers. I don't expect Facebook to be able to match *that* regardless of their engineering size, and therefore I expect that `arc` will fail when people use it for non-obvious stuff.
I suppose most of our developers are either already familiar with git, or they will have to learn it anyway to be able to participate in our community in an efficient manner. Introducing another patch manager into the mix doesn't help, IMHO. This is just to illustrate my experience with tools that behave quite like an SCM but do not actually implement full SCM functionality: they work well for quick demos, but fall apart when you start using them seriously.
I would encourage anyone who evaluates these tools to pay attention to these not-so-obvious usage issues. Having a CLI tool that can fetch a patch and apply it to a local checkout is not equivalent to native git refs. It's an important building block, but not a finished tool.
I know that I won't be in a business of building these tools. I'm quite happy with having them out-of-box with Gerrit.
Cheers, Jan -- Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/