On 19Feb2017 1035, Donald Stufft wrote:

On Feb 18, 2017, at 4:38 PM, Steve Dower <steve.do...@python.org> wrote:

(I'm not subscribed to this list, so please keep me on CC)

Was there any discussion about allowing core developers to bypass any
of the PR checks? Or was there an assumption that we'd just push
directly to the main repo?

See: https://mail.python.org/pipermail/core-workflow/2017-February/000884.html

Thanks. The outcome of that thread (let's wait for a month) is the right one. Like Barry, I was just impatient.

For example, most of my contributions are to do with the Windows
installer. Due to the nature of installers, virtually nobody else is
confident enough to review anything complex, or even simple changes
(such as https://github.com/python/cpython/pull/160), because they
cannot predict the flow-on effects. As a result, I make many small
changes that would take a long time to get reviewed.


So as I said to Barry in the linked thread, part of the goal here is to
force the process between committers and non-committers to be as similar
as possible. That addresses a very real problem (one the GH migration
was meant to help with): when you have two processes, a streamlined one
for those in power and a bulky, slow one for those who aren’t, it is
hard to get the bulky, slow one fixed because nobody in power ever
experiences it. By aligning the two processes as much as possible, we
identify pain points and are forced to either solve them or deal with
them.

I have absolutely no problem with requiring core devs to send PRs - I am 100% against *anyone* directly pushing to the main repo - it's just the PR approval process that is the problem.

This sounds like one such problem. If you’re the only person capable
(whether by skill or confidence) of reviewing these changes, what
happens if you’re unavailable for some reason (burnout, illness,
whatever)? Is there any effort to mentor or attract additional
contributors who are able to review these changes? I’m guessing there is
not, and the risk of allowing the requirement to be bypassed is that
there never *is* a plan to fix it.

I'd love to attract additional contributors for this. Unfortunately, working on installers on Windows is even less popular than working on OpenSSL :)

On the other hand, much as I said to Barry, it’s also possible that
these problems are special enough that we can, or should, just relax the
restrictions so that forward movement can still happen.



Similarly, many of my changes are not touched by the automated builds.
Particularly anything to do with the installer is certainly not going
to be built on a Linux box, and most (probably all) Windows buildbots
don't build the installer right now. I've seen some systems where tags
can be applied to a PR to skip build validation - I would certainly
find that valuable.

This also sounds like a problem to me :) Obviously not the Linux part
(but then, burning some CPU on running the test suite on Linux isn’t
really that big of a deal, is it?), but the installer should probably
be getting tested in some way?

It's manually tested, and typically on the build machine where it will actually be built. Getting it into a build somewhere may be useful, but adding 5-10 minutes to *every* checkin for something that basically only I ever change seems like a waste.

And you never regret burning some CPU on tests until you have more tests running than available machines. Previously we would batch-test commits - I'm not sure whether that's still happening, but it suggests there's some value in not spending the full time on each individual commit.

But maybe there's an easy way to only trigger tests when certain files are modified? It may not make sense to set that up for each individual module, but it might be worthwhile to trigger certain extra builds when PCBuild/* or tools/msi/* are modified.
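
Just to sketch what I mean (purely illustrative - the COMMIT_RANGE variable and the exact paths below are placeholders for whatever the CI system actually exposes, not something I've verified), the check could be as simple as diffing the changed paths and only firing the expensive installer build when they match:

    # Illustrative sketch only; COMMIT_RANGE and the path list are assumptions,
    # not a real CI configuration.
    import os
    import subprocess

    # Directories whose changes should trigger the extra installer build.
    INSTALLER_PATHS = ("PCBuild/", "tools/msi/")

    def changed_files(commit_range):
        """Return the files touched in the given commit range."""
        out = subprocess.check_output(["git", "diff", "--name-only", commit_range])
        return out.decode().splitlines()

    def needs_installer_build(commit_range):
        return any(f.startswith(INSTALLER_PATHS) for f in changed_files(commit_range))

    if __name__ == "__main__":
        # e.g. "master...HEAD"; most CI systems expose something equivalent.
        commit_range = os.environ.get("COMMIT_RANGE", "master...HEAD")
        if needs_installer_build(commit_range):
            print("Installer paths modified; run the MSI build.")
        else:
            print("No installer changes; skip the MSI build.")

In practice the equivalent logic would more likely live in the CI configuration itself, but the point is just that the trigger is a path match rather than a per-module rule.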

But essentially, I believe it should be within the power of core
developers to bypass the checkin requirements (review + builds) and
force a merge. Currently this is not possible - could it be enabled?
Obviously it's placing trust in our core dev team, but that's the way
it has always been, and we don't lose any traceability from this
(unlike allowing people to delete branches from the repo, which can
permanently lose code).


I mentioned it above, but to my mind (and this is just a personal
opinion) it’s not really about *trust*. If we have two different
processes, we are going to make the one that we, the people in power,
use as streamlined as possible, while the other process for
non-committers will *likely* end up slowly bitrotting, not keeping up
with the new tooling, changes, and ideas that could make it better.
Forcing everyone onto the same process means that as we figure out and
continuously improve the process for ourselves, it also improves for
other people who do not have a commit bit.

And as I said above, I don't think this is a different process. It's an escape hatch for the single process that's only available to certain trusted people (i.e. those who we trust to say whether code is good enough to be committed). But I'm happy to wait out the month and see if it's an issue.

(I was about to write "we can all complain at the PyCon US language summit", which made me worry that if that's *all* we do about it, it may be useless. For those of us who will be there, should we organise a separate workflow summit? Preferably somewhere that serves strong drinks? ;) )

Cheers,
Steve
_______________________________________________
core-workflow mailing list
core-workflow@python.org
https://mail.python.org/mailman/listinfo/core-workflow
This list is governed by the PSF Code of Conduct: 
https://www.python.org/psf/codeofconduct
