On 16 November 2014 03:25, Sean Dague <s...@dague.net> wrote:
> Testtools 1.2.0 release apparently broke subunit.run discover --list,
> which breaks the way that tempest calls the test chain. No tempest runs
> have passed since its release.
>
> https://review.openstack.org/#/c/134705/ is a requirements pin, though I
> think because of grenade this is actually going to have to be laddered
> up from icehouse => juno => master.
>
> https://review.openstack.org/#/q/I2f8737b44c703c3094d6bbb6580993f86a571934,n,z
>
> It's probably half a day of babysitting to get all the pins in place and
> make the world work again. I'm offline from here on out for the weekend,
> but I'll put a +2/+A on all of these, so if someone wants to recheck them
> in the right order to land them, they can get things fixed.
>
> Also... let's try not to release libraries on Fridays before disappearing
> for the weekend... please. Pretty please.
So this has tweaked me - I'm going to rant a little here.

It wasn't Friday - it was Saturday, when I had some time to do personal stuff and instead chose to push forward on a long arc that has been blocking oslo.db changes for a couple of months. I released knowing I'd need to be around to follow up, and the very first thing I did when I got up Sunday morning, after dealing with nappies etc., was to check for fallout. The release was thoroughly tested upstream, and I explicitly tested it with OpenStack trees as well. I didn't disappear for the weekend, and I didn't even disappear straight to bed... and given how well you know me, you know that I rarely do disappear fully *anyway*. Finally, I'm reachable 24x7 if things really need that (but no one rang me, so clearly it's not panic-button time) - nor did anyone ask any of the other testing-cabal committers to take urgent action.

And - this is perhaps the most annoying aspect - no one in OpenStack tried to reproduce this upstream (which is a four-command test: make a virtualenv, pip install, cd to a tree, perform discovery). If anyone had, they would have seen that the issue *isn't* testtools; it's the use of system-site-packages permitting an old unittest2 (0.5.1 - current is 0.8.0) to cause the failure. And that's something that's entirely in the OpenStack space to fix - not upstream. So by going 'oh, it's a testtools problem', 8 or so hours when it could have been fixed have passed with no one looking at it.

I totally get the 'omg it's broken' reaction, and that that makes everyone unhappy when it happens. However, I don't like the feeling of being accused of irresponsibility when I was still around for some time - just not at 02:58 am when I was first pinged (< sdague> lifeless: https://review.openstack.org/#/c/134705/ should you actually be checking in this weekend). I think everyone is aware that releases have some risk, and doing them cavalierly would be bad.
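To make the stale-dependency diagnosis above concrete: the breakage came from an old unittest2 (0.5.1) leaking in via system-site-packages when testtools needed a much newer one (0.8.0 was current). A minimal sketch of the version comparison involved - the helper functions here are illustrative, not part of testtools or unittest2:

```python
def parse_version(version):
    """Turn a dotted version string like '0.5.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def is_stale(installed, minimum):
    """Return True if the installed version predates the required minimum."""
    return parse_version(installed) < parse_version(minimum)


# The versions from this thread: system-site-packages supplied unittest2
# 0.5.1, while 0.8.0 was current upstream.
print(is_stale("0.5.1", "0.8.0"))  # -> True: old enough to break discovery
print(is_stale("0.8.0", "0.8.0"))  # -> False: current release is fine
```

A check along these lines (against the actual installed distribution) is the sort of thing that sits on the OpenStack side of the fence, since only the deployment environment knows what system-site-packages is leaking in.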
But running with stale dependencies isn't going to be part of the test matrix upstream, since the entire project goal is to fold all its learning into upstream: as things move upstream, we have to depend on newer releases of those components (via things like unittest2, importlib2, traceback2, etc.).

-Rob

--
Robert Collins <rbtcoll...@hp.com>
Distinguished Technologist
HP Converged Cloud
_______________________________________________
OpenStack-dev mailing list
OpenStackemail@example.com
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev