On Sat, Nov 11, 2017 at 10:47 PM, Alex Schultz <aschu...@redhat.com> wrote:
> Ok, so here's the current status of things. I've gone through some of the pending patches and sent them to the gate over the weekend since the gate was empty (yay!). We've managed to land a bunch of patches. That being said, for any patch to master with scenario jobs, please do not recheck/approve. Currently the non-containerized scenario001/004 jobs are broken due to Bug 1731688[0] (these run on tripleo-quickstart-extras/tripleo-ci). There is a patch[1] out to revert the breaking change. The scenario001-container job is super flaky due to Bug 1731063[2] and we could use some help figuring out what's going on. We're also seeing some issues around heat interactions[3][4], but those seem to be less of a problem than the previously mentioned bugs.
>
> So at the moment any changes that don't have scenario jobs associated with them may be approved/rechecked freely. We can discuss on Monday what to do about the scenario jobs if we're still running into issues without a solution in sight. Also, please keep an eye on the gate queue[5] and don't approve things if it starts getting excessively long.
>
> Thanks,
> -Alex
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1731688
> [1] https://review.openstack.org/#/c/519041/
> [2] https://bugs.launchpad.net/tripleo/+bug/1731063
> [3] https://bugs.launchpad.net/tripleo/+bug/1731032
> [4] https://bugs.launchpad.net/tripleo/+bug/1731540
> [5] http://zuulv3.openstack.org/
>
> On Wed, Nov 8, 2017 at 3:39 PM, Alex Schultz <aschu...@redhat.com> wrote:
> > So we have some good news and some bad news. The good news is that we've managed to get the gate queue[0] under control since we've held off on pushing new things to the gate. The bad news is that we've still got some random failures occurring during the deployment of master. Since we're not seeing infra-related issues, we should be OK to merge things to stable/* branches. Unfortunately, until we resolve the issues in master[1], we could potentially back up the queue. Please do not merge things that are not critical bug fixes. I would ask that folks please take a look at the open bugs and help figure out what is going wrong. I've created two issues today that I've seen in the gate and that we don't appear to have open patches for. One appears to be an issue in the heat deployment process[3] and the other is related to the tempest verification of being able to launch a VM & ssh to it[4].
> >
> > Thanks,
> > -Alex
> >
> > [3] https://bugs.launchpad.net/tripleo/+bug/1731032
> > [4] https://bugs.launchpad.net/tripleo/+bug/1731063
> >
> > On Tue, Nov 7, 2017 at 8:33 AM, Alex Schultz <aschu...@redhat.com> wrote:
> >> Hey Folks,
> >>
> >> So we're at 24+ hours again in the gate[0] and the queue only continues to grow. We currently have 6 ci/alert bugs[1]. Please do not approve or recheck anything that isn't related to these bugs. I will most likely need to go through the queue and abandon everything to clear it up, as we are consistently hitting timeouts on various jobs, which is preventing anything from merging.
> >>
> >> Thanks,
> >> -Alex
> >>
> > [0] http://zuulv3.openstack.org/
> > [1] https://bugs.launchpad.net/tripleo/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.importance%3Alist=CRITICAL&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=ci+alert&field.tags_combinator=ALL&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search

Thanks for continuing to push on this, Alex!
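
For anyone who wants to keep an eye on the queue length[5] without staring at the status page, a small script against the status JSON works too. This is just a minimal sketch: it assumes the page serves its data at /status.json in the usual Zuul v3 layout (pipelines -> change_queues -> heads) and that the TripleO shared queue has "tripleo" in its name, so adjust as needed.

    #!/usr/bin/env python3
    # Quick-and-dirty gate queue check. Assumes the Zuul status page
    # exposes its data at /status.json with the usual
    # pipelines -> change_queues -> heads layout.
    import json
    from urllib.request import urlopen

    STATUS_URL = 'http://zuulv3.openstack.org/status.json'

    status = json.load(urlopen(STATUS_URL))
    for pipeline in status.get('pipelines', []):
        if pipeline.get('name') != 'gate':
            continue
        for queue in pipeline.get('change_queues', []):
            if 'tripleo' not in queue.get('name', ''):
                continue
            # Each head is an ordered list of queued items.
            changes = [item for head in queue.get('heads', [])
                       for item in head]
            print('%s: %d changes queued' % (queue['name'], len(changes)))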
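
And since the ci/alert bug query URL above is unwieldy, the same search can be scripted with launchpadlib. Again only a sketch: the searchTasks arguments below mirror the URL's query parameters (critical, open, tagged both "ci" and "alert"), but double-check them against the launchpadlib docs before relying on it.

    #!/usr/bin/env python3
    # Sketch of the ci/alert query above: open critical bugs on tripleo
    # tagged both "ci" and "alert". Assumes launchpadlib is installed
    # (pip install launchpadlib).
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('tripleo-ci-check', 'production')
    tripleo = lp.projects['tripleo']
    tasks = tripleo.searchTasks(
        status=['New', 'Confirmed', 'Triaged', 'In Progress'],
        importance=['Critical'],
        tags=['ci', 'alert'],
        tags_combinator='All')
    for task in tasks:
        print('%s  %s' % (task.web_link, task.bug.title))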