> The reality is you're just going to have to triage this and be a *lot*
> more specific with issues.  I find opening an etherpad and going
> through the failures one-by-one helpful (e.g. I keep [2] for centos
> jobs I'm interested in).
> Looking at the top of the console.html log you'll have the host and
> provider/region stamped in there.  If it's timeouts or network
> issues, reporting the time, provider and region of the failing jobs
> to infra will help.  Finding patterns is the first step to
> understanding what needs fixing.
I have collected some gate failure records in [1]. As you can see,
environment setup becomes very slow on most of the failing nodes and
eventually fails with a timeout error. [1] also records information
about each failed node; hopefully you can find some clues in it.

[1] https://etherpad.openstack.org/p/heat-gate-fail-2017-08
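
The triage workflow described above (pull the provider/region stamp
from the top of each console.html and look for patterns in the
timeouts) can be sketched roughly as below. The log-line formats here
are assumptions, not what Zuul actually emits, so adjust the regexes
to whatever your console.html really prints:

```python
import re
from collections import Counter

# Assumed format of the stamp at the top of console.html -- tweak to match
# the real log lines on your jobs.
PROVIDER_RE = re.compile(r"Provider:\s*(\S+)")
# Matches "timeout", "timed out", "time out", case-insensitively.
TIMEOUT_RE = re.compile(r"timed? ?out", re.IGNORECASE)

def triage(logs):
    """Tally timeout failures per provider/region across saved log texts."""
    failures = Counter()
    for text in logs:
        provider = PROVIDER_RE.search(text)
        if provider and TIMEOUT_RE.search(text):
            failures[provider.group(1)] += 1
    return failures
```

Running something like this over the logs collected in the etherpad
should quickly show whether one provider or region dominates the
timeouts, which is exactly the kind of pattern infra asks for.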

> If it's due to issues with remote transfers, we can look at either
> adding specific things to mirrors (containers, images, packages are
> all things we've added recently) or adding a caching reverse-proxy for
> them ([3],[4] some examples).
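
For illustration only (the linked reviews show infra's actual
approach), a caching reverse proxy for remote artifacts can be as
small as the nginx sketch below; the upstream host, port and cache
paths are placeholders, not infra's real configuration:

```nginx
# Goes in the http {} context of nginx.conf.  Everything here is a
# hypothetical example, not the gate's real proxy setup.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=artifacts:10m
                 max_size=10g inactive=24h use_temp_path=off;

server {
    listen 8080;

    location / {
        proxy_pass https://tarballs.openstack.org;
        proxy_cache artifacts;
        proxy_cache_valid 200 24h;            # cache successful fetches for a day
        proxy_cache_use_stale error timeout;  # serve stale copies if upstream is down
    }
}
```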
> Questions in #openstack-infra will usually get a helpful response too
> Good luck :)
> -i
> [1] https://bugs.launchpad.net/openstack-gate/+bug/1708707/
> [2] https://etherpad.openstack.org/p/centos7-dsvm-triage
> [3] https://review.openstack.org/491800
> [4] https://review.openstack.org/491466
OpenStack Development Mailing List (not for usage questions)