On 21/09/16 11:41, Joshua Harlow wrote:
tl;dr this appears to have been around forever (at least since we
switched to using a pure-Python MySQL client) and is almost certainly
completely unrelated to any particular release of oslo.db.
Update: Mike was kind enough to run this one to ground and …
Sean Dague wrote:
> On 09/15/2016 09:20 AM, Roman Podoliaka wrote:
>> Sean,
>>
>> So currently we have a default timeout of 160s in Nova. And
>> specifically for migration tests we set a scaling factor of 2. Let's
>> maybe give 2.5 or 3 a try ( https://review.openstack.org/#/c/370805/ ) …
FWIW, there have been no new failures in Nova jobs since then.
I'm also confused as to why these tests would sporadically take so much
longer to execute. Perhaps we could install something like atop on our
nodes to answer that question.
On Wed, Sep 21, 2016 at 5:46 PM, Ihar Hrachyshka wrote: …
Mike Bayer wrote:
On 09/21/2016 11:41 AM, Joshua Harlow wrote:
I've seen something similar at https://review.openstack.org/#/c/316935/
Maybe it's time we asked again why we are still using eventlet and
whether we still need to. What functionality of it are people actually
taking advantage of?
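For anyone who has not looked at it in a while, the pattern in question is
roughly the following; a minimal sketch, not any particular project's
service code:

    # Minimal sketch of the usual eventlet pattern: monkey-patch the stdlib
    # at startup so that pure-Python blocking I/O (sockets, time.sleep and,
    # by extension, PyMySQL) cooperatively yields to other green threads
    # instead of blocking the whole process.
    import eventlet
    eventlet.monkey_patch()

    import time


    def worker(name):
        time.sleep(1)   # now eventlet.sleep under the hood, so workers overlap
        return name


    pool = eventlet.GreenPool(size=10)
    for result in pool.imap(worker, ['a', 'b', 'c']):
        print(result)

Part of why this matters here: monkey-patching only affects pure-Python
code, which is why the switch to a pure-Python MySQL client (mentioned in
the tl;dr above) comes up at all.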
Zane Bitter wrote:
On 14/09/16 11:44, Mike Bayer wrote:
On 09/14/2016 11:08 AM, Mike Bayer wrote:
On 09/14/2016 09:15 AM, Sean Dague wrote:
I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
I just hit that TimeoutException error in neutron functional tests:
http://logs.openstack.org/68/373868/4/check/gate-neutron-dsvm-functional-ubuntu-trusty/4de275e/testr_results.html.gz
It’s a bit weird that we hit that 180 sec timeout, because in good runs
the test takes ~5 secs. Do we have …
On 09/14/2016 11:57 PM, Mike Bayer wrote:
>
>
> On 09/14/2016 11:05 PM, Mike Bayer wrote:
>>
>> Are *these* errors also new as of version 4.13.3 of oslo.db ? Because
>> here I have more suspicion of one particular oslo.db change here.
>
> The version in question that has the changes to provisioning and
> anything really to do with this area is …
On 09/15/2016 09:20 AM, Roman Podoliaka wrote:
> Sean,
>
> So currently we have a default timeout of 160s in Nova. And
> specifically for migration tests we set a scaling factor of 2. Let's
> maybe give 2.5 or 3 a try ( https://review.openstack.org/#/c/370805/ )
> and make a couple of "rechecks" to see if it helps or not.
Sean,
So currently we have a default timeout of 160s in Nova. And
specifically for migration tests we set a scaling factor of 2. Let's
maybe give 2.5 or 3 a try ( https://review.openstack.org/#/c/370805/ )
and make a couple of "rechecks" to see if it helps or not.
In Ocata we could revisit the …
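To make the numbers above concrete, this is roughly how a per-class scaling
factor gets layered on top of OS_TEST_TIMEOUT; the attribute name and the
fixtures.Timeout wiring mirror what the Nova/oslotest base classes do, but
the sketch is illustrative rather than the actual code:

    # Sketch: a 160s default budget, doubled for migration tests, enforced
    # via SIGALRM by fixtures.Timeout (gentle=True raises TimeoutException
    # rather than killing the worker outright).
    import os

    import fixtures
    import testtools


    class MigrationTestCase(testtools.TestCase):

        TIMEOUT_SCALING_FACTOR = 2   # the factor being discussed above

        def setUp(self):
            super(MigrationTestCase, self).setUp()
            try:
                timeout = int(os.environ.get('OS_TEST_TIMEOUT', 160))
            except ValueError:
                timeout = 160
            timeout *= self.TIMEOUT_SCALING_FACTOR
            if timeout > 0:
                self.useFixture(fixtures.Timeout(timeout, gentle=True))

        def test_walk_versions(self):
            pass   # the actual migration walk would go here

Bumping the factor to 2.5 or 3, as proposed, only changes the multiplier;
the open question in the thread is why the tests sometimes need that much
time at all.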
Mike,
I think the exact error (InterfaceError vs TimeoutException) varies
depending on what code is being executed at the very moment when a
process receives SIGALRM.
I tried to run the tests against PostgreSQL passing very small timeout
values (OS_TEST_TIMEOUT=5 python -m testtools.run …
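A stripped-down illustration (plain Python, not OpenStack code) of the
point about SIGALRM: fixtures.Timeout arms an alarm, and whatever happens
to be executing when it fires is what gets interrupted, so the visible
error depends on timing:

    # If the alarm fires while we are in ordinary Python code we see the
    # timeout exception itself; if it fires while a DB driver is in the
    # middle of reading a response, the aborted query can surface as a
    # driver-level error such as InterfaceError / "resource closed" instead.
    import signal
    import time


    class TimeoutException(Exception):
        pass


    def _on_alarm(signum, frame):
        raise TimeoutException('test timed out')


    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(1)            # think OS_TEST_TIMEOUT=1

    try:
        time.sleep(2)          # stand-in for whatever the test is doing
    except TimeoutException as exc:
        print('interrupted: %s' % exc)
    finally:
        signal.alarm(0)        # always disarm the alarm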
On 09/15/2016 05:52 AM, Roman Podoliaka wrote:
> Mike,
>
> On Thu, Sep 15, 2016 at 5:48 AM, Mike Bayer wrote:
>
>> * Prior to oslo.db 4.13.3, did we ever see this "timeout" condition occur?
>> If so, was it also accompanied by the same "resource closed" condition or
>> did this second part of the condition only appear at 4.13.3?
Mike,
On Thu, Sep 15, 2016 at 5:48 AM, Mike Bayer wrote:
> * Prior to oslo.db 4.13.3, did we ever see this "timeout" condition occur?
> If so, was it also accompanied by the same "resource closed" condition or
> did this second part of the condition only appear at 4.13.3?
> *
On 09/14/2016 11:05 PM, Mike Bayer wrote:
Are *these* errors also new as of version 4.13.3 of oslo.db ? Because
here I have more suspicion of one particular oslo.db change here.
The version in question that has the changes to provisioning and
anything really to do with this area is …
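For readers who have not been near this code: "opportunistic" provisioning
means a test creates a throwaway, randomly named database against a real
MySQL/PostgreSQL server if one is reachable, and skips itself otherwise.
A rough sketch of the idea in plain SQLAlchemy; this is not the oslo.db
implementation, and the openstack_citest credentials are just the
conventional CI defaults:

    # Create (and later drop) a scratch database per test/worker so that
    # parallel workers never share schema or data.
    import uuid

    import sqlalchemy
    from sqlalchemy import exc

    ROOT_URL = 'mysql+pymysql://openstack_citest:openstack_citest@localhost/'


    def provision_database():
        """Return (engine, name) for a freshly created scratch database."""
        name = 'test_' + uuid.uuid4().hex[:12]
        root = sqlalchemy.create_engine(ROOT_URL)
        try:
            with root.connect() as conn:
                conn.execute(sqlalchemy.text('CREATE DATABASE %s' % name))
        except exc.OperationalError:
            return None, None   # no server reachable: the test should skip
        return sqlalchemy.create_engine(ROOT_URL + name), name


    def drop_database(name):
        root = sqlalchemy.create_engine(ROOT_URL)
        with root.connect() as conn:
            conn.execute(sqlalchemy.text('DROP DATABASE %s' % name))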
There's a different set of logs attached to the launchpad issue; that's
not what I was looking at before.
These logs are at
http://logs.openstack.org/90/369490/1/check/gate-nova-tox-db-functional-ubuntu-xenial/085ac3e/console.html#_2016-09-13_14_54_18_098031
. In these logs, I see …
On 09/14/2016 07:04 PM, Alan Pevec wrote:
Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post-Newton.
So that means reverting all stable/newton changes; previous 4.13.x
releases have already been blocked https://review.openstack.org/365565 …
On Wed, Sep 14, 2016, at 10:01 AM, Sean Dague wrote:
> On 09/14/2016 12:06 PM, Roman Podoliaka wrote:
> > Hmm, looks like we now run more testr workers in parallel (8 instead of 4):
> >
> > http://logs.openstack.org/76/335676/7/check/gate-nova-python34-db/6841fce/console.html.gz
> >
> Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post-Newton.
So that means reverting all stable/newton changes; previous 4.13.x
releases have already been blocked https://review.openstack.org/365565
How …
On 09/14/2016 11:23 AM, Thomas Goirand wrote:
> On 09/14/2016 03:15 PM, Sean Dague wrote:
>> I noticed the following issues happening quite often now in the
>> opportunistic db tests for nova -
>>
Hmm, looks like we now run more testr workers in parallel (8 instead of 4):
http://logs.openstack.org/76/335676/7/check/gate-nova-python34-db/6841fce/console.html.gz
http://logs.openstack.org/62/369862/3/check/gate-nova-python27-db-ubuntu-xenial/2784de9/console.html
On my machine running Nova …
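On the worker count: testr starts one test worker per CPU by default, so
the jump from 4 to 8 may simply reflect running on nodes with more vCPUs,
and each extra worker also means one more opportunistically provisioned
database running concurrently. A quick way to see what a given node will
do (concurrency can also be pinned explicitly, e.g. "testr run
--concurrency 4"):

    # The default parallelism is simply the CPU count of the machine.
    import multiprocessing

    print('default test workers: %d' % multiprocessing.cpu_count())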
On 09/14/2016 03:15 PM, Sean Dague wrote:
> I noticed the following issues happening quite often now in the
> opportunistic db tests for nova -
> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>
>
> It looks like some race has …
On 09/14/2016 09:15 AM, Sean Dague wrote:
I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
It looks like some race has …
Sean,
I'll take a closer look, but test execution times and errors look suspicious:
ironic.tests.unit.db.sqlalchemy.test_migrations.TestMigrationsPostgreSQL.test_walk_versions  60.002s
2016-09-14 14:21:38.756421 | File …