Might be nice to figure out why the code to clean unused locks on worker
startup is not working. It should detect orphaned locks and delete them, but it
sounds like it isn't. Maybe because the pid has been reused by another
process? Maybe we can improve it to make sure the process repres
Ah, good to know. Thanks Yun!
-jay
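[Editor's note: the pid-reuse check discussed above could look roughly like the sketch below. This is illustrative only, not nova's actual cleanup code; the function name and the caveat about recycled pids are assumptions.]

```python
import errno
import os

def pid_is_alive(pid):
    """Return True if a process with this pid currently exists.

    Caveat: this cannot tell whether the pid was recycled by an
    unrelated process; comparing the process start time (e.g. from
    /proc/<pid>/stat on Linux) would be needed to rule that out.
    """
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
    except OSError as e:
        if e.errno == errno.ESRCH:   # no such process
            return False
        if e.errno == errno.EPERM:   # process exists but belongs to another user
            return True
        raise
    return True
```

A startup cleaner that skips locks whose recorded pid passes this check is exactly where pid reuse produces false "still alive" answers, which may be why orphaned locks survive.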
On 06/20/2012 03:32 PM, Yun Mao wrote:
Jay,
there is a tools/clean_file_locks.py that you might be able to take
advantage of.
Yun
On Wed, Jun 20, 2012 at 3:23 PM, Jay Pipes wrote:
Turns out my issue was a borked run of Tempest that left a
nova-ensure_bridge.lock file around. After manually destroying this lock
file, Tempest is running cleanly again.
I'll look into adding a forcible removal of this lockfile to the
unstack.sh script (which I personally use to reset my Dev
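[Editor's note: the forcible lock removal mentioned above could be a small shell helper like this. The default lock directory is an assumption; nova's lock_path setting controls the real location.]

```shell
# clean_nova_locks: forcibly remove stale nova lock files from a lock
# directory, e.g. at the end of unstack.sh. The default path below is
# an assumption; check the lock_path option in your nova.conf.
clean_nova_locks() {
    lock_dir=${1:-/var/lock/nova}
    [ -d "$lock_dir" ] && rm -f "$lock_dir"/nova-*.lock
    return 0
}
```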
On 06/19/2012 03:13 PM, Vishvananda Ishaya wrote:
Sorry, paste fail on the last message.
This seems like a likely culprit:
https://review.openstack.org/#/c/8339/
I'm guessing it only happens on concurrent builds? We probably need a
synchronized somewhere.
I notice the RPC calls to the ne
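[Editor's note: for background, the kind of external lock involved here can be sketched as a decorator that holds an fcntl lock on a `nova-<name>.lock` file. This is an illustration of the mechanism only, not nova's actual `utils.synchronized` implementation; the lock directory is an assumption.]

```python
import fcntl
import functools
import os

def synchronized(name, lock_dir='/tmp'):
    """Serialize calls across processes via a nova-<name>.lock file.

    Illustrative sketch: nova's real decorator lives in nova.utils and
    has more options (e.g. in-process vs. external file locks).
    """
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            path = os.path.join(lock_dir, 'nova-%s.lock' % name)
            with open(path, 'w') as lock_file:
                fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until held
                try:
                    return func(*args, **kwargs)
                finally:
                    fcntl.flock(lock_file, fcntl.LOCK_UN)
        return inner
    return wrap

@synchronized('ensure_bridge')
def ensure_bridge():
    # without the lock, concurrent builds would race on bridge setup
    return 'bridge ok'
```

Note that the lock file itself persists after the run, which is how an aborted run can leave a `nova-ensure_bridge.lock` behind.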
Vish, Jay,
OK, this looks promising. A couple of questions…
I'm seeing this RPC timeout on the Essex 2012.1 packages released with Ubuntu
12.04. I'm assuming these packages are affected by this bug?
Why would something this fundamental not show up during Essex RC.X testing?
How best to 'fix'
This seems like a likely culprit.
Vish
On Jun 19, 2012, at 12:03 PM, Jay Pipes wrote:
cc'ing Vish on this, as this is now occurring on every single devstack +
Tempest run, for multiple servers.
Vish, I am seeing the exact same issue as shown below. Instances end up
in ERROR state and looking into the nova-network log, I find *no* errors
at all, and yet looking at the nova-compu
Hi Ross,
I'm in the process of diagnosing this; I'm seeing it sporadically when
running Tempest against a devstack install. I'll try to pinpoint the
issue later today and post back my findings.
Best,
-jay
p.s. Sorry for top-posting.
On 06/18/2012 06:03 PM, Lillie Ross-CDSR11 wrote:
I'm r