On Wed, Oct 31, 2012 at 10:54 PM, Vishvananda Ishaya wrote:

> My patch here seems to fix the issue in the one scheduler case:
> https://github.com/vishvananda/nova/commit/2eaf796e60bd35319fe6add6dd04359546a21682
> If you could give that a try on your scheduler node and see if it fixes it,
> that would be awesome. Also, it would be very helpful if you could report a
> bug for me to reference in my merge proposal. I will see what I can do to
> write a few tests and have a potential fix for multiple schedulers.

The bug is here, since you've reproduced it:
https://bugs.launchpad.net/nova/+bug/1073956

If "give it a try" means dropping that host_manager.py in place of my
(Folsom) file and restarting the scheduler, I'm still getting the same
results when using a 100-iteration for loop around
nova boot --availability-zone <az:host>: all 100 end up on nova-1. I'm
suspicious this may avoid the scheduler entirely; I'm not sure how that
availability-zone trick for specifying a target host is implemented. The
case I'm trying to make work for my user uses 'euca-run-instances -n 500'.
Running that with a value of 200 (and your host_manager.py), the scheduler
immediately puts them all in the error state and says nothing more about
it; previously it was scheduling them poorly. It's quite possible I've
knocked something loose while banging around (it's also possible I need to
pull your whole branch; I didn't look as closely as I should have at what
it was based on), so I'm going to recheck my services and do some more
tests, but that's what I see at first.
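
For reference, the 100-iteration loop I'm using looks roughly like this
(the image, flavor, and the az1:nova-1 zone/host pair are placeholders for
my local values; NOVA defaults to a dry-run echo here so the loop can be
exercised without a live cloud):

```shell
# Reproduction sketch: boot N instances pinned to one host via the
# availability-zone zone:host trick. Placeholders: az1:nova-1, cirros,
# m1.tiny. Set NOVA=nova to run it for real instead of echoing.
NOVA="${NOVA:-echo nova}"
for i in $(seq 1 100); do
    $NOVA boot --image cirros --flavor m1.tiny \
        --availability-zone az1:nova-1 "sched-test-$i"
done
```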

Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net