On Wed, Oct 31, 2012 at 10:54 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
My patch here seems to fix the issue in the one scheduler case:
https://github.com/vishvananda/nova/commit/2eaf796e60bd35319fe6add6dd04359546a21682
If you could give that a try on your scheduler node and see
On Wed, Oct 31, 2012 at 10:40:57AM +0800, Huang Zhiteng wrote:
On Wed, Oct 31, 2012 at 10:07 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
On Oct 30, 2012, at 7:01 PM, Huang Zhiteng winsto...@gmail.com wrote:
I'd suggest the same ratio too. But besides memory overcommitment, I
Hi All
While the RetryScheduler may not have been designed specifically to
fix this issue, https://bugs.launchpad.net/nova/+bug/1011852 suggests
that it is meant to fix it, if indeed it is a scheduler race condition,
which is my suspicion.
This is my current scheduler config which gives the failure
Hi Jonathan,
If I understand correctly, that bug is about multiple scheduler
instances (processes) doing scheduling at the same time. When a compute
node finds itself unable to fulfil a create_instance request, it
resends the request back to the scheduler (max_retry is there to avoid
endless retries). From
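For anyone following along, this is a hedged sketch of what I believe the relevant retry settings look like in a Folsom-era nova.conf (option names from memory, please verify against your release; the RetryFilter skips hosts that already failed the current request):

```ini
[DEFAULT]
# Filter scheduler is the Folsom default driver
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# RetryFilter first, so hosts that already failed this request are skipped
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
# Give up after this many scheduling attempts instead of retrying forever
scheduler_max_attempts=3
```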
On Wed, Oct 31, 2012 at 1:47 PM, Huang Zhiteng winsto...@gmail.com wrote:
Hi Jonathan,
If I understand correctly, that bug is about multiple scheduler
There is only a single process. I was reading it as also relating to
threads within a single process, but they should clearly be
able to
On Oct 31, 2012, at 1:44 PM, Jonathan Proulx j...@jonproulx.com wrote:
I'd only been pushing these options to the host the scheduler runs on; is it
that simple? I'd be delighted if I'm an idiot and just need a few lines in a
config file, but puzzled why this was (seemingly at least) working
Hi All,
I'm having what I consider serious issues with the scheduler in
Folsom. It seems to relate to the introduction of threading in the
scheduler.
For a number of local reasons we prefer to have instances start on the
compute node with the least amount of free RAM that is still enough to
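For anyone wanting the same fill-first placement, this is roughly the Folsom-era least-cost configuration I understand achieves it (option names hedged from memory; as I recall the default negative weight spreads instances across hosts, while a positive weight packs them onto the fullest host that still fits):

```ini
[DEFAULT]
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
# Negative weight spreads instances across hosts (the default behaviour);
# a positive weight prefers the host with the least free RAM that still fits
compute_fill_first_cost_fn_weight=1.0
```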
The retry scheduler is NOT meant to be a workaround for this. It sounds like
the ram filter is not working properly somehow. Have you changed the setting
for ram_allocation_ratio? It defaults to 1.5 allowing overallocation, but in
your case you may want 1.0.
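For reference, a minimal sketch of the settings under discussion, assuming the default filter scheduler. The key points are that RamFilter must actually be in the active filter list and that ram_allocation_ratio needs to reach the node(s) running the scheduler, not only the compute hosts:

```ini
[DEFAULT]
# RamFilter enforces the RAM overcommit limit during host selection
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
# Default is 1.5 (allows 50% RAM overallocation); 1.0 disables overcommit
ram_allocation_ratio=1.0
```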
I would be using the following two
On Wed, Oct 31, 2012 at 6:55 AM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
On Oct 30, 2012, at 7:01 PM, Huang Zhiteng winsto...@gmail.com wrote:
I'd suggest the same ratio too. But besides memory overcommitment, I
suspect this issue is also related to how KVM does memory allocation (it
doesn't actually allocate the entire guest memory when
booting). I've