On 10/07/2015 07:23 PM, Ian Wells wrote:
On 7 October 2015 at 16:00, Chris Friesen <chris.frie...@windriver.com
<mailto:chris.frie...@windriver.com>> wrote:

    1) Some resources (RAM) only require tracking amounts.  Other resources
    (CPUs, PCI devices) require tracking allocation of specific individual host
    resources (for CPU pinning, PCI device allocation, etc.).  Presumably for
    the latter we would have to actually do the allocation of resources at the
    time of the scheduling operation in order to update the database with the
    claimed resources in a race-free way.


The whole process is inherently racy (and this is inevitable, and correct),
which is why the scheduler works the way it does:

- scheduler guesses at a host based on (guaranteed - hello distributed systems!)
outdated information
- VM is scheduled to a host that looks like it might work, and host attempts to
run it
- VM run may fail (because the information was outdated or has become outdated),
in which case we retry the schedule
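That guess-attempt-retry loop might be sketched like this; `schedule_with_retries`, `fits`, and `claim` are illustrative names, with `claim` standing in for the host-side run attempt that can fail because the scheduler's snapshot was stale:

```python
import random

class NoValidHost(Exception):
    """Raised when every scheduling attempt has failed."""

def schedule_with_retries(hosts, fits, claim, max_retries=3):
    """Pick a host from a (possibly stale) view of `hosts`, then let
    the host itself accept or reject the claim; retry on rejection.

    `fits` filters the stale snapshot; `claim` models the host-side
    attempt that may fail because the information was outdated.
    """
    for _ in range(max_retries):
        candidates = [h for h in hosts if fits(h)]
        if not candidates:
            break
        host = random.choice(candidates)  # scheduler's best guess
        if claim(host):                   # host-side check, may race
            return host
        # claim failed: our information was outdated; try again
    raise NoValidHost()
```

Note the racy part is accepted up front: the snapshot is allowed to be wrong, and correctness comes from the host rejecting claims it can't actually satisfy.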

Why is it inevitable?

Theoretically, if the DB knew what resources were originally available and what resources have been consumed, then it should be able to allocate resources race-free. There might be some retries involved when racing against other schedulers updating the DB, but those retries would be internal to the scheduler itself.
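The DB-side race-free claim described above could be done with an optimistic compare-and-swap; here is a minimal sketch where sqlite stands in for the real database, and the `hosts` table, `generation` column, and `claim_resources` name are all assumptions for illustration, not Nova's actual schema:

```python
import sqlite3

def claim_resources(conn, host, amount, max_retries=5):
    """Record a RAM claim against a host row using an optimistic
    compare-and-swap on a generation counter.

    The UPDATE only succeeds if nobody else changed the row since
    we read it; losing that race just means we re-read and retry,
    internally to the scheduler, exactly the "some retries" case.
    """
    for _ in range(max_retries):
        row = conn.execute(
            "SELECT free_ram, generation FROM hosts WHERE name = ?",
            (host,)).fetchone()
        if row is None or row[0] < amount:
            return False          # genuinely out of capacity
        free, gen = row
        cur = conn.execute(
            "UPDATE hosts SET free_ram = ?, generation = ? "
            "WHERE name = ? AND generation = ?",
            (free - amount, gen + 1, host, gen))
        conn.commit()
        if cur.rowcount == 1:
            return True           # claim landed race-free
        # another scheduler updated the row first; loop and retry
    return False
```

The retry loop only spins when two schedulers touch the same row at the same moment, so whether this scales comes down to how often claims actually contend.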

Or does that just not scale enough and we need to use inherently racy models?

Chris


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev