On 11/19/2013 12:35 PM, Clint Byrum wrote:

> Each scheduler process can own a different set of resources. If they
> each grab instance requests in a round-robin fashion, then they will
> fill their resources up in a relatively well balanced way until one
> scheduler's resources are exhausted. At that time it should bow out of
> taking new instances. If it can't fit a request in, it should kick the
> request out for retry on another scheduler.
>
> In this way, they only need to be in sync in that they need a way to
> agree on who owns which resources. A distributed hash table that gets
> refreshed whenever schedulers come and go would be fine for that.
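
For concreteness, here's a rough sketch of the kind of ownership scheme
I take you to be describing: plain consistent hashing of compute hosts
onto the live set of schedulers, rebuilt whenever membership changes.
All the names and numbers are made up for illustration:

import bisect
import hashlib

def _hash(key):
    # Stable hash so every scheduler computes the same ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class SchedulerRing(object):
    """Consistent-hash ring mapping compute hosts to schedulers.

    Rebuilt whenever the membership list changes (e.g. from a
    heartbeat table), so schedulers only have to agree on who is
    alive, not on any per-request state.
    """
    def __init__(self, schedulers, replicas=100):
        self._ring = sorted(
            (_hash('%s:%d' % (s, i)), s)
            for s in schedulers
            for i in range(replicas))
        self._keys = [k for k, _ in self._ring]

    def owner(self, host):
        """Return the scheduler that owns the given compute host."""
        idx = bisect.bisect(self._keys, _hash(host)) % len(self._ring)
        return self._ring[idx][1]

ring = SchedulerRing(['sched-1', 'sched-2', 'sched-3'])
print(ring.owner('compute-042'))  # deterministic owner for this host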

That has some potential, but at high occupancy you could end up refusing to schedule a request because no single scheduler's partition has sufficient free resources, even though the cluster as a whole does.
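
To make the fragmentation problem concrete, a toy example (numbers
invented): three schedulers each own partitions with 3 GB of RAM free,
and a 4 GB instance request arrives. Every scheduler rejects it, and
retrying on another scheduler can't help, even though the cluster has
9 GB free:

# Free RAM (GB) in each scheduler's partition -- invented numbers.
partitions = {'sched-1': 3, 'sched-2': 3, 'sched-3': 3}
request = 4  # GB

fits_somewhere = any(free >= request for free in partitions.values())
total_free = sum(partitions.values())

print(fits_somewhere)          # False: no single partition can take it
print(total_free >= request)   # True: the cluster as a whole could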

This gets worse once you start factoring in things like Heat and instance groups, which will want to schedule whole sets of resources (instances, IP addresses, network links, Cinder volumes, etc.) at once, with constraints on where they can be placed relative to each other.
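
For example (again, invented numbers), an anti-affinity group of three
instances needs three distinct hosts. A scheduler whose partition only
has two hosts with room has to reject the whole group, regardless of
how much aggregate capacity it holds, because the set must be placed
atomically:

# Hosts with free capacity in one scheduler's partition -- invented.
free_hosts = ['compute-01', 'compute-02']
group_size = 3  # anti-affinity: each instance on a different host

# The whole group has to be placed atomically, so the check is on the
# group, not on the individual instances.
print(len(free_hosts) >= group_size)  # False: reject the whole group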

Chris

