Excerpts from Chris Friesen's message of 2013-11-19 11:37:02 -0800:
> On 11/19/2013 12:35 PM, Clint Byrum wrote:
> > Each scheduler process can own a different set of resources. If they
> > each grab instance requests in a round-robin fashion, then they will
> > fill their resources up in a relatively well balanced way until one
> > scheduler's resources are exhausted. At that time it should bow out of
> > taking new instances. If it can't fit a request in, it should kick the
> > request out for retry on another scheduler.
> >
> > In this way, they only need to be in sync in that they need a way to
> > agree on who owns which resources. A distributed hash table that gets
> > refreshed whenever schedulers come and go would be fine for that.
>
> That has some potential, but at high occupancy you could end up refusing
> to schedule something because no one scheduler has sufficient resources
> even if the cluster as a whole does.
>
I'm not sure what you mean here. What resource spans multiple compute hosts?

> This gets worse once you start factoring in things like heat and
> instance groups that will want to schedule whole sets of resources
> (instances, IP addresses, network links, cinder volumes, etc.) at once
> with constraints on where they can be placed relative to each other.
>
Actually that is rather simple. Such requests have to be serialized into a
work-flow. So if you say "give me 2 instances in 2 different locations" then
you allocate 1 instance, and then another one with 'not_in_location(1)' as a
condition.
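
Here's a rough Python sketch of that serialized work-flow, just to make the
idea concrete. All of the names (Scheduler, allocate, not_in_location) are
made up for illustration; this is not Nova code.

    # Sketch only: made-up names, not Nova APIs.

    class NoValidHost(Exception):
        """No host satisfies the request's conditions."""


    def not_in_location(location):
        """Condition that excludes hosts in an already-used location."""
        return lambda host: host['location'] != location


    class Scheduler(object):
        def __init__(self, hosts):
            # hosts: name -> {'location': ..., 'free_ram': ...}
            self.hosts = hosts

        def allocate(self, ram, conditions=()):
            for name, host in sorted(self.hosts.items()):
                if host['free_ram'] >= ram and all(c(host) for c in conditions):
                    host['free_ram'] -= ram
                    return name, host['location']
            raise NoValidHost()


    # "Give me 2 instances in 2 different locations", serialized:
    sched = Scheduler({
        'node1': {'location': 'rack-a', 'free_ram': 4096},
        'node2': {'location': 'rack-a', 'free_ram': 4096},
        'node3': {'location': 'rack-b', 'free_ram': 4096},
    })
    first, first_loc = sched.allocate(2048)
    second, _ = sched.allocate(2048, conditions=[not_in_location(first_loc)])
    print(first, second)   # node1 node3

The second allocation only sees hosts outside the first instance's location,
which is all 'not_in_location(1)' has to mean.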
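
And to make the partitioning idea above a bit more concrete, an equally rough
sketch. Hashing host names over the scheduler list is just a stand-in for the
distributed hash table; PartitionedScheduler, owner and Retry are invented
names.

    # Sketch only: hash-based ownership stands in for a real DHT.

    import hashlib


    def owner(host_name, schedulers):
        digest = int(hashlib.md5(host_name.encode()).hexdigest(), 16)
        return schedulers[digest % len(schedulers)]


    class Retry(Exception):
        """Ask another scheduler to take this request."""


    class PartitionedScheduler(object):
        def __init__(self, name, all_hosts, schedulers):
            self.name = name
            # Only the hosts this scheduler owns under the current mapping.
            self.free_ram = {h: ram for h, ram in all_hosts.items()
                             if owner(h, schedulers) == name}

        def schedule(self, ram):
            for host in sorted(self.free_ram):
                if self.free_ram[host] >= ram:
                    self.free_ram[host] -= ram
                    return host
            # Nothing fits locally: bow out, let another scheduler retry.
            raise Retry()


    all_hosts = {'node%d' % i: 8192 for i in range(6)}
    schedulers = ['sched-1', 'sched-2']
    s1 = PartitionedScheduler('sched-1', all_hosts, schedulers)
    s2 = PartitionedScheduler('sched-2', all_hosts, schedulers)
    try:
        print(s1.schedule(2048))
    except Retry:
        print(s2.schedule(2048))

When schedulers come and go, you recompute the ownership mapping and each
scheduler picks up or drops hosts accordingly; nothing else has to stay in
sync.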
