On Wed, 11 Jan 2017, Chris Dent wrote:
This conclusion rests on three assumptions:

* The value of 'local_gb' on the compute_node object is any disk the
  compute node can see/use; the concept of associating shared disk via
  aggregates is not something that is real yet[0].
* Any query for resources from the scheduler client is going to include
  a VCPU requirement of at least one, meaning that every resource
  provider returned will be a compute node[1] (a sketch of such a query
  is below).
* Claiming the consumption of some of that local_gb is the resource
  tracker's problem and not something we're talking about here[2].

If all that's true, then we're getting pretty close to near-term joy on
limiting the number of hosts the filter scheduler needs to filter[3].
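To make the second assumption concrete, here's a rough sketch of the
kind of query the scheduler client might make against the placement
API, using its 'resources' filter to get back providers with enough
capacity. The endpoint, token, and amounts are made up for
illustration; treat this as a sketch, not a description of the actual
scheduler client code.

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # assumed endpoint
    TOKEN = '...'  # assumed keystone token

    def get_candidate_providers(vcpu=1, memory_mb=2048, disk_gb=20):
        """List resource providers able to satisfy the requested amounts.

        Since VCPU is always at least 1, every provider returned
        ought to be a compute node (assumption two above).
        """
        resources = 'VCPU:%d,MEMORY_MB:%d,DISK_GB:%d' % (
            vcpu, memory_mb, disk_gb)
        resp = requests.get(
            '%s/resource_providers' % PLACEMENT,
            params={'resources': resources},
            headers={
                'X-Auth-Token': TOKEN,
                # pin a microversion that has the resources filter
                'OpenStack-API-Version': 'placement 1.4',
            })
        resp.raise_for_status()
        return resp.json()['resource_providers']

Note that the DISK_GB amount there would be served from local_gb on the
compute node, per the first assumption.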
These assumptions don't address any situation where baremetal is what's
being requested. Custom resource classes will help address that, but it
is not clear what the state of that is (or will be soon) from the
scheduler side. (A speculative sketch of what such a request might look
like follows, below my sig.)

-- 
Chris Dent                 ¯\_(ツ)_/¯           https://anticdent.org/
freenode: cdent                                         tw: @anticdent
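For what it's worth, a baremetal request might eventually ask for one
unit of a CUSTOM_* resource class instead of VCPU/MEMORY_MB/DISK_GB.
The class name below is invented and none of this is working code
today; it's only a sketch of the shape such a query could take.

    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # assumed endpoint
    TOKEN = '...'  # assumed keystone token

    def get_baremetal_providers(resource_class='CUSTOM_BAREMETAL_GOLD'):
        # Each Ironic node would be a resource provider with an
        # inventory of exactly one unit of its custom class, so asking
        # for one unit should return only matching baremetal nodes.
        resp = requests.get(
            '%s/resource_providers' % PLACEMENT,
            params={'resources': '%s:1' % resource_class},
            headers={'X-Auth-Token': TOKEN,
                     'OpenStack-API-Version': 'placement 1.4'})
        resp.raise_for_status()
        return resp.json()['resource_providers']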
