Thanks for the info. So it seems we are not going to implement aggregate-based
overcommit ratios in placement, at least not in the near future?

On Wed, Jan 17, 2018 at 5:24 AM, melanie witt <[email protected]> wrote:

> Hello Stackers,
>
> This is a heads up to any of you using the AggregateCoreFilter,
> AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler.
> In Newton and earlier, these filters effectively allowed operators to set
> overcommit ratios per aggregate rather than per compute node.
>
> Beginning in Ocata, there is a behavior change where aggregate-based
> overcommit ratios will no longer be honored during scheduling. Instead,
> overcommit values must be set on a per compute node basis in nova.conf.
>
> Details: as of Ocata, instead of considering all compute nodes at the
> start of scheduler filtering, an optimization has been added to query
> resource capacity from placement and prune the compute node list with the
> result *before* any filters are applied. Placement tracks resource capacity
> and usage and does *not* track aggregate metadata [1]. Because of this,
> placement cannot consider aggregate-based overcommit and will exclude
> compute nodes that do not have capacity based on per compute node
> overcommit.
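> 
> For illustration only (this is not nova's actual code), the capacity check
> placement does for each resource class is roughly:
> 
>     def has_capacity(total, reserved, allocation_ratio, used, requested):
>         # allocation_ratio here is the per compute node value from
>         # nova.conf; aggregate metadata is never consulted
>         capacity = (total - reserved) * allocation_ratio
>         return used + requested <= capacity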
>
> How to prepare: if you have been relying on per aggregate overcommit,
> during your upgrade to Ocata, you must change to using per compute node
> overcommit ratios in order for your scheduling behavior to stay consistent.
> Otherwise, you may notice increased NoValidHost scheduling failures as the
> aggregate-based overcommit is no longer being considered. You can safely
> remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter
> from your enabled_filters and you do not need to replace them with any
> other core/ram/disk filters. The placement query takes care of the
> core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter
> are redundant.
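> 
> For example, per compute node ratios are set in each compute node's
> nova.conf (the values and filter list below are only illustrative):
> 
>     [DEFAULT]
>     cpu_allocation_ratio = 16.0
>     ram_allocation_ratio = 1.5
>     disk_allocation_ratio = 1.0
> 
>     [filter_scheduler]
>     # your existing list, minus the Aggregate*/Core/Ram/Disk filters
>     enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter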
>
> Thanks,
> -melanie
>
> [1] Placement has been a clean slate for resource management. Prior to
> placement, there were long-standing conflicts between the different
> methods of setting overcommit ratios that were never resolved, such as:
> "which value takes precedence if a compute node has an overcommit ratio
> set AND its aggregate has one set?" and "if a compute node is in more
> than one aggregate, which overcommit value should be used?" Those
> ambiguities were not something we wanted to carry forward into
> placement.
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
