On 3/29/2018 12:05 PM, Chris Dent wrote:
>>>> Other suggestions? I'm looking at things like turning off
>>>> scheduler_tracks_instance_changes, since affinity scheduling is
>>>> not needed (at least so far), but I'm not sure that will help
>>>> with placement load (seems like it might, though?)
>>>
>>> This won't impact the placement service itself.
>>
>> It seemed like it might be causing the compute nodes to make calls
>> to update allocations, so I was thinking it might reduce the load a
>> bit, but I didn't confirm that. This was "clutching at straws" -
>> hopefully I won't need to now.
>
> There's duplication of instance state going to both placement and
> the nova-scheduler. The number of calls from nova-compute to
> placement reduces a bit as you upgrade to newer releases. It's
> still more than we'd prefer.

As Chris said, scheduler_tracks_instance_changes doesn't have anything to do with Placement. It actually adds RPC load to your system: every compute RPC-casts to the scheduler on each instance create/delete/move operation, and a periodic task runs (by default, every minute) on each compute service to sync things up.

The primary consumer of scheduler_tracks_instance_changes is the (anti-)affinity filters in the scheduler (and possibly the CachingScheduler, if you're using that). If you don't enable the (anti-)affinity filters (note they are enabled by default), then you can disable scheduler_tracks_instance_changes.
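For example, a minimal nova.conf sketch (not a drop-in config: the filter list below is just the default set with the two ServerGroup filters dropped, so adjust it to whatever your deployment actually uses) might look like this. Note that on Ocata and newer the option lives in the [filter_scheduler] group as track_instance_changes; scheduler_tracks_instance_changes is the older [DEFAULT] name for the same thing:

    [filter_scheduler]
    # Stop the computes from fanning instance-info updates out to the
    # scheduler(s).
    track_instance_changes = False
    # Default filters minus ServerGroupAffinityFilter and
    # ServerGroupAntiAffinityFilter; trim to your deployment's needs.
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter

As far as I know the computes read track_instance_changes too (it gates the RPC fanout on their side), so set it on the compute nodes as well as the scheduler and restart the services.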

Note that you can still disable scheduler_tracks_instance_changes and run the affinity filters, but the scheduler will likely make poor decisions in a busy cloud, which can result in reschedules - and reschedules are also expensive.

Long-term, we hope to remove the need for scheduler_tracks_instance_changes entirely, because we should have all of the information we need about the instances in the Placement service, which is generally considered global to the deployment. However, we don't yet have a way to model affinity/distance in Placement, and that's what's holding us back from removing scheduler_tracks_instance_changes and the existing affinity filters.

--

Thanks,

Matt

