In this case the compute service is managing baremetal nodes using the ironic driver, correct? In that case, yes, the ComputeFilter will reject all nodes managed by that compute service host because the service has been disabled, just as it would in a 1:1 host:node (hypervisor) topology when using the libvirt driver, for example.
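To make the 1:N behavior concrete, here is a minimal sketch; `HostState` and the filter below are simplified stand-ins for nova's internals, not the real classes:

```python
# Minimal sketch (not the actual nova code) of why disabling one
# nova-compute service hides every node it manages from the scheduler.

class HostState:
    def __init__(self, node, service):
        self.node = node        # hypervisor / ironic node name
        self.service = service  # shared record for the managing service


class ComputeFilter:
    def host_passes(self, host_state):
        # The filter checks the *service* record, not the individual
        # node, so one disabled flag rejects all of that service's nodes.
        return not host_state.service['disabled']


# One compute service managing three baremetal nodes (1:N topology).
service = {'host': 'compute-1', 'disabled': True,
           'disabled_reason': 'maintenance'}
nodes = [HostState(n, service)
         for n in ('bm-node-a', 'bm-node-b', 'bm-node-c')]

f = ComputeFilter()
print([hs.node for hs in nodes if f.host_passes(hs)])  # -> []
```

Re-enabling the service (`service['disabled'] = False`) makes all three nodes pass again, which is the designed all-or-nothing behavior being reported here.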
One question is why you are disabling the hypervisor if you still want things scheduled to it? If you're doing an upgrade or maintenance on the compute service host, then you likely don't want new scheduling requests to go to it, because they could fail while the nova-compute service is down.

This doesn't really sound like a bug, since it is the designed behavior. At most it's an opinion, but you haven't really stated a use case besides "I want to be able to schedule baremetal instances on nodes managed by a disabled service", which is probably not something we're going to support. There is the concept of the ironic compute service hash ring for HA management of the same ironic nodes; maybe that's something you should be investigating?

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/ironic-multiple-compute-hosts.html
https://docs.openstack.org/ironic/stein/install/configure-compute.html?highlight=hash%20ring

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1831195

Title:
  disabling one compute service will prevent the scheduler from choosing
  the hypervisors the compute service manages

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===========
  When one nova-compute service manages multiple hypervisors, disabling
  the nova-compute service will prevent the nova scheduler from choosing
  any of the hypervisors it manages, because of this code in
  compute_filter.py:

      service = host_state.service
      if service['disabled']:
          LOG.debug("%(host_state)s is disabled, reason: %(reason)s",
                    {'host_state': host_state,
                     'reason': service.get('disabled_reason')})
          return False

  I think this is not reasonable. One solution is to make each compute
  service manage only one hypervisor, but that results in more compute
  services.
  Steps to reproduce
  ==================

  Expected result
  ===============
  The operator can disable an individual hypervisor.

  Actual result
  =============
  Disabling one nova-compute service prevents the nova scheduler from
  choosing any of the hypervisors it manages.

  Environment
  ===========

  Logs & Configs
  ==============

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1831195/+subscriptions
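The hash-ring approach suggested in the triage comment above can be sketched roughly as follows. This is an illustration of the consistent-hashing idea from the linked spec, not ironic's actual hash ring implementation; the class and host names are made up for the example:

```python
# Rough sketch of the hash-ring idea: ironic nodes map onto a ring of
# compute services, and when one service is disabled or down, its nodes
# fall to the next service on the ring instead of becoming
# unschedulable.
import bisect
import hashlib


def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing:
    def __init__(self, hosts, replicas=16):
        # Place several virtual points per host on the ring for balance.
        self.ring = sorted((_hash('%s-%d' % (h, i)), h)
                           for h in hosts for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    def get_host(self, node, excluded=()):
        # Walk clockwise from the node's position to the first live host.
        idx = bisect.bisect(self.keys, _hash(node)) % len(self.ring)
        for step in range(len(self.ring)):
            host = self.ring[(idx + step) % len(self.ring)][1]
            if host not in excluded:
                return host


hosts = ['compute-1', 'compute-2', 'compute-3']
ring = HashRing(hosts)
owner = ring.get_host('bm-node-a')
# If the owning service is disabled, the node is picked up by another
# service rather than dropping out of scheduling entirely.
fallback = ring.get_host('bm-node-a', excluded={owner})
```

With multiple compute services sharing the ironic nodes this way, disabling one service's host moves its nodes to the surviving services, which addresses the reporter's concern without scheduling to a disabled service.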

