Public bug reported:
I occasionally get an error when I live-migrate an instance. After
checking the nova-conductor logs, I found the errors below:
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[None req-9fa0ab78-a993-47d4-bfe5-7869a499f3fa None None] [instance:
999b1fc9-389b-4a78-b041-da0556e9e42d] Binding failed for port
774a5c63-1e9f-4f7c-bc37-f038f2330338 and host master-node2.:
neutronclient.common.exceptions.Conflict: Binding for port
774a5c63-1e9f-4f7c-bc37-f038f2330338 on host master-node2 already exists.
Sep 18 05:53:07 master-node1 nova-conductor[199668]: Neutron server returns
request_ids: ['req-de9620ea-62fb-4246-9463-80d2f46b3c65']
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] Traceback (most recent call
last):
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/nova/nova/network/neutron.py", line 1601, in bind_ports_to_host
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] binding =
client.create_port_binding(port_id, data)['binding']
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/nova/nova/network/neutron.py", line 198, in wrapper
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] ret = obj(*args, **kwargs)
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] ^^^^^^^^^^^^^^^^^^^^
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/data/venv/lib/python3.12/site-packages/neutronclient/v2_0/client.py",
line 840, in create_port_binding
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] return
self.post(self.port_bindings_path % port_id, body=body)
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/data/venv/lib/python3.12/site-packages/neutronclient/v2_0/client.py",
line 364, in post
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] return
self.do_request("POST", action, body=body,
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/data/venv/lib/python3.12/site-packages/neutronclient/v2_0/client.py",
line 300, in do_request
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
self._handle_fault_response(status_code, replybody, resp)
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/data/venv/lib/python3.12/site-packages/neutronclient/v2_0/client.py",
line 275, in _handle_fault_response
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
exception_handler_v20(status_code, error_body)
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] File
"/opt/stack/data/venv/lib/python3.12/site-packages/neutronclient/v2_0/client.py",
line 90, in exception_handler_v20
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] raise
client_exc(message=error_message,
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
neutronclient.common.exceptions.Conflict: Binding for port
774a5c63-1e9f-4f7c-bc37-f038f2330338 on host master-node2 already exists.
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] Neutron server returns
request_ids: ['req-de9620ea-62fb-4246-9463-80d2f46b3c65']
Sep 18 05:53:07 master-node1 nova-conductor[199668]: ERROR nova.network.neutron
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d]
Sep 18 05:53:07 master-node1 nova-conductor[199668]: DEBUG
nova.conductor.tasks.live_migrate [None
req-9fa0ab78-a993-47d4-bfe5-7869a499f3fa None None] Skipping host: master-node2
because: Migration pre-check error: Binding failed for port
774a5c63-1e9f-4f7c-bc37-f038f2330338, please check neutron logs for more
information. {{(pid=199668) _find_destination
/opt/stack/nova/nova/conductor/tasks/live_migrate.py:576}}
Sep 18 05:53:08 master-node1 nova-conductor[199668]: DEBUG
nova.conductor.tasks.migrate [None req-9fa0ab78-a993-47d4-bfe5-7869a499f3fa
None None] Created allocations for instance
999b1fc9-389b-4a78-b041-da0556e9e42d on 134cc6fb-d42c-4513-9114-1e105de6663a
{{(pid=199668) revert_allocation_for_migration
/opt/stack/nova/nova/conductor/tasks/migrate.py:111}}
Sep 18 05:53:08 master-node1 nova-conductor[199668]: WARNING
nova.scheduler.utils [None req-9fa0ab78-a993-47d4-bfe5-7869a499f3fa None None]
Failed to compute_task_migrate_server: No valid host was found. There are not
enough hosts available.
Sep 18 05:53:08 master-node1 nova-conductor[199668]: Traceback (most recent
call last):
Sep 18 05:53:08 master-node1 nova-conductor[199668]: File
"/opt/stack/data/venv/lib/python3.12/site-packages/oslo_messaging/rpc/server.py",
line 269, in inner
Sep 18 05:53:08 master-node1 nova-conductor[199668]: return func(*args,
**kwargs)
Sep 18 05:53:08 master-node1 nova-conductor[199668]:
^^^^^^^^^^^^^^^^^^^^^
Sep 18 05:53:08 master-node1 nova-conductor[199668]: File
"/opt/stack/nova/nova/scheduler/manager.py", line 269, in select_destinations
Sep 18 05:53:08 master-node1 nova-conductor[199668]: selections =
self._select_destinations(
Sep 18 05:53:08 master-node1 nova-conductor[199668]:
^^^^^^^^^^^^^^^^^^^^^^^^^^
Sep 18 05:53:08 master-node1 nova-conductor[199668]: File
"/opt/stack/nova/nova/scheduler/manager.py", line 296, in _select_destinations
Sep 18 05:53:08 master-node1 nova-conductor[199668]: selections =
self._schedule(
Sep 18 05:53:08 master-node1 nova-conductor[199668]:
^^^^^^^^^^^^^^^
Sep 18 05:53:08 master-node1 nova-conductor[199668]: File
"/opt/stack/nova/nova/scheduler/manager.py", line 497, in _schedule
Sep 18 05:53:08 master-node1 nova-conductor[199668]:
self._ensure_sufficient_hosts(
Sep 18 05:53:08 master-node1 nova-conductor[199668]: File
"/opt/stack/nova/nova/scheduler/manager.py", line 544, in
_ensure_sufficient_hosts
Sep 18 05:53:08 master-node1 nova-conductor[199668]: raise
exception.NoValidHost(reason=reason)
Sep 18 05:53:08 master-node1 nova-conductor[199668]:
nova.exception.NoValidHost: No valid host was found. There are not enough hosts
available.
Sep 18 05:53:08 master-node1 nova-conductor[199668]: :
nova.exception_Remote.NoValidHost_Remote: No valid host was found. There are
not enough hosts available.
Sep 18 05:53:08 master-node1 nova-conductor[199668]: WARNING
nova.scheduler.utils [None req-9fa0ab78-a993-47d4-bfe5-7869a499f3fa None None]
[instance: 999b1fc9-389b-4a78-b041-da0556e9e42d] Setting instance to ACTIVE
state.: nova.exception_Remote.NoValidHost_Remote: No valid host was found.
There are not enough hosts available.
After investigation, I found that this was caused by a previous failed
live migration of the instance, which left a stale (redundant) port
binding for the instance's port on the destination host. The next
migration attempt then fails its pre-check because Neutron refuses to
create a second binding on that host.
There is a related bug in neutron:
https://bugs.launchpad.net/neutron/+bug/1979072
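A possible mitigation (a sketch only, not Nova's actual code path) is to
delete the leftover binding on Conflict and retry the bind once. The
`FakeClient` below is a stand-in for the real neutronclient;
`create_port_binding` appears in the traceback above, while
`delete_port_binding` and the exact payload shape are assumptions here:

```python
class Conflict(Exception):
    """Stand-in for neutronclient.common.exceptions.Conflict."""


def bind_port_with_cleanup(client, port_id, host, data):
    """Try to bind a port; on Conflict, drop the stale binding and retry once."""
    try:
        return client.create_port_binding(port_id, data)["binding"]
    except Conflict:
        # A binding left behind by a previously failed live migration
        # blocks the new one; remove it, then retry the bind.
        client.delete_port_binding(port_id, host)
        return client.create_port_binding(port_id, data)["binding"]


class FakeClient:
    """Minimal fake that reproduces the 'already exists' Conflict."""

    def __init__(self):
        # Simulate the stale binding left on the destination host.
        self.bound = {"master-node2"}

    def create_port_binding(self, port_id, data):
        host = data["binding"]["host"]
        if host in self.bound:
            raise Conflict(
                f"Binding for port {port_id} on host {host} already exists.")
        self.bound.add(host)
        return {"binding": {"host": host}}

    def delete_port_binding(self, port_id, host):
        self.bound.discard(host)


client = FakeClient()
binding = bind_port_with_cleanup(
    client,
    "774a5c63-1e9f-4f7c-bc37-f038f2330338",
    "master-node2",
    {"binding": {"host": "master-node2"}},
)
print(binding["host"])  # master-node2
```

With the fake client the first create raises Conflict (exactly the error
in the log), the stale binding is removed, and the retry succeeds;
whether deleting and retrying is safe in a real deployment depends on
the state of the failed migration, so treat this as illustration only.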
** Affects: neutron
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2125034
Title:
During live-migration nova-conductor report 'No valid host was found'
due to duplicated port bindings
Status in neutron:
New