Public bug reported:
With the following reproduction steps, the trunk and port are left residual:
1. Operator creates a trunk, a parent port and a subport.
2. Operator creates a VM with the parent port.
3. Port binding succeeds. Host and device information are updated.
4. Operator starts to delete the VM.
5. Due to a 3rd-party plugin issue, the port unbind fails. Host and device
information are not updated. Trunk and port binding information are not
updated. The port's status remains ACTIVE.
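The steps above correspond roughly to the following CLI sequence (network, image and flavor names are illustrative, not from the original report; this assumes a working OpenStack client environment):

```shell
# 1. Create the trunk, parent port and subport
openstack port create --network net0 parent-port
openstack port create --network net1 sub-port
openstack network trunk create --parent-port parent-port \
    --subport port=sub-port,segmentation-type=vlan,segmentation-id=100 trunk0
# 2-3. Create the VM with the parent port; Nova triggers the port binding
openstack server create --image cirros --flavor m1.tiny \
    --port parent-port vm0
# 4-5. Delete the VM; if the 3rd-party plugin fails the unbind,
#      the port stays bound and the trunk can no longer be deleted
openstack server delete vm0
```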
Test result:
1. Nova deletes the VM while ignoring the failure response code from the
port update, so the VM is deleted successfully.
2. For Neutron, the operator expects the trunk and port to be deletable,
but in reality deleting the trunk fails because the parent port is still
in bound status.
6315384D-49A8-E88F-E911-0FB32A841FBB:/home/fsp # openstack network trunk delete 8aae96b5-18f1-4b67-b0eb-9fdffb0c9a1b
Failed to delete trunk with name or ID '8aae96b5-18f1-4b67-b0eb-9fdffb0c9a1b': Trunk 8aae96b5-18f1-4b67-b0eb-9fdffb0c9a1b is currently in use.
Neutron server returns request_ids: ['req-3d0dbe95-250f-4791-835f-ef9617ee7871']
1 of 1 trunks failed to delete.
Alternatives:
1. Ignore the port-bound check when the trunk is deleted, just as is done
for the port itself: when trunk delete is triggered, skip the
trunk_port_validator.can_be_trunked_or_untrunked(context) check.
2. Add a new API: DELETE /v2.0/trunks/{trunk_id}/force_delete_trunk.
Since Neutron does not support actions in POST the way Nova does, there is
no existing structure in which a force-delete parameter could be added as
Nova has.
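Alternative 1 can be sketched as follows. This is a minimal, self-contained model of the idea, not Neutron code: the class names (Trunk, Port, TrunkInUse) and the force flag are hypothetical, standing in for Neutron's trunk plugin and its validator.

```python
# Hypothetical sketch: skip the bound-port validation when deleting a
# trunk. All names here are illustrative, not Neutron's actual classes.

class TrunkInUse(Exception):
    """Raised when a trunk's parent port is still bound."""


class Port:
    def __init__(self, port_id, bound):
        self.id = port_id
        self.bound = bound  # True if a host/device is still attached


class Trunk:
    def __init__(self, trunk_id, parent_port):
        self.id = trunk_id
        self.parent_port = parent_port


def delete_trunk(db, trunk, force=False):
    """Delete a trunk; with force=True, skip the bound-port check."""
    if not force and trunk.parent_port.bound:
        # Current behaviour: refuse while the parent port is bound.
        raise TrunkInUse("Trunk %s is currently in use." % trunk.id)
    db.pop(trunk.id, None)


db = {}
port = Port("parent-port", bound=True)  # unbind failed, so still bound
trunk = Trunk("8aae96b5", port)
db[trunk.id] = trunk

try:
    delete_trunk(db, trunk)              # normal delete fails
except TrunkInUse:
    pass
delete_trunk(db, trunk, force=True)      # force delete succeeds
```

The same skip could equally back the proposed force_delete_trunk endpoint of alternative 2: the endpoint would simply call the delete path with the check disabled.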
** Affects: neutron
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938972
Title:
Trunk and port are residual in VM deleting scenario when southbound
plugin/agent failed
Status in neutron:
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938972/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp