Public bug reported:
Currently, the default behavior of Neutron when listing security groups is: if you are "admin", it returns every security group in the environment. As a result, when a user boots a server with a security group belonging to another project, Nova raises no error in the API-layer check, but an error is raised later in the compute layer:
Step 1:
get the security group list using the admin role:
root@zhenyu-dev:/var/log/nova# neutron security-group-list
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------------------+
| id                                   | name    | tenant_id                        | security_group_rules                                                 |
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------------------+
| 361efa37-1af0-43b2-8fa2-3cc8eccd18c9 | default | 1af7848eeb924fed851dd21bb23bb7c3 | egress, IPv4                                                         |
|                                      |         |                                  | egress, IPv6                                                         |
|                                      |         |                                  | ingress, IPv4, remote_group_id: 361efa37-1af0-43b2-8fa2-3cc8eccd18c9 |
|                                      |         |                                  | ingress, IPv6, remote_group_id: 361efa37-1af0-43b2-8fa2-3cc8eccd18c9 |
| 74a120bb-e8d3-4337-bebe-d77fa848f55c | default | 16cad1bf21ce4874896c8dc88c89c997 | egress, IPv4                                                         |
|                                      |         |                                  | egress, IPv6                                                         |
|                                      |         |                                  | ingress, IPv4, remote_group_id: 74a120bb-e8d3-4337-bebe-d77fa848f55c |
|                                      |         |                                  | ingress, IPv6, remote_group_id: 74a120bb-e8d3-4337-bebe-d77fa848f55c |
| e152865b-fc99-4cc7-b9e6-584a800d71bc | default |                                  | egress, IPv4                                                         |
|                                      |         |                                  | egress, IPv6                                                         |
|                                      |         |                                  | ingress, IPv4, remote_group_id: e152865b-fc99-4cc7-b9e6-584a800d71bc |
|                                      |         |                                  | ingress, IPv6, remote_group_id: e152865b-fc99-4cc7-b9e6-584a800d71bc |
| e1cf0509-65c0-4213-9bc4-391554ab1a4a | default | 009f69811d5c40e9968c8d1fda7e222b | egress, IPv4                                                         |
|                                      |         |                                  | egress, IPv6                                                         |
|                                      |         |                                  | ingress, IPv4, remote_group_id: e1cf0509-65c0-4213-9bc4-391554ab1a4a |
|                                      |         |                                  | ingress, IPv6, remote_group_id: e1cf0509-65c0-4213-9bc4-391554ab1a4a |
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------------------+
Step 2:
choose a security group from another project to boot a server:
root@zhenyu-dev:/var/log/nova# nova boot --image 572a29b0-7a22-4359-87b1-d30944d7c659 --security-groups 361efa37-1af0-43b2-8fa2-3cc8eccd18c9 --nic net-id=c248898a-4dd4-491a-a8a9-01810ea338a2 --flavor 1 test
+--------------------------------------+-----------------------------------------------------------------+
| Property                             | Value                                                           |
+--------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                          |
| OS-EXT-AZ:availability_zone          |                                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                                               |
| OS-EXT-SRV-ATTR:hostname             | test                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                               |
| OS-EXT-SRV-ATTR:instance_name        |                                                                 |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                 |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                               |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                 |
| OS-EXT-SRV-ATTR:reservation_id       | r-e9ny6bfb                                                      |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                               |
| OS-EXT-SRV-ATTR:user_data            | -                                                               |
| OS-EXT-STS:power_state               | 0                                                               |
| OS-EXT-STS:task_state                | scheduling                                                      |
| OS-EXT-STS:vm_state                  | building                                                        |
| OS-SRV-USG:launched_at               | -                                                               |
| OS-SRV-USG:terminated_at             | -                                                               |
| accessIPv4                           |                                                                 |
| accessIPv6                           |                                                                 |
| adminPass                            | w5p9NErq77Fn                                                    |
| config_drive                         |                                                                 |
| created                              | 2017-05-19T01:03:43Z                                            |
| description                          | -                                                               |
| flavor                               | m1.tiny (1)                                                     |
| hostId                               |                                                                 |
| host_status                          |                                                                 |
| id                                   | b8ed1fae-93a3-425c-a4e9-d75c0d1631cd                            |
| image                                | cirros-0.3.5-x86_64-disk (572a29b0-7a22-4359-87b1-d30944d7c659) |
| key_name                             | -                                                               |
| locked                               | False                                                           |
| metadata                             | {}                                                              |
| name                                 | test                                                            |
| os-extended-volumes:volumes_attached | []                                                              |
| progress                             | 0                                                               |
| security_groups                      | 361efa37-1af0-43b2-8fa2-3cc8eccd18c9                            |
| status                               | BUILD                                                           |
| tags                                 | []                                                              |
| tenant_id                            | 16cad1bf21ce4874896c8dc88c89c997                                |
| updated                              | 2017-05-19T01:03:43Z                                            |
| user_id                              | 597ee5c1ea82482ca8aec10b1a688359                                |
+--------------------------------------+-----------------------------------------------------------------+
Step 3:
check the server again:
root@zhenyu-dev:/var/log/nova# nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| b8ed1fae-93a3-425c-a4e9-d75c0d1631cd | test | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+
The instance is in ERROR state due to the following in the nova-compute log:
2017-05-19 09:17:13.734 DEBUG nova.notifications.objects.base [req-e9870e63-2388-4259-977e-4eab0ae64975 admin admin] Defaulting the value of the field 'projects' to None i
...skipping...
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     self.wait()
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/opt/stack/nova/nova/network/model.py", line 573, in wait
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     self[:] = self._gt.wait()
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     return self._exit_event.wait()
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 125, in wait
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     current.throw(*self._exc)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     result = function(*args, **kwargs)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/opt/stack/nova/nova/utils.py", line 1056, in context_wrapper
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     return func(*args, **kwargs)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/opt/stack/nova/nova/compute/manager.py", line 1415, in _allocate_network_async
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     six.reraise(*exc_info)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/opt/stack/nova/nova/compute/manager.py", line 1398, in _allocate_network_async
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     bind_host_id=bind_host_id)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 855, in allocate_for_instance
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     instance, neutron, security_groups)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 653, in _process_security_groups
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]     security_group_id=security_group)
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed] SecurityGroupNotFound: Security group 361efa37-1af0-43b2-8fa2-3cc8eccd18c9 not found.
2017-05-19 09:17:15.020 TRACE nova.compute.manager [instance: 8668778e-84e3-4935-b428-eb8d907db9ed]
2017-05-19 09:17:15.021 INFO nova.compute.manager [req-e9870e63-2388-4259-977e-4eab0ae64975 admin admin] [instance: 8668778e-84e3-4935-b428-eb8d907db9ed] Terminating instance
This inconsistency is caused by the following:
In the API layer, security groups are checked using
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/security_group/neutron_driver.py#n145
which is a "show" API call; since we are using an admin context, it returns the group successfully.
But in the compute layer:
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/neutronv2/api.py#n623
we fetch the security group info with the "list" API, filtered by instance.project_id, so a group owned by another project is never returned and the lookup fails.
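The mismatch between the two checks can be illustrated with a minimal sketch. This is toy code, not the actual nova/neutron calls: the group data and helper names (`api_layer_check`, `compute_layer_lookup`) are hypothetical, and only model the difference between an admin-context "show" and a project-filtered "list".

```python
# Toy model of the two code paths (hypothetical helpers, not nova's code).
# All security groups in the deployment, keyed by id.
ALL_GROUPS = {
    "361efa37": {"id": "361efa37", "project_id": "1af7848e"},  # other project
    "74a120bb": {"id": "74a120bb", "project_id": "16cad1bf"},  # our project
}

def api_layer_check(sg_id, context_is_admin, caller_project="16cad1bf"):
    """API layer: a per-id 'show'. An admin context sees every group."""
    group = ALL_GROUPS.get(sg_id)
    if group is None:
        raise LookupError("Security group %s not found." % sg_id)
    # A non-admin 'show' is scoped to the caller's project; admin is not.
    if not context_is_admin and group["project_id"] != caller_project:
        raise LookupError("Security group %s not found." % sg_id)
    return group

def compute_layer_lookup(sg_id, instance_project_id):
    """Compute layer: a 'list' filtered by the instance's project_id."""
    visible = [g for g in ALL_GROUPS.values()
               if g["project_id"] == instance_project_id]
    for g in visible:
        if g["id"] == sg_id:
            return g
    raise LookupError("Security group %s not found." % sg_id)

# Admin boots a server in project 16cad1bf with the other project's group:
api_layer_check("361efa37", context_is_admin=True)   # passes (admin 'show')
try:
    compute_layer_lookup("361efa37", "16cad1bf")     # fails (filtered 'list')
except LookupError as exc:
    print(exc)                                       # Security group 361efa37 not found.
```

The same id passes the first function and fails the second, which is exactly the BUILD-then-ERROR behavior shown in the steps above.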
** Affects: nova
Importance: Undecided
Assignee: Zhenyu Zheng (zhengzhenyu)
Status: In Progress
** Changed in: nova
Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691902
Title:
Should not allow security group from other project pass API layer
check when booting
Status in OpenStack Compute (nova):
In Progress
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1691902/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp