[Yahoo-eng-team] [Bug 1854131] [NEW] Old conjunction left after sg update

2019-11-26 Thread Yang Li
Public bug reported:

1. Create 2 security groups:
test-security1, with rule(ingress, IPv4, 1-65535/tcp, remote_group: test-security1)
test-security2, with rule(ingress, IPv4, 1-65535/tcp, remote_group: test-security2)

2. Create a VM (IP: 40.0.0.46) with test-security1; the OpenFlow entries then showed:
 cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2)
 cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2)

3. Update the VM's security group to test-security2; the OpenFlow entries then showed:
 cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2),conjunction(22,1/2)
 cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2),conjunction(23,1/2)

You can see the old conjunctions for test-security1 still exist: conjunction(14,1/2) and conjunction(15,1/2).
This causes a security problem for the VM, because it can still be reached by VMs in the old security group.
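
A quick way to spot such leftovers is to dump table 82 and look for remote-group flows carrying more than one conjunction action. A hedged sketch (bridge name and table number taken from the report; the parsing is illustrative, not neutron's own cleanup logic):

  import re
  import subprocess

  # Dump the security-group table (82) shown in the report.
  out = subprocess.check_output(
      ["ovs-ofctl", "dump-flows", "br-int", "table=82"], text=True)
  for line in out.splitlines():
      conj_ids = set(re.findall(r"conjunction\((\d+),", line))
      if len(conj_ids) > 1:
          # After an SG update a remote-group flow should reference only
          # the new conjunction; multiple IDs suggest a stale leftover.
          print("possibly stale:", line.strip())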

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  1.Create 2 security groups:
  test-security1, with rule(ingress, IPv4, 1-65535/tcp, remote_group: test-security1)
  test-security2, with rule(ingress, IPv4, 1-65535/tcp, remote_group: test-security2)

  2.Create a VM(IP: 40.0.0.46) with test-security1, then the open flows showed:
-  cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2)
-  cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2)
+  cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2)
+  cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2)

- 3.Update VM's sg to test-security2, then the open flows showed:
-  cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2),conjunction(22,1/2)
-  cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2),conjunction(23,1/2)
+ 3.Update VM's sg to test-security2, then the open flows showed:
+  cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2),conjunction(22,1/2)
+  cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2),conjunction(23,1/2)

- You can see the old conjunction for test-security1 still exists: conjunction(15,1/2)
+ You can see the old conjunction for test-security1 still exists: conjunction(14,1/2) and conjunction(15,1/2)
  This will cause security problem for VM, because it still can be reached by the old sg VMs.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1854131

Title:
  Old conjunction left after sg update

Status in neutron:
  New

Bug description:
  1. Create 2 security groups:
  test-security1, with rule(ingress, IPv4, 1-65535/tcp, remote_group: test-security1)
  test-security2, with rule(ingress, IPv4, 1-65535/tcp, remote_group: test-security2)

  2. Create a VM (IP: 40.0.0.46) with test-security1; the OpenFlow entries then showed:
   cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2)
   cookie=0x4fff3d22d8b38f46, duration=52.174s, table=82, n_packets=0, n_bytes=0, idle_age=790, priority=73,ct_state=+new-est,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(15,1/2)

  3. Update the VM's security group to test-security2; the OpenFlow entries then showed:
   cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packets=0, n_bytes=0, idle_age=814, priority=73,ct_state=+est-rel-rpl,ip,reg6=0x8,nw_src=40.0.0.46 actions=conjunction(14,1/2),conjunction(22,1/2)
   cookie=0x12bb9d102f0c8b3b, duration=2.298s, table=82, n_packe

[Yahoo-eng-team] [Bug 1647421] Re: Neutron attempts to schedule DHCP agents even when intentionally not in use

2019-11-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/693645
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=139b496ef957364caee6861fd16676cdfb76a38b
Submitter: Zuul
Branch: master

commit 139b496ef957364caee6861fd16676cdfb76a38b
Author: Reedip 
Date:   Mon Nov 11 10:54:42 2019 +0900

Dont schedule Network, respecting network_auto_schedule config

Currently, if the DHCP mapping of a network is removed, it is reassigned
to the network. This is because of the Network Scheduler's schedule
function, which considers balancing the networks with the agents, whether
enable_dhcp is set on its subnets or not. It does not take into account
the network_auto_schedule config option. This is particularly disturbing
when considering backends which provide their own DHCP.

With this patch, if network_auto_schedule is set to False, networks won't
be automatically scheduled to DHCP agents. If DHCP is to be mapped to a
network, it can be mapped using the CLI itself.

While it may seem that this change breaks what is already working, as
mentioned earlier, if there are network backends which provide DHCP
support themselves, they won't need the automatic mapping, which is what
the term "network_auto_schedule" actually stands for.

Closes-Bug: #1647421
Change-Id: If1a6a2a174d0f737415efa2abce518722316a77b
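
A rough sketch of the kind of guard the patch describes (module and helper names are assumptions, not the exact neutron code):

  from oslo_config import cfg

  cfg.CONF.register_opts([cfg.BoolOpt('network_auto_schedule', default=True)])

  def auto_schedule_networks(plugin, context, host):
      """Sketch: respect network_auto_schedule before balancing networks."""
      if not cfg.CONF.network_auto_schedule:
          # Backends that provide their own DHCP opt out here; a network
          # can still be mapped to a DHCP agent explicitly via the CLI.
          return False
      ...  # existing balancing logic between networks and DHCP agents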


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647421

Title:
  Neutron attempts to schedule DHCP agents even when intentionally not
  in use

Status in networking-ovn:
  Confirmed
Status in neutron:
  Fix Released

Bug description:
  OVN has its own native support for DHCP, so the Neutron DHCP agent is
  not in use.  When networks get created, we see warnings in the log
  about Neutron still trying to schedule DHCP agents.

  We should be able to disable this code path completely when the DHCP
  agent is intentionally not in use.

  2016-12-05 16:44:12.252 23149 WARNING neutron.scheduler.dhcp_agent_scheduler 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] No more DHCP agents
  2016-12-05 16:44:12.253 23149 WARNING 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] Unable to schedule network 
45d29a60-a672-429f-b7d7-551ee985c8ca: no agents available; will retry on 
subsequent port and subnet creation events.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1647421/+subscriptions



[Yahoo-eng-team] [Bug 1854126] [NEW] s390x: failed to live migrate VM

2019-11-26 Thread jichenjc
Public bug reported:

See the following logs when doing a live migration on the s390x platform with KVM:

openstack server migrate --live kvm02 --block-migration d28caa4a-215b-
44c8-bed0-e0e7faca07e5


Logs:

2019-10-10 12:03:25.710 19003 ERROR nova.virt.libvirt.driver [req-
83d11ac0-3414-489e-8ad2-bfd0078e059f 44cdcb0bbe9e40fc91c043533d4dcbac
4067c50d412549c29b2deb58ec400ea1 - default default] CPU doesn't have
compatibility.

XML error: Missing CPU model name

Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult: 
libvirtError: XML error: Missing CPU model name
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
[req-83d11ac0-3414-489e-8ad2-bfd0078e059f 44cdcb0bbe9e40fc91c043533d4dcbac 
4067c50d412549c29b2deb58ec400ea1 - default default] Exception during message 
handling: MigrationPreCheckError: Migration pre-check error: CPU doesn't have 
compatibility.

XML error: Missing CPU model name

Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in 
_process_incoming
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, 
in dispatch
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server return 
self.do_dispatch(endpoint, method, ctxt, args)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, 
in do_dispatch
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 79, in 
wrapped
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server function_name, 
call_dict, binary, tb)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in exit
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
six.reraise(self.type, self.value, self.tb)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 69, in 
wrapped
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1418, in 
decorated_function
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 215, in 
decorated_function
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in exit
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
six.reraise(self.type, self.value, self.tb)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 203, in 
decorated_function
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6262, in 
check_can_live_migrate_destination
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server block_migration, 
disk_over_commit)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7205, in 
check_can_live_migrate_destination
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server 
self._compare_cpu(None, source_cpu_info, instance)
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7452, in 
_compare_cpu
2019-10-10 12:03:25.748 19003 ERROR oslo_messaging.rpc.server reas
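
The pre-check fails in nova's _compare_cpu, which hands libvirt source CPU info that, on s390x, can lack a model name. A hedged sketch of the failing libvirt call (connection URI and XML are illustrative):

  import libvirt

  conn = libvirt.open("qemu:///system")
  # On s390x the source host may report CPU info without a <model>;
  # comparing such XML fails as in the traceback above:
  cpu_xml = "<cpu mode='custom' match='exact'></cpu>"  # no <model> element
  conn.compareCPU(cpu_xml, 0)  # libvirtError: XML error: Missing CPU model name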

[Yahoo-eng-team] [Bug 1783565] Re: ServerGroupTestV21.test_evacuate_with_anti_affinity_no_valid_host intermittently fails with "Instance compute service state on host2 expected to be down, but it was

2019-11-26 Thread Matt Riedemann
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/train
   Status: New => Fix Released

** Changed in: nova/train
   Importance: Undecided => Low

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/stein
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1783565

Title:
  ServerGroupTestV21.test_evacuate_with_anti_affinity_no_valid_host
  intermittently fails with "Instance compute service state on host2
  expected to be down, but it was up."

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  Fix Released

Bug description:
  http://logs.openstack.org/32/584032/5/check/nova-tox-functional-
  py35/7061ec1/job-output.txt.gz#_2018-07-25_03_16_46_462415

  18-07-25 03:16:46.418499 | ubuntu-xenial | {5} 
nova.tests.functional.test_server_group.ServerGroupTestV21.test_evacuate_with_anti_affinity_no_valid_host
 [14.070214s] ... FAILED
  2018-07-25 03:16:46.418582 | ubuntu-xenial |
  2018-07-25 03:16:46.418645 | ubuntu-xenial | Captured traceback:
  2018-07-25 03:16:46.418705 | ubuntu-xenial | ~~~
  2018-07-25 03:16:46.418798 | ubuntu-xenial | b'Traceback (most recent 
call last):'
  2018-07-25 03:16:46.419095 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/tests/functional/test_server_group.py",
 line 456, in test_evacuate_with_anti_affinity_no_valid_host'
  2018-07-25 03:16:46.419232 | ubuntu-xenial | b"
self.admin_api.post_server_action(servers[1]['id'], post)"
  2018-07-25 03:16:46.419471 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/tests/functional/api/client.py",
 line 294, in post_server_action'
  2018-07-25 03:16:46.419602 | ubuntu-xenial | b"'/servers/%s/action' % 
server_id, data, **kwargs).body"
  2018-07-25 03:16:46.419841 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/tests/functional/api/client.py",
 line 235, in api_post'
  2018-07-25 03:16:46.419975 | ubuntu-xenial | b'return 
APIResponse(self.api_request(relative_uri, **kwargs))'
  2018-07-25 03:16:46.420187 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/tests/functional/api/client.py",
 line 213, in api_request'
  2018-07-25 03:16:46.420263 | ubuntu-xenial | b'response=response)'
  2018-07-25 03:16:46.420545 | ubuntu-xenial | 
b'nova.tests.functional.api.client.OpenStackApiException: Unexpected status 
code: {"badRequest": {"message": "Compute service of host2 is still in use.", 
"code": 400}}'
  2018-07-25 03:16:46.420581 | ubuntu-xenial | b''
  2018-07-25 03:16:46.420606 | ubuntu-xenial |
  2018-07-25 03:16:46.420654 | ubuntu-xenial | Captured stderr:
  2018-07-25 03:16:46.420702 | ubuntu-xenial | 
  2018-07-25 03:16:46.421102 | ubuntu-xenial | 
b'/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional-py35/lib/python3.5/site-packages/oslo_db/sqlalchemy/enginefacade.py:350:
 OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade'
  2018-07-25 03:16:46.421240 | ubuntu-xenial | b'  self._legacy_facade = 
LegacyEngineFacade(None, _factory=self)'
  2018-07-25 03:16:46.421623 | ubuntu-xenial | 
b'/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional-py35/lib/python3.5/site-packages/oslo_db/sqlalchemy/enginefacade.py:350:
 OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade'
  2018-07-25 03:16:46.421751 | ubuntu-xenial | b'  self._legacy_facade = 
LegacyEngineFacade(None, _factory=self)'
  2018-07-25 03:16:46.422054 | ubuntu-xenial | 
b"/home/zuul/src/git.openstack.org/openstack/nova/nova/test.py:323: 
DeprecationWarning: Using class 'MoxStubout' (either directly or via 
inheritance) is deprecated in version '3.5.0'"
  2018-07-25 03:16:46.422174 | ubuntu-xenial | b'  mox_fixture = 
self.useFixture(moxstubout.MoxStubout())'
  2018-07-25 03:16:46.422537 | ubuntu-xenial | 
b'/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional-py35/lib/python3.5/site-packages/paste/deploy/loadwsgi.py:22:
 DeprecationWarning: Parameters to load are deprecated.  Call .resolve and 
.require separately.'
  2018-07-25 03:16:46.422664 | ubuntu-xenial | b'  return 
pkg_resources.EntryPoint.parse("x=" + s).load(False)'
  2018-07-25 03:16:46.422928 | ubuntu-xenial | 
b"/home/zuul/src/git.openstack.org/opensta

[Yahoo-eng-team] [Bug 1793207] Re: external_gateway_info enable_snat attribute should be owner-modifiable

2019-11-26 Thread Brian Haley
Was just going through old bugs and patches and noticed this one,
updating based on information I received.

From Salvatore:

"My recollection is the same as Akihiro. A tenant has no knowledge of IP
addressing beyond the resource it owns, and since a no-snat
configuration implies E-W L3 forwarding an “admin” entity should be
required to set this attribute. Another reason making this capability
self-service was breaking some use cases (more specifically an IPv6 only
cloud service that never did NAT, I think you remember them😉 ). On the
other hand the main driver were other operators complaining that in
their environment they really did not need NAT whereas the reference
implementation was SNATting by default. So limiting the capability to
admins was also one of the many compromises we did back in the heyday of
Neutron…"

So having this as an admin-controlled setting is mandatory.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1793207

Title:
  external_gateway_info enable_snat attribute should be owner-modifiable

Status in neutron:
  Won't Fix

Bug description:
  Currently, policy.json restricts who can change the 'enable_snat'
  setting of a router.  For example:

  stack@18-04:~/devstack$ openstack router show -c external_gateway_info router1
  
  +------------------------+--------------------------------------------------------------------------+
  | Field                  | Value                                                                    |
  +------------------------+--------------------------------------------------------------------------+
  | external_gateway_info  | {"network_id": "91bdb30f-9be8-45ac-a313-bb33a99e92dc", "enable_snat":    |
  |                        | true, "external_fixed_ips": [{"subnet_id":                               |
  |                        | "e9b318e1-01af-49a1-90bc-ffe949a42e05", "ip_address": "172.24.4.3"},     |
  |                        | {"subnet_id": "73f36385-d58a-4b74-9262-bcb603e73aee", "ip_address":      |
  |                        | "2001:db8::6"}]}                                                         |
  +------------------------+--------------------------------------------------------------------------+
  stack@18-04:~/devstack$ openstack router set --disable-snat 
--external-gateway 91bdb30f-9be8-45ac-a313-bb33a99e92dc router1
  HttpException: 403: Client Error for url: 
http://10.18.57.23:9696/v2.0/routers/783d4563-c4d4-417c-a5de-eb7668373f63, 
{"NeutronError": {"message": "(rule:update_router and 
(rule:update_router:external_gateway_info and 
(rule:update_router:external_gateway_info:network_id and 
rule:update_router:external_gateway_info:enable_snat))) is disallowed by 
policy", "type": "PolicyNotAuthorized", "detail": ""}}

  I'm not sure there's a good reason the owner can't modify this, and
  looking back through the blueprints there was only a mention of it -
  "for instance a provider might want to restrict enable_snat to admin
  only users" - so it seems it was intended for the owner originally
  with the caveat that the admin could restrict if necessary.

  This fix would be as simple as updating these two entries:

  "create_router:external_gateway_info:enable_snat": "rule:admin_only"
  "update_router:external_gateway_info:enable_snat": "rule:admin_only"

  to have:

  "rule:admin_or_owner"

  Perhaps there's something I'm missing, so will need to discuss with
  others to see if this should change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1793207/+subscriptions



[Yahoo-eng-team] [Bug 1854084] [NEW] Headers no longer passed through read_file_or_url

2019-11-26 Thread Thomas Stringer
Public bug reported:

This commit (https://git.launchpad.net/cloud-
init/commit/?id=4bc399e0cd0b7e9177f948aecd49f6b8323ff30b) has a diff
that includes the removal of passing headers through to readurl from
read_file_or_url: https://github.com/canonical/cloud-
init/commit/4bc399e0cd0b7e9177f948aecd49f6b8323ff30b#diff-
a779470bb47168497ada0a33f7990b01L104

With this commit, the headers parameter is virtually useless (except for
raising the exception). As a result, headers are no longer passed
through (https://github.com/canonical/cloud-
init/commit/4bc399e0cd0b7e9177f948aecd49f6b8323ff30b#diff-
a779470bb47168497ada0a33f7990b01L104) when we make a request using
read_file_or_url (https://github.com/canonical/cloud-
init/blob/master/cloudinit/sources/helpers/azure.py#L186).
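
A minimal sketch of the regression (the signature is simplified; the real helper lives in cloudinit.url_helper, and the file branch is illustrative):

  from cloudinit.url_helper import readurl

  def read_file_or_url(url, headers=None, timeout=5, retries=10):
      if url.lower().startswith("file://"):
          with open(url[len("file://"):], "rb") as f:  # simplified file branch
              return f.read()
      # The regressing commit stopped forwarding headers here, so callers'
      # custom headers (e.g. the Azure helper's) were silently dropped:
      return readurl(url, headers=headers, timeout=timeout, retries=retries)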

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1854084

Title:
  Headers no longer passed through read_file_or_url

Status in cloud-init:
  New

Bug description:
  This commit (https://git.launchpad.net/cloud-
  init/commit/?id=4bc399e0cd0b7e9177f948aecd49f6b8323ff30b) has a diff
  that includes the removal of passing headers through to readurl from
  read_file_or_url: https://github.com/canonical/cloud-
  init/commit/4bc399e0cd0b7e9177f948aecd49f6b8323ff30b#diff-
  a779470bb47168497ada0a33f7990b01L104

  With this commit, the headers parameter is virtually useless (except
  for raising the exception). As a result, headers are no longer passed
  through (https://github.com/canonical/cloud-
  init/commit/4bc399e0cd0b7e9177f948aecd49f6b8323ff30b#diff-
  a779470bb47168497ada0a33f7990b01L104) when we make a request using
  read_file_or_url (https://github.com/canonical/cloud-
  init/blob/master/cloudinit/sources/helpers/azure.py#L186).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1854084/+subscriptions



[Yahoo-eng-team] [Bug 1838392] Re: BDMNotFound raised and stale block devices left over when simultaneously reboot and deleting an instance

2019-11-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/673463
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9ad54f3dacbd372271f441baea5380f913072dde
Submitter: Zuul
Branch: master

commit 9ad54f3dacbd372271f441baea5380f913072dde
Author: Lee Yarwood 
Date:   Mon Jul 29 16:25:45 2019 +0100

compute: Take an instance.uuid lock when rebooting

Previously simultaneous requests to reboot and delete an instance could
race as only the latter took a lock against the uuid of the instance.

With the Libvirt driver this race could potentially result in attempts
being made to reconnect previously disconnected volumes on the host.
Depending on the volume backend being used, this could then result in
stale block devices pointing to unmapped volumes being left on the host,
which in turn could cause failures later on when connecting newly mapped
volumes.

This change avoids this race by ensuring any request to reboot an
instance takes an instance.uuid lock within the compute manager,
serialising requests to reboot and then delete the instance.

Closes-Bug: #1838392
Change-Id: Ieb59de10c63bb067f92ec054535766cdd722dae2
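
The serialization boils down to taking the same per-instance lock on the reboot path that delete already takes. A hedged sketch using oslo.concurrency (the inner helper name is an assumption, not nova's exact code):

  from oslo_concurrency import lockutils

  def reboot_instance(self, context, instance, block_device_info, reboot_type):
      @lockutils.synchronized(instance.uuid)
      def _do_reboot():
          # Reboot and delete now contend for the same instance.uuid lock,
          # so a hard reboot can no longer reconnect volumes that a
          # concurrent delete has already disconnected.
          self._do_reboot_instance(context, instance, block_device_info,
                                   reboot_type)  # hypothetical inner helper
      _do_reboot()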


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1838392

Title:
  BDMNotFound raised and stale block devices left over when
  simultaneously reboot and deleting an instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Simultaneous requests to reboot and delete an instance _will_ race as only 
the call to delete takes a lock against the instance.uuid.

  One possible outcome of this seen in the wild with the Libvirt driver
  is that the request to soft reboot will eventually turn into a hard
  reboot, reconnecting volumes that the delete request has already
  disconnected. These volumes will eventually be unmapped on the Cinder
  side by the delete request leaving stale devices on the host.
  Additionally BDMNotFound is raised by the reboot operation as the
  delete operation has already deleted the BDMs.

  Steps to reproduce
  ==
  $ nova reboot $instance && nova delete $instance

  Expected result
  ===
  The instance reboots and is then deleted without any errors raised.

  Actual result
  =
  BDMNotFound raised and stale block devices left over.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

  1599e3cf68779eafaaa2b13a273d3bebd1379c19 / 19.0.0.0rc1-992-g1599e3cf68

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + QEMU/kvm

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1838392/+subscriptions



[Yahoo-eng-team] [Bug 1854050] Re: minor versions 14.0.2 & 14.0.3 are not compatible in dvr-ha

2019-11-26 Thread Nate Johnston
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1854050

Title:
  minor versions 14.0.2 & 14.0.3 are not compatible in dvr-ha

Status in neutron:
  Invalid

Bug description:
  Environment is neutron 14.0.2 with DVR and HA (OVS).
  Upgraded a single compute node, or deployed a new one, with 14.0.3.

  Expected outcome:

  Minor versions should be fully compatible and neutron should work with
  the same major version.

  Actual outcome:

  Can't schedule instances on computes holding this version and neutron
  services spew out errors.

  neutron-server on controller/network node:

  Exception during message handling: InvalidTargetVersion: Invalid target 
version 1.5
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 166, in _process_incoming
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 265, in dispatch
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 194, in _do_dispatch
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 229, in inner
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 148, in bulk_pull
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 
**filter_kwargs)]
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 551, in obj_to_primitive
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server raise 
exception.InvalidTargetVersion(version=target_version)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 
InvalidTargetVersion: Invalid target version 1.5
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 

  
  neutron-openvswitch-agent on compute node:

  Error while processing VIF ports: RemoteError: Remote error: 
InvalidTargetVersion Invalid target version 1.5
  [u'Traceback (most recent call last):\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 166, in _process_incoming\nres = 
self.dispatcher.dispatch(message)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 265, in dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 194, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 229, in inner\nreturn func(*args, **kwargs)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 148, in bulk_pull\n**filter_kwargs)]\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 551, in obj_to_primitive\nraise 
exception.InvalidTargetVersion(version=target_version)\n', 
u'InvalidTargetVersion: Invalid target version 1.5\n'].
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2278, in rpc_loop
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
provisioning_needed)
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", 
line 160, in wrapper
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
f(*args,
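
  The InvalidTargetVersion comes from oslo.versionedobjects: the 14.0.2
  server is asked to serialize an object at version 1.5, newer than any
  version it knows. A minimal sketch reproducing the mechanism (class
  name and versions are illustrative):

    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class Demo(base.VersionedObject):
        VERSION = '1.4'  # what the older (14.0.2) side knows
        fields = {'name': fields.StringField()}

    obj = Demo(name='x')
    # Asking for a newer version than the class defines raises the error
    # seen in the tracebacks above:
    obj.obj_to_primitive(target_version='1.5')  # InvalidTargetVersion: 1.5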

[Yahoo-eng-team] [Bug 1847889] Re: Cloud-shell and console break in websockify 0.9.0

2019-11-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/688290
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ea2212ebe59567d8ea21778cff5128e1029be045
Submitter: Zuul
Branch: master

commit ea2212ebe59567d8ea21778cff5128e1029be045
Author: Hongbin Lu 
Date:   Sun Oct 13 02:51:53 2019 +

Send binary frame in websocket client

Websockify 0.9.0 rejects receiving text frames:

https://github.com/novnc/websockify/commit/8eb5cb0cdcd1314d6d763df8f226b587a2396aa2
We have to switch to binary frames instead.

Change-Id: I2677b8879ccb27def22126811c347d5c08f5aada
Closes-Bug: #1847889


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1847889

Title:
  Cloud-shell and console break in websockify 0.9.0

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-zunclient:
  Fix Released
Status in Zun UI:
  New

Bug description:
  Starting from websockify 0.9.0, it rejects text frames:
  https://github.com/novnc/websockify/commit/8eb5cb0cdcd1314d6d763df8f226b587a2396aa2
  We have to send binary frames instead.

  This affects the CLI client (i.e. python-zunclient) and the browser
  websocket client (i.e. zun-ui).
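
  On the Python side (e.g. a zunclient-style console client), the change
  amounts to sending binary frames. A hedged sketch with the
  websocket-client library (endpoint URL and payload are assumptions):

    import websocket
    from websocket import ABNF

    ws = websocket.create_connection("ws://controller:6784/console")  # URL assumed
    # websockify >= 0.9.0 rejects text frames, so send the payload as a
    # binary frame instead:
    ws.send(b"uptime\n", opcode=ABNF.OPCODE_BINARY)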

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1847889/+subscriptions



[Yahoo-eng-team] [Bug 1853926] Re: Failed to build docs cause of InvocationError

2019-11-26 Thread Matt Riedemann
Clearly we're not having problems building docs, and this can't be
triaged without the actual doc build logs and the error. Did you have
local changes that might have caused the docs build to fail? If so,
investigate the log for errors. Do you have stale pyc files locally? If
so, do:

find . -name '*.pyc' -delete

And then try again.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1853926

Title:
  Failed to build docs cause of InvocationError

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  When I use `tox -edocs`, I get this error:

  ERROR: InvocationError for command 
/home/src/bug_1853745/.tox/docs/bin/sphinx-build -W --keep-going -b html -d 
doc/build/doctrees doc/source doc/build/html (exited with code 1)
  

  ______________________________ summary ______________________________
  ERROR:   docs: commands failed

  Environment
  ===
  git log
  commit 3ead7d00a58c445fee8403ef3df41eec586b250d (origin/master, origin/HEAD, 
gerrit/master)
  Merge: 12e0c04dc0 83baeaa9f2
  Author: Zuul 
  Date:   Sun Nov 24 00:31:49 2019 +

  Merge "Remove nova-manage network, floating commands"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1853926/+subscriptions



[Yahoo-eng-team] [Bug 1551703] Re: Resize a vm that vm_state is "stopped" failure, vm's task_state rollback to "active"

2019-11-26 Thread Matt Riedemann
** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/train
   Importance: Undecided => Low

** Changed in: nova/train
   Status: New => In Progress

** Changed in: nova/train
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551703

Title:
  Resize a vm that vm_state is "stopped"  failure, vm's task_state
  rollback to "active"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  In Progress

Bug description:
  1. version
  kilo 2015.1.0

  2. Reproduce steps
  2.1 Create an instance, then stop it.

  [root@SBCJNailSlot3 ~(keystone_admin)]# nova list
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | ID                                   | Name | Status  | Task State | Power State | Networks          |
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | 6fe59445-fb89-47ab-9ead-2476c4522a61 | njq  | SHUTOFF | -          | Shutdown    | test=192.168.1.52 |
  +--------------------------------------+------+---------+------------+-------------+-------------------+

  2.2 Resize the instance using a new flavor whose disk is smaller than the current flavor's disk:
  [root@SBCJNailSlot3 ~(keystone_admin)]# nova resize 6fe59445-fb89-47ab-9ead-2476c4522a61 45

  The disk value in the current flavor of instance “njq” is 20; the disk
  value in the flavor with id 45 is 18. So this resize action triggers a
  ResizeError whose message is "unable to resize disk down", and the
  rollback process is entered.

  2.3 rollback result:
  [root@SBCJNailSlot3 ~(keystone_admin)]# nova list
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | ID                                   | Name | Status  | Task State | Power State | Networks          |
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | 6fe59445-fb89-47ab-9ead-2476c4522a61 | njq  | ACTIVE  | -          | Shutdown    | test=192.168.1.52 |
  +--------------------------------------+------+---------+------------+-------------+-------------------+

  Although the vm_state of the instance will eventually be set back to
  stopped by heal_instance_state, that process often takes some time.

  IMO, this process is not reasonable and needs a fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1551703/+subscriptions



[Yahoo-eng-team] [Bug 1854051] Re: py36 unit test cases fails

2019-11-26 Thread Nate Johnston
According to the Project Testing Interface (PTI), py36 is required for
Ussuri:
https://governance.openstack.org/tc/reference/runtimes/ussuri.html#python-runtime-for-ussuri

Reopened the bug and marked as critical.

** Changed in: neutron
   Status: Won't Fix => New

** Changed in: neutron
   Importance: Low => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1854051

Title:
  py36 unit test cases fails

Status in neutron:
  New

Bug description:
  This should be a NOTE rather than a bug, in case someone meets this
  issue someday, since the minimum supported Python version of neutron is
  now 3.7.

  
  Branch: master
  heads:
  2a8b70d Merge "Update security group rule if port range is all ports"
  fd5e292 Merge "Remove neutron-grenade job from Neutron CI queues"
  f6aef3c Merge "Switch neutron-tempest-with-os-ken-master job to zuul v3"
  2174bb0 Merge "Remove old, legacy experimental CI jobs"
  8672029 Merge "HA race condition test for DHCP scheduling"
  71e3cb0 Merge "Parameter 'fileds' value is not used in _get_subnets"
  b5e5082 Merge "Update networking-bgpvpn and networking-bagpipe liuetenants"
  3c1139c Merge "Make network support read and write separation"
  67b613b Merge "NetcatTester.stop_processes skip "No such process" exception"
  185efb3 Update networking-bgpvpn and networking-bagpipe liuetenants
  728d8ee NetcatTester.stop_processes skip "No such process" exception

  
  Tox env was definitely upgraded to meet the requirements.txt and 
test-requirements.txt

  Exceptions:
  ==
  Failed 2 tests - output below:
  ==

  
neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.test_ovs_neutron_agent.TestOvsDvrNeutronAgentOSKen.test_get_dvr_mac_address_exception
  
--

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 164, in get_dvr_mac_address'
  b'self.get_dvr_mac_address_with_retry()'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/osprofiler/profiler.py",
 line 160, in wrapper'
  b'result = f(*args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 184, in get_dvr_mac_address_with_retry'
  b'self.context, self.host)'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1092, in __call__'
  b'return _mock_self._mock_call(*args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1143, in _mock_call'
  b'raise effect'
  b'oslo_messaging.rpc.client.RemoteError: Remote error: None None'
  b'None.'
  b''
  b'During handling of the above exception, another exception occurred:'
  b''
  b'Traceback (most recent call last):'
  b'  File "/home/yulong/github/neutron/neutron/tests/base.py", line 182, 
in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py",
 line 3614, in test_get_dvr_mac_address_exception'
  b'self.agent.dvr_agent.get_dvr_mac_address()'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/osprofiler/profiler.py",
 line 160, in wrapper'
  b'result = f(*args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 169, in get_dvr_mac_address'
  b"'message: %s', e)"
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1653, in error'
  b'self.log(ERROR, msg, *args, **kwargs)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1674, in log'
  b'self.logger.log(level, msg, *args, **kwargs)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1374, in log'
  b'self._log(level, msg, args, **kwargs)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1443, in _log'
  b'exc_info, func, extra, sinfo)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1413, in 
makeRecord'
  b'sinfo)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 277, in 
__init__'
  b'if (args and len(args) == 1 and isinstance(args[0], 
collections.Mapping)'
  b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.6/abc.py", 
line 193, in __instancecheck__'
  b'return cls.__subclasscheck__(subclass)'
  b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.

[Yahoo-eng-team] [Bug 1854053] [NEW] _add_tenant_access silently ignores 403

2019-11-26 Thread Surya Seetharaman
Public bug reported:

Running "openstack flavor set" from a project in which a user has an
admin role (but the project is not an admin project) allows the provided
project to be mapped to the flavor even if the permissions are
insufficient for the user to verify the provided project; i.e., the
generated 403 is silently ignored by nova at this point in the code:
https://github.com/openstack/nova/blob/d621914442855ce67ce0b99003f7e69e8ee515e6/nova/api/openstack/identity.py#L61.
This can in turn allow random projects to be mapped to flavors.
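
The behaviour reduces to the 403 branch returning success. A self-contained sketch of the pattern (names and wiring are illustrative, not nova's exact code):

  def verify_project_id(fetch_status):
      """fetch_status() stands in for the keystone GET /v3/projects/{id}."""
      status = fetch_status()
      if status == 404:
          raise ValueError("project does not exist")
      if status == 403:
          # Swallowed: the caller carries on as though the project were
          # verified, so an arbitrary project ID can be mapped to a flavor.
          return True
      return True

  verify_project_id(lambda: 403)  # succeeds instead of failing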

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1854053

Title:
  _add_tenant_access silently ignores 403

Status in OpenStack Compute (nova):
  New

Bug description:
  Running "openstack flavor set" from a project in which a user has an
  admin role (but the project is not an admin project) allows the
  provided project to be mapped to the flavor even if the permissions
  are insufficient for the user to verify the provided project; i.e.,
  the generated 403 is silently ignored by nova at this point in code:
  https://github.com/openstack/nova/blob/d621914442855ce67ce0b99003f7e69e8ee515e6/nova/api/openstack/identity.py#L61.
  This can in turn allow random projects to be mapped to flavors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1854053/+subscriptions



[Yahoo-eng-team] [Bug 1854050] [NEW] minor versions 14.0.2 & 14.0.3 are not compatible in dvr-ha

2019-11-26 Thread Marek Grudzinski
Public bug reported:

Environment is neutron 14.0.2 with DVR and HA (OVS).
Upgraded a single compute node, or deployed a new one, with 14.0.3.

Expected outcome:

Minor versions should be fully compatible and neutron should work with
the same major version.

Actual outcome:

Can't schedule instances on computes holding this version and neutron
services spew out errors.

neutron-server on controller/network node:

Exception during message handling: InvalidTargetVersion: Invalid target version 
1.5
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 166, in _process_incoming
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 265, in dispatch
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 194, in _do_dispatch
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 229, in inner
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 148, in bulk_pull
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server **filter_kwargs)]
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 551, in obj_to_primitive
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server raise 
exception.InvalidTargetVersion(version=target_version)
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 
InvalidTargetVersion: Invalid target version 1.5
2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 


neutron-openvswitch-agent on compute node:

Error while processing VIF ports: RemoteError: Remote error: 
InvalidTargetVersion Invalid target version 1.5
[u'Traceback (most recent call last):\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 166, in _process_incoming\nres = 
self.dispatcher.dispatch(message)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 265, in dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 194, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 229, in inner\nreturn func(*args, **kwargs)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 148, in bulk_pull\n**filter_kwargs)]\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 551, in obj_to_primitive\nraise 
exception.InvalidTargetVersion(version=target_version)\n', 
u'InvalidTargetVersion: Invalid target version 1.5\n'].
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2278, in rpc_loop
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
provisioning_needed)
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", 
line 160, in wrapper
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
f(*args, **kwargs)
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1845, in process_network_ports
2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
port

[Yahoo-eng-team] [Bug 1854051] [NEW] py36 unit test cases fails

2019-11-26 Thread LIU Yulong
Public bug reported:

This should be a NOTE rather than a bug, in case someone meets this
issue someday, since the minimum supported Python version of neutron is now 3.7.


Branch: master
heads:
2a8b70d Merge "Update security group rule if port range is all ports"
fd5e292 Merge "Remove neutron-grenade job from Neutron CI queues"
f6aef3c Merge "Switch neutron-tempest-with-os-ken-master job to zuul v3"
2174bb0 Merge "Remove old, legacy experimental CI jobs"
8672029 Merge "HA race condition test for DHCP scheduling"
71e3cb0 Merge "Parameter 'fileds' value is not used in _get_subnets"
b5e5082 Merge "Update networking-bgpvpn and networking-bagpipe liuetenants"
3c1139c Merge "Make network support read and write separation"
67b613b Merge "NetcatTester.stop_processes skip "No such process" exception"
185efb3 Update networking-bgpvpn and networking-bagpipe liuetenants
728d8ee NetcatTester.stop_processes skip "No such process" exception


Tox env was definitely upgraded to meet the requirements.txt and 
test-requirements.txt

Exceptions:
==
Failed 2 tests - output below:
==

neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.test_ovs_neutron_agent.TestOvsDvrNeutronAgentOSKen.test_get_dvr_mac_address_exception
--

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 164, in get_dvr_mac_address'
b'self.get_dvr_mac_address_with_retry()'
b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/osprofiler/profiler.py",
 line 160, in wrapper'
b'result = f(*args, **kwargs)'
b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 184, in get_dvr_mac_address_with_retry'
b'self.context, self.host)'
b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1092, in __call__'
b'return _mock_self._mock_call(*args, **kwargs)'
b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1143, in _mock_call'
b'raise effect'
b'oslo_messaging.rpc.client.RemoteError: Remote error: None None'
b'None.'
b''
b'During handling of the above exception, another exception occurred:'
b''
b'Traceback (most recent call last):'
b'  File "/home/yulong/github/neutron/neutron/tests/base.py", line 182, in 
func'
b'return f(self, *args, **kwargs)'
b'  File 
"/home/yulong/github/neutron/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py",
 line 3614, in test_get_dvr_mac_address_exception'
b'self.agent.dvr_agent.get_dvr_mac_address()'
b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/osprofiler/profiler.py",
 line 160, in wrapper'
b'result = f(*args, **kwargs)'
b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 169, in get_dvr_mac_address'
b"'message: %s', e)"
b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1653, in error'
b'self.log(ERROR, msg, *args, **kwargs)'
b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1674, in log'
b'self.logger.log(level, msg, *args, **kwargs)'
b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1374, in log'
b'self._log(level, msg, args, **kwargs)'
b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1443, in _log'
b'exc_info, func, extra, sinfo)'
b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1413, in 
makeRecord'
b'sinfo)'
b'  File "/usr/lib64/python3.6/logging/__init__.py", line 277, in __init__'
b'if (args and len(args) == 1 and isinstance(args[0], 
collections.Mapping)'
b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.6/abc.py", 
line 193, in __instancecheck__'
b'return cls.__subclasscheck__(subclass)'
b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.6/abc.py", 
line 228, in __subclasscheck__'
b'if issubclass(subclass, scls):'
b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.6/abc.py", 
line 228, in __subclasscheck__'
b'if issubclass(subclass, scls):'
b'  File "/usr/lib64/python3.6/typing.py", line 1154, in __subclasscheck__'
b'return super().__subclasscheck__(cls)'
b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.6/abc.py", 
line 209, in __subclasscheck__'
b'ok = cls.__subclasshook__(subclass)'
b'  File "/usr/lib64/python3.6/typing.py", line 884, in __extrahook__'
b'if issubclass(subclass, scls):'
b'  File "/usr/lib64/python3.6/typing.py"

[Yahoo-eng-team] [Bug 1551703] Re: Resize a vm that vm_state is "stopped" failure, vm's task_state rollback to "active"

2019-11-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/691908
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5a20996405c5788855a2457283bbbe7d78140a9c
Submitter: Zuul
Branch: master

commit 5a20996405c5788855a2457283bbbe7d78140a9c
Author: Matt Riedemann 
Date:   Tue Oct 29 12:10:08 2019 -0400

Reset instance to current vm_state if rolling back in resize_instance

You can resize a stopped instance and if the compute driver raises
InstanceFaultRollback from migrate_disk_and_power_off, the
_error_out_instance_on_exception decorator, used in the _resize_instance
method, will by default reset the instance vm_state to ACTIVE even though
the guest is stopped. The driver could raise InstanceFaultRollback if you
try resizing the root disk down on a non-volume-backed instance.

This builds on [1] and does the same thing as prep_resize [2] for
making sure the original vm_state is reset on InstanceFaultRollback.

[1] Ie4f9177f4d54cbc7dbcf58bd107fd5f24c60d8bb
[2] I17543ecb572934ecc7d0bbc7a4ad2f537fa499bc

Change-Id: Iff1f9f28a1e4ecf00368cbcac27b7687a5eb0dcf
Closes-Bug: #1551703
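
A hedged sketch of the decorator behaviour the fix adjusts (simplified; nova's real helper is a context manager with more bookkeeping):

  import contextlib

  @contextlib.contextmanager
  def error_out_instance_on_exception(instance, instance_state='active'):
      try:
          yield
      except Exception:
          # The fix passes the instance's *current* vm_state (e.g.
          # 'stopped') instead of letting this default to 'active', so a
          # failed resize of a stopped guest rolls back to SHUTOFF.
          instance.vm_state = instance_state
          instance.task_state = None
          instance.save()
          raise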


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551703

Title:
  Resize a vm that vm_state is "stopped"  failure, vm's task_state
  rollback to "active"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. version
  kilo 2015.1.0

  2. Reproduce steps
  2.1 Create an instance, then stop it.

  [root@SBCJNailSlot3 ~(keystone_admin)]# nova list
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | ID                                   | Name | Status  | Task State | Power State | Networks          |
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | 6fe59445-fb89-47ab-9ead-2476c4522a61 | njq  | SHUTOFF | -          | Shutdown    | test=192.168.1.52 |
  +--------------------------------------+------+---------+------------+-------------+-------------------+

  2.2 Resize the instance using a new flavor whose disk is smaller than the current flavor's disk:
  [root@SBCJNailSlot3 ~(keystone_admin)]# nova resize 6fe59445-fb89-47ab-9ead-2476c4522a61 45

  The disk value in the current flavor of instance “njq” is 20; the disk
  value in the flavor with id 45 is 18. So this resize action triggers a
  ResizeError whose message is "unable to resize disk down", and the
  rollback process is entered.

  2.3 Rollback result:
  [root@SBCJNailSlot3 ~(keystone_admin)]# nova list
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | ID                                   | Name | Status  | Task State | Power State | Networks          |
  +--------------------------------------+------+---------+------------+-------------+-------------------+
  | 6fe59445-fb89-47ab-9ead-2476c4522a61 | njq  | ACTIVE  | -          | Shutdown    | test=192.168.1.52 |
  +--------------------------------------+------+---------+------------+-------------+-------------------+

  Although the vm_state of the instance will eventually be set to
  stopped by heal_instance_state, that process often takes some time.

  IMO, this behavior is not reasonable and needs a fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1551703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854041] [NEW] Keystone should propagate redirect exceptions from auth plugins

2019-11-26 Thread Alvaro Lopez
Public bug reported:

When a developer is implementing an authentication plugin [1], they can
only return None and set up the relevant information in the auth
context, or raise an Unauthorized exception. However, in some cases
(such as an OpenID Connect plugin) it is necessary to perform a
redirect to the provider to complete the flow. IIRC this was possible
in the past (before the move to Flask) by raising an exception with the
proper HTTP code set, but with the current implementation this is
impossible.

[1]: https://docs.openstack.org/keystone/latest/contributor/auth-plugins.html
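
For illustration, a minimal sketch of the desired behaviour, assuming a
hypothetical RedirectRequired exception (keystone exposes no such class
today, and the handler interface is simplified here; the real base class
is keystone.auth.plugins.base.AuthMethodHandler):

    class RedirectRequired(Exception):
        """Hypothetical exception carrying an HTTP 302 redirect."""
        code = 302

        def __init__(self, redirect_url):
            super().__init__(redirect_url)
            self.redirect_url = redirect_url

    class OpenIDConnect(object):
        """Sketch of an auth method handler (interface simplified)."""

        def authenticate(self, auth_payload):
            if 'id_token' not in auth_payload:
                # The Flask dispatcher would need to propagate this as a
                # 302 to the provider instead of turning it into a 401.
                raise RedirectRequired(
                    'https://idp.example.org/authorize?client_id=...')  # illustrative URL
            # ...validate the token and populate the auth context...
            return None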

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1854041

Title:
  Keystone should propagate redirect exceptions from auth plugins

Status in OpenStack Identity (keystone):
  New

Bug description:
  When a developer is implementing an authentication plugin [1], they
  can only return None and set up the relevant information in the auth
  context, or raise an Unauthorized exception. However, in some cases
  (such as an OpenID Connect plugin) it is necessary to perform a
  redirect to the provider to complete the flow. IIRC this was possible
  in the past (before the move to Flask) by raising an exception with
  the proper HTTP code set, but with the current implementation this is
  impossible.

  [1]: https://docs.openstack.org/keystone/latest/contributor/auth-plugins.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1854041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854019] [NEW] Horizon's tests are failing with python3.8

2019-11-26 Thread Michal Arbet
Public bug reported:

Hi,

Horizon's tests are failing with Python 3.8; see below:

+ http_proxy=127.0.0.1:9 https_proxy=127.0.0.9:9 HTTP_PROXY=127.0.0.1:9 HTTPS_PROXY=127.0.0.1:9 PYTHONPATH=/<>/debian/tmp/usr/lib/python3/dist-packages PYTHON=python3.8 python3.8 -m coverage run /<>/manage.py test horizon --verbosity 2 --settings=horizon.test.settings --exclude-tag selenium --exclude-tag integration
/usr/lib/python3/dist-packages/scss/selector.py:26: FutureWarning: Possible nested set at position 329
  SELECTOR_TOKENIZER = re.compile(r'''
Creating test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
test_legacychoicefield_title (horizon.test.unit.forms.test_fields.ChoiceFieldTests) ... ok
.
.
.
.
.
test_call_functions_parallel_with_kwargs (openstack_dashboard.test.unit.utils.test_futurist_utils.FuturistUtilsTests) ... ok

======================================================================
ERROR: test_detail_invalid_port_range (openstack_dashboard.dashboards.project.security_groups.tests.SecurityGroupsViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/<>/openstack_dashboard/test/helpers.py", line 130, in wrapped
    return function(inst, *args, **kwargs)
  File "/<>/openstack_dashboard/dashboards/project/security_groups/tests.py", line 621, in test_detail_invalid_port_range
    self.assertContains(res, cgi.escape('"from" port number is invalid',
AttributeError: module 'cgi' has no attribute 'escape'

----------------------------------------------------------------------
Ran 1522 tests in 448.635s

FAILED (errors=1)
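
For reference, cgi.escape() had been deprecated since Python 3.2 and was
removed in 3.8; html.escape() is the documented replacement, so the usual
fix in the test is along these lines (note the differing quote defaults):

    from html import escape  # replacement for the removed cgi.escape

    # quote=False matches cgi.escape()'s old default of leaving double
    # quotes unescaped; html.escape() escapes them by default.
    expected = escape('"from" port number is invalid', quote=False)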

** Affects: horizon
 Importance: Undecided
 Assignee: Michal Arbet (michalarbet)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Michal Arbet (michalarbet)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1854019

Title:
  Horizon's tests are failing with python3.8

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi,

  Horizon's tests are failing with Python 3.8; see below:

  + http_proxy=127.0.0.1:9 https_proxy=127.0.0.9:9 HTTP_PROXY=127.0.0.1:9 HTTPS_PROXY=127.0.0.1:9 PYTHONPATH=/<>/debian/tmp/usr/lib/python3/dist-packages PYTHON=python3.8 python3.8 -m coverage run /<>/manage.py test horizon --verbosity 2 --settings=horizon.test.settings --exclude-tag selenium --exclude-tag integration
  /usr/lib/python3/dist-packages/scss/selector.py:26: FutureWarning: Possible nested set at position 329
    SELECTOR_TOKENIZER = re.compile(r'''
  Creating test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
  test_legacychoicefield_title (horizon.test.unit.forms.test_fields.ChoiceFieldTests) ... ok
  .
  .
  .
  .
  .
  test_call_functions_parallel_with_kwargs (openstack_dashboard.test.unit.utils.test_futurist_utils.FuturistUtilsTests) ... ok

  ======================================================================
  ERROR: test_detail_invalid_port_range (openstack_dashboard.dashboards.project.security_groups.tests.SecurityGroupsViewTests)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/<>/openstack_dashboard/test/helpers.py", line 130, in wrapped
      return function(inst, *args, **kwargs)
    File "/<>/openstack_dashboard/dashboards/project/security_groups/tests.py", line 621, in test_detail_invalid_port_range
      self.assertContains(res, cgi.escape('"from" port number is invalid',
  AttributeError: module 'cgi' has no attribute 'escape'

  ----------------------------------------------------------------------
  Ran 1522 tests in 448.635s

  FAILED (errors=1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1854019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854017] [NEW] Document how Nova models resources in Placement

2019-11-26 Thread Balazs Gibizer
Public bug reported:

As the Neutron bug [1] and the similar Cyborg problem [2] show, the way
nova models the compute RP in placement has become an external
interface that other OpenStack projects use to create child RPs under
the compute RP. I think we don't have this external interface
documented in Nova, and this contributed to the wrong assumptions that
led to [1][2].

[1]https://bugs.launchpad.net/neutron/+bug/1853840
[2]http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html
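
For context, a sketch of the kind of request such a project issues to hang
a child RP off the compute node RP through the placement REST API
(parent_provider_uuid needs microversion 1.14 or later; the endpoint,
token, and name below are illustrative placeholders):

    import requests

    PLACEMENT = 'http://placement.example.org'  # illustrative endpoint
    TOKEN = '...'                   # keystone token, placeholder
    COMPUTE_NODE_RP_UUID = '...'    # UUID of the compute node RP, placeholder

    resp = requests.post(
        PLACEMENT + '/resource_providers',
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.14'},
        json={'name': 'compute-1:Open vSwitch agent',  # illustrative name
              'parent_provider_uuid': COMPUTE_NODE_RP_UUID})
    resp.raise_for_status()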

** Affects: nova
 Importance: Low
 Status: New


** Tags: compute doc resource-tracker

** Tags added: compute doc resource-tracker

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1854017

Title:
  Document how Nova models resources in Placement

Status in OpenStack Compute (nova):
  New

Bug description:
  As the Neutron bug [1] and the similar Cyborg problem [2] show, the
  way nova models the compute RP in placement has become an external
  interface that other OpenStack projects use to create child RPs under
  the compute RP. I think we don't have this external interface
  documented in Nova, and this contributed to the wrong assumptions
  that led to [1][2].

  [1]https://bugs.launchpad.net/neutron/+bug/1853840
  
[2]http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011044.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1854017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp