[Yahoo-eng-team] [Bug 1857139] Re: TypeError: object of type 'object' has no len() from resources_from_request_spec when cells are down

2020-01-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/700186
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0d9622f581e830e7b7bc9763aaa09ba02e99b8bb
Submitter: Zuul
Branch: master

commit 0d9622f581e830e7b7bc9763aaa09ba02e99b8bb
Author: Matt Riedemann 
Date:   Fri Dec 20 10:03:23 2019 -0500

Handle cell failures in get_compute_nodes_by_host_or_node

get_compute_nodes_by_host_or_node uses the scatter_gather_cells
function but was not handling the case that a failure result
was returned, which could be the called function raising some
exception or the cell timing out. This causes issues when the
caller of get_compute_nodes_by_host_or_node expects to get a
ComputeNodeList back and can do something like len(nodes) on it
which fails when the result is not iterable.

To be clear, if a cell is down there are going to be problems
which likely result in a NoValidHost error during scheduling, but
this avoids an ugly TypeError traceback in the scheduler logs.

Change-Id: Ia54b5adf0a125ae1f9b86887a07dd1d79821dd54
Closes-Bug: #1857139


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1857139

Title:
  TypeError: object of type 'object' has no len() from
  resources_from_request_spec when cells are down

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  Confirmed

Bug description:
  Seen here:

  
https://zuul.opendev.org/t/openstack/build/c187e207bc1c48a0a7fa49ef9798b696/log/logs/screen-n-sch.txt.gz#2529

  cell1 is down so the call to scatter_gather_cells in
  get_compute_nodes_by_host_or_node yields a result but it's not a
  ComputeNodeList, it's the did_not_respond_sentinel object:

  
https://github.com/openstack/nova/blob/02019d2660dfce3facdd64ecdb2bd60ba4a91c6d/nova/scheduler/host_manager.py#L705

  
https://github.com/openstack/nova/blob/02019d2660dfce3facdd64ecdb2bd60ba4a91c6d/nova/context.py#L454

  which results in an error here:

  
https://github.com/openstack/nova/blob/02019d2660dfce3facdd64ecdb2bd60ba4a91c6d/nova/scheduler/utils.py#L612

  The HostManager.get_compute_nodes_by_host_or_node method should filter
  out fail/timeout results from the scatter_gather_cells results. We'll
  get a NoValidHost either way but this is better than the traceback
  with the TypeError in it.
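  The fix boils down to filtering the scatter_gather_cells result dict
  before iterating it. A minimal sketch of that filtering (the sentinel
  names mirror nova.context's failure markers, but these are stand-in
  objects here, not nova's own):

```python
# Hypothetical stand-ins for nova.context's failure markers: in nova,
# scatter_gather_cells returns sentinel values when a cell times out or
# the gathered call raises.
did_not_respond_sentinel = object()
raised_exception_sentinel = object()

def filter_cell_results(results):
    """Drop timeout/exception sentinels, keeping only real per-cell lists."""
    return {
        cell_uuid: result
        for cell_uuid, result in results.items()
        if result is not did_not_respond_sentinel
        and result is not raised_exception_sentinel
    }

# cell1 is down, so its "result" is the sentinel rather than an iterable:
results = {'cell1': did_not_respond_sentinel, 'cell2': ['node-a', 'node-b']}
usable = filter_cell_results(results)
assert list(usable) == ['cell2']   # the failed cell is filtered out
assert len(usable['cell2']) == 2   # len() is now safe on every value
```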

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1857139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861224] [NEW] horizon removing trailing spaces on passwords - auth fails

2020-01-28 Thread Orestes Leal Rodriguez
Public bug reported:

From the dashboard, OpenStack is removing the trailing spaces from our users' 
passwords.
We have a modified sql.py backend that does an LDAP bind to an Active 
Directory data store, and that works almost always. I say almost because for 
some users it doesn't work at all. We figured out (and a co-worker also 
confirmed this) that OpenStack is removing trailing (also leading?) spaces from 
the password entered in the dashboard. Also, inside the dashboard, passwords 
with trailing spaces are not accepted even when they match byte for byte 
(including the space, I get an error).

Does anybody know where this removal is performed (python script
location, line)? Then I can remove it, since I have users (me included; I
have had this issue since the very beginning of this deployment) who cannot
log in, even though their Active Directory passwords work from other
apps without problem.

We are running 'stein' with the latest updates for Ubuntu 18.04 AMD64.
NOTE: Since passwords can indeed contain spaces anywhere, I consider this a bug.

Details:

'openstack token issue' works with spaces at the end, so this is
horizon/django related.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1861224

Title:
  horizon removing trailing spaces on passwords - auth fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  From the dashboard, OpenStack is removing the trailing spaces from our
  users' passwords. We have a modified sql.py backend that does an LDAP bind
  to an Active Directory data store, and that works almost always. I say
  almost because for some users it doesn't work at all. We figured out (and a
  co-worker also confirmed this) that OpenStack is removing trailing (also
  leading?) spaces from the password entered in the dashboard. Also, inside
  the dashboard, passwords with trailing spaces are not accepted even when
  they match byte for byte (including the space, I get an error).

  Does anybody know where this removal is performed (python script
  location, line)? Then I can remove it, since I have users (me included;
  I have had this issue since the very beginning of this deployment) who
  cannot log in, even though their Active Directory passwords work from
  other apps without problem.

  We are running 'stein' with the latest updates for Ubuntu 18.04 AMD64.
  NOTE: Since passwords can indeed contain spaces anywhere, I consider this
  a bug.

  Details:

  'openstack token issue' works with spaces at the end, so this is
  horizon/django related.
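  A likely culprit, for anyone chasing this: Django's forms.CharField strips
  leading and trailing whitespace by default via its `strip` argument (True
  by default since Django 1.9), and horizon's login form is built on Django
  forms. A minimal stand-in illustrating the behavior and the strip=False
  escape hatch (this mimics the field's cleaning step; it is not horizon's
  actual code):

```python
# Stand-in for the cleaning step of Django's forms.CharField (assumption:
# horizon's password field is a CharField, whose `strip` argument removes
# leading/trailing whitespace by default).
class CharField:
    def __init__(self, strip=True):
        self.strip = strip

    def clean(self, value):
        # Django applies str.strip() in CharField.to_python when strip=True.
        return value.strip() if self.strip else value

secret = '  hunter2  '  # password with leading and trailing spaces
assert CharField().clean(secret) == 'hunter2'          # default: spaces lost
assert CharField(strip=False).clean(secret) == secret  # fix: spaces kept
```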

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1861224/+subscriptions



[Yahoo-eng-team] [Bug 1861221] [NEW] [FT] ncat process does not provide information, if any, about the error during execution

2020-01-28 Thread Rodolfo Alonso
Public bug reported:

Sometimes "ncat" does not start correctly. The test case exits with this 
exception:
RuntimeError: Process ['ncat', '0.0.0.0', '1234', '-l', '-k'] hasn't been 
spawned in 20 seconds

It could be very useful to retrieve the stderr and stdout pipes from the
process, if any, to figure out why a simple process like ncat didn't
start.

Logs:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_58c/704376/5/check/neutron-functional/58c2e19/testr_results.html

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861221

Title:
  [FT] ncat process does not provide information, if any, about the error
  during execution

Status in neutron:
  New

Bug description:
  Sometimes "ncat" does not start correctly. The test case exits with this 
exception:
  RuntimeError: Process ['ncat', '0.0.0.0', '1234', '-l', '-k'] hasn't been 
spawned in 20 seconds

  It could be very useful to retrieve the stderr and stdout pipes from
  the process, if any, to figure out why a simple process like ncat
  didn't start.
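  A sketch of the kind of diagnostic being asked for: spawn the process
  with captured pipes and, if it dies early, put its own output into the
  error message (the helper name and shape are illustrative, not neutron's
  actual test utility):

```python
import subprocess

def spawn_with_diagnostics(cmd, timeout=20):
    """Spawn cmd; if it exits before the timeout, surface stdout/stderr.

    Instead of only "hasn't been spawned in N seconds", the error carries
    whatever the process wrote before exiting.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    try:
        out, err = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        return proc  # still running after the grace period: spawned OK
    raise RuntimeError("Process %s exited with code %s; stdout=%r stderr=%r"
                       % (cmd, proc.returncode, out, err))
```

  With this, a failing `spawn_with_diagnostics(['ncat', '0.0.0.0', '1234',
  '-l', '-k'])` would raise with ncat's own error text (e.g. an
  address-in-use message) instead of a bare timeout.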

  Logs:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_58c/704376/5/check/neutron-functional/58c2e19/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1861221/+subscriptions



[Yahoo-eng-team] [Bug 1860560] Re: [ovn] lsp_set_address Exception possible when passed empty list of addresses

2020-01-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/703703
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c6a5b284b54d59273a387aaf7723391e9100bf3c
Submitter: Zuul
Branch: master

commit c6a5b284b54d59273a387aaf7723391e9100bf3c
Author: Terry Wilson 
Date:   Tue Jan 21 16:12:45 2020 -0600

Ensure we don't pass empty addresses to lsp_set_addresses

If we somehow have an empty set of addresses, lsp_set_addresses
will fail. This instead calls db_clear() on the addresses field if
we are trying to set it to empty.

Closes-bug: #1860560

Change-Id: Ied92eb85b74e63a7317d50970b5131fb49e3e4b0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860560

Title:
  [ovn] lsp_set_address Exception possible when passed empty list of
  addresses

Status in neutron:
  Fix Released

Bug description:
  It is possible for
  maintenance.check_for_port_security_unknown_address() to pass
  addresses=[] to lsp_set_addresses() which will fail the regex check
  for it being a correct address. This matches the behavior of ovn-
  nbctl's lsp-set-addresses. We should ensure that we don't pass [] to
  lsp_set_addresses.
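  A sketch of the guard the fix adds (`lsp_set_addresses` and `db_clear`
  follow ovsdbapp's ovn_northbound API names; the Fake* classes below are
  test stand-ins, not ovsdbapp itself):

```python
def set_port_addresses(nb_idl, port_name, addresses):
    if addresses:
        cmd = nb_idl.lsp_set_addresses(port_name, addresses)
    else:
        # lsp_set_addresses([]) trips ovsdbapp's address format check, so
        # clear the column instead -- same end state, no TypeError.
        cmd = nb_idl.db_clear('Logical_Switch_Port', port_name, 'addresses')
    return cmd.execute(check_error=True)

# Minimal fakes so the guard can be exercised without an OVN database:
class FakeCmd:
    def __init__(self, name, args):
        self.name, self.args = name, args
    def execute(self, check_error=False):
        return (self.name, self.args)

class FakeNbIdl:
    def lsp_set_addresses(self, port, addresses):
        return FakeCmd('lsp_set_addresses', (port, addresses))
    def db_clear(self, table, record, column):
        return FakeCmd('db_clear', (table, record, column))

idl = FakeNbIdl()
assert set_port_addresses(idl, 'p1', ['unknown'])[0] == 'lsp_set_addresses'
assert set_port_addresses(idl, 'p1', [])[0] == 'db_clear'
```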

  Example:

  2020-01-17 16:52:32.786 53 ERROR futurist.periodics 
[req-0187e45b-682f-4c1d-ab26-20bc89c223e5 - - - - -] Failed to call periodic 
'networking_ovn.common.maintenance.DBInconsistenciesPeriodics.check_for_port_security_unknown_address'
 (it runs every 600.00 seconds): TypeError: address must be 
router/unknown/dynamic/ethaddr ipaddr...
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics Traceback (most recent 
call last):
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics   File 
"/usr/lib/python2.7/site-packages/futurist/periodics.py", line 290, in run
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics work()
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics   File 
"/usr/lib/python2.7/site-packages/futurist/periodics.py", line 64, in __call__
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics return 
self.callback(*self.args, **self.kwargs)
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics   File 
"/usr/lib/python2.7/site-packages/futurist/periodics.py", line 178, in decorator
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics return f(*args, 
**kwargs)
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics   File 
"/usr/lib/python2.7/site-packages/networking_ovn/common/maintenance.py", line 
447, in check_for_port_security_unknown_address
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics port.name, 
addresses=addresses).execute(check_error=True)
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/schema/ovn_northbound/impl_idl.py", 
line 89, in lsp_set_addresses
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics return 
cmd.LspSetAddressesCommand(self, port, addresses)
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/schema/ovn_northbound/commands.py", 
line 305, in __init__
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics "address must be 
router/unknown/dynamic/ethaddr ipaddr...")
  2020-01-17 16:52:32.786 53 ERROR futurist.periodics TypeError: address must 
be router/unknown/dynamic/ethaddr ipaddr...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1860560/+subscriptions



[Yahoo-eng-team] [Bug 1732067] Re: openvswitch firewall flows cause flooding on integration bridge

2020-01-28 Thread Jeremy Stanley
If mitigating this on some supported stable branches is going to require
the operator to make configuration changes, then this is probably better
handled as a security note than an advisory.

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Also affects: ossn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732067

Title:
  openvswitch firewall flows cause flooding on integration bridge

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  Environment: OpenStack Newton
  Driver: ML2 w/ OVS
  Firewall: openvswitch

  In this environment, we have observed OVS flooding network traffic
  across all ports in a given VLAN on the integration bridge due to the
  lack of a FDB entry for the destination MAC address. Across the large
  fleet of 240+ nodes, this is causing a considerable amount of noise on
  any given node.

  In this test, we have 3 machines:

  Client: fa:16:3e:e8:59:00 (10.10.60.2)
  Server: fa:16:3e:80:cb:0a (10.10.60.9)
  Bystander: fa:16:3e:a0:ee:02 (10.10.60.10)

  The server is running a web server using netcat:

  while true ; do sudo nc -l -p 80 < index.html ; done

  Client requests page using curl:

  ip netns exec qdhcp-b07e6cb3-0943-45a2-b5ff-efb7e99e4d3d curl
  http://10.10.60.9/

  We should expect to see the communication limited to the client and
  server. However, the captures below reflect the server->client
  responses being broadcast out all tap interfaces connected to br-int
  in the same local vlan:

  root@osa-newton-ovs-compute01:~# tcpdump -i tap5f03424d-1c -ne port 80
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on tap5f03424d-1c, link-type EN10MB (Ethernet), capture size 262144 
bytes
  02:20:30.190675 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 
(0x0800), length 74: 10.10.60.2.54796 > 10.10.60.9.80: Flags [S], seq 
213484442, win 29200, options [mss 1460,sackOK,TS val 140883559 ecr 
0,nop,wscale 7], length 0
  02:20:30.191926 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 
(0x0800), length 74: 10.10.60.9.80 > 10.10.60.2.54796: Flags [S.], seq 
90006557, ack 213484443, win 14480, options [mss 1460,sackOK,TS val 95716 ecr 
140883559,nop,wscale 4], length 0
  02:20:30.192837 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 
(0x0800), length 66: 10.10.60.2.54796 > 10.10.60.9.80: Flags [.], ack 1, win 
229, options [nop,nop,TS val 140883560 ecr 95716], length 0
  02:20:30.192986 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 
(0x0800), length 140: 10.10.60.2.54796 > 10.10.60.9.80: Flags [P.], seq 1:75, 
ack 1, win 229, options [nop,nop,TS val 140883560 ecr 95716], length 74: HTTP: 
GET / HTTP/1.1
  02:20:30.195806 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 
(0x0800), length 79: 10.10.60.9.80 > 10.10.60.2.54796: Flags [P.], seq 1:14, 
ack 1, win 905, options [nop,nop,TS val 95717 ecr 140883560], length 13: HTTP
  02:20:30.196207 fa:16:3e:e8:59:00 > fa:16:3e:80:cb:0a, ethertype IPv4 
(0x0800), length 66: 10.10.60.2.54796 > 10.10.60.9.80: Flags [.], ack 14, win 
229, options [nop,nop,TS val 140883561 ecr 95717], length 0
  02:20:30.197481 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 
(0x0800), length 66: 10.10.60.9.80 > 10.10.60.2.54796: Flags [.], ack 75, win 
905, options [nop,nop,TS val 95717 ecr 140883560], length 0

  ^^^ On the server tap we see the bi-directional traffic

  root@osa-newton-ovs-compute01:/home/ubuntu# tcpdump -i tapb8051da9-60 -ne 
port 80
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on tapb8051da9-60, link-type EN10MB (Ethernet), capture size 262144 
bytes
  02:20:30.192165 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 
(0x0800), length 74: 10.10.60.9.80 > 10.10.60.2.54796: Flags [S.], seq 
90006557, ack 213484443, win 14480, options [mss 1460,sackOK,TS val 95716 ecr 
140883559,nop,wscale 4], length 0
  02:20:30.195827 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 
(0x0800), length 79: 10.10.60.9.80 > 10.10.60.2.54796: Flags [P.], seq 1:14, 
ack 1, win 905, options [nop,nop,TS val 95717 ecr 140883560], length 13: HTTP
  02:20:30.197500 fa:16:3e:80:cb:0a > fa:16:3e:e8:59:00, ethertype IPv4 
(0x0800), length 66: 10.10.60.9.80 > 10.10.60.2.54796: Flags [.], ack 75, win 
905, options [nop,nop,TS val 95717 ecr 140883560], length 0

  ^^^ On the bystander tap we see the flooded traffic

  The FDB tables reflect the lack of CAM entry for the client on br-int
  bridge. I would expect to see the MAC address on the patch uplink:

  root@osa-newton-ovs-compute01:/home/ubuntu# ovs-appctl fdb/show br-int | grep 
'fa:16:3e:e8:59:00'
  root@osa-newton-ovs-compute01:/home/ubuntu# ovs-appctl fdb/show br-provider | 
grep 'fa:16:3e:e8:59:00'
  2   850  fa:16:3e:e8:

[Yahoo-eng-team] [Bug 1859988] Re: neutron-tempest-plugin tests fail for stable/queens

2020-01-28 Thread Ghanshyam Mann
** Changed in: devstack
   Status: Confirmed => Fix Released

** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1859988

Title:
  neutron-tempest-plugin tests fail for stable/queens

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Recent stable/queens backport fail on neutron-tempest-plugin scenario tests, 
sample here:
  
https://c896d480cfbd9dee637c-6e2dfe610262db0cf157ed36bc183b08.ssl.cf2.rackcdn.com/688719/5/check/neutron-tempest-plugin-scenario-openvswitch-queens/f080f61/testr_results.html

  Traceback (most recent call last):
File "tempest/common/utils/__init__.py", line 108, in wrapper
  return func(*func_args, **func_kwargs)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/test_internal_dns.py",
 line 72, in test_dns_domain_and_name
  timeout=CONF.validation.ping_timeout * 10)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 309, in check_remote_connectivity
  timeout=timeout))
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 299, in _check_remote_connectivity
  ping_remote, timeout or CONF.validation.ping_timeout, 1)
File "tempest/lib/common/utils/test_utils.py", line 107, in call_until_true
  if func(*args, **kwargs):
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 283, in ping_remote
  fragmentation=fragmentation)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 278, in ping_host
  return source.exec_command(cmd)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 311, in wrapped_f
  return self.call(f, *args, **kw)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 391, in call
  do = self.iter(retry_state=retry_state)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 338, in iter
  return fut.result()
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/concurrent/futures/_base.py",
 line 455, in result
  return self.__get_result()
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 394, in call
  result = fn(*args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 205, in exec_command
  return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
File "tempest/lib/common/ssh.py", line 153, in exec_command
  with transport.open_session() as channel:
  AttributeError: 'NoneType' object has no attribute 'open_session'

  
  Queens jobs were pinned to version 0.7.0 of the plugin in 
a4962ec62808fc469eaad73b1408447d8e3bc7ec; it looks like we now need to also 
pin tempest itself to a "queens version".

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1859988/+subscriptions



[Yahoo-eng-team] [Bug 1860033] Re: Tempest jobs broken on stable branches due to requirements neutron-lib upgrade (the EOLing python2 drama)

2020-01-28 Thread Ghanshyam Mann
It is not required to backport this to stable/queens as such.

stable/queens has been pinned with compatible Tempest tag -
https://review.opendev.org/#/c/703679/


** Changed in: devstack
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860033

Title:
  Tempest jobs broken on stable branches due to requirements neutron-lib
  upgrade (the EOLing python2 drama)

Status in devstack:
  Fix Released
Status in neutron:
  Confirmed
Status in tempest:
  Fix Released

Bug description:
  Updating neutron-lib to 2.0.0 (py3-only release) in upper constraints on 
master [1] killed neutron tempest rocky jobs with:
  2020-01-16 19:07:29.088781 | controller | Processing 
/opt/stack/neutron-tempest-plugin
  2020-01-16 19:07:29.825378 | controller | Requirement already satisfied: 
pbr===5.4.4 in ./.tox/tempest/lib/python2.7/site-packages (from -c u-c-m.txt 
(line 58)) (5.4.4)
  2020-01-16 19:07:29.869691 | controller | Collecting neutron-lib===2.0.0 
(from -c u-c-m.txt (line 79))
  2020-01-16 19:07:30.019373 | controller |   Could not find a version that 
satisfies the requirement neutron-lib===2.0.0 (from -c u-c-m.txt (line 79)) 
(from versions: 0.0.1, 0.0.2, 0.0.3, 0.1.0, 0.2.0, 0.3.0, 0.4.0, 1.0.0, 1.1.0, 
1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.9.1, 1.9.2, 1.10.0, 
1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0, 
1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.29.1, 
1.30.0, 1.31.0)
  2020-01-16 19:07:30.033128 | controller | No matching distribution found for 
neutron-lib===2.0.0 (from -c u-c-m.txt (line 79))

  see [2] for an example log.

  These jobs run on Ubuntu 16.04 with Python 2.

  [1] https://review.opendev.org/702288
  [2] 
https://f6bb8d2d9ae7bba883a1-1b1c7df97f8efc0930cf31e3404d1843.ssl.cf2.rackcdn.com/702946/1/check/neutron-tempest-plugin-api-rocky/9043fec/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1860033/+subscriptions



[Yahoo-eng-team] [Bug 1585699] Re: Neutron Metadata Agent Configuration - nova_metadata_ip

2020-01-28 Thread Tobias Urdin
** Changed in: puppet-neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585699

Title:
  Neutron Metadata Agent Configuration - nova_metadata_ip

Status in neutron:
  Fix Released
Status in puppet-neutron:
  Fix Released

Bug description:
  I am not sure if this constitutes a 'bug'. However, it has led
  us to some confusion and I feel it should be updated.

  This option in neutron metadata configuration (and install docs) is
  misleading.

  {{{
  # IP address used by Nova metadata server. (string value)
  #nova_metadata_ip = 127.0.0.1
  }}}

  It implies the need to present an IP address for the nova metadata
  API, whereas in actual fact this can be a hostname or an IP address.

  When using TLS encrypted sessions, this 'has' to be a hostname, else
  this ends in an SSL issue, as the hostname is embedded in the
  certificates.
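  For reference, a sketched metadata_agent.ini fragment (the hostname and
  protocol values are illustrative; note also that newer releases rename
  this option to nova_metadata_host):

```ini
[DEFAULT]
# Despite the name, a hostname is accepted here as well as an IP address.
# With TLS this must be a hostname matching the metadata server's
# certificate.
nova_metadata_ip = metadata.example.com
nova_metadata_protocol = https
```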

  I am seeing this issue with OpenStack Liberty, however it appears to
  be in the configuration reference for Mitaka too, so I guess this is
  across the board.

  If this needs to be listed in a different forum, please let me know!

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585699/+subscriptions



[Yahoo-eng-team] [Bug 1861092] [NEW] [OVN] Too frequent agent health-checks causes stress on ovsdb-server

2020-01-28 Thread Lucas Alvares Gomes
Public bug reported:

Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=1795198

Looks like neutron-server is pinging agents too frequently as per what's
observed in the logs, with nb_cfg being bumped at a non-fixed rate:


For example, in this part of the log I could find 11 updates in less than 2 
minutes:

2020-01-27 12:23:04.247 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49008, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49007) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:23:05.179 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49009, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49008) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:23:32.216 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49010, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49009) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:23:41.248 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49011, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49010) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:23:42.183 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49012, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49011) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:24:09.210 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49013, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49012) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:24:18.252 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49014, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49013) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:24:19.179 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49015, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49014) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:24:46.205 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49016, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49015) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:24:55.254 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49017, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49016) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44
2020-01-27 12:24:56.177 43567 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched 
UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', 
conditions=None, old_conditions=None) to row=SB_Global(ipsec=False, ssl=[], 
nb_cfg=49018, options={'mac_prefix': 'b2:64:0d'}, external_ids={}) 
old=SB_Global(nb_cfg=49017) matches 
/usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/event.py:44


This is triggering too frequent writes from *all* metadata-agents and 
ovn-controllers in the cloud, which creates a lot of traffic. At scale, this 
can be a problem.

Imagine a 500 node deployment, with one update per 10 seconds as in the
example above. That will translate into 1K (1

[Yahoo-eng-team] [Bug 1861087] [NEW] Fetching metadata via LB fails when query returns large number of networks

2020-01-28 Thread Kobi Samoray
Public bug reported:

While querying metadata via an LB object, Neutron queries the networks 
attached to the metadata provider.
However, when there is a massive number of networks in the result, the 
subsequent port query fails with the following error:

requester 10.6.0.6. Error: An unknown exception occurred.:
RequestURITooLong: An unknown exception occurred
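One common mitigation, sketched (assumption: the failing port query passes
every network ID in the request URI, so a huge network list overflows it;
`list_ports` stands in for the Neutron client call, and `batch` caps how
many IDs land in a single request):

```python
def chunked(seq, size):
    """Yield successive slices of seq with at most `size` items each."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def list_ports_for_networks(list_ports, network_ids, batch=50):
    """Issue one port query per batch of network IDs and merge the results."""
    ports = []
    for ids in chunked(network_ids, batch):
        ports.extend(list_ports(network_id=ids))
    return ports

# Exercise the batching with a fake client call that records each request:
calls = []
def fake_list_ports(network_id):
    calls.append(list(network_id))
    return ['port-for-%s' % n for n in network_id]

ids = ['net-%03d' % i for i in range(120)]
ports = list_ports_for_networks(fake_list_ports, ids, batch=50)
assert len(ports) == 120           # nothing lost across batches
assert max(map(len, calls)) <= 50  # no single URI carries all 120 IDs
```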

** Affects: nova
 Importance: Undecided
 Assignee: Kobi Samoray (ksamoray)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kobi Samoray (ksamoray)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1861087

Title:
  Fetching metadata via LB fails when query returns large number of
  networks

Status in OpenStack Compute (nova):
  New

Bug description:
  While querying metadata via an LB object, Neutron queries the networks 
  attached to the metadata provider.
  However, when there is a massive number of networks in the result, the 
  subsequent port query fails with the following error:

  requester 10.6.0.6. Error: An unknown exception occurred.:
  RequestURITooLong: An unknown exception occurred

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1861087/+subscriptions



[Yahoo-eng-team] [Bug 1856240] Re: wait_for_versioned_notifications gives unhelpful error message "ValueError: Not a text type application/octet-stream" on timeout

2020-01-28 Thread Balazs Gibizer
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1856240

Title:
  wait_for_versioned_notifications gives unhelpful error message
  "ValueError: Not a text type application/octet-stream" on timeout

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  While working on a functional test that relies on the
  fake_notifier.wait_for_versioned_notifications method, the notification
  never comes and the waiter times out, dumping an unhelpful
  ValueError:

  {0}
  
nova.tests.functional.wsgi.test_servers.ColdMigrationDisallowSameHost.test_cold_migrate_same_host_disabled
  [] ... inprogress

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/osboxes/git/nova/nova/tests/functional/wsgi/test_servers.py", line 423, 
in test_cold_migrate_same_host_disabled'
  b'self._wait_for_migrate_no_valid_host()'
  b'  File 
"/home/osboxes/git/nova/nova/tests/functional/wsgi/test_servers.py", line 408, 
in _wait_for_migrate_no_valid_host'
  b"'compute_task.migrate_server.error')[0]"
  b'  File "/home/osboxes/git/nova/nova/tests/unit/fake_notifier.py", line 
153, in wait_for_versioned_notifications'
  b'return VERSIONED_SUBS[event_type].wait_n(n_events, event_type, 
timeout)'
  b'  File "/home/osboxes/git/nova/nova/tests/unit/fake_notifier.py", line 
61, in wait_n'
  b"'notifications': notifications,"
  b''
  Not a text type application/octet-stream
  Traceback (most recent call last):
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/cliff/app.py",
 line 401, in run_subcommand
  result = cmd.run(parsed_args)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/cliff/command.py",
 line 185, in run
  return_code = self.take_action(parsed_args) or 0
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/commands/run.py",
 line 235, in take_action
  all_attachments=all_attachments)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/commands/run.py",
 line 484, in run_command
  all_attachments=all_attachments)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/commands/run.py",
 line 550, in _run_tests
  return run_tests()
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/commands/run.py",
 line 547, in run_tests
  all_attachments=all_attachments)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/commands/load.py",
 line 234, in load
  all_attachments)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/commands/load.py",
 line 267, in _load_case
  case.run(result)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testsuite.py",
 line 171, in run
  result.status(**event_dict)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 468, in status
  _strict_map(methodcaller('status', *args, **kwargs), self.targets)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 443, in _strict_map
  return list(map(function, *sequences))
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 570, in status
  target.status(**kwargs)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 468, in status
  _strict_map(methodcaller('status', *args, **kwargs), self.targets)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 443, in _strict_map
  return list(map(function, *sequences))
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 909, in status
  self._hook.status(*args, **kwargs)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 826, in status
  self.on_test(self._inprogress.pop(key))
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/testtools/testresult/real.py",
 line 901, in _handle_test
  self.on_test(test_record.to_dict())
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/subunit_trace.py",
 line 193, in show_outcome
  print_attachments(stream, test, all_channels=True)
File 
"/home/osboxes/git/nova/.tox/functional-py36/lib/python3.6/site-packages/stestr/subunit_trace.py",
 line 120, in print_attachments
  

[Yahoo-eng-team] [Bug 1860991] Re: fail to evacuate instance

2020-01-28 Thread Sam Tseng
sorry, this is not a bug; it's a permission-denied error on shared storage.
Please close it.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1860991

Title:
  fail to evacuate instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  When the VM host (nova-02) loses its network connection and I try to
evacuate an instance from nova-02 to another host (nova-04), the status shows
no problem, but the instance is broken.

  Instance migrate/live-migrate works without problems.

  Steps to reproduce
  ==
  1. cut off running host networking, after 1 min. verify compute service
  [root@tp-osc-01 ~]# openstack compute service list
  +-----+--------------+------------+------+---------+-------+------------------------+
  |  ID | Binary       | Host       | Zone | Status  | State | Updated At             |
  +-----+--------------+------------+------+---------+-------+------------------------+
  | 293 | nova-compute | tp-nova-01 | nova | enabled | up    | 2020-01-27T14:05:50.00 |
  | 327 | nova-compute | tp-nova-02 | nova | enabled | down  | 2020-01-27T14:04:32.00 |
  | 329 | nova-compute | tp-nova-04 | nova | enabled | up    | 2020-01-27T14:05:51.00 |
  | 331 | nova-compute | tp-nova-05 | nova | enabled | up    | 2020-01-27T14:05:53.00 |
  | 333 | nova-compute | tp-nova-03 | nova | enabled | up    | 2020-01-27T14:05:51.00 |
  +-----+--------------+------------+------+---------+-------+------------------------+
  2. evacuate vm instance
  [root@tp-osc-01 ~]# nova evacuate test1
  3. check vm status
  [root@tp-osc-01 ~]# openstack server list
  +--------------------------------------+-------+---------+---------------------------+----------------------+--------+
  | ID                                   | Name  | Status  | Networks                  | Image                | Flavor |
  +--------------------------------------+-------+---------+---------------------------+----------------------+--------+
  | 1d8d3b6d-34f4-4f49-9c19-72c0e84f498a | test1 | REBUILD | net_vlan1040=172.22.40.70 | CentOS-7-x86_64-1907 | m1     |
  +--------------------------------------+-------+---------+---------------------------+----------------------+--------+
  [root@tp-osc-01 ~]# openstack server list
  +--------------------------------------+-------+--------+---------------------------+----------------------+--------+
  | ID                                   | Name  | Status | Networks                  | Image                | Flavor |
  +--------------------------------------+-------+--------+---------------------------+----------------------+--------+
  | 1d8d3b6d-34f4-4f49-9c19-72c0e84f498a | test1 | ACTIVE | net_vlan1040=172.22.40.70 | CentOS-7-x86_64-1907 | m1     |
  +--------------------------------------+-------+--------+---------------------------+----------------------+--------+
  [root@tp-osc-01 ~]# openstack server show test1
  +-------------------------------------+------------------------+
  | Field                               | Value                  |
  +-------------------------------------+------------------------+
  | OS-DCF:diskConfig                   | MANUAL                 |
  | OS-EXT-AZ:availability_zone         | nova                   |
  | OS-EXT-SRV-ATTR:host                | tp-nova-04             |
  | OS-EXT-SRV-ATTR:hypervisor_hostname | tp-nova-04             |
  | OS-EXT-SRV-ATTR:instance_name       | instance-007c          |
  | OS-EXT-STS:power_state              | Running                |
  | OS-EXT-STS:task_state               | None                   |
  | OS-EXT-STS:vm_state                 | active                 |
  | OS-SRV-USG:launched_at              | 2020-01-27T14:09:19.00 |
  | OS-SRV-USG:terminated_at            | None                   |
  | accessIPv4                          |                        |

  Expected result
  ===
  instance successfully evacuate from dead host to new one.

  Actual result
  =
  instance console log show fail to mount /sysroot

  [  OK  ] Started File System Check on 
/dev/d...806-efd7-4eef-aaa2-2584909365ff.
   Mounting /sysroot...
  [4.088225] SGI XFS with ACLs, security attributes, no debug enabled
  [4.096798] XFS (vda1): Mounting V5 Filesystem
  [4.245558] blk_update_request: I/O error, dev vda, sector 8395962
  [4.252896] blk_update_request: I/O error, dev vda, sector 8396970
  [4.259931] blk_update_request: I/O error, dev vda, sector 8397994
  [4.266486] blk_update_request: I/O error, dev vda, sector 8399018
  [4.272896] blk_update_request: I/O error, dev vda, sector 8400042
  [4.279461] XFS (vda1): xfs_do_force_shutdown(0x1) called

[Yahoo-eng-team] [Bug 1856839] Re: [L3] router processing time increase if there are large set ports

2020-01-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/701077
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5f2758bb800c0376efd3e0526f808d73b9ad1bc0
Submitter: Zuul
Branch:master

commit 5f2758bb800c0376efd3e0526f808d73b9ad1bc0
Author: LIU Yulong 
Date:   Mon Sep 30 11:03:49 2019 +0800

Move arp device check out of loop

This could be time-consuming if there are lots of ports
under the router. So this patch moves the same device
check out of the loop.

Closes-Bug: #1856839
Change-Id: I2da856712aaafb77878628c52d19e0a5c7cdee0f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1856839

Title:
  [L3] router processing time increase if there are large set ports

Status in neutron:
  Fix Released

Bug description:
  The function "_update_arp_entry" [1] is called inside a double loop
  [2][3], and it performs a "device.exists()" check [4]. When there are
  many ports under the router, this repeated check noticeably increases
  the router processing time.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L249
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L288-L290
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L291
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_local_router.py#L260
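The shape of the fix is generic: a check that is invariant across the loop body is evaluated once before the loops instead of once per port. An illustrative sketch with stand-in objects (not the actual neutron agent code):

```python
class FakeDevice:
    """Stand-in for the agent's ARP device; counts exists() calls."""

    def __init__(self, present=True):
        self.present = present
        self.calls = 0

    def exists(self):
        self.calls += 1
        return self.present


def update_arp_entries_per_port(device, subnets):
    # Before the fix: device.exists() runs for every port in every subnet.
    updated = 0
    for ports in subnets:
        for _port in ports:
            if device.exists():
                updated += 1
    return updated


def update_arp_entries_hoisted(device, subnets):
    # After the fix: the same device check runs once, outside the loops.
    if not device.exists():
        return 0
    return sum(len(ports) for ports in subnets)
```

With 10 subnets of 100 ports each, the per-port version issues 1000 existence checks while the hoisted version issues one, which is exactly where the processing time saving comes from.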

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1856839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861071] [NEW] Failing to extend an attached encrypted volume

2020-01-28 Thread Tzach Shefi
Public bug reported:

Description:
While extending an attached encrypted volume, Cinder reports that the volume
has been extended.
However, inside the instance, lsblk does not show the new extended size.
This works fine for non-encrypted attached volumes.

The error on the Nova side suggests this is not caused by Cinder or its
backend, so the issue is Cinder-backend agnostic.

If this won't be fixed or supported, we should block it, or at the very
least document it as a known limitation.


Steps to reproduce:
1. Create an encrypted volume, say 1G.
2. Attach the volume to an instance.
3. Check lsblk inside the instance; it should report ~1G.
4. Extend the attached encrypted volume with Cinder; note this needs a Cinder
microversion:
cinder --os-volume-api-version 3.59 extend [VolID] 3
5. Recheck cinder show [VolID]; notice the volume size was extended.
6. Inside the instance, lsblk does not show the new extended size.
7. Notice the Nova error (seen later below).


Version:
openstack-nova-compute-20.0.2-0.20191230035951.27bfd0b.el8ost.noarch
openstack-nova-migration-20.0.2-0.20191230035951.27bfd0b.el8ost.noarch
python3-novaclient-15.1.0-0.20190919143437.cd396b8.el8ost.noarch
python3-nova-20.0.2-0.20191230035951.27bfd0b.el8ost.noarch
puppet-nova-15.4.1-0.20191126042922.b1bb388.el8ost.noarch
openstack-nova-common-20.0.2-0.20191230035951.27bfd0b.el8ost.noarch


Expected result:
lsblk should report the new size, as it does when extending an attached
non-encrypted volume.

Actual result:
Cinder shows the volume size increase, but lsblk inside the instance does
not grow.

If not supported, this should be blocked or alerted on in the logs,
or at the very least documented.
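The "block it" option the report asks for amounts to a pre-check before issuing block_resize. A hypothetical sketch with stand-in volume dicts; the exception name and field names are illustrative and not real Nova/Cinder API:

```python
class ExtendNotSupported(Exception):
    """Raised when an online extend request cannot be honored."""


def check_extend_supported(volume):
    # Refuse to extend an attached (in-use) encrypted volume up front,
    # rather than reporting success while the guest never sees the new size.
    if volume.get('encrypted') and volume.get('status') == 'in-use':
        raise ExtendNotSupported(
            'online extend of encrypted volume %s is not supported; '
            'detach the volume first' % volume['id'])


# Plain attached volume: allowed.
check_extend_supported({'id': 'v1', 'encrypted': False, 'status': 'in-use'})
# Encrypted but detached volume: allowed.
check_extend_supported({'id': 'v2', 'encrypted': True, 'status': 'available'})
```

Failing fast like this turns a silent mismatch between Cinder's reported size and the guest's view into an explicit, documented error.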


Traceback from nova about failures in resize:

var/log/containers/nova/nova-compute.log:71:2020-01-28 07:14:17.394 7 ERROR 
nova.virt.libvirt.driver [instance: f93e235e-30e9-4bdc-ad48-8e972552bbb5] 
/var/log/containers/nova/nova-compute.log:73:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server [req-ef88174f-6762-4842-a64c-76074a848ac7 
b753e5c55ba94950b5463ae41bb623ab 08d5a55b6cad4413910abc863b4a2b15 - default 
default] Exception during message handling: libvirt.libvirtError: internal 
error: unable to execute QEMU command 'block_resize': Cannot grow device files
/var/log/containers/nova/nova-compute.log:74:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server Traceback (most recent call last):
/var/log/containers/nova/nova-compute.log:75:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in 
_process_incoming
/var/log/containers/nova/nova-compute.log:76:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
/var/log/containers/nova/nova-compute.log:77:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 274, 
in dispatch
/var/log/containers/nova/nova-compute.log:78:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, 
args)
/var/log/containers/nova/nova-compute.log:79:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, 
in _do_dispatch
/var/log/containers/nova/nova-compute.log:80:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server result = func(ctxt, **new_args)
/var/log/containers/nova/nova-compute.log:81:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/exception_wrapper.py", line 79, in 
wrapped
/var/log/containers/nova/nova-compute.log:82:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server function_name, call_dict, binary, tb)
/var/log/containers/nova/nova-compute.log:83:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
/var/log/containers/nova/nova-compute.log:84:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server self.force_reraise()
/var/log/containers/nova/nova-compute.log:85:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
/var/log/containers/nova/nova-compute.log:86:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
/var/log/containers/nova/nova-compute.log:87:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", 
line 693, in reraise
/var/log/containers/nova/nova-compute.log:88:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server raise value
/var/log/containers/nova/nova-compute.log:89:2020-01-28 07:14:17.778 7 ERROR 
oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/nova/exception_wrapper.py", line 69, in 
wrapped
/var/log/containers/nova/nova-compute.log:90:2020-01-28 07:14:17.778 7 ERROR 
oslo_me