[Yahoo-eng-team] [Bug 1862343] Re: Changing the language in GUI has almost no effect

2020-02-25 Thread James Page
Packages have PO files but not MO files - the package build does the
compilation but the install step completely misses them.
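
As a quick check for affected installs, something along these lines can list
translation catalogs that were shipped without their compiled counterpart (a
minimal sketch; the /usr/share/openstack-dashboard path assumes the default
Ubuntu packaging layout):

    #!/usr/bin/env python3
    # Sketch: report .po catalogs that lack a compiled .mo next to them.
    import pathlib

    base = pathlib.Path("/usr/share/openstack-dashboard")
    for po in base.rglob("LC_MESSAGES/*.po"):
        mo = po.with_suffix(".mo")
        if not mo.exists():
            print("missing compiled catalog:", mo)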

** Changed in: horizon (Ubuntu)
   Status: New => Triaged

** Changed in: horizon (Ubuntu)
   Importance: Undecided => High

** Also affects: horizon (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Focal)
   Importance: High
   Status: Triaged

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Changed in: horizon (Ubuntu Eoan)
   Importance: Undecided => High

** Summary changed:

- Changing the language in GUI has almost no effect
+ compiled messages not shipped in packaging resulting in missing translations

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1862343

Title:
  compiled messages not shipped in packaging resulting in missing
  translations

Status in OpenStack openstack-dashboard charm:
  Invalid
Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive rocky series:
  New
Status in Ubuntu Cloud Archive stein series:
  New
Status in Ubuntu Cloud Archive train series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in horizon package in Ubuntu:
  Triaged
Status in horizon source package in Eoan:
  New
Status in horizon source package in Focal:
  Triaged

Bug description:
  I changed the language in the GUI to French but the interface stays mostly
  English. Just a few strings are displayed in French, e.g.:

  - "Password" ("Mot de passe") on the login screen,
  - units "GB", "TB" as "Gio" and "Tio" in Compute Overview,
  - "New password" ("Noveau mot de passe") in User Settings.

  All other strings are in English.

  See screenshots attached.

  This is a Stein on Ubuntu Bionic deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1862343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864661] Re: Miss qrouter namespace after the router create and set network gateway/subnet

2020-02-25 Thread Kevin Zhao
Found the root cause.
The deployment uses Kolla, and the ip netns output can be found inside the
Kolla container, which is rather different behavior than before.
Will mark it as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864661

Title:
  Miss qrouter namespace after the router create and set network
  gateway/subnet

Status in neutron:
  Invalid

Bug description:
  Train release.
  create private and public network and then configure router.

  openstack network create --provider-physical-network physnet1
  --provider-network-type flat --external public

  openstack subnet create --allocation-pool
  start=10.101.133.194,end=10.101.133.222 --network public
  --subnet-range 10.101.133.192/27 --gateway 10.101.133.193 public-subnet

  ip addr add 10.101.133.193/27 dev eth0
  openstack network create private 
  openstack subnet pool create shared-default-subnetpool-v4 
--default-prefix-length 26 --pool-prefix 10.100.0.0/24 --share --default -f 
value -c id
  openstack subnet create --ip-version 4 --subnet-pool 
shared-default-subnetpool-v4 --network private private-subnet

  openstack router create admin-router
  openstack router set --external-gateway public admin-router 
  openstack router add subnet admin-router private-subnet

  =
  ip netns list:
  returns nothing.

  
  l3_agent log:
  2020-02-25 14:29:49.380 20 INFO neutron.common.config [-] Logging enabled!
  2020-02-25 14:29:49.381 20 INFO neutron.common.config [-] 
/var/lib/kolla/venv/bin/neutron-l3-agent version 15.0.1
  2020-02-25 14:29:50.206 20 INFO neutron.agent.l3.agent 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Agent HA routers count 0
  2020-02-25 14:29:50.208 20 INFO neutron.agent.agent_extensions_manager 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Loaded agent extensions: []
  2020-02-25 14:29:50.248 20 INFO eventlet.wsgi.server [-] (20) wsgi starting 
up on http:/var/lib/neutron/keepalived-state-change
  2020-02-25 14:29:50.310 20 INFO neutron.agent.l3.agent [-] L3 agent started
  2020-02-25 14:29:55.314 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Running privsep helper: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmp5y8x4u6q/privsep.sock']
  2020-02-25 14:29:56.710 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Spawned new privsep daemon 
via rootwrap
  2020-02-25 14:29:56.496 32 INFO oslo.privsep.daemon [-] privsep daemon 
starting
  2020-02-25 14:29:56.506 32 INFO oslo.privsep.daemon [-] privsep process 
running with uid/gid: 0/0
  2020-02-25 14:29:56.511 32 INFO oslo.privsep.daemon [-] privsep process 
running with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
  2020-02-25 14:29:56.512 32 INFO oslo.privsep.daemon [-] privsep daemon 
running as pid 32
  2020-02-25 14:45:05.540 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id 7fe46b58-852b-461b-b9f4-febfadf59343. Wait time elapsed: 0.001
  2020-02-25 14:45:19.815 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
7fe46b58-852b-461b-b9f4-febfadf59343. Time elapsed: 14.275
  2020-02-25 14:45:19.817 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Wait time elapsed: 7.905
  2020-02-25 14:45:23.282 20 INFO neutron.agent.linux.interface [-] Device 
qg-bba540a4-b5 already exists
  2020-02-25 14:45:28.490 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Time elapsed: 8.672
  ~
  ~

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864471] Re: [neutron-tempest-plugin] SG quota exceeded

2020-02-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/709478
Committed: 
https://git.openstack.org/cgit/openstack/neutron-tempest-plugin/commit/?id=a4bb258bbc8a432e34baf8033a81348372b98c2a
Submitter: Zuul
Branch: master

commit a4bb258bbc8a432e34baf8033a81348372b98c2a
Author: Rodolfo Alonso Hernandez 
Date:   Mon Feb 24 13:07:21 2020 +

Increase default security group quota up to 150

Change-Id: Ia4bdc53a0d7d537360afc67a1bd61cc3a0eb6da1
Closes-Bug: #1864471


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864471

Title:
  [neutron-tempest-plugin] SG quota exceeded

Status in neutron:
  Fix Released

Bug description:
  When testing "test_two_sec_groups", a "OverQuota" exception was
  raised.

  Log snippet: http://paste.openstack.org/show/789930/

  Log:
  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_495/709375/1/check/neutron-ovn-tempest-ovs-release/4950f68/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1848201] Re: [neutron-vpnaas] Neutron installed inside venv makes VPNaaS broken

2020-02-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/687538
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=e0fb6700b16d4307db033173a1ca330e6ea02ca2
Submitter: Zuul
Branch: master

commit e0fb6700b16d4307db033173a1ca330e6ea02ca2
Author: Dmitriy Rabotyagov 
Date:   Fri Oct 11 13:26:52 2019 +0300

Run neutron-vpn-netns-wrapper in venv

When neutron is installed inside venv, neutron-vpn-netns-wrapper
is placed inside venv as well. Currently vpn creation will fail due to
missing wrapper inside $PATH. So we should respect venvs and launch
neutron-vpn-netns-wrapper from the venv when applicable.

Closes-Bug: 1848201
Change-Id: I9c50bfc2cefdd97c6d54e8bfabe97748c8dfce13
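
The general idea of the fix can be sketched as follows (an illustrative
simplification, not the actual patch; the wrapper name comes from the bug
report):

    # Sketch: prefer a wrapper installed alongside the running interpreter
    # (i.e. inside the venv) and fall back to a $PATH lookup otherwise.
    import os
    import shutil
    import sys

    def find_netns_wrapper(name="neutron-vpn-netns-wrapper"):
        candidate = os.path.join(os.path.dirname(sys.executable), name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
        return shutil.which(name) or name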


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1848201

Title:
  [neutron-vpnaas] Neutron installed inside venv makes VPNaaS broken

Status in neutron:
  Fix Released

Bug description:
  As the location of NS_WRAPPER [1] is not absolute and relies on $PATH, in
  the situation when neutron and vpnaas are installed inside a virtualenv
  NS_WRAPPER won't be able to launch, as it won't be found in $PATH, which
  will cause a failure:

  2019-10-08 17:37:11.619 17205 ERROR neutron.agent.linux.utils [-] Exit
  code: 1; Stdin: ; Stdout: ; Stderr: exec of "neutron-vpn-netns-
  wrapper" failed: No such file or directory

  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec [-] Failed to enable vpn 
process on router 19514b2a-95bc-499f-9590-a5014ca04e7f: ProcessExecutionError: 
Exit code: 1; Stdin: ; Stdout: ; Stderr: exec of "neutron-vpn-netns-wrapper" 
failed: No such file or directory
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec Traceback (most recent call 
last):
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/openstack/venvs/neutron-19.0.0.0b2.dev70/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 336, in enable
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec self.ensure_configs()
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/openstack/venvs/neutron-19.0.0.0b2.dev70/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/libreswan_ipsec.py",
 line 91, in ensure_configs
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec 
self._ipsec_execute(['_stackmanager', 'start'])
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/openstack/venvs/neutron-19.0.0.0b2.dev70/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/libreswan_ipsec.py",
 line 52, in _ipsec_execute
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec 
extra_ok_codes=extra_ok_codes)
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/openstack/venvs/neutron-19.0.0.0b2.dev70/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 788, in execute
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec run_as_root=run_as_root)
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec   File 
"/openstack/venvs/neutron-19.0.0.0b2.dev70/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 147, in execute
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec returncode=returncode)
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec ProcessExecutionError: Exit 
code: 1; Stdin: ; Stdout: ; Stderr: exec of "neutron-vpn-netns-wrapper" failed: 
No such file or directory
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec 
  2019-10-08 17:37:11.620 17205 ERROR 
neutron_vpnaas.services.vpn.device_drivers.ipsec 


  
  [1] 
https://opendev.org/openstack/neutron-vpnaas/src/branch/master/neutron_vpnaas/services/vpn/device_drivers/libreswan_ipsec.py#L22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1848201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864777] [NEW] os-volumes API policy is allowed for everyone even policy defaults is admin_or_owner

2020-02-25 Thread Brin Zhang
Public bug reported:

os-volumes API policy defaults to admin_or_owner [1], but the API is allowed
for everyone.

This is because the API does not pass the server's project_id in the policy target:
show-https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L107

delete-
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L122

details/index-
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L148

create-
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L161


[1]https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policies/volumes.py#L27

** Affects: nova
 Importance: Undecided
 Assignee: Brin Zhang (zhangbailin)
 Status: New


** Tags: policy policy-defaults-refresh

** Tags added: policy policy-defaults-refresh

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864777

Title:
  os-volumes API policy is allowed for everyone even policy defaults is
  admin_or_owner

Status in OpenStack Compute (nova):
  New

Bug description:
  os-volumes API policy defaults to admin_or_owner [1], but the API is
  allowed for everyone.

  This is because the API does not pass the server's project_id in the policy target:
  
show-https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L107

  delete-
  
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L122

  details/index-
  
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L148

  create-
  
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L161


  
[1]https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policies/volumes.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1864777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864776] [NEW] os-volumes-attachments API policy is allowed for everyone even policy defaults is admin_or_owner

2020-02-25 Thread Brin Zhang
Public bug reported:

os-volumes-attachments list/show/create/delete API policy defaults to
admin_or_owner [1], but the API is allowed for everyone.

We can see from this test that a request with another project's context can access the API
- 
https://review.opendev.org/#/c/709929/1/nova/tests/unit/policies/test_volumes.py@84

This is because the API does not pass the server's project_id in the policy target:
index-https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L282

show-
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L307

create-
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L337

delete-
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L440

and if no target is passed, policy.py adds a default target which is
nothing but context.project_id (allowing anyone who tries to access it)
- 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

[1]https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policies/volumes_attachments.py#L21
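
A minimal sketch of the kind of fix implied here (context.can() is nova's
policy-check entry point; the exact target keys are an assumption for
illustration) is to authorize against the server's owning project instead
of letting policy.py default the target to the caller's own project:

    # Sketch: pass an explicit policy target built from the server.
    def check_volume_attachment_policy(context, server, policy_name):
        # Buggy pattern: no target, so the default target is built from
        # context.project_id and every authenticated caller passes:
        #   context.can(policy_name)

        # Fixed pattern: check ownership of the server being acted on.
        target = {"project_id": server.project_id,
                  "user_id": server.user_id}
        context.can(policy_name, target=target)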

** Affects: nova
 Importance: Undecided
 Assignee: Brin Zhang (zhangbailin)
 Status: Confirmed


** Tags: policy policy-defaults-refresh

** Changed in: nova
 Assignee: (unassigned) => Brin Zhang (zhangbailin)

** Changed in: nova
   Status: New => Confirmed

** Tags added: policy

** Tags added: policy-defaults-refresh

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864776

Title:
  os-volumes-attachments API policy is allowed for everyone even policy
  defaults is admin_or_owner

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  os-volumes-attachments list/show/create/delete API policy defaults to
  admin_or_owner [1], but the API is allowed for everyone.

  We can see from this test that a request with another project's context can access the API
  - 
https://review.opendev.org/#/c/709929/1/nova/tests/unit/policies/test_volumes.py@84

  This is because the API does not pass the server's project_id in the policy target:
  
index-https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L282

  show-
  
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L307

  create-
  
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L337

  delete-
  
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/api/openstack/compute/volumes.py#L440

  and if no target is passed, policy.py adds a default target which is
nothing but context.project_id (allowing anyone who tries to access it)
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  
[1]https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policies/volumes_attachments.py#L21

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1864776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1834875] Re: cloud-init growpart race with udev

2020-02-25 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-initramfs-tools - 0.45ubuntu1

---
cloud-initramfs-tools (0.45ubuntu1) focal; urgency=medium

  * Add dependency on flock for growroot's use of growpart.
(LP: #1834875)

 -- Scott Moser   Tue, 25 Feb 2020 13:08:22 -0500

** Changed in: cloud-initramfs-tools (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1834875

Title:
  cloud-init growpart race with udev

Status in cloud-init:
  Incomplete
Status in cloud-utils:
  Fix Committed
Status in cloud-initramfs-tools package in Ubuntu:
  Fix Released
Status in cloud-utils package in Ubuntu:
  Confirmed
Status in linux-azure package in Ubuntu:
  New
Status in systemd package in Ubuntu:
  Incomplete

Bug description:
  On Azure, it happens regularly (20-30%), that cloud-init's growpart
  module fails to extend the partition to full size.

  Such as in this example:

  

  2019-06-28 12:24:18,666 - util.py[DEBUG]: Running command ['growpart', 
'--dry-run', '/dev/sda', '1'] with allowed return codes [0] (shell=False, 
capture=True)
  2019-06-28 12:24:19,157 - util.py[DEBUG]: Running command ['growpart', 
'/dev/sda', '1'] with allowed return codes [0] (shell=False, capture=True)
  2019-06-28 12:24:19,726 - util.py[DEBUG]: resize_devices took 1.075 seconds
  2019-06-28 12:24:19,726 - handlers.py[DEBUG]: finish: 
init-network/config-growpart: FAIL: running config-growpart with frequency 
always
  2019-06-28 12:24:19,727 - util.py[WARNING]: Running module growpart () failed
  2019-06-28 12:24:19,727 - util.py[DEBUG]: Running module growpart () failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 812, in 
_run_modules
  freq=freq)
File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 54, in run
  return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 187, in run
  results = functor(*args)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
351, in handle
  func=resize_devices, args=(resizer, devices))
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2521, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
298, in resize_devices
  (old, new) = resizer.resize(disk, ptnum, blockdev)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
159, in resize
  return (before, get_size(partdev))
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
198, in get_size
  fd = os.open(filename, os.O_RDONLY)
  FileNotFoundError: [Errno 2] No such file or directory: 
'/dev/disk/by-partuuid/a5f2b49f-abd6-427f-bbc4-ba5559235cf3'

  

  @rcj suggested this is a race with udev. This seems to only happen on
  Cosmic and later.
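
  One way to see the race, and a possible mitigation (purely an
  illustrative sketch, separate from the flock dependency added above), is
  to wait for udev to settle and retry before treating the missing device
  node as fatal:

      # Sketch: retry opening a freshly resized partition device, giving
      # udev time to (re)create the by-partuuid symlink.
      import os
      import subprocess
      import time

      def open_with_retry(path, attempts=5, delay=0.5):
          for _ in range(attempts):
              try:
                  return os.open(path, os.O_RDONLY)
              except FileNotFoundError:
                  # 'udevadm settle' waits for queued udev events.
                  subprocess.call(["udevadm", "settle"])
                  time.sleep(delay)
          return os.open(path, os.O_RDONLY)  # last attempt, may raise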

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1834875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864728] [NEW] Unable to create interactive "system" user

2020-02-25 Thread Thomas H Jones II
Public bug reported:

**Problem Description**
The systems I manage are subject to specific security-hardening guidance that 
causes unwanted alerts for the default-user created by cloud-init. 
Specifically, because cloud-init creates the default-user with a userid in the 
non-"system" uid-range, the security-hardening validators expect that the 
default-user created by cloud-init will have password-aging attributes set. As 
the default-user account acts as a "break-glass" maintenance account, having 
password-aging is generally not desirable.

While cloud-init provides the `system` parameter as a seeming way out of
this, using this parameter results in an account with no ${HOME} and, by
extension, no ${HOME}/.ssh/authorized_keys, breaking the ability to
configure the default-user account for key-based logins.

I tried using the `no_create_home` parameter and setting its value to
`false` in hopes of overriding the `system` parameter's default
behavior, but it seems that when `system` is set, `no_create_home` is
wholly ignored.

I could probably use the `uid` parameter instead of the `system`
parameter, but I fear that if I set a value like '500', I may cause
problems for applications whose installers expect to be able to create a
service-account with the same uid ('500' being an example value rather
than a specific value).

**Cloud Provider**
AWS

**Version Info**

cloud-init 18.5 from RHEL/CentOS 7 cloud-init-18.5-3 RPM

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1864728

Title:
  Unable to create interactive "system" user

Status in cloud-init:
  New

Bug description:
  **Problem Description**
  The systems I manage are subject to specific security-hardening guidance 
that causes unwanted alerts for the default-user created by cloud-init. 
Specifically, because cloud-init creates the default-user with a userid in the 
non-"system" uid-range, the security-hardening validators expect that the 
default-user created by cloud-init will have password-aging attributes set. As 
the default-user account acts as a "break-glass" maintenance account, having 
password-aging is generally not desirable.

  While cloud-init provides the `system` parameter as a seeming way out of
  this, using this parameter results in an account with no ${HOME} and,
  by extension, no ${HOME}/.ssh/authorized_keys, breaking the ability
  to configure the default-user account for key-based logins.

  I tried using the `no_create_home` parameter and setting its value to
  `false` in hopes of overriding the `system` parameter's default
  behavior, but it seems that when `system` is set, `no_create_home` is
  wholly ignored.

  I could probably use the `uid` parameter instead of the `system`
  parameter, but I fear that if I set a value like '500', I may cause
  problems for applications whose installers expect to be able to create
  a service-account with the same uid ('500' being an example value
  rather than a specific value).

  **Cloud Provider**
  AWS

  **Version Info**

  cloud-init 18.5 from RHEL/CentOS 7 cloud-init-18.5-3 RPM

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1864728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864711] [NEW] DHCP port rescheduling causes ports to grow, internal DNS to be broken

2020-02-25 Thread Arjun Baindur
Public bug reported:

Suppose we have 2 DHCP servers per network, and a number of DHCP
agents > 2.

During a time of network instability, RabbitMQ issues, or even a DHCP
host temporarily going down, the DHCP port will get rescheduled.

Except it looks like the port is not so much rescheduled as replaced: a
brand new port with a new IP/MAC is created on a new host. The old port is
only updated and marked as reserved, not deleted.

This causes two issues:

1. The # of DHCP ports grows. Even when the old host starts heartbeating
again, its port is not deleted. For example, we had an environment with
3 DHCP servers per network, and a dozen or so DHCP hosts. It was
observed that for some networks, there were 10+ DHCP ports allocated.

2. DNS is broken temporarily for VMs that still point to the old IPs.
/etc/resolv.conf can only store 3 servers, and either way, Linux's
5-second default DNS timeout means the first or second server going
down causes a 5+ or 10+ second delay, which breaks many other
apps.


I'm not sure if this is a bug or by design. For example, if the same IP/MAC 
were re-used, we could have a conflict on the data plane. Neutron-server has no 
idea if DHCP/DNS services are actually down - it just knows it's not receiving 
heartbeats over the control plane. Is that why a new port is allocated - to 
mitigate the risk of conflict?

As for why the old ports aren't deleted or scaled down when connectivity
is restored, is this by design too?
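
For operators hitting this, a diagnostic sketch along these lines can
surface the leftover ports per network (uses openstacksdk; the
'reserved_dhcp_port' device_id is how neutron marks detached DHCP ports,
which is worth verifying for your release):

    # Sketch: count DHCP ports per network and flag reserved leftovers.
    import collections
    import openstack

    conn = openstack.connect()  # reads clouds.yaml / OS_* env vars
    per_net = collections.Counter()
    for port in conn.network.ports(device_owner="network:dhcp"):
        per_net[port.network_id] += 1
        if port.device_id == "reserved_dhcp_port":
            print("reserved leftover:", port.id, "on", port.network_id)
    for net_id, count in per_net.items():
        print(net_id, "has", count, "DHCP ports")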

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dns l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864711

Title:
  DHCP port rescheduling causes ports to grow, internal DNS to be broken

Status in neutron:
  New

Bug description:
  Suppose we have 2 DHCP servers per network, and a number of DHCP
  agents > 2.

  During a time of network instability, RabbitMQ issues, or even a DHCP
  host temporarily going down, the DHCP port will get rescheduled.

  Except it looks like the port is not so much rescheduled as replaced: a
  brand new port with a new IP/MAC is created on a new host. The old port
  is only updated and marked as reserved, not deleted.

  This causes two issues:

  1. The # of DHCP ports grows. Even when the old host starts
  heartbeating again, its port is not deleted. For example, we had an
  environment with 3 DHCP servers per network, and a dozen or so DHCP
  hosts. It was observed that for some networks, there were 10+ DHCP
  ports allocated.

  2. DNS is broken temporarily for VMs that still point to the old IPs.
  /etc/resolv.conf can only store 3 servers, and either way, Linux's
  5-second default DNS timeout means the first or second server going
  down causes a 5+ or 10+ second delay, which breaks many other
  apps.

  
  I'm not sure if this is a bug or by design. For example, if the same IP/MAC 
were re-used, we could have a conflict on the data plane. Neutron-server has no 
idea if DHCP/DNS services are actually down - it just knows it's not receiving 
heartbeats over the control plane. Is that why a new port is allocated - to 
mitigate the risk of conflict?

  As for why the old ports aren't deleted or scaled down when
  connectivity is restored, is this by design too?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862343] Re: Changing the language in GUI has almost no effect

2020-02-25 Thread James Page
It looks like the translation compilation never happens - if you drop
into /usr/share/openstack-dashboard and run:

  sudo python3 manage.py compilemessages

and then restart Apache, the translations appear to be OK.

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: charm-openstack-dashboard
   Status: New => Invalid

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1862343

Title:
  Changing the language in GUI has almost no effect

Status in OpenStack openstack-dashboard charm:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in horizon package in Ubuntu:
  New

Bug description:
  I changed the language in the GUI to French but the interface stays mostly
  English. Just a few strings are displayed in French, e.g.:

  - "Password" ("Mot de passe") on the login screen,
  - units "GB", "TB" as "Gio" and "Tio" in Compute Overview,
  - "New password" ("Noveau mot de passe") in User Settings.

  All other strings are in English.

  See screenshots attached.

  This is a Stein on Ubuntu Bionic deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1862343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864675] [NEW] DHCP agent should prioritize new ports when sending RPC messages to server

2020-02-25 Thread Brian Haley
Public bug reported:

When a port is provisioned in the dhcp-agent, for example via a
port_create, it will just be added to the dhcp_ready_ports set and sent
to neutron-server in _dhcp_ready_ports_loop() by popping elements off
the list.  So although it was prioritized when it was received, it is
not prioritized when sent to the server to clear the provisioning block.

It seems like these ports should be sent first, then others behind it if
there is still room in the RPC message.  This could just be done with a
second set() perhaps, unless we want to make it more complicated by
using the priority sent from the server to place ports in different
queues.
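
The two-queue idea could look roughly like this (a sketch only; the names
and the batch limit are hypothetical, and the real agent batches by an RPC
message size limit):

    # Sketch: drain high-priority (newly created) ports first, then fill
    # the rest of the batch from the regular ready set.
    MAX_PORTS_PER_RPC = 1000  # illustrative limit

    def next_batch(prio_ready_ports, ready_ports):
        batch = set()
        while prio_ready_ports and len(batch) < MAX_PORTS_PER_RPC:
            batch.add(prio_ready_ports.pop())
        while ready_ports and len(batch) < MAX_PORTS_PER_RPC:
            batch.add(ready_ports.pop())
        return batch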

This should decrease the time it takes to clear the port provisioning
block when an agent is restarted and gets a port_create message, as it
would help even if it was sent with PRIORITY_PORT_CREATE_HIGH to a
single agent, since the one it chose could still be in the middle of a
full sync.

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864675

Title:
  DHCP agent should prioritize new ports when sending RPC messages to
  server

Status in neutron:
  New

Bug description:
  When a port is provisioned in the dhcp-agent, for example via a
  port_create, it will just be added to the dhcp_ready_ports set and
  sent to neutron-server in _dhcp_ready_ports_loop() by popping elements
  off the list.  So although it was prioritized when it was received, it
  is not prioritized when sent to the server to clear the provisioning
  block.

  It seems like these ports should be sent first, then others behind it
  if there is still room in the RPC message.  This could just be done
  with a second set() perhaps, unless we want to make it more
  complicated by using the priority sent from the server to place ports
  in different queues.

  This should decrease the time it takes to clear the port provisioning
  block when an agent is restarted and gets a port_create message, as it
  would help even if it was sent with PRIORITY_PORT_CREATE_HIGH to a
  single agent, since the one it chose could still be in the middle of a
  full sync.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864665] [NEW] Circular reference error during re-schedule

2020-02-25 Thread Balazs Gibizer
Public bug reported:

Description
===
Server cold migration fails after re-schedule.

Steps to reproduce
==
* create a devstack with two compute hosts with libvirt driver
* set allow_resize_to_same_host=True on both computes
* set up cellsv2 without cell conductor and rabbit separation to allow 
re-schedule logic to call back to the super conductor / scheduler
* enable NUMATopologyFilter and make sure both computes have NUMA resources
* create a flavor with hw:cpu_policy='dedicated' extra spec
* boot a server with the flavor. Check which compute the server is placed on 
(let's call it host1)
* boot enough servers on host2 so that the next scheduling request could still 
be fulfilled by both computes but host1 will be preferred by the weighers
* cold migrate the pinned server

Expected result
===
* scheduler selects host1 first but that host fails with UnableToMigrateToSelf 
exception as libvirt does not have the capability
* re-schedule happens
* scheduler selects host2 where the server spawns successfully

Actual result
=
* during the re-schedule, when the conductor sends the prep_resize RPC to host2, 
the json serialization of the request spec fails with a Circular reference error.

Environment
===
* two node devstack with libvirt driver
* stable/pike nova, but it is expected to reproduce in newer branches, though 
not since Stein. See the triage part.

Triage
==
The json serialization blows up in the migrate conductor task. [1] After 
debugging I see that the infinite loop happens when jsonutils.to_primitive tries 
to serialize a VirtCPUTopology instance.

The problematic piece of code has been removed by
I4244f7dd8fe74565180f73684678027067b4506e in Stein.

[1]
https://github.com/openstack/nova/blob/4224a61b4f3a8b910dcaa498f9663479d61a6060/nova/conductor/tasks/migrate.py#L87
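
The failure mode can be sketched as follows (jsonutils is
oslo.serialization and obj_to_primitive() is the standard
oslo.versionedobjects serializer; the surrounding function is illustrative
only, not the actual Stein fix):

    # Sketch: jsonutils.to_primitive() generically walks attributes and
    # can recurse forever on versioned objects such as VirtCPUTopology;
    # letting the o.vo object serialize itself avoids that.
    from oslo_serialization import jsonutils

    def serialize_request_spec(request_spec):
        # Fragile: generic walker, raised "Circular reference detected".
        #   return jsonutils.to_primitive(request_spec)

        # Safer: the versioned object knows how to flatten itself.
        return jsonutils.dumps(request_spec.obj_to_primitive())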

** Affects: nova
 Importance: Medium
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: Invalid

** Affects: nova/ocata
 Importance: Undecided
 Status: New

** Affects: nova/pike
 Importance: Medium
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: Triaged

** Affects: nova/queens
 Importance: Undecided
 Status: New

** Affects: nova/rocky
 Importance: Undecided
 Status: New


** Tags: stable-only

** Tags added: stable-only

** Changed in: nova
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Triaged => Invalid

** Changed in: nova/pike
   Status: New => Triaged

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/pike
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Description changed:

  Description
  ===
  Server cold migration fails after re-schedule.
  
  Steps to reproduce
  ==
  * create a devstack with two compute hosts with libvirt driver
  * set allow_resize_to_same_host=True on both computes
  * set up cellsv2 without cell conductor and rabbit separation to allow 
re-schedule logic to call back to the super conductor / scheduler
  * enable NUMATopologyFilter and make sure both computes have NUMA resources
  * create a flavor with hw:cpu_policy='dedicated' extra spec
- * boot a server with the flavor and ensure that the server. Check which 
compute the server is placed (let's call it host1)
+ * boot a server with the flavor. Check which compute the server is placed on 
(let's call it host1)
  * boot enough servers on host2 so that the next scheduling request could 
still be fulfilled by both computes but host1 will be preferred by the weighers
  * cold migrate the pinned server
  
  Expected result
  ===
  * scheduler selects host1 first but that host fails with 
UnableToMigrateToSelf exception as libvirt does not have the capability
  * re-schedule happens
  * scheduler selects host2 where the server spawns successfully
  
  Actual result
  =
  * during the re-schedule when the conductor sends prep_resize RPC to host2 
the json serialization of the request spec fails with a Circular reference error.
  
  Environment
  ===
- * two node devstack with libvirt driver 
+ * two node devstack with libvirt driver
  * stable/pike nova. But expected to be reproduced in newer branches but not 
since stein. See triage part
- 
  
  Triage
  ==
  The json serialization blows up in the migrate conductor task. [1] After 
debugging I see that the infinite loop happens when jsonutils.to_primitive tries 
to serialize a VirtCPUTopology instance.
  
  The problematic piece of code has been re

[Yahoo-eng-team] [Bug 1864661] [NEW] Miss qrouter namespace after the router create and set network gateway/subnet

2020-02-25 Thread Kevin Zhao
Public bug reported:

Train release.
create private and public network and then configure router.

openstack network create --provider-physical-network physnet1
--provider-network-type flat --external public

openstack subnet create --allocation-pool
start=10.101.133.194,end=10.101.133.222 --network public --subnet-range
10.101.133.192/27 --gateway 10.101.133.193   public-subnet

ip addr add 10.101.133.193/27 dev eth0
openstack network create private 
openstack subnet pool create shared-default-subnetpool-v4 
--default-prefix-length 26 --pool-prefix 10.100.0.0/24 --share --default -f 
value -c id
openstack subnet create --ip-version 4 --subnet-pool 
shared-default-subnetpool-v4 --network private private-subnet

openstack router create admin-router
openstack router set --external-gateway public admin-router 
openstack router add subnet admin-router private-subnet

=
ip netns list:
returns nothing.


l3_agent log:
2020-02-25 14:29:49.380 20 INFO neutron.common.config [-] Logging enabled!
2020-02-25 14:29:49.381 20 INFO neutron.common.config [-] 
/var/lib/kolla/venv/bin/neutron-l3-agent version 15.0.1
2020-02-25 14:29:50.206 20 INFO neutron.agent.l3.agent 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Agent HA routers count 0
2020-02-25 14:29:50.208 20 INFO neutron.agent.agent_extensions_manager 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Loaded agent extensions: []
2020-02-25 14:29:50.248 20 INFO eventlet.wsgi.server [-] (20) wsgi starting up 
on http:/var/lib/neutron/keepalived-state-change
2020-02-25 14:29:50.310 20 INFO neutron.agent.l3.agent [-] L3 agent started
2020-02-25 14:29:55.314 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Running privsep helper: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmp5y8x4u6q/privsep.sock']
2020-02-25 14:29:56.710 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Spawned new privsep daemon 
via rootwrap
2020-02-25 14:29:56.496 32 INFO oslo.privsep.daemon [-] privsep daemon starting
2020-02-25 14:29:56.506 32 INFO oslo.privsep.daemon [-] privsep process running 
with uid/gid: 0/0
2020-02-25 14:29:56.511 32 INFO oslo.privsep.daemon [-] privsep process running 
with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
2020-02-25 14:29:56.512 32 INFO oslo.privsep.daemon [-] privsep daemon running 
as pid 32
2020-02-25 14:45:05.540 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id 7fe46b58-852b-461b-b9f4-febfadf59343. Wait time elapsed: 0.001
2020-02-25 14:45:19.815 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
7fe46b58-852b-461b-b9f4-febfadf59343. Time elapsed: 14.275
2020-02-25 14:45:19.817 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Wait time elapsed: 7.905
2020-02-25 14:45:23.282 20 INFO neutron.agent.linux.interface [-] Device 
qg-bba540a4-b5 already exists
2020-02-25 14:45:28.490 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Time elapsed: 8.672
~
~

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864661

Title:
  Miss qrouter namespace after the router create and set network
  gateway/subnet

Status in neutron:
  New

Bug description:
  Train release.
  create private and public network and then configure router.

  openstack network create --provider-physical-network physnet1
  --provider-network-type flat --external public

  openstack subnet create --allocation-pool
  start=10.101.133.194,end=10.101.133.222 --network public
  --subnet-range 10.101.133.192/27 --gateway 10.101.133.193 public-subnet

  ip addr add 10.101.133.193/27 dev eth0
  openstack network create private 
  openstack subnet pool create shared-default-subnetpool-v4 
--default-prefix-length 26 --pool-prefix 10.100.0.0/24 --share --default -f 
value -c id
  openstack subnet create --ip-version 4 --subnet-pool 
shared-default-subnetpool-v4 --network private private-subnet

  openstack router create admin-router
  openstack router set --external-gateway public admin-router 
  openstack router add subnet admin-router private-subnet

  =
  ip netns list:
  returns nothing.

 

[Yahoo-eng-team] [Bug 1864640] Re: [Ussuri] Neutron API writes to the Southbound DB

2020-02-25 Thread Frode Nordahl
Adding upstream neutron to the bug as this is a regression that will
cause issues for existing deployments with OVN RBAC enabled.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864640

Title:
  [Ussuri] Neutron API writes to the Southbound DB

Status in charm-neutron-api-plugin-ovn:
  Triaged
Status in charm-ovn-central:
  Triaged
Status in neutron:
  New

Bug description:
  At Ussuri, the Neutron API has begun doing writes directly to the
  Southbound DB. There does not appear to be an accompanying RBAC role
  for this, so do we need to give it access to the private port
  currently reserved for ovn-northd?

  The offending change in upstream Neutron arrived here:
  https://github.com/openstack/networking-ovn/commit/70c3d06656e15e11a0daf9c3732a21c8ce601c4d

  Example of a failed transaction:
  2020-02-25 11:04:33.420 1520231 ERROR ovsdbapp.backend.ovs_idl.transaction 
[req-8315d356-f92f-4447-a47b-f724374cfc36 - - - - -] OVSDB Error: 
{"details":"RBAC rules for client \"juju-ef641e-1-lxd-2.maas\" role 
\"ovn-controller\" prohibit modification of table 
\"Chassis\".","error":"permission error"}
  2020-02-25 11:04:33.420 1520231 ERROR ovsdbapp.backend.ovs_idl.transaction 
[req-fbf878ca-f0bc-465c-b173-882d695cb4aa 3ff519473176440bb9678c95051ed627 
dd8f9f301d1e436d8d3a9b695537c897 - cb4c93ee9c98459c8cde54c2c8b0a829 
cb4c93ee9c98459c8cde54c2c8b0a829] Traceback (most recent call last):
    File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
122, in run
  txn.results.put(txn.do_commit())
    File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 
115, in do_commit
  raise RuntimeError(msg)
  RuntimeError: OVSDB Error: {"details":"RBAC rules for client 
\"juju-ef641e-1-lxd-2.maas\" role \"ovn-controller\" prohibit modification of 
table \"Chassis\".","error":"permission error"}

  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
[req-fbf878ca-f0bc-465c-b173-882d695cb4aa 3ff519473176440bb9678c95051ed627 
dd8f9f301d1e436d8d3a9b695537c897 - cb4c93ee9c98459c8cde54c2c8b0a829 
cb4c93ee9c98459c8cde54c2c8b0a829] Error executing command: RuntimeError: OVSDB 
Error: {"details":"RBAC rules for client \"juju-ef641e-1-lxd-2.maas\" role 
\"ovn-controller\" prohibit modification of table 
\"Chassis\".","error":"permission error"}
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
Traceback (most recent call last):
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/command.py", line 40, 
in execute
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
t.add(self)
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3.6/contextlib.py", line 88, in __exit__
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
next(self.gen)
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3/dist-packages/ovsdbapp/api.py", line 119, in transaction
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
del self._nested_txns_map[cur_thread_id]
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3/dist-packages/ovsdbapp/api.py", line 69, in __exit__
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
self.result = self.commit()
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 
62, in commit
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
raise result.ex
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
122, in run
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
txn.results.put(txn.do_commit())
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 
115, in do_commit
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
raise RuntimeError(msg)
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command 
RuntimeError: OVSDB Error: {"details":"RBAC rules for client 
\"juju-ef641e-1-lxd-2.maas\" role \"ovn-controller\" prohibit modification of 
table \"Chassis\".","error":"permission error"}
  2020-02-25 11:04:33.421 1520231 ERROR ovsdbapp.backend.ovs_idl.command
  2020-02-25 11:04:33.486 1520231 ERROR neutron.pecan_wsgi.hooks.translation 
[req-fbf878ca-f0bc-465c-b173-882d695cb4aa 3ff519473176440bb9678c95051ed627 
dd8f9f301d1e436d8d3a9

[Yahoo-eng-team] [Bug 1864641] [NEW] [OVN] Run maintenance task whenever the OVN DB schema has been upgraded

2020-02-25 Thread Daniel Alvarez
Public bug reported:

When OVN DBs are upgraded (and restarted), there might be cases where
we want to adapt things to a new schema. In this situation we
don't want to force a restart of neutron-server (or the metadata agent) but
instead detect the upgrade and run whatever is needed.

This can be achieved by checking the schema version via ovsdbapp [0] and,
upon a reconnection to the OVN DBs, comparing whether it is newer than
what we had before.

[0]
https://github.com/openvswitch/ovs/blob/master/python/ovs/db/schema.py#L35
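
A version check in that spirit might look like this (pure sketch; the
schema versions are dotted strings such as "5.16.0", and fetching them is
left to the ovsdbapp call referenced above):

    # Sketch: run the maintenance task when the schema version grew
    # across a reconnection to the OVN DBs.
    def parse_version(ver):
        return tuple(int(part) for part in ver.split("."))

    def maybe_run_maintenance(old_ver, new_ver, run_maintenance):
        if parse_version(new_ver) > parse_version(old_ver):
            run_maintenance()

    # e.g. maybe_run_maintenance("5.16.0", "5.17.0", task.run)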

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864641

Title:
  [OVN] Run maintenance task whenever the OVN DB schema has been
  upgraded

Status in neutron:
  New

Bug description:
  When OVN DBs are upgraded (and restarted), there might be cases
  where we want to adapt things to a new schema. In this
  situation we don't want to force a restart of neutron-server (or
  the metadata agent) but instead detect the upgrade and run whatever is needed.

  This can be achieved by checking the schema version via ovsdbapp [0]
  and, upon a reconnection to the OVN DBs, comparing whether it is newer
  than what we had before.

  [0]
  https://github.com/openvswitch/ovs/blob/master/python/ovs/db/schema.py#L35

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864553] Re: [OVN] setup.cfg packages should be set to ovn_octavia_provider

2020-02-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/709612
Committed: 
https://git.openstack.org/cgit/openstack/ovn-octavia-provider/commit/?id=acee7c2134acbbd52a4b98d7be123b4d11eb794a
Submitter: Zuul
Branch: master

commit acee7c2134acbbd52a4b98d7be123b4d11eb794a
Author: Corey Bryant 
Date:   Mon Feb 24 16:04:41 2020 -0500

Ensure setup.cfg packages matches root directory

The root directory for the python package is
ovn_octavia_provider so the setup.cfg [files] packages
should match accordingly.

Change-Id: Ief28c375173f869e64aa98a8c972a6f845462217
Closes-Bug: #1864553


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864553

Title:
  [OVN] setup.cfg packages should be set to ovn_octavia_provider

Status in neutron:
  Fix Released

Bug description:
  I think this should be:

  diff --git a/setup.cfg b/setup.cfg
  index 5abfadf..1571fea 100644
  --- a/setup.cfg
  +++ b/setup.cfg
  @@ -20,7 +20,7 @@ classifier =

   [files]
   packages =
  -ovn-octavia-provider
  +ovn_octavia_provider

  to match the root python package directory.

  I hit this when attempting to package ovn-octavia-provider for ubuntu.
  The following command fails to install any of the files under
  ovn_octavia_provider:

  python3.8 setup.py install --install-layout=deb --root /tmp/python3-ovn-octavia-provider

  and is fixed with the change above. That can be run on the cloned
  upstream source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864639] [NEW] [OVN] UpdateLRouterPortCommand and AddLRouterPortCommand needs to specify network

2020-02-25 Thread Maciej Jozefczyk
Public bug reported:

On the tempest gates there are a few failures related to a wrong value in
the networks column. It cannot be empty [1] - 'set of 1 or more strings'.


Logs:
Feb 24 23:41:49.108841 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): 
AddLRouterPortCommand(name=lrp-17de4d5e-18a4-42da-be41-897adb24629c, 
lrouter=neutron-830c8317-481b-48c0-89cd-a92ed2e654f3, may_exist=True, 
columns={'mac': 'fa:16:3e:88:43:c8', 'networks': [], 'external_ids': 
{'neutron:revision_number': '1', 'neutron:subnet_ids': '', 
'neutron:network_name': 'neutron-03f4c0b2-c9c7-4318-b1d2-85e1610e35df', 
'neutron:router_name': '830c8317-481b-48c0-89cd-a92ed2e654f3'}, 'options': {}, 
'gateway_chassis': ['39028b55-969f-4d32-bf43-a99fcf6a01ca']}) {{(pid=32265) 
do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
Feb 24 23:41:49.110213 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to write bad value to column 
networks (ovsdb error: 0 values when type requires between 1 and 
9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values when type 
requires between 1 and 9223372036854775807

Feb 24 23:42:22.181527 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): 
UpdateLRouterPortCommand(name=lrp-41627bfd-a84f-4ff0-99d2-c41d800bb97b, 
columns={'external_ids': {'neutron:revision_number': '1', 'neutron:subnet_ids': 
'', 'neutron:network_name': 'neutron-d8975d0f-780c-4c64-adf6-26e81d566b14', 
'neutron:router_name': 'd5941192-9084-4d33-8c05-9964e21749e2'}, 'options': {}, 
'networks': [], 'ipv6_ra_configs': {}}, if_exists=True) {{(pid=32270) do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
Feb 24 23:42:22.182042 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to write bad value to column 
networks (ovsdb error: 0 values when type requires between 1 and 
9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values when type 
requires between 1 and 9223372036854775807


[1] http://www.openvswitch.org/support/dist-docs/ovn-nb.5.txt
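
A defensive sketch of the constraint (the command names appear in the logs
above; the guard itself is illustrative, not the eventual patch):

    # Sketch: the NB schema requires Logical_Router_Port.networks to hold
    # at least one CIDR string, so validate before building the command.
    def validate_lrp_networks(networks):
        if not networks:
            raise ValueError(
                "Logical_Router_Port.networks needs at least one CIDR; "
                "got an empty list")
        return networks

    # validate_lrp_networks([])                -> raises ValueError
    # validate_lrp_networks(["10.0.0.1/24"])   -> returns the list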

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864639

Title:
  [OVN] UpdateLRouterPortCommand and AddLRouterPortCommand needs to
  specify network

Status in neutron:
  New

Bug description:
  On the tempest gates there are a few failures related to a wrong value
  in the networks column. It cannot be empty [1] - 'set of 1 or more strings'.

  
  Logs:
  Feb 24 23:41:49.108841 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=1): 
AddLRouterPortCommand(name=lrp-17de4d5e-18a4-42da-be41-897adb24629c, 
lrouter=neutron-830c8317-481b-48c0-89cd-a92ed2e654f3, may_exist=True, 
columns={'mac': 'fa:16:3e:88:43:c8', 'networks': [], 'external_ids': 
{'neutron:revision_number': '1', 'neutron:subnet_ids': '', 
'neutron:network_name': 'neutron-03f4c0b2-c9c7-4318-b1d2-85e1610e35df', 
'neutron:router_name': '830c8317-481b-48c0-89cd-a92ed2e654f3'}, 'options': {}, 
'gateway_chassis': ['39028b55-969f-4d32-bf43-a99fcf6a01ca']}) {{(pid=32265) 
do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
  Feb 24 23:41:49.110213 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to 
write bad value to column networks (ovsdb error: 0 values when type requires 
between 1 and 9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values 
when type requires between 1 and 9223372036854775807

  Feb 24 23:42:22.181527 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=1): 
UpdateLRouterPortCommand(name=lrp-41627bfd-a84f-4ff0-99d2-c41d800bb97b, 
columns={'external_ids': {'neutron:revision_number': '1', 'neutron:subnet_ids': 
'', 'neutron:network_name': 'neutron-d8975d0f-780c-4c64-adf6-26e81d566b14', 
'neutron:router_name': 'd5941192-9084-4d33-8c05-9964e21749e2'}, 'options': {}, 
'networks': [], 'ipv6_ra_configs': {}}, if_exists=True) {{(pid=32270) do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
  Feb 24 23:42:22.182042 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to 
write bad value to column networks (ovsdb error: 0 values when type requires 
between 1 and 9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values 
when type requires between 1 and 9223372036854775807


  
  [1] http://www.openvswitch.org/support/dist-docs/ovn-nb.5.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1864630] [NEW] Hard Reboot VM with multiple port lost QoS

2020-02-25 Thread Nguyen Thanh Cong
Public bug reported:

I have a VM with multiple ports, say port_one and port_two. Both ports
have QoS. When I hard reboot my VM, port_two still has QoS but port_one
loses it.

https://review.opendev.org/#/c/690098/11
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1630
This is because both ports are located in ports_re_added. The code loops
through the ports: port_one is iterated first, so events['re_added'] is
assigned port_one and events['removed'] is assigned port_two. In the
second iteration, events['re_added'] is overwritten with port_two instead
of port_two being appended to the list, so after the loop only port_two
is left in events['re_added'].
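
A standalone sketch of the suspected pattern and a possible fix (variable
names are illustrative, not the actual agent code):

    ports_re_added = ['port_one', 'port_two']
    events = {'re_added': [], 'removed': []}

    # Buggy pattern: assignment replaces the previous value on every
    # iteration, so only the last port survives in events['re_added'].
    for port in ports_re_added:
        events['re_added'] = [port]
    assert events['re_added'] == ['port_two']  # port_one was lost

    # Possible fix: append instead of overwriting, so every re-added
    # port keeps its entry and gets its QoS rules reapplied.
    events['re_added'] = []
    for port in ports_re_added:
        events['re_added'].append(port)
    assert events['re_added'] == ['port_one', 'port_two']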

Reproduce:
- Create a VM with multiple ports
- Hard reboot the server
- Check QoS with the command: ovs-vsctl list interface

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864630

Title:
  Hard Reboot VM with multiple port lost QoS

Status in neutron:
  New

Bug description:
  I have a VM with multiple ports, say port_one and port_two. Both ports
  have QoS. When I hard reboot my VM, port_two still has QoS but port_one
  loses it.

  https://review.opendev.org/#/c/690098/11
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1630
  This is because both ports are located in ports_re_added. The code loops
  through the ports: port_one is iterated first, so events['re_added'] is
  assigned port_one and events['removed'] is assigned port_two. In the
  second iteration, events['re_added'] is overwritten with port_two instead
  of port_two being appended to the list, so after the loop only port_two
  is left in events['re_added'].

  Reproduce:
  - Create a VM with multiple ports
  - Hard reboot the server
  - Check QoS with the command: ovs-vsctl list interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229445] Re: db type could not be determined

2020-02-25 Thread Martin Kopec
python2.7 and 3.3 are deprecated, as is testr (there is no
.testrepository folder anymore), which was replaced by stestr. Neither
the error nor the workaround is valid anymore.

** Changed in: tempest
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229445

Title:
  db type could not be determined

Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.versionedobjects:
  Won't Fix
Status in Python client library for Sahara:
  Invalid
Status in tempest:
  Invalid
Status in Testrepository:
  Triaged
Status in Zun:
  Fix Released

Bug description:
  In the openstack/python-novaclient project, if you run the tests in the
  py27 env and then in the py33 env, the following error stops the run:

  db type could not be determined

  But if you run "tox -e py33" first and then "tox -e py27", it is fine;
  no error.

  workaround:
  remove the file .testrepository/times.dbm, then run the py33 tests; they
  will pass.
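
  For reference, the workaround amounts to something like this sketch (the
  path is assumed relative to the project checkout):

    import os

    # Delete the stale testr timing database before switching
    # interpreter versions; harmless if the file is absent.
    times_db = os.path.join('.testrepository', 'times.dbm')
    if os.path.exists(times_db):
        os.remove(times_db)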

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1229445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864620] [NEW] [OVN] neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote often fails

2020-02-25 Thread Maciej Jozefczyk
Public bug reported:

We started to see
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote
failing often in the gate jobs.

Example failure:
https://369db73f1617af64c678-28a40d3de53ae200fe2797286e214883.ssl.cf5.rackcdn.com/709110/1/check/neutron-ovn-tempest-ovs-release/0feed71/testr_results.html
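
For context, the failing check boils down to running curl on the guest
over SSH (see the traceback below). Roughly, based on the names in the
traceback (not the plugin's actual code):

    # exec_command raises SSHExecCommandFailed on a non-zero exit
    # status, which surfaces below as curl's exit status 28
    # (connection timed out).
    def verify_http_connection(ssh_client, test_ip, port=80):
        cmd = ('curl http://%s:%d --retry 3 --connect-timeout 2'
               % (test_ip, port))
        return ssh_client.exec_command(cmd)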


Traceback (most recent call last):
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 385, in test_multiple_ports_portrange_remote
test_ip, port)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 59, in _verify_http_connection
raise e
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 51, in _verify_http_connection
ret = utils.call_url_remote(ssh_client, url)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 128, in call_url_remote
return ssh_client.exec_command(cmd)
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 311, 
in wrapped_f
return self.call(f, *args, **kw)
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 391, 
in call
do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 338, 
in iter
return fut.result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in 
__get_result
raise self._exception
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 394, 
in call
result = fn(*args, **kwargs)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/ssh.py", line 
178, in exec_command
return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 204, in exec_command
stderr=err_data, stdout=out_data)
neutron_tempest_plugin.common.utils.SSHExecCommandFailed: Command 'curl 
http://10.1.0.11:80 --retry 3 --connect-timeout 2' failed, exit status: 28, 
stderr:
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0Warning: Transient problem: timeout Will retry in 1 seconds. 3 retries left.

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0Warning: Transient problem: timeout Will retry in 2 seconds. 2 retries left.

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0Warning: Transient problem: timeout Will retry in 4 seconds. 1 retries left.

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0curl: (28) Connection timed out after 2002 milliseconds

stdout:

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864620

Title:
  [OVN]
  
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote
  often fails

Status in neutron:
  New

Bug description:
  We started to see
  neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote
  failing often in the gate jobs.

  Example failure:
  
https://369db73f1617af64c678-28a40d3de53ae200fe2797286e214883.ssl.cf5.rackcdn.com/709110/1/check/neutron-ovn-tempest-ovs-release/0feed71/testr_results.html

  
  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 385, in test_multiple_ports_portrange_remote
  test_ip, port)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 59, in _verify_http_connection
  raise e
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 51, in _verify_http_connection
  ret = utils.call_url_remote(ssh_client, url)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 128, in call_url_remote
  return ssh_client.exec_command(cmd)
File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 
311, in wrapped_f
  return self.call(f, *args, **kw)
File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.p