[Yahoo-eng-team] [Bug 2054435] [NEW] Testing errors when using new netaddr library

2024-02-20 Thread Dr. Jens Harbott
Public bug reported:

Failures can be seen in
neutron.tests.unit.agent.linux.openvswitch_firewall.test_firewall.TestConjIPFlowManager
unit tests with netaddr >= 1.0.0 (see e.g.
https://zuul.opendev.org/t/openstack/build/3f23859f8ce44ebbb41eda01b76d1d3b):

netaddr.core.AddrFormatError: '1' is not a valid IPv4 address string!

The code being executed is

  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 426, in _update_flows_for_vlan_subr
removed_ips = set([str(netaddr.IPNetwork(addr[0]).cidr) for addr in (
  ^^^

Debugging shows that at that moment addr=="10.22.3.4", so addr[0]=="1",
which newer netaddr rejects as an invalid address.
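The failure can be reproduced without neutron: indexing a string where a list of (address, mac) tuples was expected yields single characters, and strict parsers reject a bare "1". The sketch below uses the stdlib ipaddress module, which behaves like netaddr >= 1.0.0 here (older netaddr expanded partial addresses such as "1" to 0.0.0.1); the variable names are illustrative.

```python
import ipaddress

# The suspected bug pattern: addr should be a tuple like
# ("10.22.3.4", "fa:16:3e:..."), but is a plain string instead,
# so addr[0] is the first *character*, not the address.
addr = "10.22.3.4"
first = addr[0]                      # "1"

def is_valid_network(value):
    """Return True if value parses as an IP network under strict rules."""
    try:
        ipaddress.ip_network(value)
        return True
    except ValueError:
        return False

assert first == "1"
assert not is_valid_network(first)   # rejected, like netaddr >= 1.0.0
assert is_valid_network(addr)        # the full address parses fine
```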

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2054435

Title:
  Testing errors when using new netaddr library

Status in neutron:
  New

Bug description:
  Failures can be seen in
  
neutron.tests.unit.agent.linux.openvswitch_firewall.test_firewall.TestConjIPFlowManager
  unit tests with netaddr >= 1.0.0 (see e.g.
  https://zuul.opendev.org/t/openstack/build/3f23859f8ce44ebbb41eda01b76d1d3b):

  netaddr.core.AddrFormatError: '1' is not a valid IPv4 address string!

  The code being executed is

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 426, in _update_flows_for_vlan_subr
  removed_ips = set([str(netaddr.IPNetwork(addr[0]).cidr) for addr in (
^^^

  Debugging shows that at that moment addr=="10.22.3.4", so
  addr[0]=="1", which newer netaddr rejects as an invalid address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2054435/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2044215] Re: Designate in openstack kolla ansible latest version has issues with dns-integration-domain-keywords. Keyword is replaced by project_name instead of project_id even w

2023-11-22 Thread Dr. Jens Harbott
From the description this affects the dns-integration in neutron; this
is independent of designate.

** Project changed: designate => neutron

** Tags added: dns

** Summary changed:

- Designate in openstack kolla ansible latest version has issues with 
dns-integration-domain-keywords. Keyword is replaced by project_name instead of 
project_id even when it's written project_id as the keyword
+ dns: Keyword is replaced by project_name instead of project_id

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2044215

Title:
  dns: Keyword is replaced by project_name instead of project_id

Status in neutron:
  New

Bug description:
  Designate in openstack kolla ansible latest version has issues with
  dns-integration-domain-keywords. Keyword is replaced by project_name
  instead of project_id even when it's written project_id as the
  keyword.

  
  I have keycloak SSO integration enabled in OpenStack, and the user's
  email_id is configured as the project_name. In that situation, the email
  id ends up in the A records, for example test.myem...@gmail.com.xyz.com!

  This should not happen ever!
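The reported behaviour can be pinned down with a minimal substitution sketch. Note that expand_dns_domain and its signature are hypothetical illustration, not neutron's actual dns-integration-domain-keywords implementation: substitution should be driven solely by which keyword appears in the configured domain template.

```python
# Hypothetical sketch of keyword substitution in a dns_domain template;
# the function name and signature are illustrative, not neutron's API.
def expand_dns_domain(template, project_id, project_name):
    return (template
            .replace("<project_id>", project_id)
            .replace("<project_name>", project_name))

# With "<project_id>" in the template, the project id must be used,
# never the project name (here an email address coming from SSO).
domain = expand_dns_domain("<project_id>.xyz.com.",
                           project_id="abc123",
                           project_name="someone@example.com")
assert domain == "abc123.xyz.com."
```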

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2044215/+subscriptions




[Yahoo-eng-team] [Bug 2026345] Re: Sphinx raises 'ImageDraw' object has no attribute 'textsize' error

2023-10-06 Thread Dr. Jens Harbott
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2026345

Title:
  Sphinx raises 'ImageDraw' object has no attribute 'textsize' error

Status in Designate:
  Fix Released
Status in Ironic:
  New
Status in OpenStack Identity (keystone):
  New
Status in OpenStack Compute (nova):
  Confirmed
Status in tacker:
  New

Bug description:
  With Pillow version 10.0 or higher, sphinx raises an error.

  '''
   'ImageDraw' object has no attribute 'textsize'
  '''

  
  Tacker specs use sphinx and pillow to build some diagrams in .rst
  files. Pillow removed ImageDraw.textsize() in version 10.0 [1], but
  sphinx still uses ImageDraw.textsize().


  [1]
  https://pillow.readthedocs.io/en/stable/releasenotes/10.0.0.html#font-
  size-and-offset-methods
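Pillow 10 removed ImageDraw.textsize(); its documented replacement is ImageDraw.textbbox(), which returns a (left, top, right, bottom) box. A compatibility shim could look like the sketch below; this is illustrative, not the actual fix merged in the affected projects, and FakeDraw is a stand-in so the shim can run without Pillow installed.

```python
def textsize_compat(draw, text, font=None):
    """Return (width, height) of rendered text across Pillow versions.

    Pillow < 10 has ImageDraw.textsize(); Pillow >= 10 only has
    ImageDraw.textbbox(), which returns (left, top, right, bottom).
    Sketch only, not the fix used by the projects above.
    """
    if hasattr(draw, "textsize"):        # Pillow < 10
        return draw.textsize(text, font=font)
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    return right - left, bottom - top

# Minimal stand-in for an ImageDraw object (Pillow >= 10 API surface).
class FakeDraw:
    def textbbox(self, xy, text, font=None):
        return (0, 0, 7 * len(text), 11)   # pretend glyphs are 7x11

assert textsize_compat(FakeDraw(), "abc") == (21, 11)
```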

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/2026345/+subscriptions




[Yahoo-eng-team] [Bug 2036530] [NEW] wrong URLs in upgrade check

2023-09-19 Thread Dr. Jens Harbott
Public bug reported:

The `nova-status upgrade check` command can return a result like:

```

"+--------------------------------------------------------------------+",
"| Check: Service User Token Configuration                            |",
"| Result: Failure                                                    |",
"| Details: Service user token configuration is required for all Nova |",
"|   services. For more details see the following: https://docs       |",
"|   .openstack.org/latest/nova/admin/configuration/service-          |",
"|   user-token.html                                                  |",
```

but that URL gives a 404. The correct URL would be
https://docs.openstack.org/nova/latest/admin/configuration/service-user-
token.html
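The breakage is just the path order: docs.openstack.org URLs are structured as /&lt;project&gt;/&lt;release&gt;/..., while the check emits /latest/nova/.... A tiny illustrative check:

```python
# docs.openstack.org paths are /<project>/<release>/...; the upgrade
# check emitted the two segments swapped ("latest/nova" vs "nova/latest").
wrong = ("https://docs.openstack.org/latest/nova"
         "/admin/configuration/service-user-token.html")
correct = ("https://docs.openstack.org/nova/latest"
           "/admin/configuration/service-user-token.html")

# In the correct URL, path segments 3 and 4 are project then release.
assert correct.split("/")[3:5] == ["nova", "latest"]
assert wrong.split("/")[3:5] == ["latest", "nova"]
```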

** Affects: nova
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2036530

Title:
  wrong URLs in upgrade check

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The `nova-status upgrade check` command can return a result like:

  ```
  
  "+--------------------------------------------------------------------+",
  "| Check: Service User Token Configuration                            |",
  "| Result: Failure                                                    |",
  "| Details: Service user token configuration is required for all Nova |",
  "|   services. For more details see the following: https://docs       |",
  "|   .openstack.org/latest/nova/admin/configuration/service-          |",
  "|   user-token.html                                                  |",
  ```

  but that URL gives a 404. The correct URL would be
  https://docs.openstack.org/nova/latest/admin/configuration/service-
  user-token.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2036530/+subscriptions




[Yahoo-eng-team] [Bug 2033980] Re: Neutron fails to respawn radvd due to corrupt pid file

2023-09-18 Thread Dr. Jens Harbott
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033980

Title:
  Neutron fails to respawn radvd due to corrupt pid file

Status in kolla-ansible:
  New
Status in neutron:
  New

Bug description:
  **Bug Report**

  What happened:

  I have had issues periodically where radvd seems to die and neutron is
  not able to respawn it. I'm not sure why it dies.

  In my neutron-l3-agent.log, the following error occurs once per
  minute:

  ```
  2023-09-03 14:37:07.514 16 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd
  2023-09-03 14:37:07.514 16 ERROR neutron.agent.linux.external_process [-] 
radvd for router with uuid ea759c71-0f4d-4be9-a761-83843ce04d9a not found. The 
process should not have died
  2023-09-03 14:37:07.514 16 WARNING neutron.agent.linux.external_process [-] 
Respawning radvd for uuid ea759c71-0f4d-4be9-a761-83843ce04d9a
  2023-09-03 14:37:07.514 16 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd
  2023-09-03 14:37:07.762 16 ERROR neutron.agent.linux.utils [-] Exit code: 
255; Cmd: ['ip', 'netns', 'exec', 
'qrouter-ea759c71-0f4d-4be9-a761-83843ce04d9a', 'env', 
'PROCESS_TAG=radvd-ea759c71-0f4d-4be9-a761-83843ce04d9a', 'radvd', '-C', 
'/var/lib/neutron/ra/ea759c71-0f4d-4be9-a761-83843ce04d9a.radvd.conf', '-p', 
'/var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd',
 '-m', 'syslog', '-u', 'neutron']; Stdin: ; Stdout: ; Stderr:
  ```

  Inspecting the pid file, it appears to have 2 pids, one on each line:

  ```
  $ docker exec -it neutron_l3_agent cat 
/var/lib/neutron/external/pids/ea759c71-0f4d-4be9-a761-83843ce04d9a.pid.radvd
  853
  1161
  ```

  Deleting the file then properly respawns radvd:

  ```
  2023-09-03 14:38:07.515 16 ERROR neutron.agent.linux.external_process [-] 
radvd for router with uuid ea759c71-0f4d-4be9-a761-83843ce04d9a not found. The 
process should not have died
  2023-09-03 14:38:07.516 16 WARNING neutron.agent.linux.external_process [-] 
Respawning radvd for uuid ea759c71-0f4d-4be9-a761-83843ce04d9a
  ```

  What you expected to happen:

  Radvd is respawned without needing manual intervention. Likely what is
  meant to happen is that neutron should overwrite the pid in the file,
  whereas instead it appends to it. I'm not sure if this is a kolla issue
  or a neutron issue.
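The suspected append-vs-overwrite behaviour is easy to demonstrate in isolation (a file-handling sketch only, not neutron's external_process code):

```python
import os
import tempfile

# Sketch of the suspected failure: appending a new pid leaves the old
# one in place, so the file ends up with two lines and int() fails.
def write_pid(path, pid):
    # Truncating write ("w"), not append ("a"), guarantees a single pid.
    with open(path, "w") as f:
        f.write("%d\n" % pid)

path = os.path.join(tempfile.mkdtemp(), "router.pid.radvd")
write_pid(path, 853)
write_pid(path, 1161)            # respawn: must replace, not append
with open(path) as f:
    assert int(f.read()) == 1161   # exactly one pid survives
```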

  How to reproduce it (minimal and precise): Unsure, I'm not sure how
  radvd ends up dying in the first place. You could likely reproduce
  this by deploying kolla-ansible and then manually killing radvd.

  **Environment**:
  * OS (e.g. from /etc/os-release):
  NAME="Rocky Linux"
  VERSION="9.2 (Blue Onyx)"
  ID="rocky"
  ID_LIKE="rhel centos fedora"
  VERSION_ID="9.2"
  PLATFORM_ID="platform:el9"
  PRETTY_NAME="Rocky Linux 9.2 (Blue Onyx)"
  ANSI_COLOR="0;32"
  LOGO="fedora-logo-icon"
  CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
  HOME_URL="https://rockylinux.org/"
  BUG_REPORT_URL="https://bugs.rockylinux.org/"
  SUPPORT_END="2032-05-31"
  ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
  ROCKY_SUPPORT_PRODUCT_VERSION="9.2"
  REDHAT_SUPPORT_PRODUCT="Rocky Linux"
  REDHAT_SUPPORT_PRODUCT_VERSION="9.2"

  * Kernel (e.g. `uname -a`):
  Linux lon1 5.14.0-284.25.1.el9_2.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Aug 2 
14:53:30 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

  * Docker version if applicable (e.g. `docker version`):
  Client: Docker Engine - Community
   Version:   24.0.5
   API version:   1.43
   Go version:go1.20.6
   Git commit:ced0996
   Built: Fri Jul 21 20:36:54 2023
   OS/Arch:   linux/amd64
   Context:   default

  Server: Docker Engine - Community
   Engine:
Version:  24.0.5
API version:  1.43 (minimum version 1.12)
Go version:   go1.20.6
Git commit:   a61e2b4
Built:Fri Jul 21 20:35:17 2023
OS/Arch:  linux/amd64
Experimental: false
   containerd:
Version:  1.6.22
GitCommit:8165feabfdfe38c65b599c4993d227328c231fca
   runc:
Version:  1.1.8
GitCommit:v1.1.8-0-g82f18fe
   docker-init:
Version:  0.19.0
GitCommit:de40ad0

  * Kolla-Ansible version (e.g. `git head or tag or stable branch` or pip 
package version if using release):
  16.1.0 (stable/2023.1)

  * Docker image Install type (source/binary): Default installed by 
kolla-ansible
  * Docker image distribution: rocky
  * Are you using official images from Docker Hub or self built? official
  * If self built - Kolla version and environment used to build: not applicable
  * Share your inventory file, globals.yml and other configuration files if 
relevant: Likely not relevant.

To 

[Yahoo-eng-team] [Bug 2033556] [NEW] Documentation for DNS integration is incomplete

2023-08-30 Thread Dr. Jens Harbott
Public bug reported:

https://docs.openstack.org/neutron/latest/contributor/internals/external_dns_integration.html
should list in more detail the various available dns extensions and the
features that they provide.

It should also contain a clear warning that only one of the extensions
must be enabled.

https://docs.openstack.org/neutron/latest/admin/config-dns-int.html
should get a reference to that doc.

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: New


** Tags: dns

** Changed in: neutron
 Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033556

Title:
  Documentation for DNS integration is incomplete

Status in neutron:
  New

Bug description:
  
https://docs.openstack.org/neutron/latest/contributor/internals/external_dns_integration.html
  should list in more detail the various available dns extensions and
  the features that they provide.

  It should also contain a clear warning that only one of the extensions
  must be enabled.

  https://docs.openstack.org/neutron/latest/admin/config-dns-int.html
  should get a reference to that doc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033556/+subscriptions




[Yahoo-eng-team] [Bug 1424728] Re: Remove old rpc alias(es) from code

2023-07-10 Thread Dr. Jens Harbott
** Changed in: grenade
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424728

Title:
  Remove old rpc alias(es) from code

Status in Cinder:
  Fix Released
Status in CloudPulse:
  In Progress
Status in Designate:
  Fix Released
Status in grenade:
  Invalid
Status in IoTronic:
  In Progress
Status in Ironic:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  We have several TRANSPORT_ALIASES entries from way back (Essex, Havana)
  http://git.openstack.org/cgit/openstack/nova/tree/nova/rpc.py#n48

  We need a way to warn end users that they need to fix their nova.conf
  so these aliases can be removed in a later release (full cycle?).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1424728/+subscriptions




[Yahoo-eng-team] [Bug 2025096] Re: test_rebuild_volume_backed_server failing 100% on ceph job

2023-07-10 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2025096

Title:
  test_rebuild_volume_backed_server failing 100% on ceph job

Status in Cinder:
  Invalid
Status in devstack:
  Fix Released
Status in devstack-plugin-ceph:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Invalid

Bug description:
  There is some issue with password injection during the rebuild
  operation in the ceph job, and due to that the test is failing 100% on
  the ceph job.

  These tests pass on other jobs like tempest-full-py3.

  Failure logs:

  -
  
https://b932a1446345e101b3ef-4740624f0848c8c3257f704064a4516f.ssl.cf5.rackcdn.com/883557/2/check/nova-
  ceph-multistore/d7db64f/testr_results.html

  Response - Headers: {'date': 'Mon, 26 Jun 2023 01:07:28 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'x-openstack-request-id': 
'req-f707a2bb-a7c6-4e65-93e2-7cb8195dd331', 'connection': 'close', 'status': 
'204', 'content-location': 
'https://10.209.99.44:9696/networking/v2.0/security-groups/63dc9e50-2d05-4cfa-912d-92a3c9283e42'}
  Body: b''
  2023-06-26 01:07:28,442 108489 INFO [tempest.lib.common.rest_client] 
Request (ServerActionsV293TestJSON:_run_cleanups): 404 GET 
https://10.209.99.44:9696/networking/v2.0/security-groups/63dc9e50-2d05-4cfa-912d-92a3c9283e42
 0.034s
  2023-06-26 01:07:28,442 108489 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'date': 'Mon, 26 Jun 2023 01:07:28 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'content-type': 'application/json', 'content-length': 
'146', 'x-openstack-request-id': 'req-ae967163-b104-4ddf-b1e8-bb6da298b498', 
'connection': 'close', 'status': '404', 'content-location': 
'https://10.209.99.44:9696/networking/v2.0/security-groups/63dc9e50-2d05-4cfa-912d-92a3c9283e42'}
  Body: b'{"NeutronError": {"type": "SecurityGroupNotFound", "message": 
"Security group 63dc9e50-2d05-4cfa-912d-92a3c9283e42 does not exist", "detail": 
""}}'
  2023-06-26 01:07:29,135 108489 INFO [tempest.lib.common.rest_client] 
Request (ServerActionsV293TestJSON:_run_cleanups): 204 DELETE 
https://10.209.99.44:9696/networking/v2.0/floatingips/c6cc0747-06bd-4783-811d-2408a72db3cc
 0.692s
  2023-06-26 01:07:29,135 108489 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'date': 'Mon, 26 Jun 2023 01:07:29 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'x-openstack-request-id': 
'req-e0797282-5cc1-4d86-b2ec-696feed6369a', 'connection': 'close', 'status': 
'204', 'content-location': 
'https://10.209.99.44:9696/networking/v2.0/floatingips/c6cc0747-06bd-4783-811d-2408a72db3cc'}
  Body: b''
  }}}

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in 
_get_ssh_connection
  ssh.connect(self.host, port=self.port, username=self.username,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/paramiko/client.py",
 line 365, in connect
  sock.connect(addr)
  TimeoutError: timed out

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
927, in test_rebuild_volume_backed_server
  linux_client.validate_authentication()
File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 31, in wrapper
  return function(self, *args, **kwargs)
File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 123, in validate_authentication
  self.ssh_client.test_connection_auth()
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 245, in 
test_connection_auth
  connection = self._get_ssh_connection()
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 155, in 
_get_ssh_connection
  raise exceptions.SSHTimeout(host=self.host,
  tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.20 via SSH 
timed out.
  User: cirros, Password: rebuildPassw0rd

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2025096/+subscriptions




[Yahoo-eng-team] [Bug 2025486] Re: [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on ovn git clone

2023-07-10 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025486

Title:
  [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on
  ovn git clone

Status in devstack:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  Since 2023-06-30, the neutron-tempest-plugin-scenario-ovn-wallaby started to 
fail 100% in stable/wallaby backports:
  
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-plugin-scenario-ovn-wallaby&project=openstack/neutron

  
  Sample failure grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_288/887253/2/check/neutron-tempest-plugin-scenario-ovn-wallaby/288071d/job-output.txt

  2023-06-30 11:00:07.584319 | controller | + functions-common:git_timed:644
   :   timeout -s SIGINT 0 git clone https://github.com/ovn-org/ovn.git 
/opt/stack/ovn --branch 36e3ab9b47e93af0599a818e9d6b2930e49473f0
  2023-06-30 11:00:07.587213 | controller | Cloning into '/opt/stack/ovn'...
  2023-06-30 11:00:07.828809 | controller | fatal: Remote branch 
36e3ab9b47e93af0599a818e9d6b2930e49473f0 not found in upstream origin

  I think I recall some recent fixes (to devstack maybe) changing git
  clone/checkout; is this related and just a missing backport to wallaby?
  Newer branches are fine.
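The failing command passes a commit sha to `git clone --branch`, which only accepts branch or tag names. A devstack-style workaround (commands illustrative, using the sha from the log above) is to clone first and then fetch and check out the commit:

```
# --branch cannot take a bare sha; fetch the commit and check it out.
git clone https://github.com/ovn-org/ovn.git /opt/stack/ovn
cd /opt/stack/ovn
git fetch origin 36e3ab9b47e93af0599a818e9d6b2930e49473f0
git checkout 36e3ab9b47e93af0599a818e9d6b2930e49473f0
```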

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2025486/+subscriptions




[Yahoo-eng-team] [Bug 2024921] [NEW] Formalize use of subnet service-type for draining subnets

2023-06-23 Thread Dr. Jens Harbott
Public bug reported:

As documented in https://docs.openstack.org/neutron/latest/admin/config-
service-subnets.html, subnets can be assigned a service-type which
ensures that they are only used to allocate addresses to a specific
device owner. But the current implementation also allows this feature to
be used to ensure that no addresses at all are assigned from a subnet by
setting the service type to an invalid owner like "compute:bogus" or
"network:drain".

One use case for this is extending or reducing FIP pools in a
deployment. Assume there is a /24 in use as public subnet which is
running full. Adding a second /24 is possible, but will waste some IPs
for network, gateway and broadcast address. So the better solution will
be to add a /23, gradually migrate the existing users away from the /24
and finally remove the old /24. In order for this to be feasible, one
must prevent allocation from the old subnet to happen during the
migration phase. The same applies when an operator wants to reduce the
size of a pool.

Since the above solution is undocumented, it would be useful to make it
documented and thus ensure that this stays a dependable workflow for
operators. Maybe one can also define a well-known "bogus" owner that
could be added in case the verification of device owners was to be made
more strict. Having some functional testing for this scenario might be
an extra bonus.
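The drain workflow described above can be sketched with the CLI (subnet names and address ranges are examples; "network:drain" is simply an owner string that no real port will ever match):

```
# Add the new, larger pool alongside the old one.
openstack subnet create --network public --subnet-range 198.51.100.0/23 \
    --allocation-pool start=198.51.100.2,end=198.51.101.254 public-v4-new

# Stop new allocations from the old /24: give it a service-type that
# matches no real device owner. Existing ports keep their addresses.
openstack subnet set --service-type network:drain public-v4-old

# Once the last port has been migrated off the old subnet:
openstack subnet delete public-v4-old
```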

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024921

Title:
  Formalize use of subnet service-type for draining subnets

Status in neutron:
  New

Bug description:
  As documented in
  https://docs.openstack.org/neutron/latest/admin/config-service-
  subnets.html, subnets can be assigned a service-type which ensures
  that they are only used to allocate addresses to a specific device
  owner. But the current implementation also allows this feature to be
  used to ensure that no addresses at all are assigned from a subnet by
  setting the service type to an invalid owner like "compute:bogus" or
  "network:drain".

  One use case for this is extending or reducing FIP pools in a
  deployment. Assume there is a /24 in use as public subnet which is
  running full. Adding a second /24 is possible, but will waste some IPs
  for network, gateway and broadcast address. So the better solution
  will be to add a /23, gradually migrate the existing users away from
  the /24 and finally remove the old /24. In order for this to be
  feasible, one must prevent allocation from the old subnet to happen
  during the migration phase. The same applies when an operator wants to
  reduce the size of a pool.

  Since the above solution is undocumented, it would be useful to make
  it documented and thus ensure that this stays a dependable workflow
  for operators. Maybe one can also define a well-known "bogus" owner
  that could be added in case the verification of device owners was to
  be made more strict. Having some functional testing for this scenario
  might be an extra bonus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024921/+subscriptions




[Yahoo-eng-team] [Bug 1959666] Re: Neutron-dynamic-routing does not work with OVN

2023-06-20 Thread Dr. Jens Harbott
Ack, OVN+NDR is working in general now; some specific use cases may
still need additional work.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959666

Title:
  Neutron-dynamic-routing does not work with OVN

Status in OpenStack Neutron Dynamic Routing charm:
  New
Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released

Bug description:
  When using OVN as Neutron backend, announcing prefixes with neutron-
  dynamic-routing is currently not working due to changes in the
  database structure. Some attempt to fix this has been made in
  https://review.opendev.org/c/openstack/neutron-dynamic-
  routing/+/814055 but wasn't successful.

  This is a major stop gap for production deployments which are using
  BGP to provide connectivity for IPv6 subnets in tenant networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-dynamic-routing/+bug/1959666/+subscriptions




[Yahoo-eng-team] [Bug 2023632] [NEW] n-d-r: Peering failing on mixed IPv4 + IPv6 updates

2023-06-13 Thread Dr. Jens Harbott
Public bug reported:

Scenario: Having BGP peers defined for IPv6 to advertise tenant
networks, like for a standard deployment with public IPv6 connectivity.
When a router has both IPv4 and IPv6 subnets attached, updates like
adding a new IPv6 subnet also create updates for the IPv4 prefixes, but
these crash the peering:

Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: netaddr.core.AddrFormatError: 
base address '2001:db8::308' is not IPv4
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: During handling of the above 
exception, another exception occurred:
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: Traceback (most recent call 
last):
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/lib/hub.py", line 69, in _launch
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: return func(*args, 
**kwargs)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/services/protocols/bgp/base.py",
 line 253, in start
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: self._run(*args, **kwargs)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/services/protocols/bgp/peer.py",
 line 683, in _run
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: 
self._process_outgoing_msg_list()
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/services/protocols/bgp/peer.py",
 line 769, in _process_outgoing_msg_list
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: 
self._send_outgoing_route(outgoing_msg)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/services/protocols/bgp/peer.py",
 line 729, in _send_outgoing_route
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: 
self._protocol.send(update_msg)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/services/protocols/bgp/speaker.py",
 line 395, in send
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: self._send_with_lock(msg)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/services/protocols/bgp/speaker.py",
 line 384, in _send_with_lock
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: 
self._socket.sendall(msg.serialize())
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/lib/packet/bgp.py", line 5245, 
in serialize
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: tail = 
self.serialize_tail()
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/lib/packet/bgp.py", line 5465, 
in serialize_tail
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: binpathattrs += 
pa.serialize()
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/lib/packet/bgp.py", line 3661, 
in serialize
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: value = 
self.serialize_value()
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/lib/packet/bgp.py", line 3871, 
in serialize_value
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: 
addrconv.ipv4.text_to_bin(self.value))
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/os_ken/lib/addrconv.py", line 36, in 
text_to_bin
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: ip = self._fallback(text, 
**self._addr_kwargs)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/netaddr/ip/__init__.py", line 930, in 
__init__
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: value, prefixlen = 
parse_ip_network(_ipv4, addr,
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/netaddr/ip/__init__.py", line 803, in 
parse_ip_network
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: expanded_addr = 
_ipv4.expand_partial_address(val1)
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]:   File 
"/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", line 259, 
in expand_partial_address
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: raise error
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: netaddr.core.AddrFormatError: 
invalid partial IPv4 address: '2001:db8::308'!
Jun 13 08:06:03 vm2 neutron-bgp-dragent[3280003]: : 
netaddr.core.AddrFormatError: invalid partial IPv4 address: '2001:db8::308'!

Full logs attached.
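The root cause visible in the traceback is a type mismatch: an IPv6 string
('2001:db8::308') reaches os_ken's IPv4 address converter, and netaddr >= 1.0
rejects it instead of attempting a lenient "partial address" parse. A minimal
sketch of the same mismatch, using the stdlib `ipaddress` module rather than
netaddr or the actual os_ken code path:

```python
import ipaddress

# The dragent hands an IPv6 prefix to an IPv4-only converter; any
# strict IPv4 parser rejects such input.
addr = '2001:db8::308'
try:
    ipaddress.IPv4Address(addr)
    parsed_as_v4 = True
except ipaddress.AddressValueError as exc:
    parsed_as_v4 = False
    print('not IPv4:', exc)

# Parsed with the address family left open, the value is plainly IPv6,
# so the bug is in which converter gets called, not in the data.
version = ipaddress.ip_address(addr).version
print('actual version:', version)
```

Older netaddr happened to be more forgiving about malformed IPv4 input, which
is presumably why this path only started failing with the 1.0 release.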

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: New


** Tags: l3-bgp

** Tags added: l3-bgp

-- 
You received this bug notification because

[Yahoo-eng-team] [Bug 1841788] Re: neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin DBError

2023-05-23 Thread Dr. Jens Harbott
The original issue seems to have been related to python2.7, which is no
longer supported. It was also never reproduced. Please reopen if you
still see this issue and have a way to reproduce it.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841788

Title:
  neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin  DBError

Status in neutron:
  Invalid

Bug description:
  It would appear that the bgp agent does not recognize translate
  attributes

  
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager 
[req-c5bda00a-2fb2-4f1e-8583-cfa842c97d30 1034301cea4d41c2ae979cc80d0c9221 
44651bdb0d7a4d28adecd7653d39a38c - default default] Error during notification 
for 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin.port_callback--9223372036854769834
 port, after_update: DBError: 'result' object has no attribute 'translate'
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py", line 197, 
in _notify_loop
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/bgp_plugin.py",
 line 376, in port_callback
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager routes 
= self.get_advertised_routes(ctx, bgp_speaker)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/bgp_plugin.py",
 line 225, in get_advertised_routes
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager 
bgp_speaker_id)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 
315, in get_advertised_routes
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager routes 
= self.get_routes_by_bgp_speaker_id(context, bgp_speaker_id)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 
477, in get_routes_by_bgp_speaker_id
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager 
bgp_speaker_id)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 
864, in _get_tenant_network_routes_by_bgp_speaker
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager 
bgp_speaker_id)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 
921, in _tenant_networks_by_bgp_speaker_query
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager 
bgp_speaker_id)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 
470, in _get_address_scope_ids_for_bgp_speaker
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager return 
[scope.id for scope in query.all()]
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2925, in all
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager return 
list(self)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 3081, in 
__iter__
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager return 
self._execute_and_instances(context)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 3106, in 
_execute_and_instances
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager result 
= conn.execute(querycontext.statement, self._params)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 980, in 
execute
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager return 
meth(self, multiparams, params)
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 273, in 
_execute_on_connection
  2019-08-28 09:56:26.025 46862 ERROR neutron_lib.callbacks.manager return 
connection._execute_clauseelement(self, multiparams, params)
  2019-08-28 09:56:26.025 46862 ERROR 

[Yahoo-eng-team] [Bug 1898634] Re: BGP peer is not working

2023-05-18 Thread Dr. Jens Harbott
Seems this should be resolved by now.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898634

Title:
  BGP peer is not working

Status in neutron:
  Fix Released

Bug description:
  I'm trying to configure dynamic routing, but when I associate provider
  network with the bgp speaker I start to receive these errors:

  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in 
_process_incoming
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 276, 
in dispatch
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 196, 
in _do_dispatch
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/api/rpc/handlers/bgp_speaker_rpc.py",
 line 65, in get_bgp_speakers
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server return 
self.plugin.get_bgp_speakers_for_agent_host(context, host)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_dragentscheduler_db.py",
 line 263, in get_bgp_speakers_for_agent_host
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server context, 
binding['bgp_speaker_id'])
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 165, in get_bgp_speaker_with_advertised_routes
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
bgp_speaker_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 479, in get_routes_by_bgp_speaker_id
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
bgp_speaker_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 673, in _get_central_fip_host_routes_by_bgp_speaker
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
l3_db.Router.id == router_attrs.router_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2259, in 
outerjoin
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
from_joinpoint=from_joinpoint,
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"<string>", line 2, in _join
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 220, in 
generate
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server fn(self, 
*args[1:], **kw)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2414, in 
_join
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server left, 
right, onclause, prop, create_aliases, outerjoin, full
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2437, in 
_join_left_to_right
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server ) = 
self._join_determine_implicit_left_side(left, right, onclause)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2526, in 
_join_determine_implicit_left_side
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server "Can't 
determine which FROM clause to join "
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join 
from, there are multiple FROMS which can join to this entity. Try adding an 
explicit ON clause to help resolve the ambiguity.
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server


  
  I made a manual installation (Ussuri). Couldn't find any workaround.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 2016414] [NEW] Message when creating new neutron bugs has outdated links

2023-04-16 Thread Dr. Jens Harbott
Public bug reported:

When creating a new neutron bug, this info message is shown:

-+-+-+-
Thanks for reporting a bug, the Neutron team loves you!

Please check our defect management process, to see what to expect next:

http://docs.openstack.org/developer/neutron/policies/bugs.html#bug-triage-process
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
-+-+-+-

Both of these links are outdated, they simply redirect to the top-level
neutron docs. These would seem to be the matching current links:

https://docs.openstack.org/neutron/latest/contributor/policies/bugs.html#bug-triage-process
https://docs.openstack.org/neutron/latest/contributor/policies/blueprints.html#neutron-request-for-feature-enhancements

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2016414

Title:
  Message when creating new neutron bugs has outdated links

Status in neutron:
  New

Bug description:
  When creating a new neutron bug, this info message is shown:

  -+-+-+-
  Thanks for reporting a bug, the Neutron team loves you!

  Please check our defect management process, to see what to expect
  next:

  
http://docs.openstack.org/developer/neutron/policies/bugs.html#bug-triage-process
  
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
  -+-+-+-

  Both of these links are outdated, they simply redirect to the top-
  level neutron docs. These would seem to be the matching current links:

  
https://docs.openstack.org/neutron/latest/contributor/policies/bugs.html#bug-triage-process
  
https://docs.openstack.org/neutron/latest/contributor/policies/blueprints.html#neutron-request-for-feature-enhancements

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2016414/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2016413] [NEW] api-ref: missing documentation for subnet-onboard

2023-04-16 Thread Dr. Jens Harbott
Public bug reported:

The subnet-onboard API extension should be mentioned in the API ref for
subnet pools together with the API call for
networking/v2.0/subnetpools/<subnetpool_id>/onboard_network_subnets

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2016413

Title:
  api-ref: missing documentation for subnet-onboard

Status in neutron:
  New

Bug description:
  The subnet-onboard API extension should be mentioned in the API ref
  for subnet pools together with the API call for
  networking/v2.0/subnetpools/<subnetpool_id>/onboard_network_subnets

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2016413/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967268] Re: glance install with `USE_VENV=True` fails

2023-03-31 Thread Dr. Jens Harbott
Not sure why this was reopened.

** Changed in: devstack
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1967268

Title:
  glance install with `USE_VENV=True` fails

Status in devstack:
  Won't Fix
Status in Glance:
  New

Bug description:
  On a fresh Fedora 35 installation `devstack` installation fails with 
`USE_VENV=True`.
  Current blocking bug is with `glance`:
  ```
  cp: cannot stat '/usr/local/etc/glance/rootwrap.*': No such file or directory
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1967268/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998927] Re: openstack server add fixed ip --fixed-ip does not set correct ip address

2022-12-06 Thread Dr. Jens Harbott
Regression introduced by https://review.opendev.org/c/openstack/python-
openstackclient/+/820050

** Project changed: nova => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1998927

Title:
  openstack server add fixed ip --fixed-ip does not set correct ip
  address

Status in python-openstackclient:
  Confirmed

Bug description:
  Description
  ===
  When trying to attach a fixed IP (be it a provider IP) to an instance
directly, I see different behaviour between `nova interface-attach` and
`openstack server add fixed ip`: the former assigns the requested IP
successfully, whereas the latter assigns a different IP in the same network.

  Steps to reproduce
  ==
  1. Create VM
  2. Attach interface with `nova --debug interface-attach --net-id <net-id>
--fixed-ip <fixed-ip> <server>`
  3. Remove interface with `openstack server remove network <server> <network>`
  4. Attach interface with `openstack --debug server add fixed ip <server>
<network> --fixed-ip-address <ip-address>`

  Expected result
  ===
  Both 2) and 4) should lead to the requested IP be attached

  Actual result
  =
  Only 2) assigns requested ip. 4) assigns a different ip in the same network.

  Environment
  ===
  1. Exact version of OpenStack you are running? Yoga (nova 25.0.1)

  2. Which hypervisor did you use? Libvirt 8.0.0 + KVM 4.2.1
 What's the version of that? 

  2. Which storage type did you use? Ceph

  3. Which networking type did you use? Neutron with OVN

  Logs & Configs
  ==

  Please see attached file for debug logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1998927/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1990809] Re: multinode setup, devstack scheduler fails to start after controller restart

2022-10-11 Thread Dr. Jens Harbott
Devstack isn't meant to be rebooted. If you can come up with a patch to
improve this issue, we will review it, but otherwise redeploying after a
reboot is the expected solution.

** Changed in: devstack
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990809

Title:
  multinode setup, devstack scheduler fails to start after controller
  restart

Status in devstack:
  Won't Fix
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In multinode devstack setup nova scheduler fails to start after reboot


  Steps to reproduce
  ==

  1 - deploy multinode devstack
  https://docs.openstack.org/devstack/latest/guides/multinode-lab.html

  2 - Verify all compute nodes are listed and setup is working as expected
  $ openstack compute service list
  
  create vm, assign floating IP and access VM

  3 - Restart compute nodes, and controller node
  $ sudo init 6

  4 - Once controller and all other nodes are rebooted, check whether all nova 
services are running
  $ openstack compute service list

  $ sudo systemctl status devstack@n-*


  Expected result
  ===
  $ sudo systemctl status devstack@n-*

  All services should be running

  
  $ openstack compute service list

  openstack cmds should run without a issue,


  Actual result
  =
  nova-schduler fails to start with error:
  
  Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova 
self._init_plugins(extensions)
  Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova   
File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 113, in 
_init_plugins
  Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova 
raise NoMatches('No %r driver found, looking for %r' %
  Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova 
stevedore.exception.NoMatches: No 'nova.scheduler.driver' driver found, looking 
for 'filter_scheduler'
  Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova 
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: INFO 
oslo_service.periodic_task [-] Skipping periodic task _discover_hosts_in_cells 
because its interval is negative
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: WARNING 
stevedore.named [-] Could not load filter_scheduler
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: CRITICAL nova 
[-] Unhandled error: stevedore.exception.NoMatches: No 'nova.scheduler.driver' 
driver found, looking for 'filter_scheduler'
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova 
Traceback (most recent call last):
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/usr/local/bin/nova-scheduler", line 10, in <module>
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 sys.exit(main())
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/opt/stack/nova/nova/cmd/scheduler.py", line 47, in main
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 server = service.Service.create(binary='nova-scheduler',
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/opt/stack/nova/nova/service.py", line 252, in create
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 service_obj = cls(host, binary, topic, manager,
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/opt/stack/nova/nova/service.py", line 116, in __init__
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 self.manager = manager_class(host=self.host, *args, **kwargs)
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/opt/stack/nova/nova/scheduler/manager.py", line 60, in __init__
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 self.driver = driver.DriverManager(
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 54, in 
__init__
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 super(DriverManager, self).__init__(
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 89, in 
__init__
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
 self._init_plugins(extensions)
  Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   
File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 113, in 
_init_plugins
  Sep 26 05:09:16 
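The `NoMatches` failure above comes from stevedore finding no installed package
that advertises a `filter_scheduler` entry point under the
`nova.scheduler.driver` namespace. A rough stdlib-only sketch of that lookup
(`find_driver` is a hypothetical helper, not stevedore's API):

```python
import importlib.metadata as md

def find_driver(group, name):
    """Scan entry points advertised under a namespace, the way
    stevedore's DriverManager (roughly) does, and fail when none
    match."""
    eps = md.entry_points()
    if hasattr(eps, 'select'):          # Python >= 3.10
        candidates = eps.select(group=group)
    else:                               # older dict-style API
        candidates = eps.get(group, [])
    for ep in candidates:
        if ep.name == name:
            return ep
    raise LookupError(f"No {group!r} driver found, looking for {name!r}")

# On a machine where nova's package metadata is not importable, the
# lookup fails exactly like the scheduler traceback above.
try:
    driver = find_driver('nova.scheduler.driver', 'filter_scheduler')
except LookupError as exc:
    driver = None
    print(exc)
```

Entry points are read from installed package metadata, so anything that makes
nova's metadata invisible to the interpreter after the reboot would produce
this error.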

[Yahoo-eng-team] [Bug 1967268] Re: glance install with `USE_VENV=True` fails

2022-09-24 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1967268

Title:
  glance install with `USE_VENV=True` fails

Status in devstack:
  Won't Fix
Status in Glance:
  New

Bug description:
  On a fresh Fedora 35 installation `devstack` installation fails with 
`USE_VENV=True`.
  Current blocking bug is with `glance`:
  ```
  cp: cannot stat '/usr/local/etc/glance/rootwrap.*': No such file or directory
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1967268/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894981] Re: [neutron-dynamic-routing] The self-service network can be bound to the bgp speaker

2022-09-16 Thread Dr. Jens Harbott
There are valid use cases for that scenario.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1894981

Title:
  [neutron-dynamic-routing] The self-service network can be bound to the
  bgp speaker

Status in neutron:
  Invalid

Bug description:
  How to reproduce this problem:

  1.create a bgp speaker(name:bgpspeaker)
  2.create a self-service network(name:selfservice)
  3.bgp speaker binds a network
  neutron bgp-speaker-network-add bgpspeaker selfservice  #Added successfully

  According to[1], neutron dynamic routing enables advertisement of
  private network prefixes to physical network.

  [1] https://docs.openstack.org/neutron-dynamic-
  routing/latest/admin/system-design.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1894981/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988026] [NEW] Neutron should not create security group with project==None

2022-08-29 Thread Dr. Jens Harbott
Public bug reported:

When a non-admin user tries to list security groups for project_id
"None", Neutron creates a default security group for that project and
returns an empty list to the caller.

To reproduce:

openstack --os-cloud devstack security group list --project None
openstack --os-cloud devstack-admin security group list

The API call that is made is essentially

GET /networking/v2.0/security-groups?project_id=None

The expected result would be an authorization failure, since normal
users should not be allowed to list security groups for other projects.
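One likely mechanism (an assumption, not confirmed in the report): query-string
values are plain text, so `project_id=None` arrives as the literal
four-character string "None", which Neutron then treats as an ordinary,
nonexistent project id. A quick stdlib sketch:

```python
from urllib.parse import parse_qs

# The value survives parsing as a real string, not a Python null --
# there is nothing at the HTTP layer that could turn it into None.
params = parse_qs('project_id=None')
print(params['project_id'])
```

Given the bug's description, filtering on that bogus project id then triggers
the default-security-group creation path for a project that does not exist.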

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988026

Title:
  Neutron should not create security group with project==None

Status in neutron:
  New

Bug description:
  When a non-admin user tries to list security groups for project_id
  "None", Neutron creates a default security group for that project and
  returns an empty list to the caller.

  To reproduce:

  openstack --os-cloud devstack security group list --project None
  openstack --os-cloud devstack-admin security group list

  The API call that is made is essentially

  GET /networking/v2.0/security-groups?project_id=None

  The expected result would be an authorization failure, since normal
  users should not be allowed to list security groups for other
  projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988026/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978564] [NEW] Unit tests fail with latest pyroute2 releases

2022-06-14 Thread Dr. Jens Harbott
5, in _dot_lookup
2022-06-14 03:53:43.008024 | ubuntu-focal | return getattr(thing, comp)
2022-06-14 03:53:43.008034 | ubuntu-focal |
2022-06-14 03:53:43.008047 | ubuntu-focal | AttributeError: module 
'pyroute2' has no attribute 'netlink'
2022-06-14 03:53:43.008057 | ubuntu-focal |
2022-06-14 03:53:43.008067 | ubuntu-focal |
2022-06-14 03:53:43.008079 | ubuntu-focal | During handling of the above 
exception, another exception occurred:
2022-06-14 03:53:43.008088 | ubuntu-focal |
2022-06-14 03:53:43.008098 | ubuntu-focal |
2022-06-14 03:53:43.008108 | ubuntu-focal | Traceback (most recent call 
last):
2022-06-14 03:53:43.008117 | ubuntu-focal |
2022-06-14 03:53:43.008127 | ubuntu-focal |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, 
in func
2022-06-14 03:53:43.008144 | ubuntu-focal | return f(self, *args, **kwargs)
2022-06-14 03:53:43.008155 | ubuntu-focal |
2022-06-14 03:53:43.008164 | ubuntu-focal |   File 
"/usr/lib/python3.8/unittest/mock.py", line 1322, in patched
2022-06-14 03:53:43.008174 | ubuntu-focal | with 
self.decoration_helper(patched,
2022-06-14 03:53:43.008184 | ubuntu-focal |
2022-06-14 03:53:43.008198 | ubuntu-focal |   File 
"/usr/lib/python3.8/contextlib.py", line 113, in __enter__
2022-06-14 03:53:43.008208 | ubuntu-focal | return next(self.gen)
2022-06-14 03:53:43.008217 | ubuntu-focal |
2022-06-14 03:53:43.008227 | ubuntu-focal |   File 
"/usr/lib/python3.8/unittest/mock.py", line 1304, in decoration_helper
2022-06-14 03:53:43.008237 | ubuntu-focal | arg = 
exit_stack.enter_context(patching)
2022-06-14 03:53:43.008246 | ubuntu-focal |
2022-06-14 03:53:43.008256 | ubuntu-focal |   File 
"/usr/lib/python3.8/contextlib.py", line 425, in enter_context
2022-06-14 03:53:43.008266 | ubuntu-focal | result = _cm_type.__enter__(cm)
2022-06-14 03:53:43.008275 | ubuntu-focal |
2022-06-14 03:53:43.008285 | ubuntu-focal |   File 
"/usr/lib/python3.8/unittest/mock.py", line 1377, in __enter__
2022-06-14 03:53:43.008295 | ubuntu-focal | self.target = self.getter()
2022-06-14 03:53:43.008305 | ubuntu-focal |
2022-06-14 03:53:43.008315 | ubuntu-focal |   File 
"/usr/lib/python3.8/unittest/mock.py", line 1552, in <lambda>
2022-06-14 03:53:43.008325 | ubuntu-focal | getter = lambda: 
_importer(target)
2022-06-14 03:53:43.008339 | ubuntu-focal |
2022-06-14 03:53:43.008349 | ubuntu-focal |   File 
"/usr/lib/python3.8/unittest/mock.py", line 1228, in _importer
2022-06-14 03:53:43.008359 | ubuntu-focal | thing = _dot_lookup(thing, 
comp, import_path)
2022-06-14 03:53:43.008369 | ubuntu-focal |
2022-06-14 03:53:43.008379 | ubuntu-focal |   File 
"/usr/lib/python3.8/unittest/mock.py", line 1218, in _dot_lookup
2022-06-14 03:53:43.008388 | ubuntu-focal | return getattr(thing, comp)
2022-06-14 03:53:43.008398 | ubuntu-focal |
2022-06-14 03:53:43.008408 | ubuntu-focal | AttributeError: module 
'pyroute2' has no attribute 'netlink'
2022-06-14 03:53:43.008418 | ubuntu-focal |
2022-06-14 03:53:43.008428 | ubuntu-focal |
2022-06-14 03:53:43.008437 | ubuntu-focal | 
neutron.tests.unit.privileged.agent.linux.test_ip_lib.IpLibTestCase.test_get_link_vfs
2022-06-14 03:53:43.008447 | ubuntu-focal | 
-
2022-06-14 03:53:43.008457 | ubuntu-focal |
2022-06-14 03:53:43.008466 | ubuntu-focal | Captured traceback:
2022-06-14 03:53:43.008476 | ubuntu-focal | ~~~
2022-06-14 03:53:43.008486 | ubuntu-focal | Traceback (most recent call 
last):
2022-06-14 03:53:43.008495 | ubuntu-focal |
2022-06-14 03:53:43.008505 | ubuntu-focal |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, 
in func
2022-06-14 03:53:43.008515 | ubuntu-focal | return f(self, *args, **kwargs)
2022-06-14 03:53:43.008525 | ubuntu-focal |
2022-06-14 03:53:43.008547 | ubuntu-focal |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/privileged/agent/linux/test_ip_lib.py",
 line 239, in test_get_link_vfs
2022-06-14 03:53:43.008559 | ubuntu-focal | 
vf_info.append(pyroute2.netlink.nlmsg_base())
2022-06-14 03:53:43.008569 | ubuntu-focal |
2022-06-14 03:53:43.008578 | ubuntu-focal | AttributeError: module 
'pyroute2' has no attribute 'netlink'
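The failure is generic Python behaviour rather than anything
pyroute2-specific: `mock.patch` resolves dotted targets with `getattr`, and a
subpackage is not an attribute of its parent package until it has been
imported. A sketch using the stdlib `xml` package as a stand-in (pyroute2
itself is not assumed to be installed here):

```python
import importlib
import sys
import xml  # stand-in parent package for the demonstration

# Newer pyroute2 releases stopped importing their submodules eagerly,
# so getattr(pyroute2, 'netlink') raises AttributeError in mock's
# _dot_lookup.  The same is true of any package whose submodule has
# not been imported yet:
if 'xml.etree' not in sys.modules:
    print('visible before explicit import:', hasattr(xml, 'etree'))

# Importing the submodule explicitly registers it as an attribute of
# the parent -- the usual fix is an explicit `import pyroute2.netlink`
# before patching.
importlib.import_module('xml.etree')
present_after = hasattr(xml, 'etree')
print('visible after explicit import:', present_after)
```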

** Affects: neutron
     Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978564

Title:
  Unit tests fail with latest pyroute2 releases

Status in neutron:
  In Progress

[Yahoo-eng-team] [Bug 1960346] Re: Volume detach failure in devstack-platform-centos-9-stream job

2022-02-10 Thread Dr. Jens Harbott
** Project changed: devstack => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1960346

Title:
  Volume detach failure in devstack-platform-centos-9-stream job

Status in OpenStack Compute (nova):
  New

Bug description:
  devstack-platform-centos-9-stream job is failing 100% in the compute
server rescue test, with a volume detach error:

  traceback-1: {{{
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/waiters.py", line 316, in 
wait_for_volume_resource_status
  raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: volume 70cedb4b-e74d-4a86-a73d-ba8bce29bc99 failed to reach 
available status (current in-use) within the required time (196 s).
  }}}

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/waiters.py", line 384, in 
wait_for_volume_attachment_remove_from_server
  raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Volume 70cedb4b-e74d-4a86-a73d-ba8bce29bc99 failed to detach from 
server cf57d12b-5e37-431e-8c71-4a7149e963ae within the required time (196 s) 
from the compute API perspective

  
https://a886e0e70a23f464643f-7cd608bf14cafb686390b86bc06cde2a.ssl.cf1.rackcdn.com/827576/6/check/devstack-
  platform-centos-9-stream/53de74e/testr_results.html

  
  
https://zuul.openstack.org/builds?job_name=devstack-platform-centos-9-stream=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1960346/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1959666] [NEW] Neutron-dynamic-routing does not work with OVN

2022-02-01 Thread Dr. Jens Harbott
Public bug reported:

When using OVN as Neutron backend, announcing prefixes with neutron-
dynamic-routing is currently not working due to changes in the database
structure. Some attempt to fix this has been made in
https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/814055
but wasn't successful.

This is a major blocker for production deployments that use BGP to
provide connectivity for IPv6 subnets in tenant networks.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6 l3-bgp ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959666

Title:
  Neutron-dynamic-routing does not work with OVN

Status in neutron:
  New

Bug description:
  When using OVN as Neutron backend, announcing prefixes with neutron-
  dynamic-routing is currently not working due to changes in the
  database structure. Some attempt to fix this has been made in
  https://review.opendev.org/c/openstack/neutron-dynamic-
  routing/+/814055 but wasn't successful.

  This is a major blocker for production deployments that use BGP to
  provide connectivity for IPv6 subnets in tenant networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959666/+subscriptions




[Yahoo-eng-team] [Bug 1951872] [NEW] OVN: Missing reverse DNS for instances

2021-11-22 Thread Dr. Jens Harbott
Public bug reported:

When using OVN, reverse DNS for instances is not working. With dhcp-
agent:

ubuntu@vm1:~$ host 10.0.0.11 10.0.0.3
Using domain server:
Name: 10.0.0.3
Address: 10.0.0.3#53
Aliases: 

11.0.0.10.in-addr.arpa domain name pointer vm3.openstackgate.local.

With OVN:

ubuntu@vm1:~$ host 10.0.0.11 8.8.8.8
Using domain server:
Name: 8.8.8.8
Address: 8.8.8.8#53
Aliases: 

Host 11.0.0.10.in-addr.arpa. not found: 3(NXDOMAIN)

Expected result: Get the same answer as with ML2/OVS.
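For reference, the PTR owner name that the dhcp-agent setup resolves above can
be derived from the instance's fixed IP with the Python stdlib:

```python
import ipaddress

# Reverse-DNS (PTR) name for the instance's fixed IP from the example:
ptr_name = ipaddress.ip_address("10.0.0.11").reverse_pointer
print(ptr_name)  # 11.0.0.10.in-addr.arpa
```

This is the record that OVN's DNS responder would need to answer for the
lookup to match the ML2/OVS behaviour.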

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951872

Title:
  OVN: Missing reverse DNS for instances

Status in neutron:
  New

Bug description:
  When using OVN, reverse DNS for instances is not working. With dhcp-
  agent:

  ubuntu@vm1:~$ host 10.0.0.11 10.0.0.3
  Using domain server:
  Name: 10.0.0.3
  Address: 10.0.0.3#53
  Aliases: 

  11.0.0.10.in-addr.arpa domain name pointer vm3.openstackgate.local.

  With OVN:

  ubuntu@vm1:~$ host 10.0.0.11 8.8.8.8
  Using domain server:
  Name: 8.8.8.8
  Address: 8.8.8.8#53
  Aliases: 

  Host 11.0.0.10.in-addr.arpa. not found: 3(NXDOMAIN)

  Expected result: Get the same answer as with ML2/OVS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951872/+subscriptions




[Yahoo-eng-team] [Bug 1951816] [NEW] [OVN] setting an IPv6 address in dns_servers is broken

2021-11-22 Thread Dr. Jens Harbott
Public bug reported:

When listing an IPv6 address in dns_servers, its last four octets are
being added to DHCPv4 replies. The expected result is that the address
is added to DHCPv6 replies.
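One plausible mechanism for the symptom (an assumption, not confirmed against
the OVN code): the 16-byte IPv6 address ends up in a code path that only reads
4 bytes, so its last four octets surface in DHCPv4 replies. Sketched with the
stdlib:

```python
import ipaddress

addr = ipaddress.ip_address("2001:db8::1")

# Reading only the final 4 of the 16 packed bytes as an IPv4 address
# reproduces the kind of bogus value described above.
bogus_v4 = ipaddress.IPv4Address(addr.packed[-4:])
print(bogus_v4)  # 0.0.0.1
```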

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951816

Title:
  [OVN] setting an IPv6 address in dns_servers is broken

Status in neutron:
  New

Bug description:
  When listing an IPv6 address in dns_servers, its last four octets are
  being added to DHCPv4 replies. The expected result is that the address
  is added to DHCPv6 replies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951816/+subscriptions




[Yahoo-eng-team] [Bug 1951074] [NEW] [OVN] default settings leak nameserver config from the host to instances

2021-11-16 Thread Dr. Jens Harbott
Public bug reported:

Using the default settings, i.e. without [ovn]dns_servers being
specified in ml2_conf.ini, OVN will send the nameserver addresses that
are specified in /etc/resolv.conf on the host in DHCP responses. This
may lead to unexpected leaks about the host infrastructure and thus
should at least be well documented. In most cases it will also lead to
broken DNS resolution for the instances, since when systemd-resolved is
being used, the host's nameserver address will be 127.0.0.53, and an
instance will not be able to resolve anything using that address.

Possibly a better approach would be to not send any nameserver
information via DHCP in this scenario.
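A minimal sketch of the suggested behaviour (hypothetical helper, not
Neutron/OVN code): parse resolv.conf-style content and drop loopback
resolvers such as systemd-resolved's 127.0.0.53 before advertising anything
via DHCP.

```python
import ipaddress

def advertisable_resolvers(resolv_conf_text):
    """Return nameserver entries that make sense to hand out via DHCP,
    dropping loopback addresses that instances cannot reach."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            try:
                ip = ipaddress.ip_address(parts[1])
            except ValueError:
                continue  # skip malformed entries
            if not ip.is_loopback:
                servers.append(parts[1])
    return servers
```

With a host running systemd-resolved, this would return an empty list, which
matches the "send no nameserver information" approach proposed above.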

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dns ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951074

Title:
  [OVN] default settings leak nameserver config from the host to
  instances

Status in neutron:
  New

Bug description:
  Using the default settings, i.e. without [ovn]dns_servers being
  specified in ml2_conf.ini, OVN will send the nameserver addresses that
  are specified in /etc/resolv.conf on the host in DHCP responses. This
  may lead to unexpected leaks about the host infrastructure and thus
  should at least be well documented. In most cases it will also lead to
  broken DNS resolution for the instances, since when systemd-resolved is
  being used, the host's nameserver address will be 127.0.0.53, and an
  instance will not be able to resolve anything using that address.

  Possibly a better approach would be to not send any nameserver
  information via DHCP in this scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951074/+subscriptions




[Yahoo-eng-team] [Bug 1950686] [NEW] [OVN] dns-nameserver=0.0.0.0 for a subnet isn't treated properly

2021-11-11 Thread Dr. Jens Harbott
Public bug reported:

As documented in https://docs.openstack.org/neutron/latest/admin/config-
dns-res.html#case-1-each-virtual-network-uses-unique-dns-resolver-s ,
setting dns-nameserver=0.0.0.0 for a subnet should indicate that DHCP
should not advertise any DNS server to instances on that subnet. This
works fine with LB or OVS, but with OVN, instead the IP 0.0.0.0 is
advertised as nameserver to instances.
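The documented contract can be sketched as a tiny helper (hypothetical names;
the real logic would live in the OVN ML2 driver):

```python
def dhcp_dns_servers(subnet_dns_nameservers):
    """Interpret the documented 0.0.0.0 sentinel as "advertise no DNS
    servers" instead of passing it through literally."""
    if subnet_dns_nameservers == ["0.0.0.0"]:
        return None
    return subnet_dns_nameservers or None
```

With LB or OVS the sentinel is honoured; the bug is that OVN passes 0.0.0.0
through to instances as if it were a real resolver address.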

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950686

Title:
  [OVN] dns-nameserver=0.0.0.0 for a subnet isn't treated properly

Status in neutron:
  New

Bug description:
  As documented in
  https://docs.openstack.org/neutron/latest/admin/config-dns-
  res.html#case-1-each-virtual-network-uses-unique-dns-resolver-s ,
  setting dns-nameserver=0.0.0.0 for a subnet should indicate that DHCP
  should not advertise any DNS server to instances on that subnet. This
  works fine with LB or OVS, but with OVN, instead the IP 0.0.0.0 is
  advertised as nameserver to instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1950686/+subscriptions




[Yahoo-eng-team] [Bug 1947127] [NEW] Some DNS extensions not working with OVN

2021-10-14 Thread Dr. Jens Harbott
Public bug reported:

On a fresh devstack install with the q-dns service enabled from the
neutron devstack plugin, some features still don't work, e.g.:

$ openstack subnet set private-subnet --dns-publish-fixed-ip
BadRequestException: 400: Client Error for url: 
https://10.250.8.102:9696/v2.0/subnets/9f50c79e-6396-4c5b-be92-f64aa0f25beb, 
Unrecognized attribute(s) 'dns_publish_fixed_ip'

$ openstack port create p1 --network private --dns-name p1 --dns-domain a.b.
  
BadRequestException: 400: Client Error for url: 
https://10.250.8.102:9696/v2.0/ports, Unrecognized attribute(s) 'dns_domain'

The reason seems to be that
https://review.opendev.org/c/openstack/neutron/+/686343/31/neutron/common/ovn/extensions.py
only added dns_domain_keywords, but not e.g. dns_domain_ports as
supported by OVN.
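The 400 responses are consistent with an attribute whitelist that is only
extended when the corresponding extension is registered. A minimal sketch of
that pattern (hypothetical, not Neutron's actual validation code):

```python
# Attributes accepted for a port; 'dns_domain' is missing because the
# extension that registers it was not listed as supported.
SUPPORTED_PORT_ATTRS = {"name", "network_id", "dns_name"}

def validate_port(body):
    """Reject any attribute not registered by a loaded extension."""
    unknown = sorted(set(body) - SUPPORTED_PORT_ATTRS)
    if unknown:
        raise ValueError("Unrecognized attribute(s) %s" % ", ".join(unknown))
    return body
```

Adding the missing extension aliases to the OVN driver's supported list would
make the attribute recognized, mirroring the suggested fix above.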

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1947127

Title:
  Some DNS extensions not working with OVN

Status in neutron:
  New

Bug description:
  On a fresh devstack install with the q-dns service enabled from the
  neutron devstack plugin, some features still don't work, e.g.:

  $ openstack subnet set private-subnet --dns-publish-fixed-ip
  BadRequestException: 400: Client Error for url: 
https://10.250.8.102:9696/v2.0/subnets/9f50c79e-6396-4c5b-be92-f64aa0f25beb, 
Unrecognized attribute(s) 'dns_publish_fixed_ip'

  $ openstack port create p1 --network private --dns-name p1 --dns-domain a.b.  

  BadRequestException: 400: Client Error for url: 
https://10.250.8.102:9696/v2.0/ports, Unrecognized attribute(s) 'dns_domain'

  The reason seems to be that
  https://review.opendev.org/c/openstack/neutron/+/686343/31/neutron/common/ovn/extensions.py
  only added dns_domain_keywords, but not e.g. dns_domain_ports as
  supported by OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1947127/+subscriptions




[Yahoo-eng-team] [Bug 1926638] Re: Neutron - "neutron-tempest-plugin-designate-scenario" gate fails all the time

2021-09-26 Thread Dr. Jens Harbott
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1926638

Title:
  Neutron - "neutron-tempest-plugin-designate-scenario" gate fails all
  the time

Status in neutron:
  Fix Released

Bug description:
  Designate tempest plugin patches keep failing because of: "neutron-
  tempest-plugin-designate-scenario" Neutron gate

  For example:
  https://review.opendev.org/c/openstack/designate-tempest-plugin/+/773477

  From Gate's log:
  2021-04-29 14:29:32.979245 | controller | all run-test-pre: 
PYTHONHASHSEED='1519635236'
  2021-04-29 14:29:32.979672 | controller | all run-test: commands[0] | find . 
-type f -name '*.pyc' -delete
  2021-04-29 14:29:33.264035 | controller | all run-test: commands[1] | tempest 
run --regex '^neutron_tempest_plugin\.scenario\.test_dns_integration' 
--concurrency=3
  2021-04-29 14:30:03.064278 | controller | {2} setUpClass 
(neutron_tempest_plugin.scenario.test_dns_integration.DNSIntegrationExtraTests) 
[0.00s] ... FAILED
  2021-04-29 14:30:03.064369 | controller |
  2021-04-29 14:30:03.064387 | controller | Captured traceback:
  2021-04-29 14:30:03.064401 | controller | ~~~
  2021-04-29 14:30:03.064414 | controller | Traceback (most recent call 
last):
  2021-04-29 14:30:03.064428 | controller |
  2021-04-29 14:30:03.064441 | controller |   File 
"/opt/stack/tempest/tempest/test.py", line 181, in setUpClass
  2021-04-29 14:30:03.064459 | controller | raise 
value.with_traceback(trace)
  2021-04-29 14:30:03.064473 | controller |
  2021-04-29 14:30:03.064485 | controller |   File 
"/opt/stack/tempest/tempest/test.py", line 174, in setUpClass
  2021-04-29 14:30:03.064498 | controller | cls.resource_setup()
  2021-04-29 14:30:03.064521 | controller |
  2021-04-29 14:30:03.064539 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py",
 line 177, in resource_setup
  2021-04-29 14:30:03.064554 | controller | super(DNSIntegrationExtraTests, 
cls).resource_setup()
  2021-04-29 14:30:03.064566 | controller |
  2021-04-29 14:30:03.064578 | controller |   File 
"/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper
  2021-04-29 14:30:03.064591 | controller | return func(*func_args, 
**func_kwargs)
  2021-04-29 14:30:03.064604 | controller |
  2021-04-29 14:30:03.064616 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py",
 line 76, in resource_setup
  2021-04-29 14:30:03.064629 | controller | cls.router = 
cls.create_router_by_client()
  2021-04-29 14:30:03.064646 | controller |
  2021-04-29 14:30:03.064660 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 209, in create_router_by_client
  2021-04-29 14:30:03.064687 | controller | 
cls._wait_for_router_ha_active(router['id'])
  2021-04-29 14:30:03.064701 | controller |
  2021-04-29 14:30:03.064713 | controller |   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 214, in _wait_for_router_ha_active
  2021-04-29 14:30:03.064725 | controller | router = 
cls.os_admin.network_client.show_router(router_id)['router']
  2021-04-29 14:30:03.064737 | controller |
  2021-04-29 14:30:03.064753 | controller | AttributeError: type object 
'DNSIntegrationExtraTests' has no attribute 'os_admin'
  2021-04-29 14:30:03.064765 | controller |
  2021-04-29 14:30:05.184794 | controller | {0} setUpClass 
(neutron_tempest_plugin.scenario.test_dns_integration.DNSIntegrationTests) 
[0.00s] ... FAILED
  2021-04-29 14:30:05.184854 | controller |
  2021-04-29 14:30:05.184870 | controller | Captured traceback:
  2021-04-29 14:30:05.184882 | controller | ~~~
  2021-04-29 14:30:05.184895 | controller | Traceback (most recent call 
last):
  2021-04-29 14:30:05.184912 | controller |
  2021-04-29 14:30:05.184924 | controller |   File 
"/opt/stack/tempest/tempest/test.py", line 181, in setUpClass
  2021-04-29 14:30:05.184936 | controller | raise 
value.with_traceback(trace)
  2021-04-29 14:30:05.184948 | controller |
  2021-04-29 14:30:05.184960 | controller |   File 
"/opt/stack/tempest/tempest/test.py", line 174, in setUpClass
  2021-04-29 14:30:05.184971 | controller | cls.resource_setup()
  2021-04-29 14:30:05.184983 | controller |
  2021-04-29 14:30:05.184999 | controller |   File 
"/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper
  2021-04-29 14:30:05.185012 | controller | return func(*func_args, 
**func_kwargs)
  2021-04-29 14:30:05.185024 | controller |
  2021-04-29 14:30:05.185036 | controller |   File 

[Yahoo-eng-team] [Bug 1921414] Re: Designate PTR record creation results in in-addr.arpa. zone owned by invalid project ID

2021-03-26 Thread Dr. Jens Harbott
You shouldn't be using Designate's PTR feature at the same time as
Neutron's dns_integration, those two are not meant to co-exist.

If you want to use Designate's PTR feature, set the
[service:central]managed_resource_tenant_id variable to the project that
you want Designate to use for managed resources; the default is
00000000-0000-0000-0000-000000000000, so Designate is working as
designed here. But then disable PTR handling in dns_integration.

Or have dns_integration handle PTR records, but then tell Designate not
to deal with them.
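A sketch of the first option, assuming a standard designate.conf layout (the
project ID value is a placeholder to be replaced with your service project):

```ini
[service:central]
# Project that should own Designate-managed resources such as PTR zones,
# instead of the all-zero default project ID.
managed_resource_tenant_id = <service-project-id>
```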

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921414

Title:
  Designate PTR record creation results in in-addr.arpa. zone owned by
  invalid project ID

Status in neutron:
  Invalid

Bug description:
  When Neutron is creating PTR records during Floating IP attachment on
  Stein, we have witnessed that the resultant new X.Y.Z.in-addr.arpa. zone
  is owned by project ID 00000000-0000-0000-0000-000000000000.

  This creates issues for record updates for future FIP attachments from
  Neutron resulting in API errors.

  Workaround is to change the project-ID to the services project_id in
  the services_domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1921414/+subscriptions



[Yahoo-eng-team] [Bug 1790038] Re: Network's subnets with different subnet pools

2021-03-03 Thread Dr. Jens Harbott
Neutron seems to be working as expected here, and the bug of not
showing the detailed error message has been resolved.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1790038

Title:
  Network's subnets with different subnet pools

Status in neutron:
  Invalid

Bug description:
  I created two subnet pools that both belong to the same address scope.
  +---+--+
  | Field | Value|
  +---+--+
  | address_scope_id  | a86423de-dee6-41b3-a7bf-f7dfef27919d |
  | created_at| 2018-08-31T09:55:16Z |
  | default_prefixlen | 28   |
  | default_quota | None |
  | description   |  |
  | id| f3123096-c566-4962-9760-37e0ad118b76 |
  | ip_version| 4|
  | is_default| False|
  | max_prefixlen | 32   |
  | min_prefixlen | 8|
  | name  | subnet-pool-ip4-2|
  | prefixes  | 192.168.100.0/24 |
  | project_id| a4d6cb4de22746458041f74e77987832 |
  | revision_number   | 0|
  | shared| True |
  | tags  |  |
  | updated_at| 2018-08-31T09:55:16Z |
  +---+--+
  +---+--+
  | Field | Value|
  +---+--+
  | address_scope_id  | a86423de-dee6-41b3-a7bf-f7dfef27919d |
  | created_at| 2018-08-30T15:42:06Z |
  | default_prefixlen | 26   |
  | default_quota | None |
  | description   |  |
  | id| d977634a-fb6d-42db-8c8e-e3f8248c24ec |
  | ip_version| 4|
  | is_default| False|
  | max_prefixlen | 32   |
  | min_prefixlen | 8|
  | name  | subnet-pool-ip4  |
  | prefixes  | 203.0.112.0/21   |
  | project_id| a4d6cb4de22746458041f74e77987832 |
  | revision_number   | 0|
  | shared| True |
  | tags  |  |
  | updated_at| 2018-08-30T15:42:06Z |
  +---+--+

  Then I create a network: "openstack network create --provider-network-type
  vlan network".
  Then I want to create two subnets with different subnet pools. The first one
  is successfully created, but the second one fails with "BadRequestException:
  Unknown error".

  The OpenStack version is Queens, and it is installed on an Ubuntu 16.04
  server.

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1790038/+subscriptions



[Yahoo-eng-team] [Bug 1912931] Re: dns_domain not getting set with vlan base provider

2021-01-23 Thread Dr. Jens Harbott
This is the expected behaviour: you can override the dns-domain per
port. If you do not specify it, the value is taken from the associated
network, but it is not copied into the port's field; otherwise a change
of the dns-domain for the network would no longer affect the port.
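The described lookup-time behaviour can be sketched as a small helper
(hypothetical, not Neutron code):

```python
def effective_dns_domain(port, network):
    """A port-level dns_domain wins; otherwise the network's *current*
    value applies at lookup time, without ever being copied onto the
    port (so later network changes still take effect)."""
    return port.get("dns_domain") or network.get("dns_domain")
```

This is why the port's dns_domain attribute shows None in the report above
while the FQDN in dns_assignment still uses the network's domain.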

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912931

Title:
  dns_domain not getting set with vlan base provider

Status in neutron:
  Invalid

Bug description:
  Recently I installed Victoria OpenStack using openstack-ansible and
  integrated it with the Designate DNS service. I am following
  https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html

  neutron-server config: http://paste.openstack.org/show/801906/

  I have set the following two options on the neutron server:
  # /etc/neutron/neutron.conf
  dns_domain = tux.com.
  external_dns_driver = designate 

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  extension_drivers = port_security,dns

  # Set dns-domain on the network; the show command confirms it is properly set.
  openstack network set net_vlan69 --dns-domain tux.com.

  When I create a port or launch an instance, I notice the dns_domain
  attribute is None (because of that I can't see my record getting
  updated in Designate).

  $ openstack port create --network net_vlan69 --dns-name vm-tux my-port
  
+-++
  | Field   | Value 
 |
  
+-++
  | admin_state_up  | UP
 |
  | allowed_address_pairs   |   
 |
  | binding_host_id |   
 |
  | binding_profile |   
 |
  | binding_vif_details |   
 |
  | binding_vif_type| unbound   
 |
  | binding_vnic_type   | normal
 |
  | created_at  | 2021-01-24T06:05:05Z  
 |
  | data_plane_status   | None  
 |
  | description |   
 |
  | device_id   |   
 |
  | device_owner|   
 |
  | dns_assignment  | fqdn='vm-tux.tux.com.', hostname='vm-tux', 
ip_address='10.69.1.236'|
  | dns_domain  | None  
 |
  | dns_name| vm-tux
 |
  | extra_dhcp_opts |   
 |
  | fixed_ips   | ip_address='10.69.1.236', 
subnet_id='dfbe8e18-25fa-4271-9ba5-4616eb7d56de' |
  | id  | fe9aefb6-fffb-4cae-94a4-11895223cdf9  
 |
  | ip_allocation   | None  
 |
  | mac_address | fa:16:3e:24:5c:38 
 |
  | name| my-port   
 |
  | network_id  | c17a0287-82b0-4976-90f7-403b60a185e4  
 |
  | numa_affinity_policy| None  
 |
  | port_security_enabled   | True  
 |
  | project_id  | f1502c79c70f4651be8ffc7b844b584f  
 |
  | propagate_uplink_status | None  
 |
  | qos_network_policy_id   | None  
 |
  | qos_policy_id   | None  
 |
  | resource_request| None  
 |
  | revision_number | 1  

[Yahoo-eng-team] [Bug 1746627] Re: Reverse floating IP records are not removed when floating IP is deleted

2021-01-23 Thread Dr. Jens Harbott
As I mentioned in the patch, combining neutron dns-integration with the
designate PTR functionality is not supported, so if this is the issue,
the bug is invalid. I actually think that we should deprecate and drop
the designate support for that in order to avoid such conflicts.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1746627

Title:
  Reverse floating IP records are not removed when floating IP is
  deleted

Status in Designate:
  Triaged
Status in neutron:
  Invalid

Bug description:
  When I release/delete a floating IP from my project that has a
  corresponding FloatingIP PTR record the record is not deleted.

  Steps to reproduce:

  Assign a floating IP to my project
  Set a PTR record on my floating IP using the reverse floating IP API
  Release floatingIP

  PTR record still exists in designate.

  I have the sink running and this picks up the notification if you have
  the neutron_floatingip handler but this is for something else. I think
  this needs to be modified to also handle the reverse PTR records
  (managed_resource_type = ptr:floatingip)

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1746627/+subscriptions



[Yahoo-eng-team] [Bug 1903342] Re: Neutron – port creation should fail when used network has no subnet associated

2020-11-07 Thread Dr. Jens Harbott
This is still invalid; there are good reasons to create a port without a
fixed_ip. You might want to discuss with nova whether they want to make
step 4 invalid, but step 2 is correct and should not be changed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1903342

Title:
  Neutron – port creation should fail when used network has no subnet
  associated

Status in neutron:
  Invalid

Bug description:
  ### Scenario ###
  1) openstack network create net_a
  2) port create --network net_a port_a

  ### Actual Result ###
  Port is successfully created:
  (openstack) port show port_a -f json
  {
"admin_state_up": true,
"allowed_address_pairs": [],
"binding_host_id": "",
"binding_profile": {},
"binding_vif_details": {},
"binding_vif_type": "unbound",
"binding_vnic_type": "normal",
"created_at": "2020-11-05T15:21:22Z",
"data_plane_status": null,
"description": "",
"device_id": "",
"device_owner": "",
"dns_assignment": [],
"dns_domain": null,
"dns_name": "",
"extra_dhcp_opts": [],
"fixed_ips": [],
"id": "65753162-f5ce-4b19-9b63-c1756cc31c4a",
"location": {
  "cloud": "",
  "region_name": "regionOne",
  "zone": null,
  "project": {
"id": "4bda515a91d143c5a62863cb87b6ec81",
"name": "admin",
"domain_id": null,
"domain_name": "Default"
  }
},
"mac_address": "fa:16:3e:16:a0:a7",
"name": "port_a",
"network_id": "ab0c613f-fcbf-41c4-8366-a7d0d32d6583",
"port_security_enabled": true,
"project_id": "4bda515a91d143c5a62863cb87b6ec81",
"propagate_uplink_status": null,
"qos_policy_id": null,
"resource_request": null,
"revision_number": 1,
"security_group_ids": [
  "f711e6d6-7998-4864-ae1b-0998c0ea068a"
],
"status": "DOWN",
"tags": [],
"trunk_details": null,
"updated_at": "2020-11-05T15:21:22Z"
  }

  
  ### Expected Result ###
  Port creation should fail with an appropriate error/warning message.
  The reason is that when such a port is used to create a VM, the VM will be
  created without an "ip_address", and this is a problem.
  OpenStack administrators shouldn't do that (creating a port before a
  subnet), but they can, as nothing prevents them from doing so.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1903342/+subscriptions



[Yahoo-eng-team] [Bug 1898886] Re: Can't establish BGP session with password authentication

2020-10-10 Thread Dr. Jens Harbott
This is an issue with the os-ken library; see
https://storyboard.openstack.org/#!/story/2007910 . The issue is fixed
in the latest release of the library, so make sure to upgrade.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898886

Title:
  Can't establish BGP session with password authentication

Status in neutron:
  Invalid

Bug description:
  Creating a neutron BGP peer with password authentication leads to an
  error reported on neutron-bgp-dragent.log.

  2020-10-06 18:58:51.861 125213 DEBUG bgpspeaker.peer [-] Started peer 
Peer(ip: 100.94.2.2, asn: 65200) _run 
/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/peer.py:676


  2020-10-06 18:58:51.861 125213 DEBUG bgpspeaker.peer [-] start connect loop. 
(mode: active) _on_update_connect_mode 
/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/peer.py:582

  2020-10-06 18:58:52.862 125213 DEBUG bgpspeaker.peer [-] Peer 100.94.2.2 BGP 
FSM went from Idle to Connect bgp_state 
/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/peer.py:236
   
  2020-10-06 18:58:52.863 125213 DEBUG bgpspeaker.peer [-] Peer(ip: 100.94.2.2, 
asn: 65200) trying to connect to ('100.94.2.2', 179) _connect_loop 
/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/peer.py:1292   
   
  2020-10-06 18:58:52.863 125213 DEBUG bgpspeaker.base [-] Connect TCP called 
for 100.94.2.2:179 _connect_tcp 
/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/base.py:412


  2020-10-06 18:58:52.864 125213 ERROR os_ken.lib.hub [-] hub: uncaught 
exception: Traceback (most recent call last):   

  
File "/usr/lib/python3/dist-packages/os_ken/lib/hub.py", line 69, in 
_launch 

 
  return func(*args, **kwargs)  


  
File 
"/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/peer.py", line 
1296, in _connect_loop  


  self._connect_tcp(peer_address,   


File 
"/usr/lib/python3/dist-packages/os_ken/services/protocols/bgp/base.py", line 
422, in _connect_tcp


  sockopt.set_tcp_md5sig(sock, peer_addr[0], password)  


File 
"/usr/lib/python3/dist-packages/os_ken/lib/sockopt.py", line 71, in 
set_tcp_md5sig  

 
  impl(s, addr, key)


File 
"/usr/lib/python3/dist-packages/os_ken/lib/sockopt.py", line 38, in 
_set_tcp_md5sig_linux   

 
  sa = sockaddr.sa_in4(addr)


[Yahoo-eng-team] [Bug 1895636] Re: 'NoneType' object has no attribute 'address_scope_id'

2020-09-16 Thread Dr. Jens Harbott
This looks like a deployment issue to me. If you find a way to reproduce
it without using charms, please update accordingly.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1895636

Title:
  'NoneType' object has no attribute 'address_scope_id'

Status in OpenStack Manila-Ganesha Charm:
  In Progress
Status in neutron:
  Invalid

Bug description:
  In this test run [0] we deploy [1] OpenStack Ussuri on Ubuntu focal
  (20.04) with Neutron plugin 'ovs'. Creating an instance failed. nova-
  compute reported:

  Failed to build and run instance: nova.exception.PortBindingFailed:
  Binding failed for port c2e062b7-0dbb-458e-87b5-4eef5930f1f1, please
  check neutron logs for more information.

  neutron-server's logs show:

  Error during notification for 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin.port_callback-433877 
port, after_update: AttributeError: 'NoneType' object has no attribute 
'address_scope_id'
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", 
line 197, in _notify_loop
  callback(resource, event, trigger, **kwargs)
File 
"/usr/lib/python3/dist-packages/neutron_dynamic_routing/services/bgp/bgp_plugin.py",
 line 368, in port_callback
  ext_nets = self.get_external_networks_for_port(ctx,
File "/usr/lib/python3/dist-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 1189, in get_external_networks_for_port
  ext_scope_set.add(ext_pool.address_scope_id)
  AttributeError: 'NoneType' object has no attribute 'address_scope_id'
  Failed to bind port c2e062b7-0dbb-458e-87b5-4eef5930f1f1 on host 
juju-19a362-zaza-73e874daef6c-15.project.serverstack for vnic_type normal using 
segments [{'id': 'ce5ad0ec-44a0-46e8-bf7f-2794b7fdb508', 'network_type': 'gre', 
'physical_network': None, 'segmentation_id': 1, 'network_id': 
'c1a9e451-5e90-4124-a531-79f24f1bc9e6'}]

  0: 
https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_full/openstack/charm-manila-ganesha/748639/3/6889/index.html
  1: 
https://opendev.org/openstack/charm-manila-ganesha/src/branch/master/src/tests/bundles/focal-ussuri.yaml

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-manila-ganesha/+bug/1895636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861401] Re: Renaming instance breaks DNS integration

2020-08-31 Thread Dr. Jens Harbott
DNS is expected to match the hostname of an instance, which according to
nova above is immutable. If you want an instance with a different
hostname, you need to create a new one.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861401

Title:
  Renaming instance breaks DNS integration

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Colleagues,

  Description
  ===
  Renaming an instance (e.g. using "openstack server set --name") breaks DNS 
integration, since it makes it impossible to bind a port with the instance's new 
name. So, if a user renames an instance and wants to access it by name, they cannot.

  Steps to reproduce
  ==
  1) You have an instance with some name (e.g. "web01")

  2) You rename it using "openstack server set --name web02 web01"

  3) You create a port with the instance's new name (e.g. web02) in order to attach 
it to the instance
  $ openstack port create --network e-net --fixed-ip subnet=e-subnet --dns-name 
web02 test_port

  4) You're trying to attach the port to the instance:
  $ nova interface-attach --port-id  web02

  Expected result
  ===
  Port binds to the instance and instance can be accessed using hostname "web02"

  Actual result
  =
  Last command in steps above fails with the following message:

  ERROR (ClientException): Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.

  Nova log says the following:

  2020-01-30 11:43:32.652 17476 ERROR nova.api.openstack.wsgi
  PortNotUsableDNS: Port 0425701f-d958-4c81-931a-9594fba7d7d2 not usable
  for instance 2d49b781-cef5-4cdd-a310-e74eb98aa514. Value web02
  assigned to dns_name attribute does not match instance's hostname
  web01

  MySQL content show that renaming instance changed column
  "display_name", but "hostname" remained with old name:

  mysql> select hostname, display_name from instances where 
uuid='2d49b781-cef5-4cdd-a310-e74eb98aa514';
  +----------+--------------+
  | hostname | display_name |
  +----------+--------------+
  | web01    | web02        |
  +----------+--------------+

  Thus, DNS integration compares the port's dns_name with "hostname", not
  with "display_name", which makes it unusable after renaming an instance.
  Either renaming an instance needs to change both the "hostname" and
  "display_name" columns, or DNS integration needs to compare the port's
  dns_name with "display_name".

  Environment
  ===
  Host OS: Ubuntu 18.04 LTS
  Openstack: Rocky
  $ dpkg -l |grep nova
  ii  nova-api   2:18.2.3-0ubuntu1~cloud0
  ii  nova-common2:18.2.3-0ubuntu1~cloud0
  ii  nova-conductor 2:18.2.3-0ubuntu1~cloud0
  ii  nova-novncproxy2:18.2.3-0ubuntu1~cloud0
  ii  nova-placement-api 2:18.2.3-0ubuntu1~cloud0
  ii  nova-scheduler 2:18.2.3-0ubuntu1~cloud0
  ii  python-nova2:18.2.3-0ubuntu1~cloud0
  ii  python-novaclient  2:11.0.0-0ubuntu1~cloud0

  Thank you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1861401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1882521] [NEW] Failing device detachments on Focal

2020-06-08 Thread Dr. Jens Harbott
Public bug reported:

The following tests are failing consistently when deploying devstack on
Focal in the CI, see https://review.opendev.org/734029 for detailed
logs:

tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
tempest.api.compute.volumes.test_attach_volume.AttachVolumeMultiAttachTest.test_resize_server_with_multiattached_volume
tempest.api.compute.servers.test_server_rescue.ServerStableDeviceRescueTest.test_stable_device_rescue_disk_virtio_with_volume_attached
tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerStableDeviceRescueTest)

Sample extract from nova-compute log:

Jun 08 08:48:24.384559 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
DEBUG oslo.service.loopingcall [-] Exception which is in the suggested list of 
exceptions occurred while invoking function: 
nova.virt.libvirt.guest.Guest.detach_device_with_retry.<locals>._do_wait_and_retry_detach.
 {{(pid=82495) _func 
/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py:410}}
Jun 08 08:48:24.384862 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
DEBUG oslo.service.loopingcall [-] Cannot retry 
nova.virt.libvirt.guest.Guest.detach_device_with_retry.<locals>._do_wait_and_retry_detach
 upon suggested exception since retry count (7) reached max retry count (7). 
{{(pid=82495) _func 
/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py:416}}
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall [-] Dynamic interval looping call 
'oslo_service.loopingcall.RetryDecorator.__call__.<locals>._func' failed: 
nova.exception.DeviceDetachFailed: Device detach failed for vdb: Unable to 
detach the device from the live config.
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall Traceback (most recent call last):
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 150, 
in _run_loop
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall result = func(*self.args, **self.kw)
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 428, 
in _func
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall return self._sleep_time
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall self.force_reraise()
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall six.reraise(self.type_, self.value, self.tb)
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall raise value
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python3.8/dist-packages/oslo_service/loopingcall.py", line 407, 
in _func
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall result = f(*args, **kwargs)
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 453, in 
_do_wait_and_retry_detach
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall raise exception.DeviceDetachFailed(
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall nova.exception.DeviceDetachFailed: Device detach 
failed for vdb: Unable to detach the device from the live config.
Jun 08 08:48:24.388855 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
ERROR oslo.service.loopingcall 
Jun 08 08:48:24.390684 ubuntu-focal-rax-dfw-0017012548 nova-compute[82495]: 
WARNING nova.virt.block_device [None req-8af75b5f-2587-4ce7-9523-d2902eb45a38 
tempest-ServerRescueNegativeTestJSON-1578800383 
tempest-ServerRescueNegativeTestJSON-1578800383] [instance: 
76f86b1f-8b11-44e6-b718-eda3e7e18937] Guest refused to detach volume 
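The loop visible in the log can be summarised with a simplified sketch (assumed behaviour of oslo.service's RetryDecorator, heavily reduced; not the real implementation):

```python
class DeviceDetachFailed(Exception):
    pass


def detach_device_with_retry(live_detach_succeeded, max_retry_count=7):
    """Retry a device detach until it succeeds or the retry budget is
    exhausted, then raise -- mirroring the "retry count (7) reached max
    retry count (7)" message in the log above."""
    for attempt in range(1, max_retry_count + 1):
        if live_detach_succeeded():
            return attempt
    raise DeviceDetachFailed(
        "Unable to detach the device from the live config.")
```

In the failing runs the guest never acknowledges the detach, so every one of the seven attempts returns failure and the DeviceDetachFailed above is raised.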

[Yahoo-eng-team] [Bug 1882421] [NEW] inject_password fails with python3

2020-06-07 Thread Dr. Jens Harbott
 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]   File 
"/usr/local/lib/python3.8/dist-packages/six.py", line 703, in reraise
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] raise 
value
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3887, in _inject_data
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] 
disk_api.inject_data(disk.get_model(self._conn),
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]   File 
"/opt/stack/nova/nova/virt/disk/api.py", line 368, in inject_data
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] 
return inject_data_into_fs(fs, key, net, metadata, admin_password,
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]   File 
"/opt/stack/nova/nova/virt/disk/api.py", line 461, in inject_data_into_fs
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] 
inject_func(inject_val, fs)
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]   File 
"/opt/stack/nova/nova/virt/disk/api.py", line 597, in 
_inject_admin_password_into_fs
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] 
new_shadow_data = _set_passwd(admin_user, admin_passwd,
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]   File 
"/opt/stack/nova/nova/virt/disk/api.py", line 644, in _set_passwd
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] 
p_file = passwd_data.split("\n")
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] 
TypeError: a bytes-like object is required, not 'str'
Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863]
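The TypeError reduces to a bytes/str mismatch: under Python 3, guestfs returns file contents as bytes, while _set_passwd splits on a str separator. A minimal reproduction (illustrative; the file contents are made up):

```python
# Simulated /etc/passwd contents as returned by guestfs under Python 3.
passwd_data = b"root:x:0:0:root:/root:/bin/bash\n"

try:
    passwd_data.split("\n")  # bytes.split(str) -> TypeError, as in the log
except TypeError as exc:
    error = str(exc)

# Two possible directions for a fix: split on a bytes separator, or
# decode to str first.
lines_bytes = passwd_data.split(b"\n")
lines_str = passwd_data.decode("utf-8").split("\n")
```

Which of the two directions nova takes depends on what the rest of _set_passwd does with the parsed lines.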

** Affects: nova
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1882421

Title:
  inject_password fails with python3

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Originally reported in #openstack-nova:

  14:44 < lvdombrkr>  hello guys, trying to inject admin_password 
(inject_password=true ) into  image but when creating instance get this error 
in nova-compute.log
  14:45 < lvdombrkr>  2020-06-06 14:53:50.188 6 WARNING nova.virt.disk.api 
[req-94f485ca-944c-40e9-bf14-c8b8dbe09a7b 052d02306e6746a4a3e7e5449de49f8c 
 413a4cadf9734fca9ec3e5e6192a446f - default default] 
Ignoring error injecting admin_password into image (a bytes-like object is 
required, not 'str')
  14:45 < lvdombrkr> Train + Centos8

  Can reproduce on master on devstack by installing python3-guestfs and
  setting

  [libvirt]
  inject_partition = -1
  inject_password = true

  in nova-cpu.conf. Backtrace after adding a hard "raise" into
  inject_data_into_fs():

  Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.virt.libvirt.driver [None req-47214a25-b56a-4135-83bb-7c5ff4c86ca6 demo 
demo] [instance: 5604d60c-61c9-49b5-8786-ff5144817863] Error injecting data 
into image 4b3e63a6-b3c4-4de5-b515-cc286e7d5c48 (a bytes-like object is 
required, not 'str')
  Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [None req-47214a25-b56a-4135-83bb-7c5ff4c86ca6 demo demo] 
[instance: 5604d60c-61c9-49b5-8786-ff5144817863] Instance failed to spawn: 
TypeError: a bytes-like object is required, not 'str'
  Jun 06 15:48:39 jh-devstack-focal-01a nova-compute[2983293]: ERROR 
nova.compute.manager [instance: 5604d60c-61c9-49b5-8786-ff5144817863] Traceback 
(most recent call last):
  Jun 06 15:48:39 jh-devstack-focal-01a nova-

[Yahoo-eng-team] [Bug 1875981] Re: Admin deleting servers or ports leaves orphaned DNS records

2020-06-04 Thread Dr. Jens Harbott
** No longer affects: designate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1875981

Title:
  Admin deleting servers or ports leaves orphaned DNS records

Status in neutron:
  In Progress

Bug description:
  Let's say we have a tenant user_side with zone example.com (dns_domain
  defined on a shared neutron network).

  The user creates a server (with the dns neutron extension enabled), which
  results in the designate recordset below:

  server1.example.com in zone example.com, which only exists in the tenant
  user_side.

  If an admin wants to delete server1 from the tenant user_side, they will use

  openstack server delete server1

  which will delete the server but will not delete the designate recordset,
  since the zone example.com does not exist in the admin tenant.

  This leaves an orphaned record in designate.

  The admin should be able to delete all the resources of server1,
  including the designate recordset.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1875981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1875981] Re: Admin deleting servers for other tenants

2020-05-12 Thread Dr. Jens Harbott
I can reproduce this and I think the error is in the neutron code, not
in designate.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1875981

Title:
  Admin deleting servers for other tenants

Status in Designate:
  New
Status in neutron:
  Confirmed

Bug description:
  Let's say we have a tenant user_side with zone example.com (dns_domain
  defined on a shared neutron network).

  The user creates a server (with the dns neutron extension enabled), which
  results in the designate recordset below:

  server1.example.com in zone example.com, which only exists in the tenant
  user_side.

  If an admin wants to delete server1 from the tenant user_side, they will use

  openstack server delete server1

  which will delete the server but will not delete the designate recordset,
  since the zone example.com does not exist in the admin tenant.

  This leaves an orphaned record in designate.

  The admin should be able to delete all the resources of server1,
  including the designate recordset.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1875981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871327] Re: stable/stein tempest-full job fails with "tempest requires Python '>=3.6' but the running Python is 2.7.17"

2020-04-27 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871327

Title:
  stable/stein tempest-full job fails with "tempest requires Python
  '>=3.6' but the running Python is 2.7.17"

Status in devstack:
  Fix Released
Status in neutron:
  Invalid
Status in tempest:
  Invalid

Bug description:
  Only seen in stable/stein, tempest-full (and also non-voting 
neutron-tempest-dvr-ha-multinode-full) job fails with:
  Obtaining file:///opt/stack/tempest
  tempest requires Python '>=3.6' but the running Python is 2.7.17

  Example failure on https://review.opendev.org/#/c/717336/2:
  https://zuul.opendev.org/t/openstack/build/f05569c475f44327bff7b7ec58faef8c
  https://zuul.opendev.org/t/openstack/build/651ca00e67ab42fd814ec5edad437997

  While backport on rocky passed both:
  https://review.opendev.org/#/c/717337/2
  https://zuul.opendev.org/t/openstack/build/c9c0139cda4f45cd825e169765e6854c
  https://zuul.opendev.org/t/openstack/build/6f318c4897ea4864b7cd2691dc2a36ab

  
  and on train:
  https://review.opendev.org/#/c/717335/2
  https://zuul.opendev.org/t/openstack/build/f84209f049f2459eabd453058ad11ccf

  For neutron-tempest-dvr-ha-multinode-full parent in stein is tempest-
  multinode-full so it looks like common issue in tempest-full job
  definition for this branch?

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1871327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1866961] Re: ImportError: cannot import name 'Feature'

2020-04-27 Thread Dr. Jens Harbott
AFAICT this doesn't affect devstack directly; please add steps to
reproduce if the issue still exists.

** Changed in: devstack
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1866961

Title:
  ImportError: cannot import name 'Feature'

Status in devstack:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in kolla:
  Fix Released
Status in kolla train series:
  Triaged
Status in kolla ussuri series:
  Fix Released

Bug description:
  One of Horizon's requirements is the pyScss package, which had its last
  release over 4 years ago...

  Two days ago setuptools v46 was released. One of its changes was the
  removal of the Features feature.

  Now Kolla builds fail:

  INFO:kolla.common.utils.horizon:Collecting pyScss===1.3.4
  INFO:kolla.common.utils.horizon:  Downloading 
http://mirror.ord.rax.opendev.org:8080/pypifiles/packages/1d/4a/221ae7561c8f51c4f28b2b172366ccd0820b14bb947350df82428dfce381/pyScss-1.3.4.tar.gz
 (120 kB)
  INFO:kolla.common.utils.horizon:ERROR: Command errored out with exit 
status 1:
  INFO:kolla.common.utils.horizon: command: /var/lib/kolla/venv/bin/python 
-c 'import sys, setuptools, tokenize; sys.argv[0] = 
'"'"'/tmp/pip-install-rr0db3qs/pyScss/setup.py'"'"'; 
__file__='"'"'/tmp/pip-install-rr0db3qs/pyScss/setup.py'"'"';f=getattr(tokenize,
 '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info 
--egg-base /tmp/pip-install-rr0db3qs/pyScss/pip-egg-info
  INFO:kolla.common.utils.horizon: cwd: 
/tmp/pip-install-rr0db3qs/pyScss/
  INFO:kolla.common.utils.horizon:Complete output (5 lines):
  INFO:kolla.common.utils.horizon:Traceback (most recent call last):
  INFO:kolla.common.utils.horizon:  File "", line 1, in 
  INFO:kolla.common.utils.horizon:  File 
"/tmp/pip-install-rr0db3qs/pyScss/setup.py", line 9, in 
  INFO:kolla.common.utils.horizon:from setuptools import setup, 
Extension, Feature
  INFO:kolla.common.utils.horizon:ImportError: cannot import name 'Feature'

  Devstack also has the same problem.

  Are there any plans to fix it?

  pyscss project got issue: https://github.com/Kronuz/pyScss/issues/385

  
  What are the plans of the Horizon team?
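The root cause is that setuptools 46 removed the long-deprecated Feature class, so any setup.py that imports it (like pyScss's `from setuptools import setup, Extension, Feature`) fails at egg_info time. A guarded import shows the break (a hypothetical compatibility shim, not pyScss's actual fix):

```python
# setuptools >= 46 no longer exports Feature; older versions still do.
# A tolerant import degrades gracefully instead of crashing egg_info.
try:
    from setuptools import Feature  # removed in setuptools 46
    HAVE_FEATURE = True
except ImportError:
    Feature = None
    HAVE_FEATURE = False
```

A setup.py using this pattern would then have to fall back to building without the optional C speedups that Feature used to gate.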

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1866961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863091] [NEW] IPVS setup fails with openvswitch firewall driver, works with iptables_hybrid

2020-02-13 Thread Dr. Jens Harbott
Public bug reported:

We have some IPVS setup deployed according to
https://cloudbau.github.io/openstack/loadbalancing/networking/ipvs/2017/03/20/ipvs-direct-routing-on-top-of-openstack.html
which stopped working after upgrading from Queens to Rocky and switching from
the iptables_hybrid firewall driver to the native openvswitch firewall driver.

The issue can be resolved by reverting to the iptables_hybrid driver on
the compute-node hosting the LB instance.

This is on Ubuntu Bionic using the Rocky UCA, Neutron version
13.0.6-0ubuntu1~cloud0.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863091

Title:
  IPVS setup fails with openvswitch firewall driver, works with
  iptables_hybrid

Status in neutron:
  New

Bug description:
  We have some IPVS setup deployed according to
  https://cloudbau.github.io/openstack/loadbalancing/networking/ipvs/2017/03/20/ipvs-direct-routing-on-top-of-openstack.html
  which stopped working after upgrading from Queens to Rocky and switching
  from the iptables_hybrid firewall driver to the native openvswitch firewall
  driver.

  The issue can be resolved by reverting to the iptables_hybrid driver
  on the compute-node hosting the LB instance.

  This is on Ubuntu Bionic using the Rocky UCA, Neutron version
  13.0.6-0ubuntu1~cloud0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784879] Re: Neutron doesn't update Designate with some use cases

2020-01-14 Thread Dr. Jens Harbott
Patches to neutron-tempest-plugin, SDK and OSC also got merged, so I'd consider 
this thing finished.
https://review.opendev.org/679833
https://review.opendev.org/680384
https://review.opendev.org/679834

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784879

Title:
  Neutron doesn't update Designate with some use cases

Status in neutron:
  Fix Released

Bug description:
  Neutron and Designate integration covers use cases for ports which are 
exposed via floating IPs, or reside on provider networks.
  However, the following use cases aren't being covered:
  1. Ports reside on a no-NAT network, which is routable from outside the 
Openstack deployment.
  2. Ports on any network which need exposure via DNS: e.g. an app uses FQDNs to 
intercommunicate between app components.

  As the no-NAT attribute belongs to the router, and not to the network, it 
might be tricky to detect port exposure via this attribute: a user could attach 
a network with some ports on it to a no-NAT network and so they're exposed even 
though they weren't during creation.
  Or a router might be changed from NAT to no-NAT and vice versa.
  To simplify I would suggest adding an attribute to the network via an 
extension, which would indicate that this network's ports should be published 
on the DNS.
  So for networks which need exposure via DNS, we could flag these networks and 
force the DNS publishing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1784879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1858377] Re: [neutron-dynamic-routing]The bgp_speaker_show_dragents interface should be modified to bgp_dragent_list_hosting_speakers

2020-01-10 Thread Dr. Jens Harbott
Actually this is a command implemented by the python-neutronclient OSC
plugin. There's a patch proposed at https://review.opendev.org/701125

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Status: New => In Progress

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1858377

Title:
  [neutron-dynamic-routing]The bgp_speaker_show_dragents interface
  should be modified to bgp_dragent_list_hosting_speakers

Status in neutron:
  Invalid
Status in python-neutronclient:
  In Progress

Bug description:
  As described in [1], [2]
  List BGP speakers hosted by a Dynamic Routing Agent
  /v2.0/agents/<agent-id>/bgp-drinstances
  neutronclient:neutron bgp-speaker-list-on-dragent XXX

  List Dynamic Routing Agents hosting a specific BGP Speaker
  /v2.0/bgp-speakers/<bgp-speaker-id>/bgp-dragents
  neutronclient:neutron bgp-dragent-list-hosting-speaker XXX

  Corresponding in openstackclient
  List BGP speakers hosted by a Dynamic Routing Agent
  openstack bgp speaker list --agent XXX

  List Dynamic Routing Agents hosting a specific BGP Speaker
  openstack bgp speaker show dragents XXX

  I think it is more appropriate to replace 'bgp_speaker_show_dragents' 
  with 'bgp_dragent_list_hosting_speakers'.

  [1] 
https://docs.openstack.org/neutron-dynamic-routing/latest/reference/index.html#rest-interface
  [2] 
https://docs.openstack.org/newton/networking-guide/config-bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1858377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750121] Re: Dynamic routing: adding speaker to agent fails

2019-12-20 Thread Dr. Jens Harbott
** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750121

Title:
  Dynamic routing: adding speaker to agent fails

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing source package in Artful:
  Won't Fix
Status in neutron-dynamic-routing source package in Bionic:
  Fix Released
Status in neutron-dynamic-routing source package in Cosmic:
  Fix Released

Bug description:
  SRU details for Ubuntu
  --
  [Impact]
  See "Original description" below.

  [Test Case]
  See "Original description" below.

  [Regression Potential]
  Low. This is fixed upstream in corresponding stable branches.

  
  Original description
  
  When following 
https://docs.openstack.org/neutron-dynamic-routing/latest/contributor/testing.html
 everything works fine because the speaker is scheduled to the agent 
automatically (in contrast to what the docs say). But if I remove the speaker 
from the agent and add it again with

  $ neutron bgp-dragent-speaker-remove 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker
  $ neutron bgp-dragent-speaker-add 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker

  the following error is seen in the log:

  Feb 17 10:56:30 test-node01 neutron-bgp-dragent[18999]: ERROR
  neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None req-
  da9a22ae-52a2-4be7-a3e8-2dc2dc970fdd admin admin] Call to driver for
  BGP Speaker d2aa5935-30c2-4369-83ee-b3a0ff77cc49 add_bgp_peer has
  failed with exception 'auth_type'.

  The same thing happens when there are multiple agents and one tries to
  add the speaker to one of the other agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1750121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1853280] Re: nova-live-migration job constantly fails on stable/pike

2019-12-19 Thread Dr. Jens Harbott
** Changed in: devstack-plugin-ceph
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1853280

Title:
  nova-live-migration job constantly fails on stable/pike

Status in devstack-plugin-ceph:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) pike series:
  Invalid

Bug description:
  signature:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22E%3A%20Unable%20to%20locate%20package%20python3-cephfs%5C%22

  example:
  
https://zuul.opendev.org/t/openstack/build/0a199eeccc334b98a2eaf67998eef8b5/log/job-output.txt#5821

  It seems that the devstack-plugin-ceph install fails as it tries to
  install py3 packages that are not available in the package mirror.

  I think the merge of https://review.opendev.org/#/c/694330/ into
  devstack-plugin-ceph is triggering the fault.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1853280/+subscriptions



[Yahoo-eng-team] [Bug 1852447] [NEW] FWaaS: adding a router port to fwg and removing it leaves the fwg active

2019-11-13 Thread Dr. Jens Harbott
Public bug reported:

Steps to reproduce:

- Create a router
- Optionally create a new firewall group (issue also happens when using the 
default FWG)
- Add a subnet to the router
- Add the router port to the firewall group
- Verify that the status of the firewall group changes from INACTIVE to ACTIVE
- Remove the subnet from the router again

Actual result:

The firewall group has an empty ports list but still has status ACTIVE.

Expected result:

The firewall group has an empty ports list and status INACTIVE.

Tested with devstack on current master. This may be related to
https://bugs.launchpad.net/neutron/+bug/1845300 but that one seems to
happen only sporadically and also the tempest test actually explicitly
removes the router ports from the fwg.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852447

Title:
  FWaaS: adding a router port to fwg and removing it leaves the fwg
  active

Status in neutron:
  New

Bug description:
  Steps to reproduce:

  - Create a router
  - Optionally create a new firewall group (issue also happens when using the 
default FWG)
  - Add a subnet to the router
  - Add the router port to the firewall group
  - Verify that the status of the firewall group changes from INACTIVE to ACTIVE
  - Remove the subnet from the router again

  Actual result:

  The firewall group has an empty ports list but still has status
  ACTIVE.

  Expected result:

  The firewall group has an empty ports list and status INACTIVE.

  Tested with devstack on current master. This may be related to
  https://bugs.launchpad.net/neutron/+bug/1845300 but that one seems to
  happen only sporadically and also the tempest test actually explicitly
  removes the router ports from the fwg.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852447/+subscriptions



[Yahoo-eng-team] [Bug 1827689] Re: Install and configure (Ubuntu) in glance

2019-10-07 Thread Dr. Jens Harbott
Fixed in https://review.opendev.org/666973

** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1827689

Title:
  Install and configure (Ubuntu) in glance

Status in Glance:
  Fix Released

Bug description:
  Hi guys,

  I'm on https://docs.openstack.org/glance/stein/install/install-
  ubuntu.html#install-and-configure-components.

  - [x] This doc is inaccurate in this way:

  The final commands for the installation of Glance restart the services
  glance-api and glance-registry. As described in
  https://docs.openstack.org/glance/stein/install/get-started.html,
  with the STEIN release the glance-registry service is no longer
  available.

  I think the entry "service glance-registry restart" has to be removed.
  Otherwise it leads to an error message...

  Best regards,
  Robert

  ---
  Release:  on 2018-08-22 10:01:34
  SHA: 4a81cad0b0805be7c91adec9f0b21ade548cf997
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst
  URL: https://docs.openstack.org/glance/stein/install/install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1827689/+subscriptions



[Yahoo-eng-team] [Bug 1844500] [NEW] get_cmdline_from_pid may fail

2019-09-18 Thread Dr. Jens Harbott
Public bug reported:

Even though get_cmdline_from_pid() checks for the existence of the PID
before accessing the cmdline, the process may terminate just in between,
causing an IOError. So we need to catch that exception.

From
https://zuul.opendev.org/t/openstack/build/3a93ef0d0bbd40dc84758682dbc7b049:

ft1.40: 
neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_egress_udp_rule(OVS
 Firewall Driver)_StringException: Traceback (most recent call last):
  File "neutron/tests/base.py", line 180, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/agent/test_firewall.py", line 498, in 
test_egress_udp_rule
self._test_rule(self.tester.EGRESS, self.tester.UDP)
  File "neutron/tests/functional/agent/test_firewall.py", line 464, in 
_test_rule
direction=direction)
  File "neutron/tests/common/conn_testers.py", line 205, in assert_no_connection
self.assert_connection(direction, protocol, src_port, dst_port)
  File "neutron/tests/common/conn_testers.py", line 49, in wrap
return f(self, direction, *args, **kwargs)
  File "neutron/tests/common/conn_testers.py", line 200, in assert_connection
testing_method(direction, protocol, src_port, dst_port)
  File "neutron/tests/common/conn_testers.py", line 161, in 
_test_transport_connectivity
nc_tester.test_connectivity()
  File "neutron/tests/common/net_helpers.py", line 526, in test_connectivity
self.client_process.writeline(testing_string)
  File "neutron/tests/common/net_helpers.py", line 476, in client_process
self.establish_connection()
  File "neutron/tests/common/net_helpers.py", line 503, in establish_connection
self._spawn_server_process()
  File "neutron/tests/common/net_helpers.py", line 489, in _spawn_server_process
listen=True)
  File "neutron/tests/common/net_helpers.py", line 554, in 
_spawn_nc_in_namespace
proc = RootHelperProcess(cmd, namespace=namespace)
  File "neutron/tests/common/net_helpers.py", line 300, in __init__
self._wait_for_child_process()
  File "neutron/tests/common/net_helpers.py", line 333, in 
_wait_for_child_process
"in %d seconds" % (self.cmd, timeout)))
  File "neutron/common/utils.py", line 701, in wait_until_true
while not predicate():
  File "neutron/tests/common/net_helpers.py", line 325, in child_is_running
self.pid, self.cmd, run_as_root=True)
  File "neutron/agent/linux/utils.py", line 296, in get_root_helper_child_pid
if pid_invoked_with_cmdline(pid, expected_cmd):
  File "neutron/agent/linux/utils.py", line 356, in pid_invoked_with_cmdline
cmd = get_cmdline_from_pid(pid)
  File "neutron/agent/linux/utils.py", line 326, in get_cmdline_from_pid
with open('/proc/%s/cmdline' % pid, 'r') as f:
IOError: [Errno 2] No such file or directory: '/proc/2866/cmdline'
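
A minimal sketch of the proposed fix (hedged: the real neutron helper
parses the cmdline differently): treat a process that vanished between the
existence check and the read as having no command line.

```python
def get_cmdline_from_pid(pid):
    """Return the argv of a process, or [] if it exited meanwhile."""
    try:
        # /proc/<pid>/cmdline is NUL-separated with a trailing NUL.
        with open('/proc/%s/cmdline' % pid, 'r') as f:
            return f.readline().split('\0')[:-1]
    except IOError:
        # The process terminated after the PID-existence check; the race
        # described above makes this unavoidable, so swallow the error.
        return []

print(get_cmdline_from_pid(999999999))  # nonexistent PID -> []
```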

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1844500

Title:
  get_cmdline_from_pid may fail

Status in neutron:
  In Progress

Bug description:
  Even though get_cmdline_from_pid() checks for the existence of the PID
  before accessing the cmdline, the process may terminate just in
  between, causing an IOError. So we need to catch that exception.

  From 
https://zuul.opendev.org/t/openstack/build/3a93ef0d0bbd40dc84758682dbc7b049:
  
  ft1.40: 
neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_egress_udp_rule(OVS
 Firewall Driver)_StringException: Traceback (most recent call last):
File "neutron/tests/base.py", line 180, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/test_firewall.py", line 498, in 
test_egress_udp_rule
  self._test_rule(self.tester.EGRESS, self.tester.UDP)
File "neutron/tests/functional/agent/test_firewall.py", line 464, in 
_test_rule
  direction=direction)
File "neutron/tests/common/conn_testers.py", line 205, in 
assert_no_connection
  self.assert_connection(direction, protocol, src_port, dst_port)
File "neutron/tests/common/conn_testers.py", line 49, in wrap
  return f(self, direction, *args, **kwargs)
File "neutron/tests/common/conn_testers.py", line 200, in assert_connection
  testing_method(direction, protocol, src_port, dst_port)
File "neutron/tests/common/conn_testers.py", line 161, in 
_test_transport_connectivity
  nc_tester.test_connectivity()
File "neutron/tests/common/net_helpers.py", line 526, in test_connectivity
  self.client_process.writeline(testing_string)
File "neutron/te

[Yahoo-eng-team] [Bug 1762369] Re: DNS domain name update test give KeyError

2019-09-04 Thread Dr. Jens Harbott
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762369

Title:
  DNS  domain name update test give KeyError

Status in neutron:
  Fix Released

Bug description:
  
neutron.tests.tempest.api.test_ports.PortsTestJSON.test_create_update_port_with_dns_domain
  test is failing. Below is the traceback.

  Environment : Openstack pike version

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tempest/common/utils/__init__.py", 
line 108, in wrapper
  return func(*func_args, **func_kwargs)
File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/api/test_ports.py", 
line 111, in test_create_update_port_with_dns_domain
  self.assertEqual('d.org.', body['dns_domain'])
  KeyError: 'dns_domain'

  It is failing because line 111 in
  neutron/neutron/tests/tempest/api/test_ports.py has an error:

  108 body = self.client.update_port(body['id'],
  109dns_name='d2', 
dns_domain='d.org.')
  110 self.assertEqual('d2', body['port']['dns_name'])
  111 self.assertEqual('d.org.', body['dns_domain'])

  
  The response of test_create_update_port_with_dns_domain is below;
'dns_domain' is nested under 'port'.

  2018-04-09 02:29:16.096 27142 DEBUG tempest.lib.common.rest_client 
[req-c1573c7a-4809-4abd-8bc7-ec6f6201fc41 ] Request - Headers: {'X-Auth-Token': 
''}
  Body: {"port": {"dns_name": "d2", "dns_domain": "d.org."}}
  Response - Headers: {'status': '200', u'content-length': '848', 
'content-location': 
'http://172.16.0.118:9696/v2.0/ports/60d595d5-d196-4044-a791-5f402187f8a4', 
u'date': 'Mon, 09 Apr 2018 02:29:16 GMT', u'content-type': 'application/json', 
u'connection': 'close', u'x-openstack-request-id': 
'req-c1573c7a-4809-4abd-8bc7-ec6f6201fc41'}
  Body: {
  "port": {
  "allowed_address_pairs": [],
  "extra_dhcp_opts": [],
  "updated_at": "2018-04-09T02:29:15Z",
  "dns_domain": "d.org.",
  "device_owner": "",
  "revision_number": 6,
  "port_security_enabled": true,
  "fixed_ips": [{
  "subnet_id": "70569352-f1d8-4a7e-b574-be59690ba82d",
  "ip_address": "10.100.0.6"
  }
  ],
  "id": "60d595d5-d196-4044-a791-5f402187f8a4",
  "security_groups": ["e5a97e2b-000f-45d5-ba67-7e018aef1de7"],
  "qos_policy_id": null,
  "mac_address": "fa:16:3e:bd:00:35",
  "project_id": "0f45914cbe9645c4a3ae8e5be462275b",
  "status": "DOWN",
  "description": "",
  "tags": [],
  "dns_assignment": [{
  "hostname": "d2",
  "ip_address": "10.100.0.6",
  "fqdn": "d2.openstackgate.local."
  }
  ],
  "device_id": "",
  "name": "",
  "admin_state_up": true,
  "network_id": "e8480305-7af5-4a90-809e-7d4c3330db91",
  "dns_name": "d2",
  "created_at": "2018-04-09T02:29:14Z",
  "binding:vnic_type": "normal",
  "tenant_id": "0f45914cbe9645c4a3ae8e5be462275b"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762369/+subscriptions



[Yahoo-eng-team] [Bug 1778207] Re: fwaas v2 add port into firewall group failed

2019-08-26 Thread Dr. Jens Harbott
*** This bug is a duplicate of bug 1762454 ***
https://bugs.launchpad.net/bugs/1762454

** This bug has been marked a duplicate of bug 1762454
   FWaaS: Invalid port error on associating ports (distributed router) to 
firewall group

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1778207

Title:
  fwaas v2 add port into firewall group failed

Status in neutron:
  Confirmed

Bug description:
  Hey, stackers. There are some errors when I added ports of a DVR/HA
  router to an FWaaS v2 firewall group.

  The error msg was that:

  Error: Failed to update firewallgroup 3c8dbcab-
  0cfb-4189-bd60-dc4b40a346a4: Port 002c3fff-5b00-42b5-83ab-6413afc083c4
  of firewall group is invalid. Neutron server returns request_ids:
  ['req-da8b946c-aa69-456f-b1d3-d956eff49110']

  My router HA interface:

  Device Owner
  network:router_ha_interface
  Device ID
  a804ad96-42c4-437b-a945-9ecc4cdef34c

  And I traced the related source code about how to validate the port for 
firewall group
  
https://github.com/openstack/neutron-fwaas/blob/9346ced4b0f90e1c7acf855ac9db76ed960510e6/neutron_fwaas/services/firewall/fwaas_plugin_v2.py#L147

  I found that there is no condition to determine whether the router is
  in DVR/HA mode. Therefore we may have to update this code snippet
  https://github.com/openstack/neutron-fwaas/blob/9346ced4b0f90e1c7acf855ac9db76ed960510e6/neutron_fwaas/services/firewall/fwaas_plugin_v2.py#L147
  to support routers in DVR/HA mode.
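
  A hypothetical sketch of the kind of check that would be needed (the
  device-owner strings follow neutron's constants; the actual
  fwaas_plugin_v2 validation is more involved, and whether the HA
  keepalived port itself should be accepted is part of the open question):

```python
# Device owners a firewall group could accept; the distributed and HA
# variants are what the validation linked above does not account for.
ROUTER_INTERFACE_OWNERS = {
    "network:router_interface",
    "network:router_interface_distributed",    # DVR router port
    "network:ha_router_replicated_interface",  # HA router port
}

def is_allowed_fwg_port(port):
    return port["device_owner"] in ROUTER_INTERFACE_OWNERS

print(is_allowed_fwg_port(
    {"device_owner": "network:router_interface_distributed"}))  # -> True
```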

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1778207/+subscriptions



[Yahoo-eng-team] [Bug 1834849] Re: Wrong endpoints config with configure_auth_token_middleware

2019-07-03 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1834849

Title:
  Wrong endpoints config with configure_auth_token_middleware

Status in devstack:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  It looks like after https://review.opendev.org/#/c/628651/ this
  deprecated function configure_auth_token_middleware is doing something
  wrong. And that cause problems e.g. in neutron-tempest-plugin-
  designate-scenario job, like in
  http://logs.openstack.org/16/661916/7/check/neutron-tempest-plugin-
  designate-
  
scenario/eeb242b/controller/logs/screen-q-svc.txt.gz?level=ERROR#_Jul_01_11_07_05_759996

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1834849/+subscriptions



[Yahoo-eng-team] [Bug 1822453] Re: neutron-tempest-plugin-designate-scenario is broken

2019-04-01 Thread Dr. Jens Harbott
Actually this is a regression in devstack introduced by
https://review.openstack.org/636078 , but your workaround seems sensible
anyway. But since this will affect other jobs as well, we should fix it
in devstack, too.

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => Triaged

** Changed in: devstack
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822453

Title:
  neutron-tempest-plugin-designate-scenario is broken

Status in devstack:
  Triaged
Status in neutron:
  In Progress

Bug description:
  It looks like for the past few days the
  neutron-tempest-plugin-designate-scenario job has been failing every
  time in the devstack deployment stage.

  Error looks like:

  2019-03-30 19:20:14.016 | + ./stack.sh:main:767  :   
PYPI_ALTERNATIVE_URL=
  2019-03-30 19:20:14.019 | + ./stack.sh:main:767  :   
/opt/stack/devstack/tools/install_pip.sh
  2019-03-30 19:20:15.172 | /opt/stack/devstack/.localrc.auto: line 78: 
/opt/stack/neutron-tempest-plugin: Is a directory
  2019-03-30 19:20:15.176 | ++ ./stack.sh:main:767  :   
err_trap
  2019-03-30 19:20:15.178 | ++ ./stack.sh:err_trap:563  :   
local r=126
  2019-03-30 19:20:15.182 | stack.sh failed: full log in 
/opt/stack/logs/devstacklog.txt.2019-03-30-191759
  2019-03-30 19:20:15.183 | Error on exit

  Example: http://logs.openstack.org/45/638645/11/check/neutron-tempest-
  plugin-designate-scenario/e76e442/controller/logs/devstacklog.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1822453/+subscriptions



[Yahoo-eng-team] [Bug 1539640] Re: Make dhcp agent not recycle ports in binding_failed state

2019-02-07 Thread Dr. Jens Harbott
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539640

Title:
  Make dhcp agent not recycle ports in binding_failed state

Status in neutron:
  Fix Released

Bug description:
  We just happened to have broken dhcp because the dhcp ports were in
  binding_failed state.

  We tried to disable/enable dhcp again, but apparently (not 100% sure,
  unfortunately), disabling didn't remove the ports (assuming that it
  was because they were in binding_failed state) and re-enabling just
  ended up with ports in binding_failed states again (likely the same
  ports because dhcp agent tries to recycle ports whenever possible).
  Unfortunately, I don't have the logs / traces of this.

  So disabling / re-enabling didn't fix anything, although, I believe,
  users would try that first to fix the situation. If we could make that
  just work, it would be easier for people to get out of this "no dhcp"
  problem when ports are failing to bind.

  In the end, we had to remove the ports to have new ports created when
  dhcp was disable / re-enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1539640/+subscriptions



[Yahoo-eng-team] [Bug 1691047] Re: dhcp agent - multiple interfaces, last iface coming up overwrite resolv.conf

2019-02-07 Thread Dr. Jens Harbott
*** This bug is a duplicate of bug 1311040 ***
https://bugs.launchpad.net/bugs/1311040

** This bug has been marked a duplicate of bug 1311040
   Subnet option to disable dns server

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691047

Title:
  dhcp agent - multiple interfaces, last iface coming up overwrite
  resolv.conf

Status in neutron:
  New

Bug description:
  The resolv.conf gets populated with whatever the last interface that
  came up over DHCP provided.

  Even if the 2nd network/subnet in neutron doesn’t define DNS, it still
  overwrites resolv.conf.

  By default the dnsmasq agent will use itself and its peers as DNS
servers if no dns_servers are provided for the neutron subnet. Ref:

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L877:L887

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L970

  This is not always desired. Is there a way to disable this behaviour,
  and simply not offer any dns servers if there are none specified in
  the neutron subnet?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691047/+subscriptions



[Yahoo-eng-team] [Bug 1810309] [NEW] Edit project validates quota against resources used in current project instead of edited project

2019-01-02 Thread Dr. Jens Harbott
Public bug reported:

This is a followup to https://bugs.launchpad.net/horizon/+bug/1713724,
which fixed the issue for Nova resources, but there is still a similar
issue for volumes.

Steps to reproduce (via Horizon, of course):

1. Create new project, new user with admin credentials, set new project as 
primary for new user:
https://pasteboard.co/HTvmbcG.jpg

2. Login as new user. Populate new (primary) project with a few instances and 
volumes:
https://pasteboard.co/HTvnD0i.jpg

3. Create test project with default quotas.

4. Try to modify test project quotas, setting the 'volumes' and 'gigabytes' 
values lower than your primary project's current usage:
https://pasteboard.co/HTvqFR5.jpg

5. Login as other user with admin credentials, but with blank
primary_project field. Try to modify test project quotas the way above.
It will be OK.
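
The underlying logic error can be sketched as follows (all names and data
are hypothetical; Horizon's real quota code differs): validation must
compare the new limit against the usage of the *edited* project, not the
admin's primary project.

```python
# Hypothetical per-project volume usage matching the steps above: the
# admin's primary project is populated, the test project is empty.
usages = {"primary-project": {"volumes": 6}, "test-project": {"volumes": 0}}

def validate_volume_quota(edited_project, new_limit):
    # The buggy variant used the requesting user's current project here.
    used = usages[edited_project]["volumes"]  # correct: the edited project
    return new_limit >= used

# Lowering the empty test project's quota to 2 should succeed even though
# the admin's own project already uses 6 volumes.
print(validate_volume_quota("test-project", 2))  # -> True
```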

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1810309

Title:
  Edit project validates quota against resources used in current project
  instead of edited project

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is a followup to https://bugs.launchpad.net/horizon/+bug/1713724,
  which fixed the issue for Nova resources, but there is still a
  similar issue for volumes.

  Steps to reproduce (via Horizon, of course):

  1. Create new project, new user with admin credentials, set new project as 
primary for new user:
  https://pasteboard.co/HTvmbcG.jpg

  2. Login as new user. Populate new (primary) project with a few instances and 
volumes:
  https://pasteboard.co/HTvnD0i.jpg

  3. Create test project with default quotas.

  4. Try to modify test project quotas, setting the 'volumes' and 'gigabytes' 
values lower than your primary project's current usage:
  https://pasteboard.co/HTvqFR5.jpg

  5. Login as other user with admin credentials, but with blank
  primary_project field. Try to modify test project quotas the way
  above. It will be OK.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1810309/+subscriptions



[Yahoo-eng-team] [Bug 1803717] [NEW] Instance snapshot fails with rbd backend

2018-11-16 Thread Dr. Jens Harbott
Public bug reported:

http://logs.openstack.org/85/617985/1/check/devstack-plugin-ceph-
tempest/58fe872/controller/logs/screen-n-cpu.txt.gz#_Nov_16_07_59_55_423217

Nov 16 08:07:14.891163 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
DEBUG nova.virt.libvirt.storage.rbd_utils [None 
req-3005471d-96d3-4fdd-a042-0b9e6025ccf4 
tempest-ServerActionsTestJSON-406716108 
tempest-ServerActionsTestJSON-406716108] creating snapshot(snap) on rbd 
image(0ef68017-c94d-43b4-8bb9-78f4d77cf928) {{(pid=3629) create_snap 
/opt/stack/nova/nova/virt/libvirt/storage/rbd_utils.py:383}}
Nov 16 08:07:16.213304 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
DEBUG oslo_service.periodic_task [None req-898d2dca-37a7-403f-b578-5ca2ae90e329 
None None] Running periodic task 
ComputeManager._cleanup_expired_console_auth_tokens {{(pid=3629) 
run_periodic_tasks 
/usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:219}}
Nov 16 08:07:16.322727 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver [None req-3005471d-96d3-4fdd-a042-0b9e6025ccf4 
tempest-ServerActionsTestJSON-406716108 
tempest-ServerActionsTestJSON-406716108] Failed to snapshot image: TypeError: 
add_location() takes exactly 4 arguments (3 given)
Nov 16 08:07:16.322893 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 16 08:07:16.323039 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1908, in snapshot
Nov 16 08:07:16.323192 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver purge_props=False)
Nov 16 08:07:16.323326 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/api.py", line 
142, in update
Nov 16 08:07:16.323460 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver purge_props=purge_props)
Nov 16 08:07:16.323604 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", 
line 588, in update
Nov 16 08:07:16.323801 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver _reraise_translated_image_exception(image_id)
Nov 16 08:07:16.324000 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", 
line 908, in _reraise_translated_image_exception
Nov 16 08:07:16.324179 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver six.reraise(type(new_exc), new_exc, 
exc_trace)
Nov 16 08:07:16.324362 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", 
line 586, in update
Nov 16 08:07:16.324511 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver image = self._update_v2(context, 
sent_service_image_meta, data)
Nov 16 08:07:16.324655 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", 
line 600, in _update_v2
Nov 16 08:07:16.324802 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver image = self._add_location(context, 
image_id, location)
Nov 16 08:07:16.324948 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", 
line 485, in _add_location
Nov 16 08:07:16.325110 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver context, 2, 'add_location', args=(image_id, 
location))
Nov 16 08:07:16.325263 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", 
line 193, in call
Nov 16 08:07:16.325421 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver result = getattr(controller, method)(*args, 
**kwargs)
Nov 16 08:07:16.325557 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver TypeError: add_location() takes exactly 4 
arguments (3 given)
Nov 16 08:07:16.325747 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
ERROR nova.virt.libvirt.driver 
Nov 16 08:07:16.432786 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: 
DEBUG nova.virt.libvirt.storage.rbd_utils [None 
req-3005471d-96d3-4fdd-a042-0b9e6025ccf4 
tempest-ServerActionsTestJSON-406716108 
tempest-ServerActionsTestJSON-406716108] removing snapshot(snap) on rbd 
image(0ef68017-c94d-43b4-8bb9-78f4d77cf928) {{(pid=3629) remove_snap 
/opt/stack/nova/nova/virt/libvirt/storage/rbd_utils.py:410}}

This error may have been introduced up to three weeks ago without
getting noticed because the ceph job has been broken.
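
The failure mode itself is easy to reproduce in isolation (hypothetical
signatures; the real mismatch is between nova's glance wrapper and the
glanceclient method it calls): Python 2 counts self in its arity message,
so calling a three-parameter method with two arguments reports "takes
exactly 4 arguments (3 given)".

```python
# Hypothetical stand-in for the image-service controller: add_location()
# takes three parameters besides self, but the caller passes only two.
class ImageController(object):
    def add_location(self, image_id, location, metadata):
        return (image_id, location, metadata)

controller = ImageController()
try:
    controller.add_location("img-1", {"url": "rbd://pool/img-1/snap"})
except TypeError as exc:
    # On Python 2 this is "add_location() takes exactly 4 arguments (3 given)".
    print("Failed to snapshot image: %s" % exc)
```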

** Affects: nova
 Importance: Undecided
 Status: 

[Yahoo-eng-team] [Bug 1802901] [NEW] Federation functional job failing on Bionic

2018-11-12 Thread Dr. Jens Harbott
Public bug reported:

When doing a test to migrate the functional tests to Bionic, this error
occurred within keystone-dsvm-functional-federation:

2018-10-18 15:43:32.336409 | controller | ++ functions-common:apt_get:1083  
  :   sudo DEBIAN_FRONTEND=noninteractive http_proxy= https_proxy= 
no_proxy= apt-get --option Dpkg::Options::=--force-confold --assume-yes install 
libapache2-mod-shib2
2018-10-18 15:43:32.376149 | controller | Reading package lists...
2018-10-18 15:43:32.579166 | controller | Building dependency tree...
2018-10-18 15:43:32.579865 | controller | Reading state information...
2018-10-18 15:43:32.664285 | controller | Some packages could not be installed. 
This may mean that you have
2018-10-18 15:43:32.664482 | controller | requested an impossible situation or 
if you are using the unstable
2018-10-18 15:43:32.664634 | controller | distribution that some required 
packages have not yet been created
2018-10-18 15:43:32.664719 | controller | or been moved out of Incoming.
2018-10-18 15:43:32.664856 | controller | The following information may help to 
resolve the situation:
2018-10-18 15:43:32.664886 | controller |
2018-10-18 15:43:32.665000 | controller | The following packages have unmet 
dependencies:
2018-10-18 15:43:32.738989 | controller |  libapache2-mod-shib2 : Depends: 
libshibsp-plugins (= 2.6.1+dfsg1-2) but it is not going to be installed
2018-10-18 15:43:32.739266 | controller | Depends: 
shibboleth-sp2-utils (>= 2.6) but it is not going to be installed
2018-10-18 15:43:32.739445 | controller | Depends: 
libshibsp7 but it is not going to be installed
2018-10-18 15:43:32.739651 | controller | Depends: 
libxmltooling7 (>= 1.6.0-5) but it is not going to be installed
2018-10-18 15:43:32.756003 | controller | E: Unable to correct problems, you 
have held broken packages.

Tracing the sequence of dependencies, it seems that ... isn't
installable on ubuntu-server Bionic because of this reference to an
outdated library:

$ sudo apt install libxmltooling7
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libxmltooling7 : Depends: libcurl3 (>= 7.16.2) but it is not going to be 
installed
E: Unable to correct problems, you have held broken packages.
$ sudo apt install libcurl3
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'sudo apt autoremove' to remove it.
The following packages will be REMOVED:
  curl libcurl4 pollinate ubuntu-server
The following NEW packages will be installed:
  libcurl3
0 upgraded, 1 newly installed, 4 to remove and 15 not upgraded.
Need to get 214 kB of archives.
After this operation, 495 kB disk space will be freed.
Do you want to continue? [Y/n] n

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1802901

Title:
  Federation functional job failing on Bionic

Status in OpenStack Identity (keystone):
  New

Bug description:
  When doing a test to migrate the functional tests to Bionic, this
error occurred within keystone-dsvm-functional-federation:

  2018-10-18 15:43:32.336409 | controller | ++ functions-common:apt_get:1083
:   sudo DEBIAN_FRONTEND=noninteractive http_proxy= https_proxy= 
no_proxy= apt-get --option Dpkg::Options::=--force-confold --assume-yes install 
libapache2-mod-shib2
  2018-10-18 15:43:32.376149 | controller | Reading package lists...
  2018-10-18 15:43:32.579166 | controller | Building dependency tree...
  2018-10-18 15:43:32.579865 | controller | Reading state information...
  2018-10-18 15:43:32.664285 | controller | Some packages could not be 
installed. This may mean that you have
  2018-10-18 15:43:32.664482 | controller | requested an impossible situation 
or if you are using the unstable
  2018-10-18 15:43:32.664634 | controller | distribution that some required 
packages have not yet been created
  2018-10-18 15:43:32.664719 | controller | or been moved out of Incoming.
  2018-10-18 15:43:32.664856 | controller | The following information may help 
to resolve the situation:
  2018-10-18 15:43:32.664886 | controller |
  2018-10-18 15:43:32.665000 | controller | The following packages have unmet 
dependencies:
  2018-10-18 15:43:32.738989 | controller |  libapache2-mod-shib2 : Depends: 
libshibsp-plugins (= 2.6.1+dfsg1-2) but it is not 

[Yahoo-eng-team] [Bug 1653587] Re: Images tab not showing the available images

2018-07-02 Thread Dr. Jens Harbott
If the issue still exists with current devstack, please describe the
steps needed to reproduce.

** Changed in: devstack
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1653587

Title:
  Images tab not showing the available images

Status in devstack:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  After installing devstack, clicking on the Images tab of the dashboard
  shows a blank page. With the CLI it works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1653587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779247] Re: directly in the external DNS service unsuccessful

2018-06-29 Thread Dr. Jens Harbott
DNS integration is a feature of Neutron, not Designate.

Also, your network has "router:external = True", which violates one of
the conditions listed in https://docs.openstack.org/neutron/queens/admin
/config-dns-int-ext-serv.html#configuration-of-the-externally-
accessible-network-for-use-case-3 so this is working as designed and
documented in that it does not create any records.

** Project changed: designate => neutron

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779247

Title:
  directly in the external DNS service unsuccessful

Status in neutron:
  Invalid

Bug description:
  openstack queens:

  According to the documentation, use case 1 can generate DNS records:
  
https://docs.openstack.org/neutron/queens/admin/config-dns-int-ext-serv.html#config-dns-int-ext-serv
  Use case 1: Floating IPs are published with associated port DNS attributes

  
  According to use case 3 it can't succeed:
  
https://docs.openstack.org/neutron/queens/admin/config-dns-int-ext-serv.html#config-dns-int-ext-serv
  Use case 3: Ports are published directly in the external DNS service

  Is there any way to debug this? I checked the following logs and found
  no problems:
   /var/log/neutron/server.log
   /var/log/designate/worker.log

  
  my steps:

  [root@controller ~]# neutron net-update 36d56132-49ab-4bac-985e-92f0bc1b47cf 
--dns_domain openstack.com.
  [root@controller ~]# neutron net-show 36d56132-49ab-4bac-985e-92f0bc1b47cf
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   |                                      |
  | availability_zones        | nova                                 |
  | created_at                | 2018-06-29T03:35:16Z                 |
  | description               |                                      |
  | dns_domain                | openstack.com.                       |
  | id                        | 36d56132-49ab-4bac-985e-92f0bc1b47cf |
  | ipv4_address_scope        |                                      |
  | ipv6_address_scope        |                                      |
  | is_default                | False                                |
  | mtu                       | 1500                                 |
  | name                      | p2                                   |
  | port_security_enabled     | True                                 |
  | project_id                | b2760ba26e5645bf9856669d560d91c7     |
  | provider:network_type     | flat                                 |
  | provider:physical_network | provider                             |
  | provider:segmentation_id  |                                      |
  | revision_number           | 7                                    |
  | router:external           | True                                 |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   | 0e9d1293-2133-4b16-8614-6c167d80df21 |
  | tags                      |                                      |
  | tenant_id                 | b2760ba26e5645bf9856669d560d91c7     |
  | updated_at                | 2018-06-29T03:36:01Z                 |
  +---------------------------+--------------------------------------+


  
  [root@controller ~]#neutron port-create 36d56132-49ab-4bac-985e-92f0bc1b47cf 
--dns_name my-vm
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Created a new port:
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | binding:host_id   | 
|
  | binding:profile   | {}  
|
  | binding:vif_details   | {}  
|
  | binding:vif_type  | unbound 

[Yahoo-eng-team] [Bug 1765122] Re: qemu-img execute not mocked in unit tests

2018-06-26 Thread Dr. Jens Harbott
stable/pike (16.1.4) is also affected by this.

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1765122

Title:
  qemu-img execute not mocked in unit tests

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  nova.tests.unit.virt.test_images.QemuTestCase.test_qemu_info_with_errors
  is failing in both py27 and py36 tox environments due to a missing
  mock.
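  A mock along these lines would avoid the real qemu-img invocation. This
  is only a sketch: image_info is a hypothetical stand-in for code like
  nova.virt.images.qemu_img_info, which shells out through an execute()
  helper (nova.utils.execute in the real code).

  ```python
  from unittest import mock

  # Hypothetical stand-in for the code under test: it shells out through
  # an injected execute() the same way qemu_img_info calls utils.execute.
  def image_info(path, execute):
      out, _err = execute("env", "LC_ALL=C", "LANG=C", "qemu-img", "info", path)
      return out

  # In a unit test, patching the executor means no real qemu-img binary is
  # needed, so the test also passes on hosts without qemu installed.
  fake_execute = mock.Mock(return_value=("image: /fake/path", ""))
  info = image_info("/fake/path", fake_execute)
  print(info)
  ```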

  This system does not have qemu(-img) installed in it and running unit
  tests returns the following:

  ==
  Failed 1 tests - output below:
  ==

  nova.tests.unit.virt.test_images.QemuTestCase.test_qemu_info_with_errors
  

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/nova/virt/images.py", line 
73, in qemu_img_info'
  b'out, err = utils.execute(*cmd, prlimit=QEMU_IMG_LIMITS)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/nova/utils.py", line 231, 
in execute'
  b'return processutils.execute(*cmd, **kwargs)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/oslo_concurrency/processutils.py",
 line 424, in execute'
  b'cmd=sanitized_cmd)'
  b'oslo_concurrency.processutils.ProcessExecutionError: Unexpected error 
while running command.'
  b'Command: 
/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/bin/python -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C 
qemu-img info /fake/path'
  b'Exit code: 127'
  b"Stdout: ''"
  b"Stderr: '/usr/bin/env: \xe2\x80\x98qemu-img\xe2\x80\x99: No such file 
or directory\\n'"
  b''
  b'During handling of the above exception, another exception occurred:'
  b''
  b'Traceback (most recent call last):'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1305, in patched'
  b'return func(*args, **keywargs)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/nova/tests/unit/virt/test_images.py",
 line 37, in test_qemu_info_with_errors'
  b"'/fake/path')"
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 485, in assertRaises'
  b'self.assertThat(our_callable, matcher)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 496, in assertThat'
  b'mismatch_error = self._matchHelper(matchee, matcher, message, 
verbose)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 547, in _matchHelper'
  b'mismatch = matcher.match(matchee)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/matchers/_exception.py",
 line 108, in match'
  b'mismatch = self.exception_matcher.match(exc_info)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match'
  b'mismatch = matcher.match(matchee)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 475, in match'
  b'reraise(*matchee)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/_compat3x.py",
 line 16, in reraise'
  b'raise exc_obj.with_traceback(exc_tb)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/matchers/_exception.py",
 line 101, in match'
  b'result = matchee()'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 1049, in __call__'
  b'return self._callable_object(*self._args, **self._kwargs)'
  b'  File 
"/home/jaypipes/src/git.openstack.org/openstack/nova/nova/virt/images.py", line 
87, in qemu_img_info'
  b'raise exception.InvalidDiskInfo(reason=msg)'
  b'nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img 
failed to execute on /fake/path : Unexpected error while running command.'
  b'Command: 
/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/bin/python -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C 
qemu-img info /fake/path'
  b'Exit 

[Yahoo-eng-team] [Bug 1611237] Re: Restart neutron-openvswitch-agent get ERROR "Switch connection timeout"

2018-06-05 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611237

Title:
  Restart neutron-openvswitch-agent get ERROR "Switch connection
  timeout"

Status in devstack:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  Environment: devstack  master, ubuntu 14.04

  After ./stack.sh finished, kill the neutron-openvswitch-agent process
  and then start it by /usr/bin/python /usr/local/bin/neutron-
  openvswitch-agent --config-file /etc/neutron/neutron.conf --config-
  file /etc/neutron/plugins/ml2/ml2_conf.ini

  The log shows :
  2016-08-08 11:02:06.346 ERROR ryu.lib.hub [-] hub: uncaught exception: 
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in 
_launch
  return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", 
line 97, in __call__
  self.ofp_ssl_listen_port)
File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", 
line 120, in server_loop
  datapath_connection_factory)
File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 117, in 
__init__
  self.server = eventlet.listen(listen_info)
File "/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py", line 
43, in listen
  sock.bind(addr)
File "/usr/lib/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 98] Address already in use

  and
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[-] Switch connection timeout

  In Kilo I could start the ovs-agent this way correctly; I do not know
  whether this is the right way to start the ovs-agent on master.
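  The "Address already in use" failure above is easy to reproduce in
  isolation; a minimal sketch of the same failure mode (the restarted
  agent's ryu controller tries to listen on a port the old process still
  holds):

  ```python
  import errno
  import socket

  # First listener stands in for the socket the old process still holds.
  # Port 0 lets the OS pick a free port, so the sketch is self-contained.
  s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  s1.bind(("127.0.0.1", 0))
  s1.listen(1)
  port = s1.getsockname()[1]

  # Second bind to the same address/port fails with EADDRINUSE (errno 98
  # on Linux, matching the traceback in the report).
  s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  caught = None
  try:
      s2.bind(("127.0.0.1", port))
  except OSError as exc:
      caught = exc
  finally:
      s2.close()
      s1.close()

  assert caught is not None
  ```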

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1611237/+subscriptions



[Yahoo-eng-team] [Bug 1445199] Re: Nova user should not have admin role

2018-06-05 Thread Dr. Jens Harbott
Devstack is meant to provide a deployment suitable for development, not
a hardened setup that could be used in production. While it could adopt
this if Nova supported it, I'll mark the bug as invalid for devstack.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445199

Title:
  Nova user should not have admin role

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  
  Most of the service users are granted the 'service' role on the 'service' 
project, except the 'nova' user which is given 'admin'. The 'nova' user should 
also be given only the 'service' role on the 'service' project.

  This is for security hardening.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1445199/+subscriptions



[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2018-06-05 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  Invalid
Status in networking-l2gw:
  Invalid
Status in neutron:
  In Progress

Bug description:
  networking-l2gw devstack plugin stores its service_providers config in 
/etc/l2gw_plugin.ini
  and add --config-file for it.
  as a result, neutron-server is invoked like the following.

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  it breaks *aas service providers because NeutronModule.service_providers finds
  l2gw providers in cfg.CONF.service_providers.service_provider and thus 
doesn't look
  at *aas service_providers config, which is in /etc/neutron/neutron_*aas.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions



[Yahoo-eng-team] [Bug 1660650] Re: Can't log in into Horizon when deploying Manila+Sahara

2018-06-05 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1660650

Title:
  Can't log in into Horizon when deploying Manila+Sahara

Status in devstack:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Manila:
  New
Status in Sahara:
  Invalid

Bug description:
  When installing the latest devstack trunk along with the Manila and
  Sahara/Sahara Dashboard plugins, I can't log in using Horizon, even
  though the log says I logged in successfully.

  Horizon gets stuck on the greeter, asking again for the username and
  password.

  local.conf extract:

  
# Enable Manila 


enable_plugin manila https://github.com/openstack/manila





  ~ #Enable heat plugin 


  ~_enable_plugin heat https://git.openstack.org/openstack/heat 





# Enable Swift  


enable_service s-proxy s-object s-container s-account   


SWIFT_REPLICAS=1


SWIFT_HASH=$ADMIN_PASSWORD  





# Enable Sahara 


enable_plugin sahara git://git.openstack.org/openstack/sahara   





# Enable sahara-dashboard   


enable_plugin sahara-dashboard 
git://git.openstack.org/openstack/sahara-dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1660650/+subscriptions



[Yahoo-eng-team] [Bug 1700496] Re: Notifications are emitted per-cell instead of globally

2018-06-05 Thread Dr. Jens Harbott
[Closing for devstack because there has been no activity for 60 days.]

** Changed in: devstack
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700496

Title:
  Notifications are emitted per-cell instead of globally

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With https://review.openstack.org/#/c/436094/ we began using different
  transport URLs for Nova internal services. That said, as notifications
  are emitted on a different topic but use the same transport URL as the
  component, our consumers miss notifications because they only subscribe
  to the original MQ.

  While Nova can use multiple MQs, we should still offer the possibility
  of global notifications for Nova, so that a consumer wouldn't have to
  modify their configs every time a new cell is added.

  That can be an oslo.messaging config option [1], but we certainly need
  to be more careful in Devstack, or dependent jobs (like those for
  Vitrage) could fail.

  [1]
  
https://docs.openstack.org/developer/oslo.messaging/opts.html#oslo_messaging_notifications.transport_url

  devstack version: pike
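  A single global notification transport can be pinned with the
  oslo.messaging option referenced in [1]; a hypothetical fragment (the
  broker URL and credentials here are assumptions, not taken from the
  report):

  ```ini
  [oslo_messaging_notifications]
  # Hypothetical: send notifications to one global broker even when each
  # cell uses its own internal transport_url.
  transport_url = rabbit://stackrabbit:secret@controller:5672/
  ```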

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1700496/+subscriptions



[Yahoo-eng-team] [Bug 1741079] Re: Deleting heat stack doesnt delete dns records

2018-04-23 Thread Dr. Jens Harbott
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1741079

Title:
  Deleting heat stack doesnt delete dns records

Status in neutron:
  New

Bug description:
  Environment: Ubuntu 16.04.3 LTS, Ocata

  Summary:
  For each new stack created, records are automatically created for each
  instance; on the other hand, deleting the stack doesn't trigger the
  deletion of those records.

  
  We have configured internal DNS integration using designate; creating
  an instance/port triggers record creation, and deleting an
  instance/port triggers record deletion.

  However, while creating a heat stack, record creation works great for
  each instance that is part of the stack, but deleting the stack does
  not trigger record deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1741079/+subscriptions



[Yahoo-eng-team] [Bug 1726462] Re: legacy-tempest-dsvm-neutron-full times out waiting for vol available

2018-04-23 Thread Dr. Jens Harbott
Currently 38 hits in the last 7 days, but mostly for other jobs, so I
don't think that this is related to Neutron, but rather to Tempest
and/or Cinder.

** Also affects: tempest
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Summary changed:

- legacy-tempest-dsvm-neutron-full times out waiting for vol available
+ Various gate jobs time out waiting for vol available

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726462

Title:
  Various gate jobs time out waiting for vol available

Status in Cinder:
  New
Status in neutron:
  Confirmed
Status in tempest:
  New

Bug description:
  I've started noticing the following error in our legacy-tempest-dsvm-
  neutron-full jobs:

  --
  volume 3c34fd82-8d3d-4d87-8290-f3c119587b94 failed to reach ['available'] 
status (current detaching) within the required time (196 s).
  --

  Based on [2] this does not appear to be neutron specific and has
  started occurring as of 10/17/2017. However additional debug is needed
  to determine where the issue lies and where/what needs to be
  addressed.

  
  [1] 
http://logs.openstack.org/00/487600/7/gate/legacy-tempest-dsvm-neutron-full/90a4660/job-output.txt.gz#_2017-10-22_22_25_11_236303
  [2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22failed%20to%20reach%20%5B'available'%5D%20status%20(current%20detaching)%20within%20the%20required%20time%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1726462/+subscriptions



[Yahoo-eng-team] [Bug 1667579] Re: swift-proxy-server fails to start with Python 3.5

2018-04-06 Thread Dr. Jens Harbott
Seems that this is an issue that only swift can solve. Hopefully this
gets done well before py2 is EOL so there can be a reasonable transition
period. IMO it would have to be at least a year, so swift should be
making it a critical effort to support Python 3 by the end of 2018. The
clock is ticking: https://pythonclock.org/

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667579

Title:
  swift-proxy-server fails to start with Python 3.5

Status in devstack:
  Invalid
Status in neutron:
  Invalid
Status in OpenStack Object Storage (swift):
  Confirmed

Bug description:
  Traceback (most recent call last):
File "/usr/local/bin/swift-proxy-server", line 6, in 
  exec(compile(open(__file__).read(), __file__, 'exec'))
File "/opt/stack/new/swift/bin/swift-proxy-server", line 23, in 
  sys.exit(run_wsgi(conf_file, 'proxy-server', **options))
File "/opt/stack/new/swift/swift/common/wsgi.py", line 905, in run_wsgi
  loadapp(conf_path, global_conf=global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 389, in loadapp
  ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 373, in loadcontext
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 296, in loadcontext
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 320, in _loadconfig
  return loader.get_context(object_type, name, global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 450, in get_context
  global_additions=global_additions)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 562, in _pipeline_app_context
  for name in pipeline[:-1]]
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 562, in 
  for name in pipeline[:-1]]
File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 454, in get_context
  section)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 476, in _context_from_use
  object_type, name=use, global_conf=global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 406, in get_context
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 296, in loadcontext
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 328, in _loadegg
  return loader.get_context(object_type, name, global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 620, in get_context
  object_type, name=name)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 646, in find_egg_entry_point
  possible.append((entry.load(), protocol, entry.name))
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", 
line 2302, in load
  return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", 
line 2308, in resolve
  module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/opt/stack/new/swift/swift/common/middleware/slo.py", line 799
  def is_small_segment((seg_dict, start_byte, end_byte)):
   ^
  SyntaxError: invalid syntax

  http://logs.openstack.org/14/437514/3/check/gate-rally-dsvm-py35
  -neutron-neutron-ubuntu-xenial/3221186/logs/screen-s-proxy.txt.gz

  This currently blocks neutron gate where we have a voting py3 tempest
  job. The reason why swift is deployed with Python3.5 there is because
  we special case in devstack to deploy the service with Python3:

  http://git.openstack.org/cgit/openstack-
  dev/devstack/tree/inc/python#n167

  The short term solution is to disable the special casing. Swift should
  then work on fixing the code, and gate on Python3 (preferably the same
  job as neutron has).
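  The SyntaxError above comes from Python 2 tuple parameters, which were
  removed in Python 3 by PEP 3113. A sketch of a py3-compatible rewrite
  (the function body here is purely illustrative, not swift's actual
  logic):

  ```python
  # Python 2 only -- SyntaxError on Python 3 (tuple parameters, PEP 3113):
  #     def is_small_segment((seg_dict, start_byte, end_byte)): ...

  # Python 3-compatible form: take one argument and unpack in the body.
  def is_small_segment(seg):
      seg_dict, start_byte, end_byte = seg
      # Illustrative predicate only; swift's real check differs.
      return (end_byte - start_byte + 1) < seg_dict.get("min_segment_size", 1)

  print(is_small_segment(({"min_segment_size": 10}, 0, 5)))  # True: 6 < 10
  ```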

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1667579/+subscriptions



[Yahoo-eng-team] [Bug 1580728] Re: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 386: ordinal not in range(128) in nova.virt.libvirt.vif:unplug with unicode instance.display_nam

2018-04-05 Thread Dr. Jens Harbott
Change for devstack has been abandoned, assuming this has been fixed in
other projects.

** Changed in: devstack
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580728

Title:
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
  386: ordinal not in range(128) in nova.virt.libvirt.vif:unplug with
  unicode instance.display_name

Status in devstack:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in oslo.log:
  Fix Released
Status in oslo.log ocata series:
  Incomplete
Status in oslo.versionedobjects:
  Invalid

Bug description:
  I saw this in the n-cpu logs for a xenproject CI run:

  http://logs.openstack.xenproject.org/00/315100/1/check/dsvm-tempest-
  xen/9649dc5/logs/screen-n-cpu.txt.gz

  2016-05-11 16:19:09.457 27252 INFO nova.virt.libvirt.driver [-] [instance: 
76c4ad96-87dd-4300-acdc-cbe65d3aa0a6] Instance destroyed successfully.
  Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
  msg = self.format(record)
File "/usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py", line 
73, in format
  return logging.StreamHandler.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
  return fmt.format(record)
File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 
265, in format
  return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 386: 
ordinal not in range(128)
  Logged from file vif.py, line 966

  That would be logging the vif object in unplug:

  
https://github.com/openstack/nova/blob/15abb39ef20ae76d602d50e67e43c3500a00cd3e/nova/virt/libvirt/vif.py#L966
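  The error itself is easy to reproduce: a non-ASCII display name encoded
  as UTF-8 yields bytes such as 0xc3 that Python 2's implicit ASCII
  decoding (triggered by msg % self.args in the logging module) cannot
  handle. A minimal sketch of the decode step that fails:

  ```python
  # "café" encodes to UTF-8 bytes containing 0xc3, outside ASCII's
  # 0-127 range -- the same byte named in the traceback above.
  name_bytes = "café".encode("utf-8")   # b'caf\xc3\xa9'

  caught = None
  try:
      # This is what Python 2's implicit str/unicode coercion attempted.
      name_bytes.decode("ascii")
  except UnicodeDecodeError as exc:
      caught = exc

  assert caught is not None
  print(caught)  # 'ascii' codec can't decode byte 0xc3 in position 3 ...
  ```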

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1580728/+subscriptions



[Yahoo-eng-team] [Bug 1736385] Re: placement is not being properly restarted in grenade (pike to master)

2018-03-19 Thread Dr. Jens Harbott
Looks like the fix in devstack has been abandoned and the issue resolved
via a patch in grenade, please update if this is incorrect.

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736385

Title:
  placement is not being properly restarted in grenade (pike to master)

Status in devstack:
  Invalid
Status in grenade:
  Fix Released
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When the placement service is supposed to restart in grenade (pike to
  master) it doesn't actually restart:

  http://logs.openstack.org/93/385693/84/check/legacy-grenade-dsvm-
  neutron-multinode-live-
  migration/9fa93e0/logs/grenade.sh.txt.gz#_2017-12-05_00_08_01_111

  This leads to issues with new microversions not being available:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Unacceptable%20version%20header%3A%201.14%5C%22

  This is a latent bug that was revealed, at least in part, by efried's
  (correct) changes in https://review.openstack.org/#/c/524263/

  It looks like a bad assumption is being made somewhere in the handling
  of the systemd unit files: a 'start' when it is already started is
  success, but does not restart (thus new code is not loaded).

  We can probably fix this by using the 'restart' command instead of
  'start':

   restart PATTERN...
 Restart one or more units specified on the command line. If the 
units are not running yet, they will be started.

  
  Adding grenade and devstack as relate projects as the fix is presumably in 
devstack itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1736385/+subscriptions



[Yahoo-eng-team] [Bug 1744359] Re: Neutron haproxy logs are not being collected

2018-03-19 Thread Dr. Jens Harbott
Seems like this has been an issue in Neutron, please reopen if there is
still some issue with devstack.

** Changed in: devstack
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1744359

Title:
  Neutron haproxy logs are not being collected

Status in devstack:
  Invalid
Status in neutron:
  Fix Released
Status in tripleo:
  Triaged

Bug description:
  In Neutron, we use haproxy to proxy metadata requests from instances to Nova 
Metadata service.
  By default, haproxy logs to /dev/log, but in Ubuntu those messages get
  redirected by rsyslog to /var/log/haproxy.log, which is not being collected.

  ubuntu@devstack:~$ cat /etc/rsyslog.d/49-haproxy.conf 
  # Create an additional socket in haproxy's chroot in order to allow logging 
via
  # /dev/log to chroot'ed HAProxy processes
  $AddUnixListenSocket /var/lib/haproxy/dev/log

  # Send HAProxy messages to a dedicated logfile
  if $programname startswith 'haproxy' then /var/log/haproxy.log
  &~

  
  Another possibility would be to change the haproxy.cfg file to include the 
log-tag option so that haproxy uses a different tag [0] and then it'd be dumped 
into syslog instead but this would break backwards compatibility.

  [0] https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.1-log-tag
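A hedged sketch of that alternative, with an assumed tag name (`neutron-haproxy`) in the global section:

```
global
    log /dev/log local0
    # A tag that no longer starts with 'haproxy' means the rsyslog rule
    # quoted above stops diverting these messages to /var/log/haproxy.log,
    # so they land in syslog instead.
    log-tag neutron-haproxy
```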

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1744359/+subscriptions



[Yahoo-eng-team] [Bug 1752903] [NEW] Floating IPs should not allocate IPv6 addresses

2018-03-02 Thread Dr. Jens Harbott
Public bug reported:

When there are both IPv4 and IPv6 subnets on the public network, the
port that a floating IP allocates there gets assigned addresses from
both subnets. Since a floating IP is a pure IPv4 construct, allocating
an IPv6 address for it is completely useless and should be avoided;
for example, it will block removing the IPv6 subnet without a
good reason. Seen in Pike as well as in master.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1752903

Title:
  Floating IPs should not allocate IPv6 addresses

Status in neutron:
  New

Bug description:
  When there are both IPv4 and IPv6 subnets on the public network, the
  port that a floating IP allocates there gets assigned addresses from
  both subnets. Since a floating IP is a pure IPv4 construct, allocating
  an IPv6 address for it is completely useless and should be avoided;
  for example, it will block removing the IPv6 subnet without a
  good reason. Seen in Pike as well as in master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1752903/+subscriptions



[Yahoo-eng-team] [Bug 1751008] Re: Dynamic routing: Bogus address logged for IPv6 peers

2018-02-23 Thread Dr. Jens Harbott
In this log message ryu shows not the address of the peer but the
router ID, which indeed was configured as 0.0.0.82 in this case.
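The dotted-quad rendering follows from the router ID being a 32-bit value; a minimal illustration:

```python
import ipaddress

# A BGP router ID is a 32-bit identifier that is conventionally printed
# in IPv4 dotted-quad notation, so a numeric ID of 82 renders as 0.0.0.82.
print(ipaddress.IPv4Address(82))  # -> 0.0.0.82
```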

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1751008

Title:
  Dynamic routing: Bogus address logged for IPv6 peers

Status in neutron:
  Invalid

Bug description:
  Seen in log:

  2018-02-22 07:20:11.940 10009 INFO bgpspeaker.peer [-] Connection to peer: 
fd40:9dc7:b528:80::3 established
  2018-02-22 07:20:11.942 10009 INFO 
neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] BGP Peer 
0.0.0.82 for remote_as=65001 is UP.

  Expected result:
  Proper IPv6 address shown in second entry instead of bogus IPv4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1751008/+subscriptions



[Yahoo-eng-team] [Bug 1751008] [NEW] Dynamic routing: Bogus address logged for IPv6 peers

2018-02-22 Thread Dr. Jens Harbott
Public bug reported:

Seen in log:

2018-02-22 07:20:11.940 10009 INFO bgpspeaker.peer [-] Connection to peer: 
fd40:9dc7:b528:80::3 established
2018-02-22 07:20:11.942 10009 INFO 
neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] BGP Peer 
0.0.0.82 for remote_as=65001 is UP.

Expected result:
Proper IPv6 address shown in second entry instead of bogus IPv4.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1751008

Title:
  Dynamic routing: Bogus address logged for IPv6 peers

Status in neutron:
  New

Bug description:
  Seen in log:

  2018-02-22 07:20:11.940 10009 INFO bgpspeaker.peer [-] Connection to peer: 
fd40:9dc7:b528:80::3 established
  2018-02-22 07:20:11.942 10009 INFO 
neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] BGP Peer 
0.0.0.82 for remote_as=65001 is UP.

  Expected result:
  Proper IPv6 address shown in second entry instead of bogus IPv4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1751008/+subscriptions



[Yahoo-eng-team] [Bug 1750591] [NEW] Admin-deployed qos policy breaks tenant port creation

2018-02-20 Thread Dr. Jens Harbott
Public bug reported:

This is mainly following https://docs.openstack.org/neutron/pike/admin
/config-qos.html, steps to reproduce:

1. Admin creates qos policy "default" in admin project
2. User creates network "mynet" in user project
3. Admin applies qos policy to tenant network via "openstack network set 
--qos-policy default mynet"
4. User tries to create (an instance with) a port in "mynet".

Result: Neutron fails with "Internal server error". q-svc.log shows

2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
[req-f20ed290-5a24-44fe-9b2a-9cc2a6caacbc 8585b4c745184f538091963331dad1c7 
8b039227731847a0b62eddfde3ab17c0 - default default] POST failed.: 
CallbackFailure: Callback
 
neutron.services.qos.qos_plugin.QoSPlugin._validate_create_port_callback--9223372036854470149
 failed with "'NoneType' object has no attribute 'rules'"
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
Traceback (most recent call last):
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/pecan/core.py", line 678, in __call__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.invoke_controller(controller, args, kwargs, state)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/pecan/core.py", line 569, in invoke_controller
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
result = controller(*args, **kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 93, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
setattr(e, '_RETRY_EXCEEDED', True)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 89, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 150, in wrapper
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
ectxt.value = e.inner_exc
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 128, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
LOG.debug("Retry wrapper got retriable exception: %s", e)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 124, in wrapped
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*dup_args, **dup_kwargs)
2018-02-20 13:45:49.572 30439 ERROR neutron.pecan_wsgi.hooks.translation   File 
"/usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/controllers/utils.py", 
line 76, in wrapped
2018-02-20 13:45:49.572 
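The final error in the traceback matches the generic Python failure mode below (a hypothetical minimal reproduction, not the actual neutron code path):

```python
# A policy lookup returns None (e.g. because the admin-owned policy is
# not visible in the tenant's context) and the callback dereferences it.
policy = None  # stand-in for a lookup that found no visible QoS policy
try:
    policy.rules
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'rules'
```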

[Yahoo-eng-team] [Bug 1750562] [NEW] Dynamic routing: Docs for speaker scheduling are incorrect

2018-02-20 Thread Dr. Jens Harbott
Public bug reported:

According to https://docs.openstack.org/neutron-dynamic-
routing/latest/contributor/testing.html no automatic scheduling of BGP
speakers to dynamic routing agents is happening. However, at least as of
Pike, this is no longer true: the first speaker that is created is
automatically scheduled to the first available agent.

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750562

Title:
  Dynamic routing: Docs for speaker scheduling are incorrect

Status in neutron:
  In Progress

Bug description:
  According to https://docs.openstack.org/neutron-dynamic-
  routing/latest/contributor/testing.html no automatic scheduling of BGP
  speakers to dynamic routing agents is happening. However, at least as
  of Pike, this is no longer true: the first speaker that is created
  is automatically scheduled to the first available agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750562/+subscriptions



[Yahoo-eng-team] [Bug 1750383] [NEW] Dynamic routing: Error logged during speaker removal

2018-02-19 Thread Dr. Jens Harbott
Public bug reported:

During normal operations, when a BGP speaker is deleted and removed
from an agent as part of that operation, an error like the following is
logged:


Feb 19 10:25:05.054654 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: INFO bgpspeaker.peer [-] Connection to peer 
192.168.10.129 lost, reason: Connection to peer lost: [Errno 9] Bad file 
descriptor. Resetting retry connect loop: False
Feb 19 10:25:05.054912 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: DEBUG bgpspeaker.signals.base [-] SIGNAL: ('core', 
'adj', 'down') emitted with data: {'peer': 
}  {{(pid=30255) 
emit_signal 
/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/signals/base.py:11}}
Feb 19 10:25:05.055034 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: INFO 
neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver [-] BGP Peer 
192.168.10.129 for remote_as=64522 went DOWN.
Feb 19 10:25:05.055144 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: DEBUG bgpspeaker.peer [-] Peer 192.168.10.129 BGP 
FSM went from Established to Idle {{(pid=30255) bgp_state 
/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/peer.py:237}}
Feb 19 10:25:05.04 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: ERROR bgpspeaker.base [-] Traceback (most recent 
call last):
Feb 19 10:25:05.055768 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/base.py", 
line 256, in start
Feb 19 10:25:05.055929 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._run(*args, **kwargs)
Feb 19 10:25:05.056106 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 275, in _run
Feb 19 10:25:05.056293 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._recv_loop()
Feb 19 10:25:05.056519 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 571, in _recv_loop
Feb 19 10:25:05.056719 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self.connection_lost(conn_lost_reason)
Feb 19 10:25:05.056911 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 596, in connection_lost
Feb 19 10:25:05.057096 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._peer.connection_lost(reason)
Feb 19 10:25:05.057282 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/peer.py", 
line 2328, in connection_lost
Feb 19 10:25:05.057463 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: self._protocol.stop()
Feb 19 10:25:05.057659 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py", 
line 405, in stop
Feb 19 10:25:05.057835 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: Activity.stop(self)
Feb 19 10:25:05.058019 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]:   File 
"/usr/local/lib/python2.7/dist-packages/ryu/services/protocols/bgp/base.py", 
line 314, in stop
Feb 19 10:25:05.058186 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: raise ActivityException(desc='Cannot call stop 
when activity is '
Feb 19 10:25:05.058363 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: ActivityException: 100.1 - Cannot call stop when 
activity is not started or has been stopped already.
Feb 19 10:25:05.058734 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: : ActivityException: 100.1 - Cannot call stop when 
activity is not started or has been stopped already.
Feb 19 10:25:31.149666 ubuntu-xenial-ovh-bhs1-0002607943 
neutron-bgp-dragent[30255]: DEBUG 
neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None 
req-590735a1-3669-43e0-8feb-3afa445663d9 None None] Report state task started 
{{(pid=30255) _report_state 
/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py:682}}

This is confusing since the operation finishes successfully; the
expected result is that no error is seen in the logs.
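One way to avoid the spurious log entry would be an idempotent stop; a sketch with a hypothetical class modelled loosely on ryu's Activity (not the actual ryu code):

```python
class Activity:
    """Hypothetical activity that tolerates stop() on a stopped instance."""

    def __init__(self):
        self._started = False

    def start(self):
        self._started = True

    def stop(self):
        if not self._started:
            return False  # already stopped: no-op instead of raising
        self._started = False
        return True

act = Activity()
print(act.stop())   # False: a premature stop is tolerated, no exception
act.start()
print(act.stop())   # True: normal shutdown
```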

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750383

Title:
  Dynamic routing: Error logged during speaker removal

Status in neutron:
  New

Bug description:
  During normal operations, when a BGP speaker is deleted and removed
  from an agent as part of that operation, an error like the following
  is logged:

  
  Feb 19 10:25:05.054654 

[Yahoo-eng-team] [Bug 1750129] [NEW] Dynamic routing: peer not removed from agent

2018-02-17 Thread Dr. Jens Harbott
Public bug reported:

After going through https://docs.openstack.org/neutron-dynamic-
routing/latest/contributor/testing.html even after removing both the
speaker from the agent and the peer from the speaker, the agent still
continues trying to connect to the peer address.

Expected result would be seeing the connection attempts stop.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750129

Title:
  Dynamic routing: peer not removed from agent

Status in neutron:
  New

Bug description:
  After going through https://docs.openstack.org/neutron-dynamic-
  routing/latest/contributor/testing.html even after removing both the
  speaker from the agent and the peer from the speaker, the agent still
  continues trying to connect to the peer address.

  Expected result would be seeing the connection attempts stop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750129/+subscriptions



[Yahoo-eng-team] [Bug 1750121] [NEW] Dynamic routing: adding speaker to agent fails

2018-02-17 Thread Dr. Jens Harbott
Public bug reported:

When following https://docs.openstack.org/neutron-dynamic-
routing/latest/contributor/testing.html everything works fine because
the speaker is scheduled to the agent automatically (in contrast to what
the docs say). But if I remove the speaker from the agent and add it
again with

$ neutron bgp-dragent-speaker-remove 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker
$ neutron bgp-dragent-speaker-add 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker

the following error is seen in the log:

Feb 17 10:56:30 test-node01 neutron-bgp-dragent[18999]: ERROR
neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None req-
da9a22ae-52a2-4be7-a3e8-2dc2dc970fdd admin admin] Call to driver for BGP
Speaker d2aa5935-30c2-4369-83ee-b3a0ff77cc49 add_bgp_peer has failed
with exception 'auth_type'.

The same thing happens when there are multiple agents and one tries to
add the speaker to one of the other agents.
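The quoted message reads like the str() of a bare KeyError, i.e. the peer data handed to the driver lacks an 'auth_type' key; a hypothetical reproduction (the dict contents are illustrative, not taken from the driver):

```python
# Peer data missing the 'auth_type' key that the driver apparently expects:
peer_info = {"peer_ip": "192.0.2.1", "remote_as": 65001}
try:
    peer_info["auth_type"]
except KeyError as exc:
    print(str(exc))  # 'auth_type'
```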

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750121

Title:
  Dynamic routing: adding speaker to agent fails

Status in neutron:
  New

Bug description:
  When following https://docs.openstack.org/neutron-dynamic-
  routing/latest/contributor/testing.html everything works fine because
  the speaker is scheduled to the agent automatically (in contrast to
  what the docs say). But if I remove the speaker from the agent and add
  it again with

  $ neutron bgp-dragent-speaker-remove 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker
  $ neutron bgp-dragent-speaker-add 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker

  the following error is seen in the log:

  Feb 17 10:56:30 test-node01 neutron-bgp-dragent[18999]: ERROR
  neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None req-
  da9a22ae-52a2-4be7-a3e8-2dc2dc970fdd admin admin] Call to driver for
  BGP Speaker d2aa5935-30c2-4369-83ee-b3a0ff77cc49 add_bgp_peer has
  failed with exception 'auth_type'.

  The same thing happens when there are multiple agents and one tries to
  add the speaker to one of the other agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750121/+subscriptions



[Yahoo-eng-team] [Bug 1749323] Re: Deleting the first of multiple HA routers munges HA network

2018-02-15 Thread Dr. Jens Harbott
*** This bug is a duplicate of bug 1732543 ***
https://bugs.launchpad.net/bugs/1732543

** This bug has been marked a duplicate of bug 1732543
   HA network tenant network fails upon router delete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1749323

Title:
  Deleting the first of multiple HA routers munges HA network

Status in neutron:
  New

Bug description:
  Pike release, linuxbridge:

  Deleting one HA router breaks all subsequent routers by munging the HA
  network that keepalived uses.

  Neutron server has these errors:
  DEBUG neutron.plugins.ml2.managers [req-224baca3-954d-4daa-8ae8-3dac3aa66931 
- - - - -] Network 82c3bca0-d04e-460a-993f-d95d3665c9d6 has no segments 
_extend_network_dict_provider 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:165

  Example of how to reproduce:

  :~# openstack router create --project 370ed91835cb4a90aa0830060ccf0a88 router
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | UP   |
  | availability_zone_hints |  |
  | availability_zones  |  |
  | created_at  | 2018-02-13T22:18:31Z |
  | description |  |
  | distributed | False|
  | external_gateway_info   | None |
  | flavor_id   | None |
  | ha  | True |
  | id  | 1ded1e23-fcad-40f4-9369-422cbe5fa7ed |
  | name| router   |
  | project_id  | 370ed91835cb4a90aa0830060ccf0a88 |
  | revision_number | None |
  | routes  |  |
  | status  | ACTIVE   |
  | tags|  |
  | updated_at  | 2018-02-13T22:18:31Z |
  +-+--+
  :~# openstack router create --project 370ed91835cb4a90aa0830060ccf0a88 
router-deleteme
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | UP   |
  | availability_zone_hints |  |
  | availability_zones  |  |
  | created_at  | 2018-02-13T22:19:08Z |
  | description |  |
  | distributed | False|
  | external_gateway_info   | None |
  | flavor_id   | None |
  | ha  | True |
  | id  | fc9d0b95-081a-4fce-9acb-0a9f8040e444 |
  | name| router-deleteme  |
  | project_id  | 370ed91835cb4a90aa0830060ccf0a88 |
  | revision_number | None |
  | routes  |  |
  | status  | ACTIVE   |
  | tags|  |
  | updated_at  | 2018-02-13T22:19:08Z |
  +-+--+
  :~# openstack network show 22a84484-7b34-4a16-bba7-f5e6861198bd
  
+---++
  | Field | Value   
   |
  
+---++
  | admin_state_up| UP  
   |
  | availability_zone_hints   | 
   |
  | availability_zones| nova
   |
  | created_at| 2018-02-13T22:18:30Z
   |
  | description   | 
   |
  | dns_domain| 
   |
  | id| 22a84484-7b34-4a16-bba7-f5e6861198bd
   |
  | ipv4_address_scope| None

[Yahoo-eng-team] [Bug 1736669] [NEW] designate driver should autodetect API version

2017-12-05 Thread Dr. Jens Harbott
Public bug reported:

Currently the designate driver assumes that the URL it is given points
to the v2 API endpoint, i.e. that it includes the "/v2" suffix. When this
is omitted, the driver makes broken calls to designate.

For more stable operations and simplified configuration, the driver
should be able to handle being given the unversioned endpoint and find
its way by doing version discovery.
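As a stopgap before full discovery, the driver could at least normalize the configured endpoint; a hedged sketch (`ensure_v2` is a hypothetical helper, and real version discovery against the API root would be preferable):

```python
def ensure_v2(endpoint):
    """Return a v2 base URL for a possibly unversioned designate endpoint.

    Hypothetical helper for illustration only: it inspects the string
    instead of querying the API root's version document.
    """
    base = endpoint.rstrip("/")
    return base if base.endswith("/v2") else base + "/v2"

print(ensure_v2("http://dns.example.com:9001"))      # http://dns.example.com:9001/v2
print(ensure_v2("http://dns.example.com:9001/v2/"))  # http://dns.example.com:9001/v2
```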

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736669

Title:
  designate driver should autodetect API version

Status in neutron:
  New

Bug description:
  Currently the designate driver assumes that the URL it is given points
  to the v2 API endpoint, i.e. that it includes the "/v2" suffix. When
  this is omitted, the driver makes broken calls to designate.

  For more stable operations and simplified configuration, the driver
  should be able to handle being given the unversioned endpoint and find
  its way by doing version discovery.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736669/+subscriptions



[Yahoo-eng-team] [Bug 1501206] Re: router:dhcp ports are open resolvers

2017-11-26 Thread Dr. Jens Harbott
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501206

Title:
  router:dhcp ports are open resolvers

Status in neutron:
  Confirmed
Status in OpenStack Security Advisory:
  Won't Fix
Status in neutron package in Ubuntu:
  New

Bug description:
  When configuring an public IPv4 subnet with DHCP enabled inside
  Neutron (and attaching it to an Internet-connected router), the DNS
  recursive resolver service provided by dnsmasq inside the qdhcp
  network namespace will respond to DNS queries from the entire
  Internet. This is a huge problem from a security standpoint, as open
  resolvers are very likely to be abused for DDoS purposes. This does
  not only cause significant damage to third parties (i.e., the true
  destination of the DDoS attack and every network in between), but also
  on the local network or servers (due to saturation of all the
  available network bandwidth and/or the processing capacity of the node
  running the dnsmasq instance). Quoting from
  http://openresolverproject.org/:

  «Open Resolvers pose a significant threat to the global network
  infrastructure by answering recursive queries for hosts outside of its
  domain. They are utilized in DNS Amplification attacks and pose a
  similar threat as those from Smurf attacks commonly seen in the late
  1990s.

  [...]

  What can I do?

  If you operate a DNS server, please check the settings.

  Recursive servers should be restricted to your enterprise or customer
  IP ranges to prevent abuse. Directions on securing BIND and Microsoft
  nameservers can be found on the Team CYMRU Website - If you operate
  BIND, you can deploy the TCP-ANY patch»

  It seems reasonable to expect that the dnsmasq instance within Neutron
  would only respond to DNS queries from the subnet prefixes it is
  associated with and ignore all others.

  Note that this only occurs for IPv4. That is however likely just a
  symptom of bug #1499170, which breaks all IPv6 DNS queries (external
  as well as internal). I would assume that when bug #1499170 is fixed,
  the router:dhcp ports will immediately start being open resolvers over
  IPv6 too.

  For what it's worth, the reason I noticed this issue in the first
  place was that NorCERT (the national Norwegian Computer Emergency
  Response Team - http://www.cert.no/) got in touch with us, notifying
  us about the open resolvers they had observed in our network and
  insisted that we lock them down ASAP. It only took NorCERT couple of
  days after the subnet was first created to do so.

  Tore

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501206/+subscriptions



[Yahoo-eng-team] [Bug 1437350] Re: Don't use exit status 0 when rejecting login as root

2017-11-24 Thread Dr. Jens Harbott
This should be fixed in 0.4.0, please re-open if you still see an issue
there. Exit code is 43.

** Changed in: cirros
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1437350

Title:
  Don't use exit status 0 when rejecting login as root

Status in CirrOS:
  Fix Released
Status in cloud-init:
  Won't Fix

Bug description:
  $ ssh -i mykey root@10.1.0.2 ls
  Warning: Permanently added '10.1.0.2' (RSA) to the list of known hosts.
  Please login as 'cirros' user, not as root

  $ echo $?
  0

  Since the command is not executed the exit status should be non 0.

  
  /root/.ssh/authorized_keys:
  command="echo Please login as \'cirros\' user, not as root; echo; sleep 10" 
this part should be changed to:
  "echo Please login as \'cirros\' user, not as root; echo; sleep 10; exit 1"
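The effect of the suggested change can be simulated directly (cirros 0.4.0 ultimately used exit code 43 rather than 1, per the comment above):

```python
import subprocess

# Run the forced command with a non-zero exit appended, as the report
# suggests, and observe that the failure now propagates to the caller.
proc = subprocess.run(
    ["sh", "-c",
     "echo \"Please login as 'cirros' user, not as root\"; exit 43"],
)
print(proc.returncode)  # 43
```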

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1437350/+subscriptions



[Yahoo-eng-team] [Bug 1459042] Re: cloud-init fails to report IPv6 connectivity when booting

2017-11-24 Thread Dr. Jens Harbott
This should be fixed in 0.4.0, please re-open if you still see an issue
there.

** Changed in: cirros
   Status: Fix Committed => Fix Released

** Changed in: cloud-init
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1459042

Title:
  cloud-init fails to report IPv6 connectivity when booting

Status in CirrOS:
  Fix Released
Status in cloud-init:
  Invalid

Bug description:
  It would be convenient to see the IPv6 networking information printed
  at boot, the way the IPv4 networking information currently is.

  Output from the boot log:
  [   15.621085] cloud-init[1058]: Cloud-init v. 0.7.7 running 'init' at Tue, 14 Jun 2016 13:48:14 +. Up 6.71 seconds.
  [   15.622670] cloud-init[1058]: ci-info: +++Net device info+++
  [   15.624106] cloud-init[1058]: ci-info: +--------+------+------------+-------------+-------+-------------------+
  [   15.625516] cloud-init[1058]: ci-info: | Device |  Up  |  Address   |     Mask    | Scope |     Hw-Address    |
  [   15.627058] cloud-init[1058]: ci-info: +--------+------+------------+-------------+-------+-------------------+
  [   15.628504] cloud-init[1058]: ci-info: | ens3:  | True | 10.42.0.48 | 255.255.0.0 |   .   | fa:16:3e:f9:86:07 |
  [   15.629930] cloud-init[1058]: ci-info: | ens3:  | True |     .      |      .      |   d   | fa:16:3e:f9:86:07 |
  [   15.631334] cloud-init[1058]: ci-info: |  lo:   | True | 127.0.0.1  |  255.0.0.0  |   .   |         .         |
  [   15.632765] cloud-init[1058]: ci-info: |  lo:   | True |     .      |      .      |   d   |         .         |
  [   15.634221] cloud-init[1058]: ci-info: +--------+------+------------+-------------+-------+-------------------+
  [   15.635671] cloud-init[1058]: ci-info: +++Route IPv4 info+++
  [   15.637186] cloud-init[1058]: ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  [   15.638682] cloud-init[1058]: ci-info: | Route |   Destination   |  Gateway  |     Genmask     | Interface | Flags |
  [   15.640182] cloud-init[1058]: ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  [   15.641657] cloud-init[1058]: ci-info: |   0   |     0.0.0.0     | 10.42.0.1 |     0.0.0.0     |    ens3   |   UG  |
  [   15.643149] cloud-init[1058]: ci-info: |   1   |    10.42.0.0    |  0.0.0.0  |   255.255.0.0   |    ens3   |   U   |
  [   15.644661] cloud-init[1058]: ci-info: |   2   | 169.254.169.254 | 10.42.0.1 | 255.255.255.255 |    ens3   |  UGH  |
  [   15.646175] cloud-init[1058]: ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+

  Output from running system:
  ci-info: +++Net device info+++
  ci-info: +------------+-------+-----------------------------------------+---------------+--------+-------------------+
  ci-info: |   Device   |   Up  |                 Address                 |      Mask     | Scope  |     Hw-Address    |
  ci-info: +------------+-------+-----------------------------------------+---------------+--------+-------------------+
  ci-info: |    ens3    |  True |                10.42.0.44               |  255.255.0.0  |   .    | fa:16:3e:90:11:e0 |
  ci-info: |    ens3    |  True | 2a04:3b40:8010:1:f816:3eff:fe90:11e0/64 |       .       | global | fa:16:3e:90:11:e0 |
  ci-info: |     lo     |  True |                127.0.0.1                |   255.0.0.0   |   .    |         .         |
  ci-info: |     lo     |  True |                 ::1/128                 |       .       |  host  |         .         |
  ci-info: +------------+-------+-----------------------------------------+---------------+--------+-------------------+
  ci-info: +++Route IPv4 info+++
  ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  ci-info: | Route |   Destination   |  Gateway  |     Genmask     | Interface | Flags |
  ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+
  ci-info: |   0   |     0.0.0.0     | 10.42.0.1 |     0.0.0.0     |    ens3   |   UG  |
  ci-info: |   1   |    10.42.0.0    |  0.0.0.0  |   255.255.0.0   |    ens3   |   U   |
  ci-info: |   2   | 169.254.169.254 | 10.42.0.1 | 255.255.255.255 |    ens3   |  UGH  |
  ci-info: +-------+-----------------+-----------+-----------------+-----------+-------+

  $ netstat -rn46
  Kernel IP routing table
  Destination Gateway Genmask Flags   MSS Window  irtt Iface
  0.0.0.0 10.42.0.1   0.0.0.0 UG0 0  0 ens3
  10.42.0.0   0.0.0.0 255.255.0.0 U 0 0  0 

[Yahoo-eng-team] [Bug 1672792] Re: Nova with ceph backend instance creation fails with: the name of the pool must be a string

2017-11-23 Thread Dr. Jens Harbott
** Changed in: nova/ocata
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1672792

Title:
  Nova with ceph backend instance creation fails with: the name of the
  pool must be a string

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Released

Bug description:
  Situation: Ocata (RDO), Nova configured with ceph backend as follows:

  [libvirt]
  images_type = rbd
  images_rbd_pool = nova
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = nova_cinder
  rbd_secret_uuid = 

  When launching an image backed instance (so not backed by a cinder
  volume), instance creation fails with: 'the name of the pool must be a
  string'.

  After some digging I found that in: /usr/lib/python2.7/site-
  packages/nova/virt/libvirt/storage/rbd_utils.py in _connect_to_rados
  in the call ioctx = client.open_ioctx(pool_to_open)

  pool_to_open is passed as unicode and in /usr/lib/python2.7/site-
  packages/rados.py a check is done which fails if ioctx_name is not a
  string.

  An easy fix seems to be to cast the value to a string in _connect_to_rados:

  ioctx = client.open_ioctx(str(pool_to_open))

  This fixes the issue for me.
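
The failure mode and the suggested cast can be sketched with a stand-in client (hypothetical class, not the real library; python-rados in this report is Python 2, where `unicode` and `str` are distinct types, so this Python 3 sketch uses `bytes` vs `str` as the analogous mismatch):

```python
# Stand-in for rados.Rados: only the strict type check from rados.py is
# modeled here; this is NOT the real python-rados API.
class FakeRadosClient:
    def open_ioctx(self, ioctx_name):
        if not isinstance(ioctx_name, str):
            raise TypeError("the name of the pool must be a string")
        return "ioctx:%s" % ioctx_name

client = FakeRadosClient()
pool_to_open = b"nova"  # analogous to the unicode value nova passes in py2

try:
    client.open_ioctx(pool_to_open)
except TypeError as e:
    print(e)  # the name of the pool must be a string

# The cast suggested above makes the call succeed (str() in py2,
# decode() for the bytes analogue used here):
print(client.open_ioctx(pool_to_open.decode()))  # ioctx:nova
```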

  Creating an instance with a ceph backed volume is not affected by this
  issue, this works fine.

  Versions:

  openstack-nova-compute-15.0.0-1.el7.noarch
  python-nova-15.0.0-1.el7.noarch
  python-rados-0.94.10-0.el7.x86_64

  Stacktrace:

  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager 
[req-90b9607f-01e9-4586-a083-c4f2051294ff - - - - -] [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359] Instance failed to spawn  

  │
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359] Traceback (most recent call last):


   │
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2125, in 
_build_resources
│
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359] yield resources   


   │
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1930, in 
_build_and_run_instance 
│
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359] block_device_info=block_device_info)  


   │
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2676, in 
spawn   
│
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359] block_device_info=block_device_info)  


   │
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3081, in 
_create_image   
│
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359] fallback_from_host)   


   │
  2017-03-14 15:48:33.480 6668 ERROR nova.compute.manager [instance: 
87145bc6-61fc-4068-a135-fccfd8aed359]   File 

[Yahoo-eng-team] [Bug 1733933] [NEW] nova-conductor is masking error when rescheduling

2017-11-22 Thread Dr. Jens Harbott
Public bug reported:

Sometimes when build_instance fails on n-cpu, the error that n-cond
receives is mangled like this:

Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: ERROR 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
[instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Error from last host: 
jh-devstack-03 (node jh-devstack03): [u'Traceback (most recent call last):\n', 
u'  File "/opt/stack/nova/nova/compute/manager.py", line 1847, in
 _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 2086, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 
u"RescheduledException: Build of instance 5ee9d527-0043-474e-bfb3-e6621426662e 
was re-scheduled: operation failed: domain 'instance-0028' already exists 
with uuid 
93974d36e3a7-4139bbd8-2d5b51195a5f\n"]
Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
Failed to compute_task_build_instances: No sql_connection parameter is 
established: CantStartEngineError: No sql_connection parameter is established
Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
[instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Setting instance to ERROR 
state.: CantStartEngineError: No sql_connection parameter is established

This seems to occur quite often in the gate, too.
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Setting%20instance%20to%20ERROR%20state.%3A%20CantStartEngineError%5C%22

The result is that the instance information shows "No sql_connection
parameter is established" instead of the original error, making
debugging the root cause quite difficult.
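
The masking itself is a generic Python pattern, sketched below with hypothetical stand-ins (none of this is nova's actual code): when the code that reports or handles the original failure raises its own exception, only the secondary error propagates.

```python
class CantStartEngineError(Exception):
    """Stand-in for oslo.db's error when no sql_connection is configured."""

def build_instance(instance_id):
    # Stand-in for the real failure on the compute node.
    raise RuntimeError("domain 'instance-0028' already exists")

def record_failure(instance_id, error):
    # Stand-in for error handling that itself needs a DB connection
    # the conductor does not have.
    raise CantStartEngineError("No sql_connection parameter is established")

def run(instance_id):
    try:
        build_instance(instance_id)
    except Exception as exc:
        # The handler's own failure replaces the original exception.
        record_failure(instance_id, exc)

try:
    run("5ee9d527-0043-474e-bfb3-e6621426662e")
except Exception as exc:
    # The root cause (RuntimeError) is gone; only the secondary error remains.
    print(type(exc).__name__)  # CantStartEngineError
```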

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733933

Title:
  nova-conductor is masking error when rescheduling

Status in OpenStack Compute (nova):
  New

Bug description:
  Sometimes when build_instance fails on n-cpu, the error that n-cond
  receives is mangled like this:

  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: ERROR 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  [instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Error from last host: 
jh-devstack-03 (node jh-devstack03): [u'Traceback (most recent call last):\n', 
u'  File "/opt/stack/nova/nova/compute/manager.py", line 1847, in
   _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 2086, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 
  u"RescheduledException: Build of instance 
5ee9d527-0043-474e-bfb3-e6621426662e was re-scheduled: operation failed: domain 
'instance-0028' already exists with uuid 
  93974d36e3a7-4139bbd8-2d5b51195a5f\n"]
  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  Failed to compute_task_build_instances: No sql_connection parameter is 
established: CantStartEngineError: No sql_connection parameter is established
  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  [instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Setting instance to ERROR 
state.: CantStartEngineError: No sql_connection parameter is established

  This seems to occur quite often in the gate, too.
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Setting%20instance%20to%20ERROR%20state.%3A%20CantStartEngineError%5C%22

  The result is that the instance information shows "No sql_connection
  parameter is established" instead of the original error, making
  debugging the root cause quite difficult.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733933/+subscriptions



[Yahoo-eng-team] [Bug 1733515] Re: use_journal option not available in ocata

2017-11-21 Thread Dr. Jens Harbott
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733515

Title:
  use_journal option not available in ocata

Status in OpenStack Compute (nova):
  New
Status in openstack-manuals:
  New

Bug description:
  The config documentation for Nova Ocata lists the "use_journal"
  option[1], however that option was added to oslo.log only for the Pike
  cycle[2] in oslo.log==3.24.0. It isn't available e.g. in Ubuntu Ocata
  UCA with python-oslo.log=3.20.1-0ubuntu1~cloud0.

  [1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
  [2] https://docs.openstack.org/releasenotes/oslo.log/pike.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733515/+subscriptions



[Yahoo-eng-team] [Bug 1733515] [NEW] use_journal option not available in ocata

2017-11-21 Thread Dr. Jens Harbott
Public bug reported:

The config documentation for Nova Ocata lists the "use_journal"
option[1], however that option was added to oslo.log only for the Pike
cycle[2] in oslo.log==3.24.0. It isn't available e.g. in Ubuntu Ocata
UCA with python-oslo.log=3.20.1-0ubuntu1~cloud0.

[1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
[2] https://docs.openstack.org/releasenotes/oslo.log/pike.html
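
For reference, this is roughly how the option is used once a new enough oslo.log is installed (a hedged sketch; the section and value are illustrative):

```ini
# nova.conf - requires oslo.log >= 3.24.0 (Pike); on Ocata's oslo.log 3.20.x
# this option is not recognized despite being listed in the Ocata docs
[DEFAULT]
use_journal = True
```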

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733515

Title:
  use_journal option not available in ocata

Status in OpenStack Compute (nova):
  New

Bug description:
  The config documentation for Nova Ocata lists the "use_journal"
  option[1], however that option was added to oslo.log only for the Pike
  cycle[2] in oslo.log==3.24.0. It isn't available e.g. in Ubuntu Ocata
  UCA with python-oslo.log=3.20.1-0ubuntu1~cloud0.

  [1] 
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
  [2] https://docs.openstack.org/releasenotes/oslo.log/pike.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733515/+subscriptions



[Yahoo-eng-team] [Bug 1731857] [NEW] DVR scenario tests fail in default deployment

2017-11-13 Thread Dr. Jens Harbott
Public bug reported:

neutron.tests.tempest.scenario.test_dvr.NetworkDvrTest.test_vm_reachable_through_compute
 [220.774590s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVRHA.test_from_dvr_ha_to_dvr
 [319.845960s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVR.test_from_dvr_to_dvr_ha
 [319.645897s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVRHA.test_from_dvr_ha_to_ha
 [318.106647s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVR.test_from_dvr_to_ha
 [318.342858s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVRHA.test_from_dvr_ha_to_legacy
 [318.466020s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVR.test_from_dvr_to_legacy
 [319.457491s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromHA.test_from_ha_to_dvr
 [344.150677s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromLegacy.test_from_legacy_to_dvr
 [339.829603s] ... FAILED
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromHA.test_from_ha_to_dvr_ha
 [339.848078s] ... FAILED

The reason seems to be an inconsistency in these default config values:

enable_dvr = True
agent_mode = legacy

For consistency, either DVR should be disabled by default or the default
agent_mode should support DVR, so a default of dvr_snat would seem to be
the best solution.
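
A sketch of the consistent combination suggested above (file names follow the usual neutron layout; values are illustrative, not a tested default):

```ini
# neutron.conf (server side)
[DEFAULT]
enable_dvr = True

# l3_agent.ini (agent side) - dvr_snat lets the same node serve both the
# distributed router and SNAT, matching enable_dvr = True above
[DEFAULT]
agent_mode = dvr_snat
```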

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1731857

Title:
  DVR scenario tests fail in default deployment

Status in neutron:
  New

Bug description:
  
neutron.tests.tempest.scenario.test_dvr.NetworkDvrTest.test_vm_reachable_through_compute
 [220.774590s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVRHA.test_from_dvr_ha_to_dvr
 [319.845960s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVR.test_from_dvr_to_dvr_ha
 [319.645897s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVRHA.test_from_dvr_ha_to_ha
 [318.106647s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVR.test_from_dvr_to_ha
 [318.342858s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVRHA.test_from_dvr_ha_to_legacy
 [318.466020s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromDVR.test_from_dvr_to_legacy
 [319.457491s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromHA.test_from_ha_to_dvr
 [344.150677s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromLegacy.test_from_legacy_to_dvr
 [339.829603s] ... FAILED
  
neutron.tests.tempest.scenario.test_migration.NetworkMigrationFromHA.test_from_ha_to_dvr_ha
 [339.848078s] ... FAILED

  The reason seems to be an inconsistency in these default config
  values:

  enable_dvr = True
  agent_mode = legacy

  For consistency, either DVR should be disabled by default or the
  default agent_mode should support DVR, so a default of dvr_snat would
  seem to be the best solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1731857/+subscriptions



[Yahoo-eng-team] [Bug 1715842] [NEW] Wrong information about DNS resolution in admin guide

2017-09-08 Thread Dr. Jens Harbott
Public bug reported:

In the section "Name resolution for instances" (doc/source/admin/config-
dns-res.rst) the functional description for cases 2 and 3 is wrong.

In both cases, the DHCP agents do not offer the mentioned IP addresses
to the instances. Instead, the DHCP agents offer the list of IP
addresses of DHCP agents in the respective subnet to the instances in
that subnet. The DHCP agents then run dnsmasq as a forwarding and
masquerading resolver, forwarding DNS requests from the instances to the
configured IP addresses, i.e. to the ones configured in
``dnsmasq_dns_servers`` in case 2 and to the ones configured in
``/etc/resolv.conf`` on the host in case 3.
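
As a sketch of the two cases as actually implemented (addresses are examples): in case 2 the forwarder's upstreams come from the agent configuration, in case 3 from the host.

```ini
# dhcp_agent.ini - case 2: dnsmasq on the DHCP agent forwards instance
# queries to these upstream resolvers (example addresses)
[DEFAULT]
dnsmasq_dns_servers = 8.8.8.8,8.8.4.4

# Case 3: leave dnsmasq_dns_servers unset; dnsmasq then forwards to the
# resolvers listed in /etc/resolv.conf on the host running the agent
```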

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715842

Title:
  Wrong information about DNS resolution in admin guide

Status in neutron:
  New

Bug description:
  In the section "Name resolution for instances" (doc/source/admin
  /config-dns-res.rst) the functional description for cases 2 and 3 is
  wrong.

  In both cases, the DHCP agents do not offer the mentioned IP addresses
  to the instances. Instead, the DHCP agents offer the list of IP
  addresses of DHCP agents in the respective subnet to the instances in
  that subnet. The DHCP agents then run dnsmasq as a forwarding and
  masquerading resolver, forwarding DNS requests from the instances to
  the configured IP addresses, i.e. to the ones configured in
  ``dnsmasq_dns_servers`` in case 2 and to the ones configured in
  ``/etc/resolv.conf`` on the host in case 3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715842/+subscriptions



[Yahoo-eng-team] [Bug 1714961] [NEW] Incorrect description on Flavor Access tab

2017-09-04 Thread Dr. Jens Harbott
Public bug reported:

It reads:

"Select the projects where the flavors will be used. If no projects are
selected, then the flavor will be available in all projects."

This is only true when the flavor is public, and in that case the selection
of projects isn't relevant anyway. When the flavor is private and no
projects are selected, the flavor is not available in any project.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1714961

Title:
  Incorrect description on Flavor Access tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It reads:

  "Select the projects where the flavors will be used. If no projects
  are selected, then the flavor will be available in all projects."

  This is only true when the flavor is public, and in that case the
  selection of projects isn't relevant anyway. When the flavor is
  private and no projects are selected, the flavor is not available in
  any project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1714961/+subscriptions



[Yahoo-eng-team] [Bug 1714937] [NEW] keystone returns 500 on password change

2017-09-04 Thread Dr. Jens Harbott
Public bug reported:

$ openstack user set --password-prompt demo 
User Password:
Repeat User Password:
An unexpected error prevented the server from fulfilling your request. (HTTP 
500) (Request-ID: req-175ea0bf-02d3-4d73-a3ca-662612c36543)
$

Despite the error response, the password does actually get changed.
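
The "Could not load blah" / ImportError lines in the traceback below point at a misconfigured token persistence driver; a hedged sketch of the option involved (section and valid values depend on the keystone release):

```ini
# keystone.conf - the 'keystone.token.persistence' stevedore namespace in
# the traceback is resolved from this option; a bogus value such as "blah"
# reproduces the ImportError (and hence the HTTP 500)
[token]
driver = sql
```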

Traceback from keystone log:

Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: DEBUG 
keystone.common.authorization [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 
None None] RBAC: Authorizing 
identity:update_user(user_id=30f37144f2494fd5b46d97acb72de22c, 
user={u'password': u'***', u'enabl
ed': True}) {{(pid=1804) _build_policy_check_credentials 
/opt/stack/keystone/keystone/common/authorization.py:137}}
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: DEBUG 
keystone.policy.backends.rules [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 
None None] enforce identity:update_user: {'is_delegated_auth': False, 
'access_token_id': None, 'user_id': u'd55b3e8a3e084ba6a
061b73c33112b05', 'roles': [u'admin'], 'user_domain_id': u'default', 
'consumer_id': None, 'trustee_id': None, 'is_domain': False, 
'is_admin_project': True, 'trustor_id': None, 'token': , 'project_id': u'a42a75e7ca804a6288f3eb51c0fd9eb7', 
'trust_id': None, 'project_domain_id': u'default'} {{(pid=1804) enforce 
/opt/stack/keystone/keystone/policy/backends/rules.py:33}}
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: DEBUG 
oslo_policy.policy [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 None None] 
The policy file policy.json could not be found. {{(pid=1804) load_rules 
/usr/local/lib/python2.7/dist-packages/oslo_policy/pol
icy.py:532}}
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: DEBUG 
keystone.common.authorization [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 
None None] RBAC: Authorization granted {{(pid=1804) check_policy 
/opt/stack/keystone/keystone/common/authorization.py:240}}
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: DEBUG 
keystone.notifications [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 None 
None] Invoking callback _user_callback for event identity 
invalidate_user_tokens internal for {'resource_info': u'30f37144f2494f
d5b46d97acb72de22c'} {{(pid=1804) notify_event_callbacks 
/opt/stack/keystone/keystone/notifications.py:297}}
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: DEBUG 
keystone.notifications [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 None 
None] Invoking callback _delete_user_tokens_callback for event identity 
invalidate_user_tokens internal for {'resource_info': u'
30f37144f2494fd5b46d97acb72de22c'} {{(pid=1804) notify_event_callbacks 
/opt/stack/keystone/keystone/notifications.py:297}}
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: WARNING 
stevedore.named [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 None None] Could 
not load blah
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi [None req-175ea0bf-02d3-4d73-a3ca-662612c36543 None None] 
(u'Unable to find %(name)r driver in %(namespace)r.', {'namespace': 
'keystone.token.persistence', 'name': 'blah'}): ImportEr
ror: (u'Unable to find %(name)r driver in %(namespace)r.', {'namespace': 
'keystone.token.persistence', 'name': 'blah'})
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi Traceback (most recent call last):
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/wsgi.py", line 
228, in __call__
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi result = method(req, **params)
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/controller.py", line 94, in inner
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi return f(self, request, *args, **kwargs)
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/identity/controllers.py", line 255, in update_user
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi return self._update_user(request, user_id, user)
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/identity/controllers.py", line 248, in 
_update_user
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi user_id, user, initiator=request.audit_initiator
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/manager.py", 
line 110, in wrapped
Sep 04 11:36:44 jh-devstack-02 devstack@keystone.service[1797]: ERROR 
keystone.common.wsgi __ret_val = __f(*args, **kwargs)
Sep 04 

[Yahoo-eng-team] [Bug 1714901] [NEW] gate-rally-dsvm-neutron-neutron-ubuntu-xenial failing 100% in NeutronTrunks.create_and_list_trunk_subports

2017-09-04 Thread Dr. Jens Harbott
Public bug reported:

This started about two days ago:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22NeutronTrunks.create_and_list_trunk_subports%5C%22%20AND%20message%3A%5C%22FAIL%5C%22

Sample failure log:

http://logs.openstack.org/61/500261/2/check/gate-rally-dsvm-neutron-
neutron-ubuntu-xenial/f5742a5/rally-plot/detailed.txt.gz


Task fe68e403-0694-4016-be5d-e02f67cb04fd has 4 error(s)


NotFound: The resource could not be found.
Neutron server returns request_ids: ['req-ff94c196-cb60-41d6-9e2c-42f780960d69']

Traceback (most recent call last):
  File "/opt/stack/new/rally/rally/task/runner.py", line 72, in 
_run_scenario_once
getattr(scenario_inst, method_name)(**scenario_kwargs)
  File "/home/jenkins/.rally/plugins/plugins/trunk_scenario.py", line 41, in run
trunk = self._create_trunk(trunk_payload)
  File "/opt/stack/new/rally/rally/task/atomic.py", line 85, in 
func_atomic_actions
f = func(self, *args, **kwargs)
  File "/home/jenkins/.rally/plugins/plugins/trunk_scenario.py", line 53, in 
_create_trunk
return self.clients("neutron").create_trunk({'trunk': trunk_payload})
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 2067, in create_trunk
return self.post(self.trunks_path, body=body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 357, in post
headers=headers, params=params)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 292, in do_request
self._handle_fault_response(status_code, replybody, resp)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 268, in _handle_fault_response
exception_handler_v20(status_code, error_body)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 92, in exception_handler_v20
request_ids=request_ids)
NotFound: The resource could not be found.
Neutron server returns request_ids: ['req-ff94c196-cb60-41d6-9e2c-42f780960d69']

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714901

Title:
  gate-rally-dsvm-neutron-neutron-ubuntu-xenial failing 100% in
  NeutronTrunks.create_and_list_trunk_subports

Status in neutron:
  New

Bug description:
  This started about two days ago:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22NeutronTrunks.create_and_list_trunk_subports%5C%22%20AND%20message%3A%5C%22FAIL%5C%22

  Sample failure log:

  http://logs.openstack.org/61/500261/2/check/gate-rally-dsvm-neutron-
  neutron-ubuntu-xenial/f5742a5/rally-plot/detailed.txt.gz

  

  Task fe68e403-0694-4016-be5d-e02f67cb04fd has 4 error(s)
  


  NotFound: The resource could not be found.
  Neutron server returns request_ids: 
['req-ff94c196-cb60-41d6-9e2c-42f780960d69']

  Traceback (most recent call last):
File "/opt/stack/new/rally/rally/task/runner.py", line 72, in 
_run_scenario_once
  getattr(scenario_inst, method_name)(**scenario_kwargs)
File "/home/jenkins/.rally/plugins/plugins/trunk_scenario.py", line 41, in 
run
  trunk = self._create_trunk(trunk_payload)
File "/opt/stack/new/rally/rally/task/atomic.py", line 85, in 
func_atomic_actions
  f = func(self, *args, **kwargs)
File "/home/jenkins/.rally/plugins/plugins/trunk_scenario.py", line 53, in 
_create_trunk
  return self.clients("neutron").create_trunk({'trunk': trunk_payload})
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 2067, in create_trunk
  return self.post(self.trunks_path, body=body)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 357, in post
  headers=headers, params=params)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 292, in do_request
  self._handle_fault_response(status_code, replybody, resp)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 268, in _handle_fault_response
  exception_handler_v20(status_code, error_body)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 92, in exception_handler_v20
  request_ids=request_ids)
  NotFound: The resource could not be found.
  Neutron server returns request_ids: 
['req-ff94c196-cb60-41d6-9e2c-42f780960d69']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714901/+subscriptions


[Yahoo-eng-team] [Bug 1714641] [NEW] ML2: port deletion fails when dns extension is enabled

2017-09-02 Thread Dr. Jens Harbott
ERROR neutron.pecan_wsgi.hooks.translation six.reraise(self.type_, self.value, self.tb)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/db/api.py", line 89, in wrapped
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 150, in wrapper
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     ectxt.value = e.inner_exc
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/db/api.py", line 128, in wrapped
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     LOG.debug("Retry wrapper got retriable exception: %s", e)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/db/api.py", line 124, in wrapped
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     return f(*dup_args, **dup_kwargs)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1500, in delete_port
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     self._pre_delete_port(context, id, l3_port_check)
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1494, in _pre_delete_port
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     raise e.errors[0].error
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation TypeError: sequence item 0: expected string, IPAddress found
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron.pecan_wsgi.hooks.notifier [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] No notification will be sent due to unsuccessful status code: 500 {{(pid=1047) after /opt/stack/neutron/neutron/pecan_wsgi/hooks/notifier.py:74}}
Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: INFO neutron.wsgi [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] 192.168.0.23,192.168.0.23 "DELETE /v2.0/ports/59ab0574-856d-4503-aa51-0ece9d01bf27 HTTP/1.1" status: 500  len: 368 time: 0.4127610
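The TypeError at the bottom of the traceback comes from passing netaddr.IPAddress objects to str.join(), which only accepts string items. A minimal stand-alone reproduction of that failure mode and the usual fix (the IPAddress class below is a simplified stand-in for netaddr.IPAddress, not the real library):

```python
# Stand-in for netaddr.IPAddress: an object that stringifies to its
# address but is not itself a str, which is all join() cares about.
class IPAddress(object):
    def __init__(self, addr):
        self.addr = addr

    def __str__(self):
        return self.addr


ips = [IPAddress("10.0.0.3"), IPAddress("10.0.0.4")]

# Joining the objects directly raises TypeError, as seen in the log.
try:
    ", ".join(ips)
except TypeError as exc:
    print(exc)

# The fix is to convert each item to a string before joining.
joined = ", ".join(str(ip) for ip in ips)
print(joined)  # -> 10.0.0.3, 10.0.0.4
```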

** Affects: neutron
 Importance: Undecided
 Assignee: Dr. Jens Harbott (j-harbott)
 Status: In Progress

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714641

Title:
  ML2: port deletion fails when dns extension is enabled

[Yahoo-eng-team] [Bug 1675351] Re: glance: creating a public image fails during installation

2017-09-01 Thread Dr. Jens Harbott
Yes, this issue was only caused by me misinterpreting the default glance
config.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1675351

Title:
  glance: creating a public image fails during installation

Status in Glance:
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  Following the instructions at https://docs.openstack.org/ocata
  /install-guide-ubuntu/glance-verify.html I am getting this error when
  trying to upload the test image:

  $ openstack image create "cirros2" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  403 Forbidden
  You are not authorized to complete publicize_image action.
  (HTTP 403)
  $

  Creating the image without the "--public" option works fine. I did
  verify that I do have the admin role as specified in
  /etc/glance/policy.json:

  "publicize_image": "role:admin",

  $ openstack role assignment list --names
  +-------+----------------+-------+-----------------+--------+-----------+
  | Role  | User           | Group | Project         | Domain | Inherited |
  +-------+----------------+-------+-----------------+--------+-----------+
  | user  | demo@Default   |       | demo@Default    |        | False     |
  | admin | admin@Default  |       | admin@Default   |        | False     |
  | admin | glance@Default |       | service@Default |        | False     |
  +-------+----------------+-------+-----------------+--------+-----------+
  $
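  The 403 above is the glance policy engine rejecting the request because the
  "publicize_image" rule requires the admin role in the request's credentials.
  A simplified sketch of how a "role:admin" rule is evaluated (a hypothetical
  model for illustration, not the actual oslo.policy implementation):

```python
# Toy evaluator for policy rules of the form "role:<name>": the rule
# passes only if that role appears in the caller's credentials.
def check_policy(rule, credentials):
    kind, _, value = rule.partition(":")
    if kind == "role":
        return value in credentials.get("roles", [])
    return False


admin_creds = {"roles": ["admin", "member"]}
demo_creds = {"roles": ["user"]}

print(check_policy("role:admin", admin_creds))  # -> True: request allowed
print(check_policy("role:admin", demo_creds))   # -> False: 403 Forbidden
```

  Note that the real check is made against the roles carried by the token used
  for the request, so having the admin role assigned is not enough if the
  client authenticates with different credentials or scope.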

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1675351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
