[Yahoo-eng-team] [Bug 1880657] [NEW] [openstack][net] static subnet type does not work

2020-05-26 Thread LIU Yulong
Public bug reported:

commit 62bbc262c3c7f633eac1d09ec78c055eef05166a changed the default code
branch condition, which breaks the existing cloud static network config.


[1] 
https://github.com/canonical/cloud-init/commit/62bbc262c3c7f633eac1d09ec78c055eef05166a#r39437585

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1880657

Title:
  [openstack][net] static subnet type does not work

Status in cloud-init:
  New

Bug description:
  commit 62bbc262c3c7f633eac1d09ec78c055eef05166a changed the default
  code branch condition, which breaks the existing cloud static network
  config.

  
  [1] 
https://github.com/canonical/cloud-init/commit/62bbc262c3c7f633eac1d09ec78c055eef05166a#r39437585

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1880657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-05-26 Thread Seyeong Kim
** Changed in: neutron (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Xenial)
 Assignee: (unassigned) => Seyeong Kim (xtrusia)

** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662324

Title:
  linux bridge agent disables ipv6 before adding an ipv6 address

Status in Ubuntu Cloud Archive:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  In Progress

Bug description:
  [Impact]
  When using linuxbridge, after creating a network & an interface to ext-net, 
disable_ipv6 is set to 1, so linuxbridge-agent does not add an IPv6 address 
properly to the newly created bridge.

  [Test Case]

  1. deploy a basic mitaka env
  2. create an external network (ext-net)
  3. create an ipv6 network and an interface to ext-net
  4. check whether the related bridge has an ipv6 address
  - no ipv6 address originally
  or
  - cat /proc/sys/net/ipv6/conf/[BRIDGE]/disable_ipv6

  after this commit, I was able to see the ipv6 address properly.
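  The disable_ipv6 check in step 4 can be scripted; here is a minimal
sketch (not neutron code — the proc_root parameter exists only to make the
helper testable):

```python
import os

def ipv6_disabled(bridge, proc_root="/proc/sys/net/ipv6/conf"):
    """Return True if the kernel has IPv6 disabled for the given bridge."""
    path = os.path.join(proc_root, bridge, "disable_ipv6")
    try:
        with open(path) as f:
            return f.read().strip() == "1"
    except FileNotFoundError:
        # no conf entry for the device at all -> treat as disabled
        return True

# e.g. ipv6_disabled("brqe1623c94-1f")
```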

  [Regression]
  neutron-linuxbridge-agent needs to be restarted, so there could be a 
short downtime.

  [Others]

  -- original description --

  Summary:
  
  I have a dual-stack NIC with only an IPv6 SLAAC and link local address 
plumbed. This is the designated provider network nic. When I create a network 
and then a subnet, the linux bridge agent first disables IPv6 on the bridge and 
then tries to add the IPv6 address from the NIC to the bridge. Since IPv6 was 
disabled on the bridge, this fails with 'RTNETLINK answers: Permission denied'. 
My intent was to create an IPv4 subnet over this interface with floating IPv4 
addresses for assignment to VMs via this command:
    openstack subnet create --network provider \
  --allocation-pool start=10.54.204.200,end=10.54.204.217 \
  --dns-nameserver 69.252.80.80 --dns-nameserver 69.252.81.81 \
  --gateway 10.54.204.129 --subnet-range 10.54.204.128/25 provider

  I don't know why the agent is disabling IPv6 (I wish it wouldn't),
  that's probably the problem. However, if the agent knows to disable
  IPv6 it should also know not to try to add an IPv6 address.

  Details:
  
  Version: Newton on CentOS 7.3 minimal (CentOS-7-x86_64-Minimal-1611.iso) as 
per these instructions: http://docs.openstack.org/newton/install-guide-rdo/

  Seemingly relevant section of /var/log/neutron/linuxbridge-agent.log:
  2017-02-06 15:09:20.863 1551 INFO 
neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Skipping ARP spoofing 
rules for port 'tap3679987e-ce' because it has port security disabled
  2017-02-06 15:09:20.863 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'-o', 'link', 'show', 'tap3679987e-ce'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.870 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.871 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'addr', 'show', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.878 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.879 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'route', 'list', 'dev', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.885 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.886 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'brqe1623c94-1f', 'up'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:105
  2017-02-06 15:09:20.895 1551 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Starting bridge 
brqe1623c94-1f for subinterface eno1 ensure_bridge 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:367
  2017-02-06 15:09:20.895 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Runnin

[Yahoo-eng-team] [Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-05-26 Thread Edward Hope-Morley
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662324

Title:
  linux bridge agent disables ipv6 before adding an ipv6 address

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  In Progress

Bug description:
  [Impact]
  When using linuxbridge, after creating a network & an interface to ext-net, 
disable_ipv6 is set to 1, so linuxbridge-agent does not add an IPv6 address 
properly to the newly created bridge.

  [Test Case]

  1. deploy a basic mitaka env
  2. create an external network (ext-net)
  3. create an ipv6 network and an interface to ext-net
  4. check whether the related bridge has an ipv6 address
  - no ipv6 address originally
  or
  - cat /proc/sys/net/ipv6/conf/[BRIDGE]/disable_ipv6

  after this commit, I was able to see the ipv6 address properly.

  [Regression]
  neutron-linuxbridge-agent needs to be restarted, so there could be a 
short downtime.

  [Others]

  -- original description --

  Summary:
  
  I have a dual-stack NIC with only an IPv6 SLAAC and link local address 
plumbed. This is the designated provider network nic. When I create a network 
and then a subnet, the linux bridge agent first disables IPv6 on the bridge and 
then tries to add the IPv6 address from the NIC to the bridge. Since IPv6 was 
disabled on the bridge, this fails with 'RTNETLINK answers: Permission denied'. 
My intent was to create an IPv4 subnet over this interface with floating IPv4 
addresses for assignment to VMs via this command:
    openstack subnet create --network provider \
  --allocation-pool start=10.54.204.200,end=10.54.204.217 \
  --dns-nameserver 69.252.80.80 --dns-nameserver 69.252.81.81 \
  --gateway 10.54.204.129 --subnet-range 10.54.204.128/25 provider

  I don't know why the agent is disabling IPv6 (I wish it wouldn't),
  that's probably the problem. However, if the agent knows to disable
  IPv6 it should also know not to try to add an IPv6 address.

  Details:
  
  Version: Newton on CentOS 7.3 minimal (CentOS-7-x86_64-Minimal-1611.iso) as 
per these instructions: http://docs.openstack.org/newton/install-guide-rdo/

  Seemingly relevant section of /var/log/neutron/linuxbridge-agent.log:
  2017-02-06 15:09:20.863 1551 INFO 
neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Skipping ARP spoofing 
rules for port 'tap3679987e-ce' because it has port security disabled
  2017-02-06 15:09:20.863 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'-o', 'link', 'show', 'tap3679987e-ce'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.870 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.871 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'addr', 'show', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.878 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.879 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 
'route', 'list', 'dev', 'eno1', 'scope', 'global'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.885 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.886 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'brqe1623c94-1f', 'up'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:105
  2017-02-06 15:09:20.895 1551 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Starting bridge 
brqe1623c94-1f for subinterface eno1 ensure_bridge 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:367
  2017-02-06 15:09:20.895 1551 DEBUG neutron.agent.linux.utils 
[req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command (rootwrap 
daemon): ['brctl', 'addbr', 'brqe1623c94-1f'] execute_rootwrap_daemon 
/usr/

[Yahoo-eng-team] [Bug 1880669] [NEW] ClientException: Unexpected API Error

2020-05-26 Thread Ravikumar
Public bug reported:

[root@controller1 ~(keystone_admin)]# openstack --version
openstack 3.8.2

[root@controller1 ~(keystone_admin)]# nova list
 ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-8d1b122c-16ea-4ee0-84c0-0d9ace08b662)
[root@controller1 ~(keystone_admin)]#

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1880669

Title:
  ClientException: Unexpected API Error

Status in OpenStack Compute (nova):
  New

Bug description:
  [root@controller1 ~(keystone_admin)]# openstack --version
  openstack 3.8.2

  [root@controller1 ~(keystone_admin)]# nova list
   ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-8d1b122c-16ea-4ee0-84c0-0d9ace08b662)
  [root@controller1 ~(keystone_admin)]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1880669/+subscriptions



[Yahoo-eng-team] [Bug 1880671] [NEW] Nova failed to create an instance.

2020-05-26 Thread wang
Public bug reported:

Nova failed to create an instance.

Description
===
Nova failed to create an instance, prompting 'No valid host was found.'

nova-compute.log
===
ERROR nova.virt.disk.mount.nbd [-] nbd module not loaded 

INFO nova.virt.disk.mount.api [-] Device allocation failed. Will retry
in 2 seconds.

WARNING nova.virt.disk.mount.api [-] Device allocation failed after
repeated retries.

INFO nova.virt.libvirt.driver [-] [instance: c049210a-116f-
49dc-a290-103ba4f97845] Using config drive

INFO nova.virt.libvirt.driver [-] [instance: c049210a-116f-
49dc-a290-103ba4f97845] Creating config drive at /var/lib/nova/instances
/c049210a-116f-49dc-a290-103ba4f97845/disk.config

ERROR nova.compute.manager [-] [instance: c049210a-116f-
49dc-a290-103ba4f97845] Instance failed to spawn

** Affects: nova
 Importance: Undecided
 Status: Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1880671

Title:
  Nova failed to create an instance.

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Nova failed to create an instance.

  Description
  ===
  Nova failed to create an instance, prompting 'No valid host was found.'

  nova-compute.log
  ===
  ERROR nova.virt.disk.mount.nbd [-] nbd module not loaded 

  INFO nova.virt.disk.mount.api [-] Device allocation failed. Will retry
  in 2 seconds.

  WARNING nova.virt.disk.mount.api [-] Device allocation failed after
  repeated retries.

  INFO nova.virt.libvirt.driver [-] [instance: c049210a-116f-
  49dc-a290-103ba4f97845] Using config drive

  INFO nova.virt.libvirt.driver [-] [instance: c049210a-116f-
  49dc-a290-103ba4f97845] Creating config drive at
  /var/lib/nova/instances/c049210a-116f-
  49dc-a290-103ba4f97845/disk.config

  ERROR nova.compute.manager [-] [instance: c049210a-116f-
  49dc-a290-103ba4f97845] Instance failed to spawn

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1880671/+subscriptions



[Yahoo-eng-team] [Bug 1794683] Re: Can not detach port when vm is soft-deleted

2020-05-26 Thread Balazs Gibizer
I don't think this is a bug per se. Soft delete means you might or might
not want to restore the instance that is being deleted. If the instance
ends up being really deleted, then the port will be freed anyhow. But if
the instance is restored, the port still needs to be present so that
your instance is restored to the state from which it was soft-deleted.

Marking this bug Opinion. If you disagree please set it back to New.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794683

Title:
  Can not detach port when vm is soft-deleted

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  When a vm is soft-deleted, we cannot detach a port from it. But when the vm 
is shutoff, we can detach the port.
  VMs in SOFT-DELETED and SHUTOFF status are in the same state at the 
hypervisor level: the guest is shut down. Why can't they support the same 
operation?

  Steps to reproduce
  ==
  1) set reclaim_instance_interval > 0 to enable soft-delete
  2) create an instance vm01
  3) create a port port01 and then attach it to vm01
  4) delete vm01
  5) detach port01 from vm01

  Expected result
  ===
  port01 can be detached successfully.

  Actual result
  =
  The detach operation failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1794683/+subscriptions



[Yahoo-eng-team] [Bug 1880672] [NEW] Openstack start instance failed.

2020-05-26 Thread wang
Public bug reported:

Openstack start instance failed.

Description
===
Openstack starts the instance, and after a while the instance is in an error 
state.

logs
=
 Failed to compute_task_build_instances: No valid host was found. There are not 
enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
150, in inner
return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, 
in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 74, in select_destinations
raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts
available.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1880672

Title:
  Openstack start instance failed.

Status in OpenStack Compute (nova):
  New

Bug description:
  Openstack start instance failed.

  Description
  ===
  Openstack starts the instance, and after a while the instance is in an error 
state.

  logs
  =
   Failed to compute_task_build_instances: No valid host was found. There are 
not enough hosts available.
  Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
150, in inner
  return func(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 
104, in select_destinations
  dests = self.driver.select_destinations(ctxt, spec_obj)

File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 74, in select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1880672/+subscriptions



[Yahoo-eng-team] [Bug 1880676] [NEW] "test_iptables.TestHelper" UTs are making a system call

2020-05-26 Thread Rodolfo Alonso
Public bug reported:

"test_iptables.TestHelper" UTs are making a system call, executing
"sysctl" [1]. This won't work on operating systems like macOS or Windows.

Logs: http://paste.openstack.org/show/793980/

[1]https://github.com/openstack/neutron/blob/4acc6843e849e98cd04a6d01861555c3e120f081/neutron/agent/linux/iptables_firewall.py#L103-L105
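A common fix for this class of problem is to patch the process-execution
helper in the unit tests so no real "sysctl" binary is ever invoked. A
minimal sketch with unittest.mock; the get_sysctl helper below is a
hypothetical stand-in, not neutron's actual import path:

```python
import subprocess
from unittest import mock

def get_sysctl(knob):
    # hypothetical production helper that shells out to sysctl
    return subprocess.check_output(["sysctl", "-n", knob]).strip()

def test_get_sysctl_makes_no_system_call():
    # patch the execution primitive so the test is OS-independent
    with mock.patch("subprocess.check_output",
                    return_value=b"1\n") as fake_exec:
        assert get_sysctl("net.ipv4.ip_forward") == b"1"
        fake_exec.assert_called_once_with(
            ["sysctl", "-n", "net.ipv4.ip_forward"])
```

With the patch in place the test passes on any platform, because the
command never actually runs.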

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880676

Title:
  "test_iptables.TestHelper" UTs are making a system call

Status in neutron:
  In Progress

Bug description:
  "test_iptables.TestHelper" UTs are making a system call, executing
  "sysctl" [1]. This won't work in OS like OSX or Windows.

  Logs: http://paste.openstack.org/show/793980/

  
[1]https://github.com/openstack/neutron/blob/4acc6843e849e98cd04a6d01861555c3e120f081/neutron/agent/linux/iptables_firewall.py#L103-L105

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880676/+subscriptions



[Yahoo-eng-team] [Bug 1880691] [NEW] Comments for stateless security group are misleading

2020-05-26 Thread Slawek Kaplonski
Public bug reported:

Currently the comments look like:

[14:53:46] vagrant@devstack-ubuntu-ovs:~/python-openstackclient$ sudo 
iptables-save | grep notrack
-A neutron-openvswi-PREROUTING -m physdev --physdev-in qvb2bcf6ca7-86 -m 
comment --comment "Make 6ca7-8611-486e-8bbb-05141fa62f57 stateless" -j CT 
--notrack
-A neutron-openvswi-PREROUTING -i qvb2bcf6ca7-86 -m comment --comment "Make 
6ca7-8611-486e-8bbb-05141fa62f57 stateless" -j CT --notrack
-A neutron-openvswi-PREROUTING -m physdev --physdev-in tap2bcf6ca7-86 -m 
comment --comment "Make 6ca7-8611-486e-8bbb-05141fa62f57 stateless" -j CT 
--notrack

which is wrong, as the first 4 characters of the ID are dropped. That may
be confusing for operators when debugging.
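A quick sanity check makes the truncation concrete: the id embedded in the
comment above is exactly 4 characters shorter than a canonical UUID string
(assuming the id was meant to be a standard 36-character UUID):

```python
# id string as printed in the iptables comment above
observed = "6ca7-8611-486e-8bbb-05141fa62f57"
canonical_len = 36  # 32 hex digits + 4 hyphens in a full UUID
missing = canonical_len - len(observed)
print(missing)  # 4 -- exactly the dropped leading characters
```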

** Affects: neutron
 Importance: Low
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880691

Title:
  Comments for stateless security group are misleading

Status in neutron:
  Confirmed

Bug description:
  Currently the comments look like:

  [14:53:46] vagrant@devstack-ubuntu-ovs:~/python-openstackclient$ sudo 
iptables-save | grep notrack
  -A neutron-openvswi-PREROUTING -m physdev --physdev-in qvb2bcf6ca7-86 -m 
comment --comment "Make 6ca7-8611-486e-8bbb-05141fa62f57 stateless" -j CT 
--notrack
  -A neutron-openvswi-PREROUTING -i qvb2bcf6ca7-86 -m comment --comment "Make 
6ca7-8611-486e-8bbb-05141fa62f57 stateless" -j CT --notrack
  -A neutron-openvswi-PREROUTING -m physdev --physdev-in tap2bcf6ca7-86 -m 
comment --comment "Make 6ca7-8611-486e-8bbb-05141fa62f57 stateless" -j CT 
--notrack

  which is wrong, as the first 4 characters of the ID are dropped. That may
  be confusing for operators when debugging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880691/+subscriptions



[Yahoo-eng-team] [Bug 1880701] [NEW] `cloud-init query` cannot be used to determine the current datasource

2020-05-26 Thread Dan Watkins
Public bug reported:

On an LXD container, which uses the NoCloud datasource[0], the only place
where "NoCloud" is queryable is in `datasource_list`:

# cloud-init query -a | grep -B1 -A2 NoCloud
  "datasource_list": [
   "NoCloud",
   "None"
  ],

With ds-identify enabled, you probably _can_ take that first value as
the correct DS (because if that DS wasn't used then you probably don't
have access to the instance), but that won't generalise well.  We should
provide the name of the datasource used in the queryable data somewhere.

[0] # cat /run/cloud-init/result.json 
{
 "v1": {
  "datasource": "DataSourceNoCloud 
[seed=/var/lib/cloud/seed/nocloud-net][dsmode=net]",
  "errors": []
 }
}
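Until the datasource name is exposed through `cloud-init query`, one
workaround is to parse result.json directly. A sketch only; the regex
assumes the "DataSourceXxx [...]" string format shown in footnote [0]:

```python
import json
import re

def datasource_name(result_json_text):
    """Extract the short datasource name from result.json content."""
    data = json.loads(result_json_text)
    ds = data["v1"]["datasource"]  # e.g. "DataSourceNoCloud [seed=...]"
    m = re.match(r"DataSource(\w+)", ds)
    return m.group(1) if m else ds

# sample mirrors the result.json shown in footnote [0]
sample = ('{"v1": {"datasource": "DataSourceNoCloud '
          '[seed=/var/lib/cloud/seed/nocloud-net][dsmode=net]", '
          '"errors": []}}')
# datasource_name(sample) -> "NoCloud"
```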

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1880701

Title:
  `cloud-init query` cannot be used to determine the current datasource

Status in cloud-init:
  New

Bug description:
  On an LXD container, which uses the NoCloud datasource[0], the only
  place where "NoCloud" is queryable is in `datasource_list`:

  # cloud-init query -a | grep -B1 -A2 NoCloud
"datasource_list": [
 "NoCloud",
 "None"
],

  With ds-identify enabled, you probably _can_ take that first value as
  the correct DS (because if that DS wasn't used then you probably don't
  have access to the instance), but that won't generalise well.  We
  should provide the name of the datasource used in the queryable data
  somewhere.

  [0] # cat /run/cloud-init/result.json 
  {
   "v1": {
"datasource": "DataSourceNoCloud 
[seed=/var/lib/cloud/seed/nocloud-net][dsmode=net]",
"errors": []
   }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1880701/+subscriptions



[Yahoo-eng-team] [Bug 1880712] [NEW] Ironic Horizon panel shows blank page if Ironic is deployed in a different region

2020-05-26 Thread Scott Solkhon
Public bug reported:

If you have a multi region deployment of OpenStack and Ironic / Horizon
live in different regions, the Ironic panel under "System -> Ironic Bare
Metal Provisioning" shows a blank page.

I am deploying using Kolla-Ansible and have enabled the Ironic panel by
setting `enable_horizon_ironic` in the region where Horizon is active. I
believe this is an issue with the panel rather than Kolla-Ansible as the
panes do show in the navigation bar but when pressed you just see an
empty page.

I have also confirmed that there is the expected behaviour when Ironic
and Horizon both live in the same region.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

- If you have a multi region deployment of OpenStack and Ironic / Keystone
+ If you have a multi region deployment of OpenStack and Ironic / Horizon
  live in different regions, the Ironic panel under "System -> Ironic Bare
  Metal Provisioning" shows a blank page.
  
  I am deploying using Kolla-Ansible and have enabled the Ironic panel by
  setting `enable_horizon_ironic` in the region where Horizon is active. I
  believe this is an issue with the panel rather than Kolla-Ansible as the
  panes do show in the navigation bar but when pressed you just see an
  empty page.
  
  I have also confirmed that there is the expected behaviour when Ironic
  and Horizon both live in the same region.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1880712

Title:
  Ironic Horizon panel shows blank page if Ironic is deployed in a
  different region

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you have a multi region deployment of OpenStack and Ironic /
  Horizon live in different regions, the Ironic panel under "System ->
  Ironic Bare Metal Provisioning" shows a blank page.

  I am deploying using Kolla-Ansible and have enabled the Ironic panel
  by setting `enable_horizon_ironic` in the region where Horizon is
  active. I believe this is an issue with the panel rather than Kolla-
  Ansible as the panes do show in the navigation bar but when pressed
  you just see an empty page.

  I have also confirmed that there is the expected behaviour when Ironic
  and Horizon both live in the same region.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1880712/+subscriptions



[Yahoo-eng-team] [Bug 1880712] Re: Ironic Horizon panel shows blank page if Ironic is deployed in a different region

2020-05-26 Thread Scott Solkhon
Sorry - Confirmed this was an issue with the Horizon container deployed
in the Horizon region.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1880712

Title:
  Ironic Horizon panel shows blank page if Ironic is deployed in a
  different region

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  If you have a multi region deployment of OpenStack and Ironic /
  Horizon live in different regions, the Ironic panel under "System ->
  Ironic Bare Metal Provisioning" shows a blank page.

  I am deploying using Kolla-Ansible and have enabled the Ironic panel
  by setting `enable_horizon_ironic` in the region where Horizon is
  active. I believe this is an issue with the panel rather than Kolla-
  Ansible as the panes do show in the navigation bar but when pressed
  you just see an empty page.

  I have also confirmed that there is the expected behaviour when Ironic
  and Horizon both live in the same region.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1880712/+subscriptions



[Yahoo-eng-team] [Bug 1880389] Re: lost net connection when live migration

2020-05-26 Thread sean mooney
Regarding the VM resuming before the 'brctl addif':

The libvirt XML we generate contains the name of the linux bridge the tap 
should be added to, so the linux bridge agent does not actually need to run 
the brctl addif command:
the tap should already be a member of that bridge when the vm is resumed.

It looks like the vm paused on the source compute node at
2020-05-26 14:56:05.246 7 INFO nova.compute.manager 
[req-fe4495ae-f1a7-4e93-871e-4d034098babd - - - - -] [instance: 
7f050d9a-413c-4143-849b-75f931a2c07d] VM Paused (Lifecycle Event)

and it resumed on the dest at 
2020-05-26 14:56:05.303 6 INFO nova.compute.manager 
[req-8cd4842d-4783-42b3-9a8e-c4e757b8e6f0 - - - - -] [instance: 
7f050d9a-413c-4143-849b-75f931a2c07d] VM Resumed (Lifecycle Event)

previously at  2020-05-26 14:55:54.639 6 DEBUG nova.virt.libvirt.driver
[req-28bba2f2-89d9-4cbb-8b6a-7a9690469c86
b114d7969c0e465fbd15c2911ca4bb23 28e6517b7d6d4064be1bc878b590c40c -
default default] [instance: 7f050d9a-413c-4143-849b-75f931a2c07d]
Plugging VIFs before live migration. pre_live_migration
/var/lib/kolla/venv/lib/python2.7/site-
packages/nova/virt/libvirt/driver.py:7621

On the dest node we had started pre-plugging the network backend, which
successfully completed at

2020-05-26 14:55:58.769 6 INFO os_vif [req-28bba2f2-89d9-4cbb-8b6a-
7a9690469c86 b114d7969c0e465fbd15c2911ca4bb23
28e6517b7d6d4064be1bc878b590c40c - default default] Successfully plugged
vif
VIFBridge(active=True,address=fa:16:3e:e1:50:ac,bridge_name='brq49b34298-a8',has_traffic_filtering=True,id=b3526533-dc6a-4174
-bd3b-c300e78eda62,network=Network(49b34298-a85a-
42a9-b264-b3a9242fef8f),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tapb3526533-dc')

This happens before we call libvirt to migrate the instance, so at this
point os-vif has ensured the linux bridge "brq49b34298-a8" is created and
the tap is directly created in the correct bridge when the vm starts.

at 2020-05-26 14:56:02.172 7 DEBUG nova.compute.manager [req-
40866f62-6362-4e9a-910e-365c3452d29f 030ec97d13dd4d9698209595a7ac01c4
ef210c7d6b2146139a9c94ef790081d8 - default default] [instance: 7f050d9a-
413c-4143-849b-75f931a2c07d] Received event network-
changed-b3526533-dc6a-4174-bd3b-c300e78eda62 external_instance_event
/var/lib/kolla/venv/lib/python2.7/site-
packages/nova/compute/manager.py:8050

we received a network-changed event

and then at 2020-05-26 14:56:04.314 7 DEBUG nova.compute.manager [req-
379d2410-5959-45ae-89ce-649dca3ed666 030ec97d13dd4d9698209595a7ac01c4
ef210c7d6b2146139a9c94ef790081d8 - default default] [instance: 7f050d9a-
413c-4143-849b-75f931a2c07d] Received event network-vif-
plugged-b3526533-dc6a-4174-bd3b-c300e78eda62 external_instance_event
/var/lib/kolla/venv/lib/python2.7/site-
packages/nova/compute/manager.py:8050

we receive a network-vif-plugged event, which should ideally only be sent
by the ml2 driver when the l2 agent has finished wiring up the
networking on the destination node.

as you pointed out the l2 agent does not finish adding the vlan subport
to the correct bridge until

2020-05-26 14:56:25.743 6 DEBUG neutron.agent.linux.utils [req-
fcca2dcc-5578-4827-ae05-d10935d35223 - - - - -] Running command:
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'brctl',
'addif', 'brq49b34298-a8', 'p2p2.64'] create_process
/var/lib/kolla/venv/lib/python2.7/site-
packages/neutron/agent/linux/utils.py:87


about 20 seconds after the vm resumed

so I think the issue is that the linux bridge ml2 driver is not sending
plug-time network-vif-plugged events but is instead sending bind-time
events.

we wait for the networking to be configured here 
https://github.com/openstack/nova/blob/stable/queens/nova/compute/manager.py#L6420-L6425
which waits for the network-vif-plugged event I showed in the log
https://github.com/openstack/nova/blob/bea91b8d58d909852949726296149d93f2c639d5/nova/compute/manager.py#L6352-L6362

before actually starting the migration here
https://github.com/openstack/nova/blob/stable/queens/nova/compute/manager.py#L6467-L6470
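The wait described above can be sketched with a toy emulation (not nova's actual implementation; the class and method names here are hypothetical): the migration path registers interest in the port, then blocks until neutron's external event arrives or a timeout expires.

```python
# Toy emulation of "wait for network-vif-plugged before migrating":
# prepare() registers interest, notify() models the external event from
# neutron, and wait() blocks the migration until the vif is plugged.
import threading

class VifPlugWaiter:
    def __init__(self):
        self._events = {}

    def prepare(self, port_id):
        # Register interest before triggering the migration.
        self._events[port_id] = threading.Event()

    def notify(self, port_id):
        # Called when the network-vif-plugged external event is received.
        ev = self._events.get(port_id)
        if ev:
            ev.set()

    def wait(self, port_id, timeout):
        # True if the vif was reported plugged within the timeout.
        return self._events[port_id].wait(timeout)

waiter = VifPlugWaiter()
waiter.prepare('b3526533-dc6a-4174-bd3b-c300e78eda62')
waiter.notify('b3526533-dc6a-4174-bd3b-c300e78eda62')
plugged = waiter.wait('b3526533-dc6a-4174-bd3b-c300e78eda62', timeout=1)
```

The race discussed in this bug is that notify() fires at bind time, before the l2 agent has actually finished the wiring, so wait() returns too early.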

the linux bridge l2 agent should only notify nova that the interface is
plugged when the tap is fully wired up

https://github.com/openstack/neutron/blob/4acc6843e849e98cd04a6d01861555c3e120f081/neutron/plugins/ml2/drivers/agent/_common_agent.py#L303-L306
but as the comment suggests this behavior is racy

https://github.com/openstack/neutron/blob/4acc6843e849e98cd04a6d01861555c3e120f081/neutron/plugins/ml2/drivers/agent/_common_agent.py#L259-L296

in this case it started ensuring the bridge had connectivity to the physical network at
2020-05-26 14:56:17.576 6 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-fcca2dcc-5578-4827-ae05-d10935d35223 - - - - -] Creating subinterface 
p2p2.64 for VLAN 64 on interface p2p2 ensure_vlan 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:303

which is long after the vm resumed at 2020-05-26 14:56:05.303

so w

[Yahoo-eng-team] [Bug 1878916] Re: When deleting a network, delete the segment RP only when the segment is deleted

2020-05-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/728507
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7f40e626d6de5ec895343453b032b747c02e59f5
Submitter: Zuul
Branch: master

commit 7f40e626d6de5ec895343453b032b747c02e59f5
Author: Rodolfo Alonso Hernandez 
Date:   Fri May 15 13:26:15 2020 +

Delete segment RPs when network is deleted

When a network is deleted, only one Placement call per segment
is done to remove the associated (if existing) resource provider.

Before this patch, each time a subnet was deleted, the segment
resource provider was updated. When no subnets were present in the
related segment, the associated resource provider was deleted.

This optimization improves the network deletion time (see Launchpad
bug). E.g.: a network with two segments and ten subnets, the Neutron
server processing time dropped from 8.2 seconds to 4.4 seconds (note
that the poor performance was due to the modest testing environment).

Along with the segment RP optimization during the network deletion,
this patch also skips the router subnet update. Because all subnets
in the network are going to be deleted, there is no need to update
them during the network deletion process.

Change-Id: Ifd50027911a9ca3508e80e0de9a6cc45b67006cf
Closes-Bug: #1878916


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878916

Title:
  When deleting a network, delete the segment RP only when the segment
  is deleted

Status in neutron:
  Fix Released

Bug description:
  When a network is deleted, those are some of the operations executed (in 
order):
  - First we check the network is not used.
  - Then the subnets are deleted.
  - The segments are deleted.
  - The network is deleted.

  For each network, the segment plugin updates the Placement resource
  provider of the segment. When no subnets are allocated in this
  segment, the segment RP is deleted.

  Having more than one subnet per segment, will lead to an unnecessary
  Placement API load. When the network is being deleted, instead of
  updating the segment RP, we can wait until the segment is deleted and
  then we can delete the RP. This will save some time in the Neutron
  server call "network delete" and will reduce the load in the Placement
  server.

  As an example, some figures. With a network created, I've created
  another segment and 10 subnets in this new segment.

    CLI time (s)..Neutron API time (s)
  Code as is now
    9.71..8.23
    9.63..8.19
    9.62..8.11

  Skipping the subnet RP update
    7.42..5.96
    7.49..6.05

  Skipping the subnet route update (host_routes_after_delete) too
    5.49..4.05
    5.74..4.26

  Now adding the segment RP deletion when the segment is deleted
    5.99..4.46
    5.79..4.31

  During a network deletion, we can save time and Placement calls just
  deleting the segment RP only when the segment is already deleted
  (AFTER_DELETE event).
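The event-driven pattern this relies on can be sketched with a minimal emulation (this is not the real neutron_lib callbacks API; `subscribe`/`publish` and `delete_segment_rp` are simplified stand-ins): the plugin subscribes a handler to the segment AFTER_DELETE event, so the Placement resource provider is removed exactly once, when the segment goes away, instead of being updated on every subnet deletion.

```python
# Minimal stand-in for a publish/subscribe callback registry: the
# segment RP is deleted only when the segment AFTER_DELETE event fires.
AFTER_DELETE = 'after_delete'
_subscribers = {}

def subscribe(callback, resource, event):
    _subscribers.setdefault((resource, event), []).append(callback)

def publish(resource, event, payload):
    for cb in _subscribers.get((resource, event), []):
        cb(payload)

deleted_rps = []

def delete_segment_rp(payload):
    # One Placement call per segment, made only once the segment is gone.
    deleted_rps.append(payload['segment_id'])

subscribe(delete_segment_rp, 'segment', AFTER_DELETE)

# Deleting ten subnets in the segment triggers no Placement traffic;
# only the final segment deletion does.
publish('segment', AFTER_DELETE, {'segment_id': 'seg-1'})
print(deleted_rps)  # ['seg-1']
```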

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1878916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1880734] [NEW] ordering cycle upon booting groovy

2020-05-26 Thread Dimitri John Ledkov
Public bug reported:

May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found ordering 
cycle on getty.target/start
May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found dependency 
on serial-getty@sclp_line0.service/start
May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found dependency 
on cloud-final.service/start
May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found dependency 
on multi-user.target/start
May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Job 
getty.target/start deleted to break ordering cycle starting with 
multi-user.target/start


We shall not have dependency cycles.

Looks like we ordered getty.target to be both before and after multi-
user.target via cloud-final & serial-getty service (getty.target)
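Hypothetical unit fragments illustrating the loop (these are not the shipped unit files, just a sketch of the ordering edges the log implies): each After= edge points forward, and the last one closes the cycle back to multi-user.target.

```ini
[Unit]  ; multi-user.target
After=getty.target

[Unit]  ; getty.target
After=serial-getty@sclp_line0.service

[Unit]  ; serial-getty@sclp_line0.service
After=cloud-final.service

[Unit]  ; cloud-final.service
After=multi-user.target
```

systemd resolves this by deleting one job (here getty.target/start), which is why the cycle shows up as a boot-time warning rather than a hang.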

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: subiquity
 Importance: Undecided
 Status: New

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1880734

Title:
  ordering cycle upon booting groovy

Status in cloud-init:
  New
Status in subiquity:
  New

Bug description:
  May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found ordering 
cycle on getty.target/start
  May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found dependency 
on serial-getty@sclp_line0.service/start
  May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found dependency 
on cloud-final.service/start
  May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Found dependency 
on multi-user.target/start
  May 26 15:27:19 ubuntu-server systemd[1]: multi-user.target: Job 
getty.target/start deleted to break ordering cycle starting with 
multi-user.target/start

  
  We shall not have dependency cycles.

  Looks like we ordered getty.target to be both before and after multi-
  user.target via cloud-final & serial-getty service (getty.target)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1880734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1880676] Re: "test_iptables.TestHelper" UTs are making a system call

2020-05-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/730752
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bd5c98e25b40ed4be749fd0880027407c4468d1e
Submitter: Zuul
Branch: master

commit bd5c98e25b40ed4be749fd0880027407c4468d1e
Author: Rodolfo Alonso Hernandez 
Date:   Tue May 26 10:26:39 2020 +

Mock command execution in "test_iptables.TestHelper" UTs

Change-Id: I112e0a1cb45259c5af3bcbf09ae9f515f90723d0
Closes-Bug: #1880676


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880676

Title:
  "test_iptables.TestHelper" UTs are making a system call

Status in neutron:
  Fix Released

Bug description:
  "test_iptables.TestHelper" UTs are making a system call, executing
  "sysctl" [1]. This won't work in OS like OSX or Windows.

  Logs: http://paste.openstack.org/show/793980/

  
[1]https://github.com/openstack/neutron/blob/4acc6843e849e98cd04a6d01861555c3e120f081/neutron/agent/linux/iptables_firewall.py#L103-L105
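The fix mocks the command execution. A hedged sketch of that approach (the names `Helper`, `execute` and `zone_in_use` are illustrative, not the exact neutron test code): patch the helper that shells out, so the unit test never runs a real "sysctl" on the host.

```python
# Patch the method that would shell out, so the test asserts behavior
# without ever executing a real command on the test machine.
from unittest import mock

class Helper:
    def execute(self, cmd):
        # In real code this would run the command; in a UT it must
        # never be reached un-mocked.
        raise AssertionError('real system call attempted: %s' % cmd)

    def zone_in_use(self, zone):
        # Code under test: would normally shell out to sysctl.
        out = self.execute(['sysctl', 'net.netfilter.nf_conntrack_count'])
        return zone in out

h = Helper()
with mock.patch.object(h, 'execute', return_value='zone-1') as m:
    assert h.zone_in_use('zone-1') is True
    m.assert_called_once()
result = 'no system call made'
print(result)
```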

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1872713] Re: Windows library "wmi" is imported but not installed

2020-05-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/719960
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=01b9e1106dffae1532887e918b9433096f06a560
Submitter: Zuul
Branch: master

commit 01b9e1106dffae1532887e918b9433096f06a560
Author: Rodolfo Alonso Hernandez 
Date:   Tue Apr 14 13:37:25 2020 +

Install "wmi" library in "win32" systems

Change-Id: Ifb2af895e0d0f1935c63950fcae4480c090d1356
Closes-Bug: #1872713


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1872713

Title:
  Windows library "wmi" is imported but not installed

Status in neutron:
  Fix Released

Bug description:
  Windows library "wmi" is imported in agent.windows.utils but is not
  present in requirements.txt.
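  The merged change adds the dependency to requirements.txt. A PEP 508
  environment marker of this shape (illustrative; not necessarily the exact
  line merged) restricts the install to Windows so Linux deployments are
  unaffected:

```
wmi;sys_platform=='win32'
```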

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1872713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp