[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-28 Thread Launchpad Bug Tracker
This bug was fixed in the package heat - 1:14.0.0-0ubuntu1

---
heat (1:14.0.0-0ubuntu1) groovy; urgency=medium

  * d/watch: Scope to 14.x series and get tarballs from opendev.org.
  * d/control: Align (Build-)Depends with upstream.
  * d/p/monkey-patch-original-current-thread.patch: Cherry-picked
from upstream review (https://review.opendev.org/#/c/727181/)
to fix Python 3.8 monkey patching (LP: #1863021).
  * New upstream release for OpenStack Ussuri (LP: #1877642).

 -- Corey Bryant   Wed, 13 May 2020 16:51:44 -0400
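For background on the failure mode in the bug title: CPython's threading module keeps a private registry of live threads, and Python 3.8's fork handling asserts that exactly one entry remains in the child. The sketch below only inspects that registry; `threading._active` is a CPython implementation detail, and the stale-entry mechanism described in the comments is an inference from the patch name, not a statement from this thread.

```python
import threading

# threading._active is CPython's private registry of live threads,
# keyed by thread ident. On Python 3.8, threading._after_fork() runs
# `assert len(_active) == 1` in the forked child. Eventlet's monkey
# patching can leave the original main thread registered under a stale
# ident, tripping that assert; the cherry-picked fix re-registers the
# original current thread so the registry stays consistent.
main = threading.main_thread()
assert main.ident in threading._active        # registry is keyed by ident
assert threading._active[main.ident] is main  # and maps back to the Thread
```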

** Changed in: heat (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Fix Released
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Fix Released
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Fix Released
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Fix Released
Status in murano-agent package in Ubuntu:
  Fix Released
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Fix Released
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Fix Released
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Fix Committed
Status in cinder source package in Focal:
  Fix Committed
Status in designate source package in Focal:
  Fix Committed
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Fix Committed
Status in ironic source package in Focal:
  Fix Committed
Status in ironic-inspector source package in Focal:
  Fix Committed
Status in magnum source package in Focal:
  Fix Committed
Status in manila source package in Focal:
  Fix Committed
Status in masakari source package in Focal:
  Fix Committed
Status in mistral source package in Focal:
  Fix Committed
Status in murano source package in Focal:
  Fix Committed
Status in murano-agent source package in Focal:
  Fix Committed
Status in networking-bagpipe source package in Focal:
  Fix Committed
Status in networking-hyperv source package in Focal:
  Fix Committed
Status in networking-l2gw source package in Focal:
  Fix Committed
Status in networking-mlnx source package in Focal:
  Fix Committed
Status in networking-sfc source package in Focal:
  Fix Committed
Status in neutron source package in Focal:
  Fix Committed
Status in neutron-dynamic-routing source package in Focal:
  Fix Committed
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Fix Committed
Status in python-os-ken source package in Focal:
  Fix Committed
Status in python-oslo.service source package in Focal:
  Fix Committed
Status in sahara source package in Focal:
  Fix Committed
Status in senlin source package in Focal:
  Fix Committed
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Fix Committed
Status in barbican source 

[Yahoo-eng-team] [Bug 1863021] Re: [SRU] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-05-28 Thread Corey Bryant
** Changed in: murano-agent (Ubuntu Focal)
   Status: Triaged => Fix Committed

** Changed in: networking-l2gw (Ubuntu Focal)
   Status: Triaged => Fix Committed

** Changed in: neutron (Ubuntu Focal)
   Status: Triaged => Fix Committed

** Changed in: neutron-dynamic-routing (Ubuntu Focal)
   Status: Triaged => Fix Committed

** Changed in: sahara (Ubuntu Focal)
   Status: Triaged => Fix Committed

** Changed in: sahara (Ubuntu Groovy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863021

Title:
  [SRU] eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in BaGPipe:
  Fix Released
Status in networking-hyperv:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in Mellanox backend  integration with Neutron (networking-mlnx):
  In Progress
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in watcher:
  Fix Released
Status in barbican package in Ubuntu:
  Fix Released
Status in cinder package in Ubuntu:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in heat package in Ubuntu:
  Triaged
Status in ironic package in Ubuntu:
  Fix Released
Status in ironic-inspector package in Ubuntu:
  Fix Released
Status in magnum package in Ubuntu:
  Fix Released
Status in manila package in Ubuntu:
  Fix Released
Status in masakari package in Ubuntu:
  Fix Released
Status in mistral package in Ubuntu:
  Fix Released
Status in murano package in Ubuntu:
  Fix Released
Status in murano-agent package in Ubuntu:
  Fix Released
Status in networking-bagpipe package in Ubuntu:
  Fix Released
Status in networking-hyperv package in Ubuntu:
  Fix Released
Status in networking-l2gw package in Ubuntu:
  Fix Released
Status in networking-mlnx package in Ubuntu:
  Fix Released
Status in networking-sfc package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in openstack-trove package in Ubuntu:
  Fix Released
Status in python-os-ken package in Ubuntu:
  Fix Released
Status in python-oslo.service package in Ubuntu:
  Fix Released
Status in sahara package in Ubuntu:
  Fix Released
Status in senlin package in Ubuntu:
  Fix Released
Status in swift package in Ubuntu:
  Triaged
Status in watcher package in Ubuntu:
  Fix Released
Status in barbican source package in Focal:
  Fix Committed
Status in cinder source package in Focal:
  Fix Committed
Status in designate source package in Focal:
  Fix Committed
Status in glance source package in Focal:
  Fix Released
Status in heat source package in Focal:
  Fix Committed
Status in ironic source package in Focal:
  Fix Committed
Status in ironic-inspector source package in Focal:
  Fix Committed
Status in magnum source package in Focal:
  Fix Committed
Status in manila source package in Focal:
  Fix Committed
Status in masakari source package in Focal:
  Fix Committed
Status in mistral source package in Focal:
  Fix Committed
Status in murano source package in Focal:
  Fix Committed
Status in murano-agent source package in Focal:
  Fix Committed
Status in networking-bagpipe source package in Focal:
  Fix Committed
Status in networking-hyperv source package in Focal:
  Fix Committed
Status in networking-l2gw source package in Focal:
  Fix Committed
Status in networking-mlnx source package in Focal:
  Fix Committed
Status in networking-sfc source package in Focal:
  Fix Committed
Status in neutron source package in Focal:
  Fix Committed
Status in neutron-dynamic-routing source package in Focal:
  Fix Committed
Status in nova source package in Focal:
  Fix Released
Status in openstack-trove source package in Focal:
  Fix Committed
Status in python-os-ken source package in Focal:
  Fix Committed
Status in python-oslo.service source package in Focal:
  Fix Committed
Status in sahara source package in Focal:
  Fix Committed
Status in senlin source package in Focal:
  Fix Committed
Status in swift source package in Focal:
  Triaged
Status in watcher source package in Focal:
  Fix Committed
Status in barbican source package in Groovy:
  Fix Released
Status in cinder source package in Groovy:
  Fix Released
Status in designate source package in 

[Yahoo-eng-team] [Bug 1881157] [NEW] [OVS][FW] Remote SG IDs left behind when a SG is removed

2020-05-28 Thread Rodolfo Alonso
Public bug reported:

When a SG is no longer used by any port in the OVS agent, it is marked
to be deleted. This deletion process is done in [1].

The SG deletion process consists of removing any reference to this SG
from the firewall and the SG port map. The firewall removes this SG in
[2].

The information of a SG is stored in:
- ConjIPFlowManager.conj_id_map = ConjIdMap(). This class stores the conjunction IDs (conj_ids) in a dictionary using the following keys:
  ConjIdMap.id_map[(sg_id, remote_sg_id, direction, ethertype, conj_ids)] = conj_id_XXX

- ConjIPFlowManager.conj_ids is a nested dictionary, built in the following way:
  self.conj_ids[vlan_tag][(direction, ethertype)][remote_sg_id] = set([conj_id_1, conj_id_2, ...])

When a SG is removed, its references should be deleted both from
"conj_id_map" and "conj_ids". It is correctly removed from
"conj_id_map" in [3], but it is not deleted properly from "conj_ids".
Instead of the current logic, we should walk through the nested
dictionary and remove any entry whose "remote_sg_id" equals the "sg_id"
being removed.

The current implementation leaves stale "remote_sg_id" entries in the nested
dictionary "conj_ids". That could cause:
- A memory leak in the OVS agent, which keeps those unneeded remote SGs in memory.
- An increase in the complexity of the OVS rules, which carry the conj_ids
related to those unused SGs.
- A security breach between SGs if the conj_ids left behind for an unused SG are
freed and reused (the FW stores the unused conj_ids to be recycled in later
rules).


[1]https://github.com/openstack/neutron/blob/118930f03d31f157f8c7a9e6c57122ecea8982b9/neutron/agent/linux/openvswitch_firewall/firewall.py#L731
[2]https://github.com/openstack/neutron/blob/118930f03d31f157f8c7a9e6c57122ecea8982b9/neutron/agent/linux/openvswitch_firewall/firewall.py#L399
[3]https://github.com/openstack/neutron/blob/118930f03d31f157f8c7a9e6c57122ecea8982b9/neutron/agent/linux/openvswitch_firewall/firewall.py#L296
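The walk-through fix described above could look roughly like this. `purge_remote_sg` is a hypothetical helper, not the actual patch; the dictionary shape follows the description of `conj_ids` given in this report.

```python
def purge_remote_sg(conj_ids, sg_id):
    """Drop every entry where the removed SG acts as a remote SG.

    conj_ids has the nested shape described above:
    conj_ids[vlan_tag][(direction, ethertype)][remote_sg_id] = set of conj_ids
    """
    for by_direction in conj_ids.values():
        for remote_map in by_direction.values():
            # remove the entry whose remote_sg_id == sg_id, if present
            remote_map.pop(sg_id, None)

# example: SG "sg-b" is referenced as a remote SG under two vlan tags
conj_ids = {
    100: {("ingress", "IPv4"): {"sg-b": {8, 10}, "sg-c": {12}}},
    200: {("egress", "IPv4"): {"sg-b": {14}}},
}
purge_remote_sg(conj_ids, "sg-b")
assert "sg-b" not in conj_ids[100][("ingress", "IPv4")]
assert conj_ids[200][("egress", "IPv4")] == {}
```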

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881157

Title:
  [OVS][FW] Remote SG IDs left behind when a SG is removed

Status in neutron:
  New

Bug description:
  When a SG is no longer used by any port in the OVS agent, it is marked
  to be deleted. This deletion process is done in [1].

  The SG deletion process consists of removing any reference to this SG
  from the firewall and the SG port map. The firewall removes this SG in
  [2].

  The information of a SG is stored in:
  - ConjIPFlowManager.conj_id_map = ConjIdMap(). This class stores the conjunction IDs (conj_ids) in a dictionary using the following keys:
    ConjIdMap.id_map[(sg_id, remote_sg_id, direction, ethertype, conj_ids)] = conj_id_XXX

  - ConjIPFlowManager.conj_ids is a nested dictionary, built in the following way:
    self.conj_ids[vlan_tag][(direction, ethertype)][remote_sg_id] = set([conj_id_1, conj_id_2, ...])

  When a SG is removed, its references should be deleted both from
  "conj_id_map" and "conj_ids". It is correctly removed from
  "conj_id_map" in [3], but it is not deleted properly from "conj_ids".
  Instead of the current logic, we should walk through the nested
  dictionary and remove any entry whose "remote_sg_id" equals the
  "sg_id" being removed.

  The current implementation leaves stale "remote_sg_id" entries in the nested
dictionary "conj_ids". That could cause:
  - A memory leak in the OVS agent, which keeps those unneeded remote SGs in memory.
  - An increase in the complexity of the OVS rules, which carry the conj_ids
related to those unused SGs.
  - A security breach between SGs if the conj_ids left behind for an unused SG
are freed and reused (the FW stores the unused conj_ids to be recycled in
later rules).

  
  
[1]https://github.com/openstack/neutron/blob/118930f03d31f157f8c7a9e6c57122ecea8982b9/neutron/agent/linux/openvswitch_firewall/firewall.py#L731
  
[2]https://github.com/openstack/neutron/blob/118930f03d31f157f8c7a9e6c57122ecea8982b9/neutron/agent/linux/openvswitch_firewall/firewall.py#L399
  
[3]https://github.com/openstack/neutron/blob/118930f03d31f157f8c7a9e6c57122ecea8982b9/neutron/agent/linux/openvswitch_firewall/firewall.py#L296

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1787588] Re: Live Migration from Local Storage to RBD: rbd import: Invalid Spec

2020-05-28 Thread Lee Yarwood
Moving between computes with different [libvirt]/images_type settings
using live migration like this isn't supported. Additionally, I don't
believe cold migration between such computes would work either.

You should, however, be able to move these instances by creating a
snapshot and recreating the instance on the new computes, which are
hopefully in a host aggregate separate from the original computes.

I'm going to mark this as Opinion and wishlist to see if anyone thinks
it's worth fixing.
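A rough sketch of the snapshot-and-recreate workaround suggested above. The server and snapshot names, flavor, and availability-zone flag are placeholders; whether this fits depends on how the RBD-backed computes are grouped in your deployment.

```shell
# Snapshot the instance, then recreate it so it lands on an
# RBD-backed compute (all names below are placeholders).
openstack server image create --name myserver-snap myserver
openstack server delete myserver
openstack server create --image myserver-snap --flavor m1.small \
    --availability-zone az-rbd myserver
```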

** Changed in: nova
   Status: New => Opinion

** Tags added: libvirt

** Summary changed:

- Live Migration from Local Storage to RBD: rbd import: Invalid Spec 
+ Live Migration [libvirt]/images_type=qcow2 to [libvirt]/images_type=rbd fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1787588

Title:
  Live Migration [libvirt]/images_type=qcow2 to
  [libvirt]/images_type=rbd fails

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  Trying to migrate an instance from local storage to a hypervisor that is backed by RBD results in the error: invalid spec 'b05eafa0-e7da-4637-a47c-cddca5cc381c_/var/lib/nova/instances/b05eafa0-e7da-4637-a47c-cddca5cc381c/disk'

  Steps to reproduce
  ==
  * Create an instance from an image that is not on a rbd backend on a node 
that is configured to store the instance disk locally (as qcow2)
  * Live-Migrate the instance to a node that is configured to use rbd as 
backend.
  openstack server migrate --live compute-rbd --block-migration UUID

  Expected result
  ===
  Migration completes successfully

  Actual result
  =
  rbd import command of base image fails with: Stderr: u"rbd: --pool is deprecated for import, use --dest-pool\nrbd: invalid spec 'b05eafa0-e7da-4637-a47c-cddca5cc381c_/var/lib/nova/instances/b05eafa0-e7da-4637-a47c-cddca5cc381c/disk'\n"

  The command run is: rbd import --pool vms /var/lib/nova/instances/_base/4d3facc86af22a33b3ca0eb95c1953c76b72bfaa b05eafa0-e7da-4637-a47c-cddca5cc381c_/var/lib/nova/instances/b05eafa0-e7da-4637-a47c-cddca5cc381c/disk --image-format=2 --id cinder --conf /etc/ceph/ceph.conf

  A trace is attached below.

  Environment
  ===
  Nova 10.1.0
  CentOS 7.5
  Ceph 12.2

  Is there any way to migrate a local instance to a rbd backend or is
  this not supported?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1787588/+subscriptions



[Yahoo-eng-team] [Bug 1867077] Re: stein: when use "openstack server resize" command, there is an error: Unexpected API Error.

2020-05-28 Thread OpenStack Infra
Reviewed: https://review.opendev.org/712766
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=efdcaf00e01b12ac15b938392b2f4b9e7db5eff1
Submitter: Zuul
Branch: master

commit efdcaf00e01b12ac15b938392b2f4b9e7db5eff1
Author: Stephen Finucane 
Date:   Thu Mar 12 18:06:14 2020 +

Handle flavor disk mismatches when resizing

When resizing a non-volume-backed instance, we call the
'_validate_flavor_image_nostatus' function to do a myriad of checks with
the aim of ensuring the flavor and image don't conflict. One of these
checks tests whether the flavor is requesting a smaller local disk than
the size of the image or the minimum size the image says it requires. If
this check fails, it will raise the 'FlavorDiskSmallerThanImage' or
'FlavorDiskSmallerThanMinDisk' exceptions, respectively. We currently
handle this exception in the 'create' and 'rebuild' flows but do not in
the 'resize' path. Correct this by way of adding this exception to
'INVALID_FLAVOR_IMAGE_EXCEPTIONS', a list of exceptions that can be
raised when a flavor and image conflict.

The fix for this issue also highlights another exception that can be
raised in the three code paths but is not handled by them all,
'FlavorMemoryTooSmall'. This is added to
'INVALID_FLAVOR_IMAGE_EXCEPTIONS' also.

Change-Id: Idc82ed3bcfc37220a50d9e2d552be5ab8844374a
Signed-off-by: Stephen Finucane 
Closes-Bug: #1867077
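The pattern the commit describes, collecting flavor/image conflict exceptions into one tuple so every API path converts them to a 400 uniformly, can be sketched as below. The class and constant names mirror the ones quoted in the commit message, but the bodies and the `resize` helper are illustrative stand-ins, not nova's code.

```python
# Illustrative stand-ins for the nova exceptions named above.
class FlavorDiskSmallerThanImage(Exception): pass
class FlavorDiskSmallerThanMinDisk(Exception): pass
class FlavorMemoryTooSmall(Exception): pass

# One shared tuple means create, rebuild and resize all handle the same
# set of flavor/image conflicts instead of drifting apart.
INVALID_FLAVOR_IMAGE_EXCEPTIONS = (
    FlavorDiskSmallerThanImage,
    FlavorDiskSmallerThanMinDisk,
    FlavorMemoryTooSmall,
)

def resize(validate):
    try:
        validate()
    except INVALID_FLAVOR_IMAGE_EXCEPTIONS as exc:
        # surface as a 400 Bad Request instead of an unhandled 500
        return 400, str(exc)
    return 202, "resize started"

def bad_validate():
    raise FlavorMemoryTooSmall("flavor memory is smaller than image min_ram")

status, detail = resize(bad_validate)
assert status == 400
```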


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1867077

Title:
   stein: when use "openstack server resize" command, there is an error:
  Unexpected API Error.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  kolla-ansible: 8.0.1
  openstack_release: "stein"
  kolla_install_type: "source"
  kolla_base_distro: "centos"

  when I use the "openstack server resize" command, there is an API error:

  # openstack server resize --flavor m1.large haproxyserver
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-576f1e82-6d47-4c6a-87f1-1e552914f9d6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1867077/+subscriptions



[Yahoo-eng-team] [Bug 1736920] Re: Glance images are loaded into memory

2020-05-28 Thread Jeremy Stanley
** Changed in: ossa
   Status: Incomplete => Invalid

** Information type changed from Public Security to Public

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736920

Title:
  Glance images are loaded into memory

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  Nova appears to be loading entire responses from glance into memory
  [1]. This is generally not an issue but these responses could be
  entire images [2]. Given a large enough image, this seems like a
  potential avenue for DoS, not to mention being highly inefficient.

  [1] 
https://github.com/openstack/nova/blob/16.0.0/nova/image/glance.py#L167-L170
  [2] 
https://github.com/openstack/nova/blob/16.0.0/nova/image/glance.py#L292-L295

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1736920/+subscriptions



[Yahoo-eng-team] [Bug 1736920] Re: Glance images are loaded into memory

2020-05-28 Thread Stephen Finucane
I finally got around to investigating this today. tl;dr: there does not
appear to be an issue here.

The return value of 'glanceclient.Client.images.data' is
'glanceclient.common.utils.RequestIdProxy', owing to the use of the
'add_req_id_to_object' decorator [2]. This is *not* a generator, which
means the 'inspect.isgenerator' conditional at [1] is False and we will
never convert these large images to a list. In fact, there appears to be
only one case that does trigger this: the
'glanceclient.Client.images.list' case, which returns a
'glanceclient.common.utils.GeneratorProxy' object due to the use of the
'add_req_id_to_generator' decorator. This is the function at the root of
bug #1557584. As such, the fix is correct and there's nothing to do here
besides possibly documenting things better in the code.

[1] https://github.com/openstack/nova/blob/16.0.0/nova/image/glance.py#L167
[2] 
https://github.com/openstack/python-glanceclient/blob/3.1.1/glanceclient/v2/images.py#L200
[3] 
https://github.com/openstack/python-glanceclient/blob/3.1.1/glanceclient/v2/images.py#L85
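The key observation above, that an iterable proxy around a generator is not itself a generator, so `inspect.isgenerator` returns False for it, can be demonstrated with a minimal stand-in for glanceclient's proxy class (the real `RequestIdProxy` wraps more than this):

```python
import inspect

def image_chunks():
    # stands in for the generator of image data chunks
    yield b"chunk-1"
    yield b"chunk-2"

class RequestIdProxy:
    """Minimal stand-in for glanceclient.common.utils.RequestIdProxy:
    iterable, but a plain object rather than a generator."""
    def __init__(self, wrapped):
        self._wrapped = wrapped
    def __iter__(self):
        return iter(self._wrapped)

proxy = RequestIdProxy(image_chunks())
assert inspect.isgenerator(image_chunks())  # the raw generator is one
assert not inspect.isgenerator(proxy)       # the proxy is not, so the
                                            # list() conversion at [1] is skipped
assert list(proxy) == [b"chunk-1", b"chunk-2"]
```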

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736920

Title:
  Glance images are loaded into memory

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Nova appears to be loading entire responses from glance into memory
  [1]. This is generally not an issue but these responses could be
  entire images [2]. Given a large enough image, this seems like a
  potential avenue for DoS, not to mention being highly inefficient.

  [1] 
https://github.com/openstack/nova/blob/16.0.0/nova/image/glance.py#L167-L170
  [2] 
https://github.com/openstack/nova/blob/16.0.0/nova/image/glance.py#L292-L295

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1736920/+subscriptions



[Yahoo-eng-team] [Bug 1881095] [NEW] [OVN] Router availability zones support

2020-05-28 Thread Lucas Alvares Gomes
Public bug reported:

Reference: https://docs.openstack.org/neutron/latest/admin/config-
az.html

For feature parity, OVN needs to add support for the
"router_availability_zone" extension.

This means that, when scheduling the logical router ports, the OVN
driver should take into consideration the router availability zone
hints and only schedule the ports onto the nodes (or chassis, in OVN
terms) that belong to those availability zones.

Unlike ML2/OVS, the OVN driver does not have L3 agents running on nodes
such as networker nodes, so we can't reuse the same agents
configuration option. In OVN we will need to find another place to put
this information.

** Affects: neutron
 Importance: Undecided
 Assignee: Lucas Alvares Gomes (lucasagomes)
 Status: Confirmed


** Tags: ovn

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881095

Title:
  [OVN] Router availability zones support

Status in neutron:
  Confirmed

Bug description:
  Reference: https://docs.openstack.org/neutron/latest/admin/config-
  az.html

  For feature parity, OVN needs to add support for the
  "router_availability_zone" extension.

  This means that, when scheduling the logical router ports, the OVN
  driver should take into consideration the router availability zone
  hints and only schedule the ports onto the nodes (or chassis, in OVN
  terms) that belong to those availability zones.

  Unlike ML2/OVS, the OVN driver does not have L3 agents running on
  nodes such as networker nodes, so we can't reuse the same agents
  configuration option. In OVN we will need to find another place to
  put this information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881095/+subscriptions



[Yahoo-eng-team] [Bug 1881085] Re: About l3 router support ECMP

2020-05-28 Thread yangjianfeng
Sorry I just found this bug
https://bugs.launchpad.net/neutron/+bug/1880532

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881085

Title:
  About l3 router support ECMP

Status in neutron:
  Invalid

Bug description:
  The neutron API supports adding routes which have the same destination and
different gateways to a router. In other words, the command below can be
executed successfully:
  "openstack router set 79cf5d9d-c4ab-45d7-8959-30963cba2ea4 --route
destination=10.30.30.35/32,gateway=10.30.30.2 --route
destination=10.30.30.35/32,gateway=10.30.30.3"
  This looks like neutron supports adding ECMP routes to an l3 router.

  But on the agent side these routes are not processed correctly. The
qrouter-79cf5d9d-c4ab-45d7-8959-30963cba2ea4 namespace's route entries look
like:
  10.30.30.35 via 10.30.30.3 dev qr-905cfb1e-43
  10.30.30.35 via 10.30.30.2 dev qr-905cfb1e-43
  or (the second route entry overrides the first one; I haven't found the
reason for the difference yet):
  10.30.30.35 via 10.30.30.2 dev qr-905cfb1e-43
  But I think the route entry should look like:
  10.30.30.35
nexthop via 10.30.30.2 dev qr-905cfb1e-43 weight 1
nexthop via 10.30.30.3 dev qr-905cfb1e-43 weight 1

  By the way, the neutron-specs has a patch [1] that proposes to extend the
neutron API to implement a similar feature. But I don't think the neutron API
needs to be extended, as per my comment in [2].
  [1] https://review.opendev.org/#/c/729532/
  [2]
https://review.opendev.org/#/c/723864/4/specs/version1.1/alternative-active-active-l3-distributor.rst@45

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881085/+subscriptions



[Yahoo-eng-team] [Bug 1881085] [NEW] About l3 router support ECMP

2020-05-28 Thread yangjianfeng
Public bug reported:

The neutron API supports adding routes which have the same destination and
different gateways to a router. In other words, the command below can be
executed successfully:
"openstack router set 79cf5d9d-c4ab-45d7-8959-30963cba2ea4 --route
destination=10.30.30.35/32,gateway=10.30.30.2 --route
destination=10.30.30.35/32,gateway=10.30.30.3"
This looks like neutron supports adding ECMP routes to an l3 router.

But on the agent side these routes are not processed correctly. The
qrouter-79cf5d9d-c4ab-45d7-8959-30963cba2ea4 namespace's route entries look
like:
10.30.30.35 via 10.30.30.3 dev qr-905cfb1e-43
10.30.30.35 via 10.30.30.2 dev qr-905cfb1e-43
or (the second route entry overrides the first one; I haven't found the
reason for the difference yet):
10.30.30.35 via 10.30.30.2 dev qr-905cfb1e-43
But I think the route entry should look like:
10.30.30.35
nexthop via 10.30.30.2 dev qr-905cfb1e-43 weight 1
nexthop via 10.30.30.3 dev qr-905cfb1e-43 weight 1

By the way, the neutron-specs has a patch [1] that proposes to extend the
neutron API to implement a similar feature. But I don't think the neutron API
needs to be extended, as per my comment in [2].
[1] https://review.opendev.org/#/c/729532/
[2]
https://review.opendev.org/#/c/723864/4/specs/version1.1/alternative-active-active-l3-distributor.rst@45
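For reference, the multipath entry described above would be installed with a single `ip route replace` carrying multiple `nexthop` clauses, rather than two separate single-path routes. The command below is a sketch using the namespace and device names quoted in this report; it requires root and assumes the qrouter namespace exists.

```shell
# Replace the two conflicting single-path routes with one ECMP route
# (run against the qrouter namespace; requires root).
ip netns exec qrouter-79cf5d9d-c4ab-45d7-8959-30963cba2ea4 \
    ip route replace 10.30.30.35/32 \
        nexthop via 10.30.30.2 dev qr-905cfb1e-43 weight 1 \
        nexthop via 10.30.30.3 dev qr-905cfb1e-43 weight 1
```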

** Affects: neutron
 Importance: Undecided
 Status: Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881085

Title:
  About l3 router support ECMP

Status in neutron:
  Invalid

Bug description:
  The neutron API supports adding routes which have the same destination and
different gateways to a router. In other words, the command below can be
executed successfully:
  "openstack router set 79cf5d9d-c4ab-45d7-8959-30963cba2ea4 --route
destination=10.30.30.35/32,gateway=10.30.30.2 --route
destination=10.30.30.35/32,gateway=10.30.30.3"
  This looks like neutron supports adding ECMP routes to an l3 router.

  But on the agent side these routes are not processed correctly. The
qrouter-79cf5d9d-c4ab-45d7-8959-30963cba2ea4 namespace's route entries look
like:
  10.30.30.35 via 10.30.30.3 dev qr-905cfb1e-43
  10.30.30.35 via 10.30.30.2 dev qr-905cfb1e-43
  or (the second route entry overrides the first one; I haven't found the
reason for the difference yet):
  10.30.30.35 via 10.30.30.2 dev qr-905cfb1e-43
  But I think the route entry should look like:
  10.30.30.35
nexthop via 10.30.30.2 dev qr-905cfb1e-43 weight 1
nexthop via 10.30.30.3 dev qr-905cfb1e-43 weight 1

  By the way, the neutron-specs has a patch [1] that proposes to extend the
neutron API to implement a similar feature. But I don't think the neutron API
needs to be extended, as per my comment in [2].
  [1] https://review.opendev.org/#/c/729532/
  [2]
https://review.opendev.org/#/c/723864/4/specs/version1.1/alternative-active-active-l3-distributor.rst@45

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2020-05-28 Thread James Page
Marking charm task as invalid as this is a kernel issue with the xenial
release kernel.

A bug task has been raised against the Ubuntu linux package for further
progression, in case updating to the latest HWE kernel on Xenial is not an
option.

** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: charm-nova-compute
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  Invalid
Status in linux package in Ubuntu:
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true,
  iSCSI multipath is configured and the dm-N device is used on the first
  attachment, but subsequent attachments only use a single path.

  The back-end storage is a Purestorage array.
  The multipath.conf is attached
  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | attachments         | []                                   |
  | availability_zone   | nova                                 |
  | bootable            | false                                |
  | consistencygroup_id | None                                 |
  | created_at          | 2019-02-13T23:07:40.00               |
  | description         | None                                 |
  | encrypted           | False                                |
  | id                  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status    | None                                 |
  | multiattach         | False                                |
  | name                | pure2                                |
  | properties          |                                      |
  | replication_status  | None                                 |
  | size                | 10                                   |
  | snapshot_id         | None                                 |
  | source_volid        | None                                 |
  | status              | creating                             |
  | type                | pure                                 |
  | updated_at          | None                                 |
  | user_id             | c1fa4ae9a0b446f2ba64eebf92705d53     |
  +---------------------+--------------------------------------+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  +--------------------------------+--------------------------------------+
  | Field                          | Value                                |
  +--------------------------------+--------------------------------------+
  | attachments                    | []                                   |
  | availability_zone              | nova                                 |
  | bootable                       | false                                |
  | consistencygroup_id            | None                                 |
  | created_at                     | 2019-02-13T23:07:40.00               |
  | description                    | None                                 |
  | encrypted                      | False                                |
  | id                             | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status               | None                                 |
  | multiattach                    | False                                |
  | name                           | pure2                                |
  | os-vol-host-attr:host          | cinder@cinder-pure#cinder-pure       |
  | os-vol-mig-status-attr:migstat | None                                 |
  | os-vol-mig-status-attr:name_id | None                                 |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7     |
  | properties                     |                                      |
  | replication_status             | None                                 |
  | size                           | 10                                   |
  | snapshot_id                    | None                                 |
  | source_volid                   | None                                 |
  | status                         | available                            |
  | type                           | pure                                 |
  | updated_at                     | 2019-02-13T23:07:41.00               |
  | user_id                        | c1fa4ae9a0b446f2ba64eebf92705d53     |
  +--------------------------------+--------------------------------------+

  Add the volume to an instance:
  

[Yahoo-eng-team] [Bug 1881070] [NEW] the accepted-egress-direct-flows can't be deleted when the VM is deleted

2020-05-28 Thread Li YaJie
Public bug reported:

When a VM is deleted or migrated to another compute node, the function
'delete_accepted_egress_direct_flow' is not executed. This results in
stale flows in table 61.

reproduction steps:
 1. Create a VM whose MAC address is fa:16:3e:2a:4c:9f
 2. Show the flows in br-int:
cookie=0xf19902187e0bc0bf, duration=76.736s, table=1, n_packets=0, 
n_bytes=0, priority=20,dl_vlan=9,dl_dst=fa:16:3e:2a:4c:9f 
actions=mod_dl_src:fa:16:3e:e4:8a:e4,resubmit(,60)
cookie=0xf19902187e0bc0bf, duration=74.976s, table=25, n_packets=126, 
n_bytes=11031, priority=2,in_port="qvode3db9ac-24",dl_src=fa:16:3e:2a:4c:9f 
actions=resubmit(,60)
cookie=0xf19902187e0bc0bf, duration=76.732s, table=60, n_packets=28, 
n_bytes=3314, priority=20,dl_vlan=9,dl_dst=fa:16:3e:2a:4c:9f 
actions=strip_vlan,output:"qvode3db9ac-24"
cookie=0xf19902187e0bc0bf, duration=76.299s, table=60, n_packets=126, 
n_bytes=11031, priority=9,in_port="qvode3db9ac-24",dl_src=fa:16:3e:2a:4c:9f 
actions=resubmit(,61)
cookie=0xf19902187e0bc0bf, duration=76.299s, table=61, n_packets=62, 
n_bytes=6401, priority=12,dl_dst=fa:16:3e:2a:4c:9f 
actions=output:"qvode3db9ac-24"
cookie=0xf19902187e0bc0bf, duration=76.299s, table=61, n_packets=24, 
n_bytes=1782, 
priority=10,in_port="qvode3db9ac-24",dl_src=fa:16:3e:2a:4c:9f,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
 actions=mod_vlan_vid:9,output:"patch-tun"
 3. Delete the VM
 4. Show the flows in br-int again:
cookie=0xf19902187e0bc0bf, duration=134.991s, table=61, n_packets=62, 
n_bytes=6401, priority=12,dl_dst=fa:16:3e:2a:4c:9f actions=output:58

As shown above, the flow remains after deleting the virtual machine.
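As a hedged illustration (the helper below is hypothetical, not neutron
code), stale entries like the one above can be spotted by filtering a saved
`ovs-ofctl dump-flows br-int` capture for table 61 rules that still match the
deleted port's MAC:

```python
def stale_egress_flows(flow_dump, deleted_mac):
    """Return table=61 flow lines that still match a deleted port's MAC."""
    return [line.strip()
            for line in flow_dump.splitlines()
            if "table=61" in line and "dl_dst=" + deleted_mac in line]

dump = (
    "cookie=0xf19902187e0bc0bf, duration=134.991s, table=61, n_packets=62, "
    "n_bytes=6401, priority=12,dl_dst=fa:16:3e:2a:4c:9f actions=output:58"
)
# Prints the single leftover table=61 flow from the reproduction above.
print(stale_egress_flows(dump, "fa:16:3e:2a:4c:9f"))
```

Any line this returns after the VM is gone is a flow that
delete_accepted_egress_direct_flow should have removed.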

** Affects: neutron
 Importance: Undecided
 Assignee: Li YaJie (yjmango)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Li YaJie (yjmango)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881070

Title:
  the accepted-egress-direct-flows can't be deleted when the VM is
  deleted

Status in neutron:
  New

Bug description:
  When a VM is deleted or migrated to another compute node, the function
  'delete_accepted_egress_direct_flow' is not executed. This results in
  stale flows in table 61.

  reproduction steps:
   1. Create a VM whose MAC address is fa:16:3e:2a:4c:9f
   2. Show the flows in br-int:
  cookie=0xf19902187e0bc0bf, duration=76.736s, table=1, n_packets=0, 
n_bytes=0, priority=20,dl_vlan=9,dl_dst=fa:16:3e:2a:4c:9f 
actions=mod_dl_src:fa:16:3e:e4:8a:e4,resubmit(,60)
  cookie=0xf19902187e0bc0bf, duration=74.976s, table=25, n_packets=126, 
n_bytes=11031, priority=2,in_port="qvode3db9ac-24",dl_src=fa:16:3e:2a:4c:9f 
actions=resubmit(,60)
  cookie=0xf19902187e0bc0bf, duration=76.732s, table=60, n_packets=28, 
n_bytes=3314, priority=20,dl_vlan=9,dl_dst=fa:16:3e:2a:4c:9f 
actions=strip_vlan,output:"qvode3db9ac-24"
  cookie=0xf19902187e0bc0bf, duration=76.299s, table=60, n_packets=126, 
n_bytes=11031, priority=9,in_port="qvode3db9ac-24",dl_src=fa:16:3e:2a:4c:9f 
actions=resubmit(,61)
  cookie=0xf19902187e0bc0bf, duration=76.299s, table=61, n_packets=62, 
n_bytes=6401, priority=12,dl_dst=fa:16:3e:2a:4c:9f 
actions=output:"qvode3db9ac-24"
  cookie=0xf19902187e0bc0bf, duration=76.299s, table=61, n_packets=24, 
n_bytes=1782, 
priority=10,in_port="qvode3db9ac-24",dl_src=fa:16:3e:2a:4c:9f,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
 actions=mod_vlan_vid:9,output:"patch-tun"
   3. Delete the VM
   4. Show the flows in br-int again:
  cookie=0xf19902187e0bc0bf, duration=134.991s, table=61, n_packets=62, 
n_bytes=6401, priority=12,dl_dst=fa:16:3e:2a:4c:9f actions=output:58

  As shown above, the flow remains after deleting the virtual machine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881070/+subscriptions
