[Yahoo-eng-team] [Bug 1862374] Re: Neutron incorrectly selects subnet

2020-04-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862374

Title:
  Neutron incorrectly selects subnet

Status in neutron:
  Expired

Bug description:
  Distro bionic
  Openstack version: openstack testing cloud

  When using the command

  openstack server add floating ip --fixed-ip-address 10.66.0.18 juju-53a7bc-north-0 10.245.163.125

  for the following machine

  ubuntu@dmzoneill-bastion:~$ openstack server list -c  ID -c Name -c Networks
  
  +--------------------------------------+--------------------------+------------------------------------------------------------------+
  | ID                                   | Name                     | Networks                                                         |
  +--------------------------------------+--------------------------+------------------------------------------------------------------+
  | 063c2e8e-4d57-4267-a506-4c7b336e71b6 | juju-53a7bc-north-0      | north=10.66.0.18; south=10.55.0.9                                |
  | 670bd827-f570-439f-b17f-91710849     | juju-6de968-south-0      | south=10.55.0.4                                                  |
  | 0cd3c498-2826-4868-9384-e12e0799f903 | juju-855490-default-0    | dmzoneill_admin_net=10.5.0.6                                     |
  | 0b72ccd8-b694-45c7-86be-870511426140 | juju-370447-controller-0 | north=10.66.0.14; south=10.55.0.3; dmzoneill_admin_net=10.5.0.8  |
  | 40c68cb2-4e20-4d15-a82c-4c4252b8a0da | dmzoneill-bastion        | dmzoneill_admin_net=10.5.0.7, 10.245.162.200                     |
  +--------------------------------------+--------------------------+------------------------------------------------------------------+

  Neutron returns the error

  GET call to network for http://10.245.161.159:9696/v2.0/ports?device_id=063c2e8e-4d57-4267-a506-4c7b336e71b6 used request id req-d915168a-32c6-4c74-9a83-1ef090b376d8
  Manager serverstack ran task network.GET.ports in 0.122786998749s
  Manager serverstack running task network.PUT.floatingips
  REQ: curl -g -i -X PUT http://10.245.161.159:9696/v2.0/floatingips/0c771099-ca95-4447-8a60-5f64a590d943 -H "User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.18.4 CPython/2.7.15+" -H "Content-Type: application/json" -H "X-Auth-Token: {SHA1}9cf2688baf3b5c3e5dea7e2f7faa6554ee1b6bfb" -d '{"floatingip": {"fixed_ip_address": "10.66.0.18", "port_id": "6f8626ba-ae8c-492a-93ad-3f349c600a3b"}}'
  http://10.245.161.159:9696 "PUT /v2.0/floatingips/0c771099-ca95-4447-8a60-5f64a590d943 HTTP/1.1" 400 169
  RESP: [400] Content-Type: application/json Content-Length: 169 X-Openstack-Request-Id: req-1a8be2ec-456e-483d-a4e0-5ab204c55c2d Date: Fri, 07 Feb 2020 14:45:13 GMT Connection: keep-alive
  RESP BODY: {"NeutronError": {"message": "Bad floatingip request: Port 6f8626ba-ae8c-492a-93ad-3f349c600a3b does not have fixed ip 10.66.0.18.", "type": "BadRequest", "detail": ""}}

  Neutron seems to look at the list of networks associated with the
  server and pops the last network (south) from the list.

  Neutron then selects the south port 6f8626ba-ae8c-492a-93ad-3f349c600a3b,
  which is not in the requested subnet, and errors out.
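
  For illustration only (not Neutron's code), the selection the reporter
  expected would pick the server's port that actually carries the requested
  fixed IP, instead of whichever port comes last; see the port list below:

    def find_port_for_fixed_ip(ports, fixed_ip):
        """ports: entries shaped like Neutron's GET /v2.0/ports response."""
        for port in ports:
            for addr in port.get("fixed_ips", []):
                if addr.get("ip_address") == fixed_ip:
                    return port["id"]
        raise ValueError("no port with fixed ip %s" % fixed_ip)

    # find_port_for_fixed_ip(ports, "10.66.0.18") would return
    # 778899dd-7c15-47a9-8968-261088aa14bf, not the south port
    # 6f8626ba-ae8c-492a-93ad-3f349c600a3b.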

  ubuntu@dmzoneill-bastion:~$ openstack port list -c ID -c "Fixed IP Addresses"
  
  +--------------------------------------+----------------------------------------------------------------------------+
  | ID                                   | Fixed IP Addresses                                                         |
  +--------------------------------------+----------------------------------------------------------------------------+
  | 1ad11c89-8574-49fc-9efe-b4a0b73b01eb | ip_address='10.55.0.2', subnet_id='eec9530d-df77-41eb-8e85-9ef6e45931a7'   |
  | 36b16756-01fe-49c7-9e14-41abaf0059f9 | ip_address='10.5.0.8', subnet_id='7edee502-ab23-46af-b446-17c233d11a94'    |
  | 3e20a885-6421-4acd-b936-56b3bcb4930f | ip_address='10.5.0.6', subnet_id='7edee502-ab23-46af-b446-17c233d11a94'    |
  | 410beab6-c2ed-4d31-bc3d-4457a3c28b5f | ip_address='10.55.0.4', subnet_id='eec9530d-df77-41eb-8e85-9ef6e45931a7'   |
  | 4e1defd4-1185-4f54-bb93-768dbf8d6436 | ip_address='10.66.0.14', subnet_id='fe280e59-e8e0-4cdc-923a-1b15e45b95ce'  |
  | 56d6a38d-0698-491b-9996-b239a7d95d5b | ip_address='10.55.0.3', subnet_id='eec9530d-df77-41eb-8e85-9ef6e45931a7'   |
  | 67558707-ed0b-47b6-be0e-0da0bf2871b5 | ip_address='10.66.0.2', subnet_id='fe280e59-e8e0-4cdc-923a-1b15e45b95ce'   |
  | 6d3c6101-75c3-47d0-a496-83cc62092f2d | ip_address='10.5.0.7', subnet_id='7edee502-ab23-46af-b446-17c233d11a94'    |
  | 6f8626ba-ae8c-492a-93ad-3f349c600a3b | ip_address='10.55.0.9', subnet_id='eec9530d-df77-41eb-8e85-9ef6e45931a7'   |
  | 778899dd-7c15-47a9-8968-261088aa14bf | ip_address='10.66.0.18',

[Yahoo-eng-team] [Bug 1778563] Re: Resize/Cold-migrate doesn't recreate vGPUs

2020-04-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/712741
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d2e0afc1f72db5cb56ed987e2873798fc1e89545
Submitter: Zuul
Branch: master

commit d2e0afc1f72db5cb56ed987e2873798fc1e89545
Author: Sylvain Bauza 
Date:   Thu Mar 12 18:06:36 2020 +0100

Allocate mdevs when resizing or reverting resize

Now that allocations are passed to the methods, we can ask whether we
need to use mediated devices for the instance.

Adding a large functional test for verifying it works.

Change-Id: I018762335b19c98045ad42147080203092b51c27
Closes-Bug: #1778563


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1778563

Title:
  Resize/Cold-migrate doesn't recreate vGPUs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When resizing an instance that has vGPUs, the resized instance will be
  missing them, even if the related allocations still hold VGPU
  resources.

  The main problem is that we're not passing allocations down to the
  virt drivers when calling finish_migration().
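
  As an illustration of the check the commit enables (a sketch, assuming
  allocations arrive in the usual Placement form; the helper name is made
  up):

    # {rp_uuid: {"resources": {"VGPU": 1, ...}}, ...} -> bool
    def needs_mdevs(allocations):
        """Return True if any resource provider allocates VGPU resources."""
        return any(
            provider.get("resources", {}).get("VGPU", 0) > 0
            for provider in allocations.values()
        )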

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1778563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869306] Re: Users module errors for users of same SSH key type with existing user

2020-04-20 Thread Dan Watkins
Actually I think that may have been a red herring; I think "-
name:trent" was the actual problem: that's parsed as ["name:trent"],
not [{"name": "trent"}]. That then means the parser expects the
following line to be a list item, but it's a mapping item, hence the
blow-up.
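
A quick way to see that parse difference with PyYAML (illustrative):

    import yaml

    # Without a space after the colon, the item is one plain string...
    print(yaml.safe_load("- name:trent"))   # ['name:trent']
    # ...with the space, it is the expected one-key mapping.
    print(yaml.safe_load("- name: trent"))  # [{'name': 'trent'}]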

Regardless, glad you got this sorted!

** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869306

Title:
  Users module errors for users of same SSH key type with existing user

Status in cloud-init:
  Invalid

Bug description:
  I'm starting an instance (tried both centos and ubuntu) in AWS with
  user_data similar to the following:

  users:
    - name: bob
      sudo: ALL=(ALL) NOPASSWD:ALL
      groups: users
      lock_passwd: true
      ssh_authorized_keys:
        - ssh-rsa some-ssh-pubkey-x
    - name: alice
      sudo: ALL=(ALL) NOPASSWD:ALL
      groups: users
      lock_passwd: true
      ssh_authorized_keys:
        - ssh-rsa some-ssh-pubkey-x
    - name: mallory
      sudo: ALL=(ALL) NOPASSWD:ALL
      groups: users
      lock_passwd: true
      ssh_authorized_keys:
        - ssh-rsa some-ssh-pubkey-x
    - name: trent
      sudo: ALL=(ALL) NOPASSWD:ALL
      groups: users
      lock_passwd: true
      ssh_authorized_keys:
        - ssh-ed25519 some-ssh-pubkey-x

  Two things are special in this case.  Mallory made herself a user
  account on the box before baking the original image, and Trent has an
  ECC key (the rest are using RSA).

  Upon running this in AWS, only Trent gets created.  The only
  discernible error I have seen is:

File "/usr/lib/python2.7/site-packages/cloudinit/ssh_util.py", line 208, in 
us
  ers_ssh_info
  pw_ent = pwd.getpwnam(username)
  KeyError: 'getpwnam(): name not found: alice'

  Trent can log in and see that his key has been created, but literally
  every other user who is using an RSA SSH key hasn't had their user
  created.  Compounding it, Mallory doesn't have a login but still
  retains her home directory.

  The fix for this entails making a user "mallory2" and leaving mallory
  alone.  When this happens, all users get created (though mallory's
  original account is missing other than /home).  I've also tried making
  a mallory user with a custom homedir of /home/mallorytoo, but the same
  error happens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1848342] Re: Duplicated entries in users API

2020-04-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/687990
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=7597ecc1350eb6918c09585e4116911102acb54a
Submitter: Zuul
Branch: master

commit 7597ecc1350eb6918c09585e4116911102acb54a
Author: Pedro Martins 
Date:   Thu Oct 10 08:51:32 2019 -0300

Stop adding entry in local_user while updating ephemerals

Problem description
===================
Today we have a consistency problem when updating federated
users via OpenStack. When I update an ephemeral user via OpenStack,
a record in the local_user table is created, leaving this user
with entries in the user, local_user and federated_user tables in
the database.

Furthermore, if I try to do some operations using this user
(which has entries in all three tables), I get a "More than one
user exists with the name ..." error from the OpenStack
Keystone API. It happens because the user has an entry in both
the local_user and federated_user tables.

I fix the persistence in the local_user table for ephemeral
users when doing updates.

Proposal
========
I fix the problem by no longer creating an entry in the
local_user table while updating an ephemeral user.

Closes-Bug: #1848342

Change-Id: I2ac6e90f24b94dc5c0d9c0758f008a388597036c
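
The failure mode is easy to demonstrate outside Keystone (a self-contained
sqlite3 sketch with simplified tables, not Keystone's real schema): once a
user has rows in both local_user and federated_user, a lookup by name finds
two matches.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE local_user     (user_id TEXT, name TEXT);
        CREATE TABLE federated_user (user_id TEXT, display_name TEXT);
        INSERT INTO federated_user VALUES ('f0327d98', 'use_case');
        -- the buggy update path also wrote a local_user row:
        INSERT INTO local_user     VALUES ('f0327d98', 'use_case');
    """)
    rows = db.execute("""
        SELECT user_id FROM local_user WHERE name = ?
        UNION ALL
        SELECT user_id FROM federated_user WHERE display_name = ?
    """, ("use_case", "use_case")).fetchall()
    if len(rows) > 1:
        print("More than one user exists with the name 'use_case'")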


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1848342

Title:
  Duplicated entries in users API

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  System version
  ==============
  Keystone: 14.0.1 (Rocky)

  
  The error happens when I try to show a federated user using the user
  name as the key instead of the ID.

  Reproducing the error
  =====================

  First of all, I created a user in my Identity Provider (IdP) that is
  configured in my Keystone instance.

  We use the OpenID Connect protocol to create the federated
  environment. In this test, the user created is named 'use_case'. At
  this moment, the user is not in the OpenStack database, just in the
  IdP (an ephemeral user type).

  Authenticating with the ephemeral federated user:

  Let's see the database tables before the login process.

  Users already created in the IdP (before the login):

  select * from user;

  +----------------------------------+-------+---------+--------------------+---------------------+----------------+-----------+
  | id                               | extra | enabled | default_project_id | created_at          | last_active_at | domain_id |
  +----------------------------------+-------+---------+--------------------+---------------------+----------------+-----------+
  | 1b0ed400ec974420a3574a8788acc795 | {}    |       1 | NULL               | 2019-09-16 20:06:39 | NULL           | default   |
  +----------------------------------+-------+---------+--------------------+---------------------+----------------+-----------+

  select * from local_user;

  +----+----------------------------------+-----------+-------+-------------------+----------------+
  | id | user_id                          | domain_id | name  | failed_auth_count | failed_auth_at |
  +----+----------------------------------+-----------+-------+-------------------+----------------+
  |  1 | 1b0ed400ec974420a3574a8788acc795 | default   | admin |                 0 | NULL           |
  +----+----------------------------------+-----------+-------+-------------------+----------------+

  select * from federated_user;

  Empty set (0.00 sec)

  So far, we have only the local user admin.

  Then, we execute the login using the ephemeral user that we created in
  the IdP.

  Let's see the database state after the first login:

  select * from user;

  +----------------------------------+---------------------------------+---------+--------------------+---------------------+----------------+----------------------------------+
  | id                               | extra                           | enabled | default_project_id | created_at          | last_active_at | domain_id                        |
  +----------------------------------+---------------------------------+---------+--------------------+---------------------+----------------+----------------------------------+
  | 1b0ed400ec974420a3574a8788acc795 | {}                              |       1 | NULL               | 2019-09-16 20:06:39 | NULL           | default                          |
  | f0327d98278f4b1881aec9c051b027d3 | {"email": "use_c...@gmail.com"} |       1 | NULL               | 2019-10-15 18:08:17 | NULL           | d28c2423e9b546deb37a034bb2134f4d |
  +----------------------------------+---------------------------------+---------+--------------------+---------------------+----------------+----------------------------------+

[Yahoo-eng-team] [Bug 1834659] Re: Volume not removed on instance deletion

2020-04-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/669674
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=01c334cbdd859f4e486ac2c369a4bdb3ec7709cc
Submitter: Zuul
Branch: master

commit 01c334cbdd859f4e486ac2c369a4bdb3ec7709cc
Author: Francois Palin 
Date:   Mon Jul 8 10:12:25 2019 -0400

Add retry to cinder API calls related to volume detach

When shutting down an instance whose volume needs to be
deleted, if the cinder RPC timeout expires before the cinder
volume driver terminates the connection, an unknown cinder
exception is received and the volume is not removed.

This fix adds a retry mechanism directly in the cinder API calls
attachment_delete, terminate_connection, and detach.

Change-Id: I3c9ae47d0ceb64fa3082a01cb7df27faa4f5a00d
Closes-Bug: #1834659
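
A minimal sketch of the retry behaviour the fix describes (names and the
TimeoutError stand-in are illustrative; nova's real implementation differs):

    import functools
    import time

    def retry_on_timeout(attempts=3, delay=1.0):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(1, attempts + 1):
                    try:
                        return func(*args, **kwargs)
                    except TimeoutError:  # stand-in for the RPC timeout
                        if attempt == attempts:
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator

    @retry_on_timeout()
    def attachment_delete(attachment_id):
        ...  # would call into the cinder API here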


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1834659

Title:
  Volume not removed on instance deletion

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========
  When we deploy a non-ephemeral instance (i.e. creating a new volume),
  indicate "YES" for "Delete Volume on Instance Delete", then delete the
  instance, and the volume driver's terminate-connection call in cinder
  takes too long to return, the volume is not removed.

  The volume status remains "In-use" and "Attached to None on /dev/vda".
  For example:
  abcfa1db-1748-4f04-9a29-128cf22efcc5 - 130GiB - In-use - Attached to None on /dev/vda

  Steps to reproduce
  ==================
  Please refer to comment #2 on this bug below.

  Expected result
  ===============
  Volume gets removed.

  Actual result
  =============
  Volume remains attached.

  Environment
  ===========
  Issue was initially reported downstream against the Newton release (see
  comment #1 below). The customer was using the hitachi volume driver:
     volume_driver = cinder.volume.drivers.hitachi.hbsd.hbsd_fc.HBSDFCDriver
  As a note, the hitachi drivers are unsupported as of Pike (see cinder
  commit 595c8d3f8523a9612ccc64ff4147eab993493892).

  Issue was reproduced in a devstack environment running the Stein release.
  The volume driver used was lvm (the devstack default).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1834659/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863852] Re: [OVN]Could not support more than one qos rule in one policy

2020-04-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/711317
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b098239d72d79469677bdc1e45daf7454b2eeb47
Submitter: Zuul
Branch: master

commit b098239d72d79469677bdc1e45daf7454b2eeb47
Author: Rodolfo Alonso Hernandez 
Date:   Wed Mar 4 16:39:35 2020 +

Refactor OVN client QoS extension

The QoS OVN client extension is moved to the ML2 driver. This
extension is called from the OVN driver in the events of:
- create port
- update port
- delete port
- update network

The QoS OVN client extension can now accept several rules per policy,
as documented in SUPPORTED_RULES. The QoS OVN client extension
can write one OVN QoS rule per flow direction, and each OVN QoS rule
register can hold both a bandwidth limit rule and a DSCP marking rule.

The "update_policy" method is called from the OVN QoS driver, when
a QoS policy or its rules are updated.

The QoS OVN client extension updates the QoS OVN registers
exclusively, based on the related events.

Closes-Bug: #1863852

Change-Id: I4833ed0c9a2741bdd007d4ebb3e8c1cb4c30d4c7
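
A hedged sketch of the per-direction grouping described above (the rule
attributes follow the Neutron QoS API: rule_type, max_kbps, dscp_mark,
direction; the helper itself is illustrative, not the driver's code):

    def group_rules_by_direction(rules):
        ovn_qos = {}  # direction -> {"rate": kbps, "dscp": mark}
        for rule in rules:
            # DSCP marking rules have no direction attribute in the
            # Neutron API; they apply to egress traffic.
            direction = getattr(rule, "direction", "egress")
            entry = ovn_qos.setdefault(direction, {})
            if rule.rule_type == "bandwidth_limit":
                entry["rate"] = rule.max_kbps
            elif rule.rule_type == "dscp_marking":
                entry["dscp"] = rule.dscp_mark
        return ovn_qos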


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863852

Title:
  [OVN]Could not support more than one qos rule in one policy

Status in neutron:
  Fix Released

Bug description:
  Currently, networking-ovn does not handle a QoS policy that has both an
  ingress and an egress rule. When such a policy is applied to a port or
  network, only one of the rules takes effect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1873910] [NEW] Nightly CI builds are failing due to missing get_interfaces_by_mac mocking

2020-04-20 Thread Dan Watkins
Public bug reported:

get_interfaces_by_mac is currently erroring on the system that runs
cloud-init's nightly CI. This indicates missing mocking in the
following tests (a sketch of the missing mock follows the list):

FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_context_devname
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_emptycontext
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_nonecontext
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_withdefaultvalue
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_withdefaultvalue_emptycontext
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_gateway
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_gateway6
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_dual
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_prefix
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_prefix_emptystring
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_ula
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip_emptystring
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_mask
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_mask_emptystring
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_mtu
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_nameservers
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_network
FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_network_emptystring
FAILED cloudinit/sources/tests/test_oracle.py::TestDataSourceOracle::test_network_cmdline
FAILED cloudinit/sources/tests/test_oracle.py::TestDataSourceOracle::test_network_fallback

(Sourced from https://jenkins.ubuntu.com/server/job/cloud-init-ci-nightly/712/.)
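
A sketch of the kind of mocking these tests need, so they never inspect the
host's real interfaces (the patch target is cloud-init's real helper; the
test body is illustrative):

    from unittest import mock

    @mock.patch("cloudinit.net.get_interfaces_by_mac")
    def test_without_touching_host(m_get_interfaces_by_mac):
        m_get_interfaces_by_mac.return_value = {"02:00:00:00:00:01": "eth0"}
        # ... exercise the OpenNebula/Oracle datasource code here ...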

** Affects: cloud-init
 Importance: Undecided
 Assignee: Dan Watkins (daniel-thewatkins)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1873910

Title:
  Nightly CI builds are failing due to missing get_interfaces_by_mac
  mocking

Status in cloud-init:
  New

Bug description:
  get_interfaces_by_mac is currently erroring on the system that runs
  cloud-init's nightly CI.  This indicates missing mocking in the
  following tests:

  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_context_devname
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_emptycontext
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_nonecontext
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_withdefaultvalue
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_field_withdefaultvalue_emptycontext
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_gateway
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_gateway6
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_dual
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_prefix
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_prefix_emptystring
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip6_ula
  FAILED tests/unittests/test_datasource/test_opennebula.py::TestOpenNebulaNetwork::test_get_ip_emptystring
  FAILED

[Yahoo-eng-team] [Bug 1873776] Re: [stable/stein] Neutron-tempest-plugin jobs are failing due to wrong python version

2020-04-20 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
   Status: Won't Fix => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1873776

Title:
  [stable/stein] Neutron-tempest-plugin jobs are failing due to wrong
  python version

Status in neutron:
  In Progress

Bug description:
  Issue like in
  https://b5dda6b7d9fe09cc6972-899ca8db232a7bf5fd9d6d869408b0bd.ssl.cf1.rackcdn.com/720051/1/check/neutron-tempest-plugin-api-stein/5a092fb/controller/logs/devstacklog.txt
  happens in all neutron-tempest-plugin jobs in the stable/stein branch.

  Error:

  2020-04-20 00:19:32.058 | Ignoring typed-ast: markers 'python_version == "3.6"' don't match your environment
  2020-04-20 00:19:32.120 | Obtaining file:///opt/stack/neutron-tempest-plugin
  2020-04-20 00:19:32.987 | neutron-tempest-plugin requires Python '>=3.6' but the running Python is 2.7.17
  2020-04-20 00:19:33.080 | You are using pip version 9.0.3, however version 20.0.2 is available.
  2020-04-20 00:19:33.080 | You should consider upgrading via the 'pip install --upgrade pip' command.

  I think it should be run with Python 3.6, or we should use tag 0.9.0,
  which is the last version with support for Python 2.7.
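
  For illustration, the check pip performs against the plugin's declared
  python_requires can be reproduced with the real 'packaging' library:

    from packaging.specifiers import SpecifierSet

    requires = SpecifierSet(">=3.6")
    print("2.7.17" in requires)  # False -> "requires Python '>=3.6'" error
    print("3.6.9" in requires)   # True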

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1873776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1873798] [NEW] test_multicast_between_vms_on_same_network failures

2020-04-20 Thread Lucas Alvares Gomes
Public bug reported:

There are at least two failures happening in the
test_multicast_between_vms_on_same_network test; a short illustration
of failure #1 follows the excerpts.

#1.

2020-04-10 15:07:03.425451 | controller | {0} neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4.test_multicast_between_vms_on_same_network ... SKIPPED: neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4.test_multicast_between_vms_on_same_network [id-113486fc-24c9-4be4-8361-03b1c9892867] was marked as unstable because of bug 1850288, failure was: 'IPAddress' object has no attribute 'startswith'

#2.

neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4.test_multicast_between_vms_on_same_network[id-113486fc-24c9-4be4-8361-03b1c9892867] was marked as unstable because of bug 1850288, failure was: Command 'sudo sh -eux' failed, exit status: 127, host: '10.0.0.247'
script:
bash ~/unregistered_traffic_receiver.sh
stderr:
+ bash /root/unregistered_traffic_receiver.sh
bash: /root/unregistered_traffic_receiver.sh: No such file or directory
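
For illustration of failure #1 (using the real netaddr library): an
IPAddress object is not a str, so string methods fail until the address is
converted first.

    from netaddr import IPAddress

    ip = IPAddress("224.0.0.1")
    # ip.startswith("224.")            # AttributeError, as in the failure
    print(str(ip).startswith("224."))  # True -- convert before string checks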

** Affects: neutron
 Importance: Medium
 Assignee: Lucas Alvares Gomes (lucasagomes)
 Status: Confirmed

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1873798

Title:
  test_multicast_between_vms_on_same_network failures

Status in neutron:
  Confirmed

Bug description:
  There are at least two failures happening in the
  test_multicast_between_vms_on_same_network test.

  #1.

  2020-04-10 15:07:03.425451 | controller | {0} neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4.test_multicast_between_vms_on_same_network ... SKIPPED: neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4.test_multicast_between_vms_on_same_network [id-113486fc-24c9-4be4-8361-03b1c9892867] was marked as unstable because of bug 1850288, failure was: 'IPAddress' object has no attribute 'startswith'

  #2.

  
  neutron_tempest_plugin.scenario.test_multicast.MulticastTestIPv4.test_multicast_between_vms_on_same_network[id-113486fc-24c9-4be4-8361-03b1c9892867] was marked as unstable because of bug 1850288, failure was: Command 'sudo sh -eux' failed, exit status: 127, host: '10.0.0.247'
  script:
  bash ~/unregistered_traffic_receiver.sh
  stderr:
  + bash /root/unregistered_traffic_receiver.sh
  bash: /root/unregistered_traffic_receiver.sh: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1873798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1873774] [NEW] cloud-init does not apply network settings

2020-04-20 Thread Ankit Sharma
Public bug reported:

cloud-init does not apply network settings; the rest of the settings
are applied, however. This is not in a cloud but a VM on ESXi.


network:
  version: 2
  ethernets:
    ens192:
      dhcp4: false
      dhcp6: false
      addresses:
        - 10.0.0.2
      gateway4: 10.0.0.1
      nameservers:
        search: [xyz.com]
        addresses: [10.0.0.1]
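
One illustrative sanity check (the file name is hypothetical): confirm the
user-data parses into the netplan v2 shape before digging further. A hedged
observation, not a confirmed diagnosis: netplan v2 normally expects
addresses in CIDR form (e.g. 10.0.0.2/24), which the config above does not
use.

    import yaml

    with open("network-config.yaml") as f:  # hypothetical file name
        net = yaml.safe_load(f)["network"]

    assert net["version"] == 2
    eth = net["ethernets"]["ens192"]
    print(eth["addresses"])  # ['10.0.0.2'] -- note: no prefix length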

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1873774

Title:
  cloud-init does not apply network settings

Status in cloud-init:
  New

Bug description:
  clout-init does not apply network settings however rest of the
  settings are applied. This is not in cloud but a VM on ESXi

  
  network:
    version: 2
    ethernets:
      ens192:
        dhcp4: false
        dhcp6: false
        addresses:
          - 10.0.0.2
        gateway4: 10.0.0.1
        nameservers:
          search: [xyz.com]
          addresses: [10.0.0.1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1873774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1873776] [NEW] [stable/stein] Neutron-tempest-plugin jobs are failing due to wrong python version

2020-04-20 Thread Slawek Kaplonski
Public bug reported:

Issue like in
https://b5dda6b7d9fe09cc6972-899ca8db232a7bf5fd9d6d869408b0bd.ssl.cf1.rackcdn.com/720051/1/check/neutron-tempest-plugin-api-stein/5a092fb/controller/logs/devstacklog.txt
happens in all neutron-tempest-plugin jobs in the stable/stein branch.

Error:

2020-04-20 00:19:32.058 | Ignoring typed-ast: markers 'python_version == "3.6"' don't match your environment
2020-04-20 00:19:32.120 | Obtaining file:///opt/stack/neutron-tempest-plugin
2020-04-20 00:19:32.987 | neutron-tempest-plugin requires Python '>=3.6' but the running Python is 2.7.17
2020-04-20 00:19:33.080 | You are using pip version 9.0.3, however version 20.0.2 is available.
2020-04-20 00:19:33.080 | You should consider upgrading via the 'pip install --upgrade pip' command.

I think it should be run with Python 3.6, or we should use tag 0.9.0,
which is the last version with support for Python 2.7.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1873776

Title:
  [stable/stein] Neutron-tempest-plugin jobs are failing due to wrong
  python version

Status in neutron:
  Confirmed

Bug description:
  Issue like in
  https://b5dda6b7d9fe09cc6972-899ca8db232a7bf5fd9d6d869408b0bd.ssl.cf1.rackcdn.com/720051/1/check/neutron-tempest-plugin-api-stein/5a092fb/controller/logs/devstacklog.txt
  happens in all neutron-tempest-plugin jobs in the stable/stein branch.

  Error:

  2020-04-20 00:19:32.058 | Ignoring typed-ast: markers 'python_version == "3.6"' don't match your environment
  2020-04-20 00:19:32.120 | Obtaining file:///opt/stack/neutron-tempest-plugin
  2020-04-20 00:19:32.987 | neutron-tempest-plugin requires Python '>=3.6' but the running Python is 2.7.17
  2020-04-20 00:19:33.080 | You are using pip version 9.0.3, however version 20.0.2 is available.
  2020-04-20 00:19:33.080 | You should consider upgrading via the 'pip install --upgrade pip' command.

  I think it should be run with Python 3.6, or we should use tag 0.9.0,
  which is the last version with support for Python 2.7.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1873776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1873761] [NEW] Internal IP leak to physical interface from qrouter in DVR mode

2020-04-20 Thread Yedhu Sastri
Public bug reported:

Setup: OpenStack-Ansible cluster (Rocky - 18.1.8) with compute nodes
using DVR. OS version Ubuntu 16.04.6 LTS with kernel 4.15.0-34-generic.

Problem: We can see an internal IP leaked without NAT on our physical
interface. This happens in a TCP exchange where the client stops
abruptly before the server does.

Steps to reproduce:

TCP client (192.168.100.24, 10.96.48.159)
TCP server (192.168.100.20, 10.96.48.207)

The server sends RST packets on connection termination.

Step 1: Start the server and the client.
Step 2: Stop the client (KeyboardInterrupt) while the server is still in
the connection.
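
Hedged stand-ins for the client/server pair (addresses, ports and timing
are illustrative; only the behaviour visible in the traces below is
assumed: the client closes early, and the server replies about three
minutes later into the dead connection):

    import socket
    import time

    def server(port=5005):
        srv = socket.socket()
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, _ = srv.accept()
        conn.recv(1024)
        time.sleep(180)              # outlive the client and any idle
                                     # NAT/conntrack state on the router
        conn.sendall(b"late reply")  # answered by the client's stack with
                                     # the RSTs seen in the traces

    def client(host, port=5005):
        c = socket.socket()
        c.connect((host, port))
        c.sendall(b"hello")
        # interrupted here in the report (KeyboardInterrupt); the exiting
        # process closes the socket with a normal FIN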

tcpdump on the bond interface of the compute node on which the TCP
client is running:

07:50:35.658208 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [S], seq 3764020836, win 64240, options [mss 1460,sackOK,TS val 2823050719 ecr 0,nop,wscale 7], length 0
07:50:35.658539 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [S.], seq 1750463809, ack 3764020837, win 65160, options [mss 1460,sackOK,TS val 2874529221 ecr 2823050719,nop,wscale 7], length 0
07:50:35.658717 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [.], ack 1, win 502, options [nop,nop,TS val 2823050720 ecr 2874529221], length 0
07:50:35.658746 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [P.], seq 1:14, ack 1, win 502, options [nop,nop,TS val 2823050720 ecr 2874529221], length 13
07:50:35.658949 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [.], ack 14, win 509, options [nop,nop,TS val 2874529221 ecr 2823050720], length 0
07:50:35.659113 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [P.], seq 14:32, ack 1, win 502, options [nop,nop,TS val 2823050720 ecr 2874529221], length 18
07:50:35.659299 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [.], ack 32, win 509, options [nop,nop,TS val 2874529221 ecr 2823050720], length 0
07:50:40.729542 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [F.], seq 32, ack 1, win 502, options [nop,nop,TS val 2823055790 ecr 2874529221], length 0
07:50:40.773484 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [.], ack 33, win 509, options [nop,nop,TS val 2874534335 ecr 2823055790], length 0
07:53:35.732815 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [P.], seq 1:21, ack 33, win 509, options [nop,nop,TS val 2874709290 ecr 2823055790], length 20
07:53:35.732878 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [R.], seq 21, ack 33, win 509, options [nop,nop,TS val 2874709291 ecr 2823055790], length 0

07:53:35.733668 IP 192.168.100.24.36394 > 10.96.48.207.5005: Flags [R], seq 3764020869, win 0, length 0


tcpdump on the bond interface of the compute node on which the TCP
server is running:

07:50:35.658302 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [S], seq 3764020836, win 64240, options [mss 1460,sackOK,TS val 2823050719 ecr 0,nop,wscale 7], length 0
07:50:35.658589 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [S.], seq 1750463809, ack 3764020837, win 65160, options [mss 1460,sackOK,TS val 2874529221 ecr 2823050719,nop,wscale 7], length 0
07:50:35.658811 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [.], ack 1, win 502, options [nop,nop,TS val 2823050720 ecr 2874529221], length 0
07:50:35.658901 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [P.], seq 1:14, ack 1, win 502, options [nop,nop,TS val 2823050720 ecr 2874529221], length 13
07:50:35.658998 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [.], ack 14, win 509, options [nop,nop,TS val 2874529221 ecr 2823050720], length 0
07:50:35.659205 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [P.], seq 14:32, ack 1, win 502, options [nop,nop,TS val 2823050720 ecr 2874529221], length 18
07:50:35.659350 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [.], ack 32, win 509, options [nop,nop,TS val 2874529221 ecr 2823050720], length 0
07:50:40.729633 IP 10.96.48.159.36394 > 10.96.48.207.5005: Flags [F.], seq 32, ack 1, win 502, options [nop,nop,TS val 2823055790 ecr 2874529221], length 0
07:50:40.773533 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [.], ack 33, win 509, options [nop,nop,TS val 2874534335 ecr 2823055790], length 0
07:53:35.732868 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [P.], seq 1:21, ack 33, win 509, options [nop,nop,TS val 2874709290 ecr 2823055790], length 20
07:53:35.732898 IP 10.96.48.207.5005 > 10.96.48.159.36394: Flags [R.], seq 21, ack 33, win 509, options [nop,nop,TS val 2874709291 ecr 2823055790], length 0

07:53:35.733767 IP 192.168.100.24.36394 > 10.96.48.207.5005: Flags [R], seq 3764020869, win 0, length 0
07:53:35.734408 IP 192.168.100.24.36394 > 10.96.48.207.5005: Flags [R], seq 3764020869, win 0, length 0
07:53:35.734602 IP 192.168.100.24.36394 > 10.96.48.207.5005: Flags [R], seq 3764020869, win 0, length 0
07:53:35.734748 IP 192.168.100.24.36394 > 10.96.48.207.5005: Flags [R], seq 3764020869, win 0, length 0
07:53:35.734873 IP 192.168.100.24.36394 > 10.96.48.207.5005: Flags [R], seq 3764020869, win 0, length 0
07:53:35.734973 IP 192.168.100.24.36394 >

[Yahoo-eng-team] [Bug 1873735] [NEW] Functional test 'test_image_member_lifecycle_for_multiple_stores' fails intermittently for py37

2020-04-20 Thread Abhishek Kekane
Public bug reported:

Functional test test_image_member_lifecycle_for_multiple_stores fails
intermittently for Python 3.7 in upstream Zuul checks. The same test
passes in the Python 3.6 environment, and also passes in a local
Python 3.7 environment.

The logs offer no fruitful information either.

Reference logs: http://paste.openstack.org/show/792390/

** Affects: glance
 Importance: High
 Status: New

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1873735

Title:
  Functional test 'test_image_member_lifecycle_for_multiple_stores'
  fails intermittently for py37

Status in Glance:
  New

Bug description:
  Functional test test_image_member_lifecycle_for_multiple_stores fails
  intermittently for Python 3.7 in upstream Zuul checks. The same test
  passes in the Python 3.6 environment, and also passes in a local
  Python 3.7 environment.

  The logs offer no fruitful information either.

  Reference logs: http://paste.openstack.org/show/792390/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1873735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp