[Yahoo-eng-team] [Bug 1881311] [NEW] Neutron Tempest Pagination test cases fail if run in parallel

2020-06-01 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Neutron tempest pagination test cases fail if run in parallel.

# Issue
The pagination test cases see items created by other test cases, so the
expected and actual results differ and the test cases fail.

# Proposed solution:
1. Update the pagination test cases to query neutron resources only for a
specific project.

OR

2. Check the project ID in the expected test data and use that project ID to
match the results in the actual data, ignoring any other project ID.

Open to further discussion or any other solution.

For example:

## These two test cases fail:

Test case 1:
 
neutron_tempest_plugin.api.test_trunk.TrunksSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links[id-b4293e59-d794-4a93-be09-38667199ef68]

```code
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/test_trunk.py", line 353, in test_list_pagination_page_reverse_with_href_links
    self._test_list_pagination_page_reverse_with_href_links()
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1132, in inner
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1123, in inner
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1346, in _test_list_pagination_page_reverse_with_href_links
    self.assertSameOrder(expected_resources, reversed(resources))
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1160, in assertSameOrder
    self.assertEqual(len(original), len(actual))
  File "/usr/local/lib/python3.6/dist-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/local/lib/python3.6/dist-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: 5 != 6
```

Test case 2:
neutron_tempest_plugin.api.test_trunk.TrunksSearchCriteriaTest.test_list_pagination_with_href_links[id-dcd02a7a-f07e-4d5e-b0ca-b58e48927a9b]

```code
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/test_trunk.py", line 337, in test_list_pagination_with_href_links
    self._test_list_pagination_with_href_links()
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1132, in inner
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1123, in inner
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1312, in _test_list_pagination_with_href_links
    self._test_list_pagination_iteratively(self._list_all_with_hrefs)
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1241, in _test_list_pagination_iteratively
    len(expected_resources), sort_args
  File "/usr/local/lib/python3.6/dist-packages/neutron_tempest_plugin/api/base.py", line 1302, in _list_all_with_hrefs
    self.assertEqual(1, len(resources_))
  File "/usr/local/lib/python3.6/dist-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/local/lib/python3.6/dist-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: 1 != 0
```

## Reason for failure:
More neutron trunks are returned than expected.

The code for the two tests below has to be fixed so that trunks are returned
only for one project (i.e. specify project_id in the GET /trunks call). A
parallel test may be creating trunks at the same time, which is why these
test cases fail. A minimal sketch of such a filter follows the list.

1. test_list_pagination_page_reverse_with_href_links
 -> Expected contains trunks only for project 864acee2d6c64faa8750cfe53437a158
 -> Actual (paginated) returns trunks for more projects:
    89e63227c3b6405498f8fb1973cd055d and 864acee2d6c64faa8750cfe53437a158

2. test_list_pagination_with_href_links
 Same issue as in 1.
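
A minimal sketch of such a filter, assuming the usual tempest client
conventions (the list_trunks method and the project_id handling are
assumptions for illustration, not necessarily the exact plugin API):

```python
def list_trunks_for_project(client, project_id, **kwargs):
    """List trunks via the tempest network client, keeping only the given
    project's trunks so resources created by parallel tests are ignored."""
    body = client.list_trunks(**kwargs)  # assumption: client exposes list_trunks
    return [trunk for trunk in body['trunks']
            if trunk.get('project_id') == project_id]
```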

## The tests pass when run in serial mode (concurrency = 1):

{0} neutron_tempest_plugin.api.test_networks.NetworksSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links [3.404399s] ... ok
{0} neutron_tempest_plugin.api.test_networks.NetworksSearchCriteriaTest.test_list_pagination_with_href_links [9.412321s] ... ok
{0} neutron_tempest_plugin.api.test_networks.NetworksSearchCriteriaTest.test_list_pagination_with_marker [4.566547s] ... ok
{0} neutron_tempest_plugin.api.test_ports.PortsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links [4.636547s] ... ok
{0} neutron_tempest_plugin.api.test_subnets.SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links

[Yahoo-eng-team] [Bug 1881311] Re: Neutron Tempest Pagination test cases fail if run in parallel

2020-06-01 Thread Martin Kopec
I'm changing the project to neutron, as the problem seems to be within the
neutron tempest plugin rather than tempest.

** Project changed: tempest => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881311

Title:
  Neutron Tempest Pagination test cases fail if run in parallel

Status in neutron:
  New


[Yahoo-eng-team] [Bug 1881685] [NEW] Inconsistent coding while upgrading

2020-06-01 Thread hanchl
Public bug reported:

I am trying to upgrade the platform from Queens to Rocky. To meet production
needs, I cannot upgrade the platform in place, so I need to copy the Queens
data to the Rocky environment before upgrading. After importing the Queens
database into the Rocky environment, I run the following command:

neutron-db-manage \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  upgrade head

Then I get this error:

oslo_db.exception.DBError: (pymysql.err.InternalError) (1005, u'Can\'t
create table `neutron`.`portforwardings` (errno: 150 "Foreign key
constraint is incorrectly formed")') [SQL: u'\nCREATE TABLE
portforwardings (\n\tid VARCHAR(36) NOT NULL, \n\tfloatingip_id
VARCHAR(36) NOT NULL, \n\texternal_port INTEGER NOT NULL,
\n\tinternal_neutron_port_id VARCHAR(36) NOT NULL, \n\tprotocol
VARCHAR(40) NOT NULL, \n\tsocket VARCHAR(36) NOT NULL, \n\tPRIMARY KEY
(id), \n\tFOREIGN KEY(floatingip_id) REFERENCES floatingips (id) ON
DELETE CASCADE, \n\tFOREIGN KEY(internal_neutron_port_id) REFERENCES
ports (id) ON DELETE CASCADE, \n\tCONSTRAINT
uniq_port_forwardings0floatingip_id0external_port UNIQUE (floatingip_id,
external_port), \n\tCONSTRAINT
uniq_port_forwardings0internal_neutron_port_id0socket UNIQUE
(internal_neutron_port_id, socket)\n)ENGINE=InnoDB\n\n'] (Background on
this error at: http://sqlalche.me/e/2j85)

Then I checked the two tables, floatingips and ports, and found that both were
created with "DEFAULT CHARSET=utf8", so I connected to the database, created
the portforwardings table manually, and the upgrade then succeeded.

Is this a bug?
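
A possible way to confirm the suspected charset mismatch before running the
migration (a minimal sketch, assuming pymysql is available; the connection
parameters are placeholders to be replaced with real credentials):

```python
# Compare the collations of the referenced tables with the database default:
# errno 150 ("Foreign key constraint is incorrectly formed") is commonly caused
# by such a mismatch after importing a dump into a new environment.
import pymysql

conn = pymysql.connect(host='localhost', user='neutron',
                       password='CHANGE_ME', database='information_schema')
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT TABLE_NAME, TABLE_COLLATION FROM TABLES "
            "WHERE TABLE_SCHEMA = 'neutron' "
            "AND TABLE_NAME IN ('floatingips', 'ports')")
        for name, collation in cur.fetchall():
            print(name, collation)
        cur.execute("SELECT @@character_set_database, @@collation_database")
        print('database default:', cur.fetchone())
finally:
    conn.close()
```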

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881685

Title:
  Inconsistent coding while upgrading

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881685/+subscriptions



[Yahoo-eng-team] [Bug 1823818] Re: Memory leak in some neutron agents

2020-06-01 Thread Mark Goddard
** Also affects: kolla/rocky
   Importance: Undecided
   Status: New

** Changed in: kolla/rocky
   Status: New => Triaged

** Changed in: kolla
   Status: Confirmed => Invalid

** Changed in: kolla/rocky
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1823818

Title:
  Memory leak in some neutron agents

Status in kolla:
  Invalid
Status in kolla rocky series:
  Triaged
Status in neutron:
  Invalid

Bug description:
  We have an OpenStack deployment using the Rocky release. We have seen a
  memory leak in some neutron agents twice in our environment since it was
  first deployed this January.

  Below are some of the commands we ran to identify the issue and their
  corresponding output:

  This was on one of the compute nodes:
  ---
  [root@c1s4 ~]#  ps aux --sort -rss|head -n1

  USER    PID %CPU %MEM      VSZ      RSS TTY    STAT START TIME    COMMAND

  42435 48229  3.5 73.1 98841060 96323252 pts/13 S+   2018  1881:25 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  ---

  And this was on one of the controller nodes:
  ---
  [root@r1 neutron]# ps aux --sort -rss|head

  USER    PID %CPU %MEM      VSZ      RSS TTY    STAT START TIME   COMMAND

  42435 30940  3.1 48.6 68596320 64144784 pts/37 S+   Jan08 588:26 /usr/bin/python2 /usr/bin/neutron-lbaasv2-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/lbaas_agent.ini --config-file /etc/neutron/neutron_lbaas.conf

  42435 20902  2.8 26.1 36055484 34408952 pts/35 S+   Jan08 525:12 /usr/bin/python2 /usr/bin/neutron-dhcp-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini

  42434 34199  7.1  6.0 39420516 8033480 pts/11 Sl+   2018 3620:08 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql/ --plugin-dir=/usr/lib64/mysql/plugin --wsrep_provider=/usr/lib64/galera/libgalera_smm.so --wsrep_on=ON --log-error=/var/log/kolla/mariadb/mariadb.log --pid-file=/var/lib/mysql/mariadb.pid --port=3306 --wsrep_start_position=0809f452-0251-11e9-8e60-6ad108d9be7b:0

  42435  8327  2.6  2.2 3546004 3001772 pts/10 S+    Jan17 152:04 /usr/bin/python2 /usr/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/neutron_vpnaas.conf --config-file /etc/neutron/l3_agent.ini --config-file /etc/neutron/fwaas_driver.ini

  42435 40171  2.6  2.1 3893480 2840852 pts/19 S+    Jan16 190:54 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  root  42430  3.1  0.3 4412216 495492 pts/29 SLl+   Jan16 231:20 /usr/sbin/ovs-vswitchd unix:/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --log-file=/var/log/kolla/openvswitch/ovs-vswitchd.log
  ---

  When it happened, we saw a lot of 'OSError: [Errno 12] Cannot allocate
  memory' errors in various neutron-* logs, because there was no free memory
  left. However, we do not yet know what triggered the memory leak.
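
  To measure which agent is leaking and how quickly, a periodic RSS sample
  could be collected; a minimal sketch, assuming the psutil library is
  installed (the agent names and sampling interval are arbitrary choices):

```python
import time
import psutil

AGENTS = ('neutron-openvswitch-agent', 'neutron-dhcp-agent',
          'neutron-l3-agent', 'neutron-lbaasv2-agent')

while True:
    # Iterate over all processes and log the resident set size of any
    # neutron agent, so growth over time can be plotted later.
    for proc in psutil.process_iter(['pid', 'cmdline', 'memory_info']):
        cmdline = ' '.join(proc.info['cmdline'] or [])
        if any(agent in cmdline for agent in AGENTS):
            rss_mb = proc.info['memory_info'].rss / (1024.0 * 1024.0)
            print('%s pid=%d rss=%.1f MiB' % (time.ctime(),
                                              proc.info['pid'], rss_mb))
    time.sleep(300)  # sample every 5 minutes
```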

  Here is our globals.yml:
  -
  [root@r1 kolla]# cat globals.yml |grep -v "^#"|tr -s "\n"
  ---
  openstack_release: "rocky"
  kolla_internal_vip_address: "172.21.69.22"
  enable_barbican: "yes"
  enable_ceph: "yes"
  enable_ceph_mds: "yes"
  enable_ceph_rgw: "yes"
  enable_cinder: "yes"
  enable_neutron_lbaas: "yes"
  enable_neutron_fwaas: "yes"
  enable_neutron_agent_ha: "yes"
  enable_ceph_rgw_keystone: "yes"
  ceph_pool_pg_num: 16
  ceph_pool_pgp_num: 16
  ceph_osd_store_type: "xfs"
  glance_backend_ceph: "yes"
  glance_backend_file: "no"
  glance_enable_rolling_upgrade: "no"
  ironic_dnsmasq_dhcp_range:
  tempest_image_id:
  tempest_flavor_ref_id:
  tempest_public_network_id:
  tempest_floating_network_name:
  ---

  
  I did some searching on Google and found that this OVS bug seems highly
  related: https://bugzilla.redhat.com/show_bug.cgi?id=1667007

  I am not sure whether the fix has been included in the latest Rocky Kolla
  images.

  
  Best regards,

  Lei

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1823818/+subscriptions



[Yahoo-eng-team] [Bug 1881628] [NEW] system endpoint changes not reflected under "Projects/API Access" without a logout.

2020-06-01 Thread Kristine Bujold
Public bug reported:

If the endpoints are changed on the system, the user must log out and then log
back in to see the changes reflected in Horizon under "Projects/API Access".
 
openstack endpoint list --interface public
+----------------------------------+-----------+--------------+-----------------+---------+-----------+--------------------------------+
| ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                            |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+--------------------------------+
| fd1f7a216a5d49a0a89d81638eea7371 | RegionOne | fm           | faultmanagement | True    | public    | http://128.224.150.133:18002   |
| 3af06286e2294eaa80a08fe578dc8ec7 | RegionOne | patching     | patching        | True    | public    | http://128.224.150.133:15491   |
| 8848d7d723094b57ac63be59faa1482d | RegionOne | vim          | nfv             | True    | public    | http://128.224.150.133:4545    |
| 290185e21f894138a73229d3f682941b | RegionOne | smapi        | smapi           | True    | public    | http://128.224.150.133:        |
| e18e2af4208c4a99822e09eacdcd9bb1 | RegionOne | keystone     | identity        | True    | public    | http://128.224.150.133:5000/v3 |
| db634f5a0d5c456a88da4adb1f2784cf | RegionOne | barbican     | key-manager     | True    | public    | http://128.224.150.133:9311    |
| c4db750fa4124bdc9e1dd70f762d8a70 | RegionOne | sysinv       | platform        | True    | public    | http://128.224.150.133:6385/v1 |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+--------------------------------+

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1881628

Title:
  system endpoint changes not reflected under "Projects/API Access"
  without a logout.

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1881628/+subscriptions



[Yahoo-eng-team] [Bug 1870096] Re: soft-affinity weight not normalized based on server group's maximum

2020-06-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/713863
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5ab9ef11e27014ce8b43e1bac76903fed70d0fbf
Submitter: Zuul
Branch: master

commit 5ab9ef11e27014ce8b43e1bac76903fed70d0fbf
Author: Johannes Kulik 
Date:   Thu Mar 19 12:51:25 2020 +0100

Don't recompute weighers' minval/maxval attributes

Changing the minval/maxval attribute to the minimum/maximum of every
weigher run changes the outcome of future runs. We noticed it in the
SoftAffinityWeigher, where a previous run with a host hosting a lot of
instances for a server-group would make a later run use that maximum.
This resulted in the weight being lower than 1 for a host hosting all
instances of another server-group, if the number of instances of that
server-group on that host is less than a previous server-group's
instances on any host.

Previously, there were two places that computed the maxval/minval - once
in normalize() and once in weigh_objects() - but only the one in
weigh_objects() saved the values to the weigher.

The code now uses the maxval/minval as defined by the weigher and keeps
the weights inside the maxval-minval range. There's also only one place
to compute the minval/maxval now, if the weigher did not set a value:
normalize().

Closes-Bug: 1870096

Change-Id: I60a90dabcd21b4e049e218c7c55fa075bb7ff933


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870096

Title:
  soft-affinity weight not normalized based on server group's maximum

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New

Bug description:
  Description
  ===

  When using soft-affinity to schedule instances on the same host, the
  weight is unexpectedly low if a server was previously scheduled to any
  server-group with more members on a host. This low weight can then be
  easily outweighed by differences in resources (e.g. RAM/CPU).

  Steps to reproduce
  ==

  Do not restart nova-scheduler in the process or the bug doesn't
  appear. You need to change the ServerGroupSoftAffinityWeigher to
  actually log the weights it computes to see the problem.

  * Create a server-group with soft-affinity (let's call it A)
  * Create 6 servers in server-group A, one after the other so they end up on 
the same host.
  * Create another server-group with soft-affinity (B)
  * Create 1 server in server-group B
  * Create 1 server in server-group B and look at the scheduler's weights 
assigned to the hosts by the ServerGroupSoftAffinityWeigher.

  Expected result
  ===

  The weight assigned to the host by the ServerGroupSoftAffinityWeigher
  should be 1, as the maximum number of instances for server-group B is
  on that host (the one we created there before).

  Actual result
  =
  The weight assigned to the host by the ServerGroupSoftAffinityWeigher is 0.2, 
as the maximum number of instances ever encountered on a host is 5.
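
  The effect can be reproduced with a small normalization sketch; this is not
  the nova weigher code, it only mirrors the (value - minval) / (maxval -
  minval) scaling described in the fix, with illustrative numbers from this
  report:

```python
def normalize(value, minval, maxval):
    """Scale value into [0, 1] over the given range."""
    if maxval == minval:
        return 1.0
    return (value - minval) / float(maxval - minval)

# Run 1: server-group A placed 5 instances on the chosen host, and that
# maximum is (incorrectly) remembered by the weigher for later runs.
stale_maxval = 5.0

# Run 2: server-group B has 1 instance on the host. Normalizing against the
# current run's maximum gives 1.0, but reusing the stale maximum gives 0.2.
print(normalize(1, 0.0, 1.0))           # 1.0 -- expected behaviour
print(normalize(1, 0.0, stale_maxval))  # 0.2 -- observed behaviour
```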

  Environment
  ===

  We noticed this on a queens version of nova a year ago. Can't give the
  exact commit anymore, but the code still looks broken in current
  master.

  I've opened a review-request for fixing this bug here:
  https://review.opendev.org/#/c/713863/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1870096/+subscriptions



[Yahoo-eng-team] [Bug 1881558] [NEW] [OVN][Cirros 0.5.1] IPv6 hot plug tempest tests are failing with new Cirros

2020-06-01 Thread Maciej Jozefczyk
Public bug reported:

Recently merged code [1] added a few IPv6 hotplug scenarios.
In the meantime we are working on enabling the new Cirros on the OVN gates [2].

After merging [1] we found that on [2] the new tests started to fail:

neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_dhcpv6stateless
neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_slaac.

Example failure:
https://ef5d43af22af7b1c1050-17fc8f83c20e6521d7d8a3ccd8bca531.ssl.cf2.rackcdn.com/711425/10/check/neutron-ovn-tempest-ovs-release

[1] https://review.opendev.org/#/c/711931/
[2] https://review.opendev.org/#/c/711425/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Summary changed:

- [OVN][CIrros 0.5.1] IPv6 hot plug tempest tests are failing with new cirros
+ [OVN][Cirros 0.5.1] IPv6 hot plug tempest tests are failing with new Cirros

** Description changed:

- Recently merged code [1] added a IPv6 hotplug scenarios.
+ Recently merged code [1] added a few IPv6 hotplug scenarios.
  In meantime we're working on enabling new Cirros on OVN Gates [2]
  
  After merging [1] we can find that on [2] the new tests started to fail:
  
  
neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_dhcpv6stateless
  neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_slaac.
  
- 
  Example failure:
  
https://ef5d43af22af7b1c1050-17fc8f83c20e6521d7d8a3ccd8bca531.ssl.cf2.rackcdn.com/711425/10/check/neutron-ovn-tempest-ovs-release
  
- 
  [1] https://review.opendev.org/#/c/711931/
  [2] https://review.opendev.org/#/c/711425/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881558

Title:
  [OVN][Cirros 0.5.1] IPv6 hot plug tempest tests are failing with new
  Cirros

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881558/+subscriptions



[Yahoo-eng-team] [Bug 1118815] Re: Remove python-oauth from the archive

2020-06-01 Thread Florian Guitton
** Also affects: charm-hacluster
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1118815

Title:
  Remove python-oauth from the archive

Status in OpenStack hacluster charm:
  New
Status in cloud-init:
  Fix Released
Status in Gwibber:
  New
Status in MAAS:
  Invalid
Status in pyjuju:
  Triaged
Status in U1DB:
  New
Status in identicurse package in Ubuntu:
  Confirmed
Status in jsonbot package in Ubuntu:
  Confirmed
Status in python-django-piston package in Ubuntu:
  Triaged
Status in python-oauth package in Ubuntu:
  Confirmed
Status in turpial package in Ubuntu:
  Confirmed
Status in tweepy package in Ubuntu:
  Confirmed

Bug description:
  This bug tracks the removal of python-oauth from the archive.  (See
  also
  https://blueprints.launchpad.net/ubuntu/+spec/foundations-r-python3-oauth
  for additional details).

  There are several very good reasons to remove python-oauth and port
  all reverse depends to python-oauthlib.

   * upstream oauth has been abandoned since 2009
   * upstream oauth only supports OAuth 1 (and probably not even the RFC 5849 
standard)
   * upstream oauth only supports Python 2
   * upstream oauthlib is actively maintained
   * upstream oauthlib supports Python 2 and Python 3
   * upstream oauthlib supports RFC 5849 and the OAuth2 spec draft

  As of yet, we cannot remove python-oauth because of existing reverse
  dependencies.  I'll add each of those as bug tasks to this one for
  tracking purposes.  When the time comes, I'll subscribe ~ubuntu-
  archive to the bug to do the final deed.

  It will need to be blacklisted from Debian sync too.

  In the meantime, *please* don't write any new code using python-oauth!
  Use python-oauthlib.

  http://pypi.python.org/pypi/oauthlib
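
  For reference, a minimal signing sketch with oauthlib's OAuth 1 client, to
  show the intended replacement for python-oauth (the key and secret values
  are placeholders):

```python
from oauthlib.oauth1 import Client

# Sign a GET request with OAuth 1 credentials; the URL is only an example.
client = Client(client_key='example-key', client_secret='example-secret')
uri, headers, body = client.sign('https://example.com/api/resource')
print(headers['Authorization'])  # signed OAuth 1 Authorization header
```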

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1118815/+subscriptions



[Yahoo-eng-team] [Bug 1881556] [NEW] [ovs-agent]exception cause stale flow cleaned but new flow not installed

2020-06-01 Thread Wei Hui
Public bug reported:

This happened during a kolla upgrade of neutron.

rpc_loop iteration 1:
Physical bridge br-ex was just re-created, so bridges_recreated is set to True
and need_clean_stale_flow is set to True.
process_network_ports is called with provisioning_needed=True but throws an
exception, so provision_local_vlan is never called and need_clean_stale_flow
stays True.

rpc_loop iteration 2:
bridges_recreated is now False.
process_network_ports is called with provisioning_needed=False, so
provision_local_vlan is not called.
The agent then cleans the stale flows and sets need_clean_stale_flow = False.
Result: the stale flows are removed but the new flows were never installed, so
the flows are lost.

The following is the detailed log:

 457:2020-05-29 11:12:37.070 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:0 started rpc_loop /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2214
1912:2020-05-29 11:13:04.994 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Processing port: a31f01a5-371a-4b44-8374-d87272d322ce treat_devices_added_or_updated /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1692
1914:2020-05-29 11:13:04.995 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Assigning 43 as local vlan for net-id=c514bab4-139d-4eb0-8da2-3ab89938cbb2
11362:2020-05-29 11:17:20.688 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Cleaning stale br-ex flows
11393:2020-05-29 11:17:21.831 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:0 completed. Processed ports statistics: {'regular': {'updated': 0, 'added': 120, 'removed': 0}}. Elapsed:284.761 loop_count_and_wait /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2003

11396:2020-05-29 11:17:21.832 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:1 started rpc_loop /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2214
11399:2020-05-29 11:17:21.840 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Physical bridge br-ex was just re-created.
12771:2020-05-29 11:19:15.353 7 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Error while processing VIF ports: MessagingTimeout: Timed out waiting for a reply to message ID df2deeed7f614e67abd42b088e0258e0
12811:2020-05-29 11:19:15.358 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:1 completed. Processed ports statistics: {'regular': {'updated': 0, 'added': 0, 'removed': 0}}. Elapsed:113.526 loop_count_and_wait /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2003

12813:2020-05-29 11:19:15.359 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:2 started rpc_loop /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2214
14372:2020-05-29 11:21:03.018 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Processing port: a31f01a5-371a-4b44-8374-d87272d322ce treat_devices_added_or_updated /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1692
20013:2020-05-29 11:23:46.786 7 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Cleaning stale br-ex flows
20037:2020-05-29 11:23:47.709 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:2 completed. Processed ports statistics: {'regular': {'updated': 58, 'added': 120, 'removed': 0}}. Elapsed:272.350 loop_count_and_wait /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2003

20039:2020-05-29 11:23:47.710 7 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cc051aec-d6e6-4278-8944-13835ee1c47b - - - - -] Agent rpc_loop - iteration:3 started rpc_loop /var/lib/kolla/venv/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2214
20307:2020-05-29

[Yahoo-eng-team] [Bug 1881557] [NEW] Can not resize to same host for libvirt driver

2020-06-01 Thread Eric Xie
Public bug reported:

Description
===
Previously, an instance could be resized or cold-migrated to the same host
with the libvirt driver if CONF.allow_resize_to_same_host was set to true.
With the latest source, the operation fails with "UnableToMigrateToSelf:
Unable to migrate instance".

Steps to reproduce
==
* Configure CONF.allow_resize_to_same_host to true
* Create one instance
* Cold-migrate the instance

Expected result
===
The instance can be cold-migrated on same host

Actual result
=
Got error "UnableToMigrateToSelf: Unable to migrate instance"

Environment
===
$ git log
commit f571151e79dbd87a76ae3222a9f5b507d85648b1
Merge: 3233392 236f1b2
Author: Zuul 
Date:   Sat May 30 06:55:18 2020 +

Merge "zuul: Make devstack-plugin-ceph-tempest-py3 a voting check
job again"

libvirt + KVM

Logs & Configs
==
[DEFAULT]
allow_resize_to_same_host = true

nova-compute
2020-06-01 06:53:24.367 28545 ERROR nova.compute.manager [instance: 
982f9273-eb50-443a-8bbc-fa728ceac8e4] UnableToMigrateToSelf: Unable to migrate 
instance (982f9273-eb50-443a-8bbc-fa728ceac8e4) to current host (compute04).

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1881557

Title:
  Can not resize to same host for libvirt driver

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1881557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp