[Yahoo-eng-team] [Bug 1863038] [NEW] Nova need to consider ironic node as 'host'

2020-02-12 Thread Yang Youseok
Public bug reported:

For routed networks with ironic, neutron (networking-baremetal) tries to
add a 'host' to the segment aggregate.

But the 'host' neutron passes is the ironic node UUID, which is not
considered a 'host' on the nova side. As a result, neutron hits an
exception when it tries to add the 'host' to the segment aggregate in
(https://github.com/openstack/neutron/blob/master/neutron/services/segments/plugin.py#L253)

Nova already considers an ironic node a resource provider, which is
effectively the same as a 'host', so we need some way to add an 'ironic
node' to aggregates.
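A hedged sketch of the call that fails (the keystoneauth session setup,
aggregate id and node UUID below are placeholders for illustration, not
taken from this report): networking-baremetal effectively asks nova to
treat the ironic node UUID as a compute host, which the aggregate API
rejects.

    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    # Placeholder credentials; any authenticated session behaves the same way.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
                                    username='admin', password='secret',
                                    project_name='admin',
                                    user_domain_id='default',
                                    project_domain_id='default')
    sess = session.Session(auth=auth)
    nova = nova_client.Client('2.53', session=sess)

    segment_aggregate_id = 4                              # aggregate backing the routed segment (placeholder)
    ironic_node_uuid = 'ironic-node-uuid-placeholder'     # what networking-baremetal passes as the "host"

    # Nova expects a compute service host name here; an ironic node UUID is
    # not one, so this call fails and the segments plugin logs the exception.
    nova.aggregates.add_host(segment_aggregate_id, ironic_node_uuid)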

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863038

Title:
  Nova need to consider ironic node as 'host'

Status in OpenStack Compute (nova):
  New

Bug description:
  For routed networks with ironic, neutron (networking-baremetal) tries to
  add a 'host' to the segment aggregate.

  But the 'host' neutron passes is the ironic node UUID, which is not
  considered a 'host' on the nova side. As a result, neutron hits an
  exception when it tries to add the 'host' to the segment aggregate in
  (https://github.com/openstack/neutron/blob/master/neutron/services/segments/plugin.py#L253)

  Nova already considers an ironic node a resource provider, which is
  effectively the same as a 'host', so we need some way to add an 'ironic
  node' to aggregates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492140] Re: consoleauth token displayed in log file

2020-02-12 Thread melanie witt
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/train
   Importance: Undecided => Low

** Changed in: nova/train
   Status: New => Fix Released

** Changed in: nova/train
 Assignee: (unassigned) => Claudio Hollanda (gibi)

** Changed in: nova/train
 Assignee: Claudio Hollanda (gibi) => Balazs Gibizer (gibizer)

** Changed in: nova/stein
   Importance: Undecided => Low

** Changed in: nova/stein
   Status: New => Fix Released

** Changed in: nova/stein
 Assignee: (unassigned) => Balazs Gibizer (gibizer)

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/rocky
 Assignee: (unassigned) => Balazs Gibizer (gibizer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492140

Title:
  consoleauth token displayed in log file

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  Fix Released
Status in OpenStack Compute (nova) train series:
  Fix Released
Status in oslo.utils:
  Fix Released
Status in OpenStack Security Advisory:
  Triaged

Bug description:
  When the instance console is accessed, the auth token is displayed in
  nova-consoleauth.log:

  nova-consoleauth.log:874:2015-09-02 14:20:36 29941 INFO nova.consoleauth.manager [req-6bc7c116-5681-43ee-828d-4b8ff9d566d0 fe3cd6b7b56f44c9a0d3f5f2546ad4db 37b377441b174b8ba2deda6a6221e399] Received Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, {'instance_uuid': u'dd29a899-0076-4978-aa50-8fb752f0c3ed', 'access_url': u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7', 'token': u'f8ea537c-b924-4d92-935e-4c22ec90d5f7', 'last_activity_at': 1441203636.387588, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'192.168.245.6', 'port': u'5900'}
  nova-consoleauth.log:881:2015-09-02 14:20:52 29941 INFO nova.consoleauth.manager [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0 None None] Checking Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, True

  and

  nova-novncproxy.log:30:2015-09-02 14:20:52 31927 INFO nova.console.websocketproxy [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0 None None]   3: connect info: {u'instance_uuid': u'dd29a899-0076-4978-aa50-8fb752f0c3ed', u'internal_access_path': None, u'last_activity_at': 1441203636.387588, u'console_type': u'novnc', u'host': u'192.168.245.6', u'token': u'f8ea537c-b924-4d92-935e-4c22ec90d5f7', u'access_url': u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7', u'port': u'5900'}

  This token has a short lifetime, but the exposure still represents a
  potential security weakness, especially as the log records in question
  are INFO level and thus available via centralized logging. A user with
  real-time access to these records could mount a denial-of-service
  attack by accessing the instance console and sending Ctrl-Alt-Del to
  reboot it.

  Alternatively, data privacy could be compromised if the attacker were
  able to obtain user credentials.
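  A possible mitigation, sketched here only as an illustration (it is not
  the fix that actually landed, and it assumes an oslo.utils release whose
  sensitive-key list covers tokens): scrub the connect-info dict before it
  reaches the INFO logs.

      from oslo_log import log as logging
      from oslo_utils import strutils

      LOG = logging.getLogger(__name__)

      def log_connect_info(connect_info):
          # mask_dict_password() replaces the values of known sensitive keys
          # (passwords, auth tokens, ...) with '***' before logging.
          LOG.info('connect info: %s', strutils.mask_dict_password(connect_info))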

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863021] Re: eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-02-12 Thread Corey Bryant
See attached for much more detailed recreation/patching/fixing.

** Attachment added: "nova-eventlet-monkey-patch.txt"
   
https://bugs.launchpad.net/nova/+bug/1863021/+attachment/5327763/+files/nova-eventlet-monkey-patch.txt

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: New => Triaged

** Changed in: nova (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863021

Title:
  eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  This appears to be the same issue documented here:
  https://github.com/eventlet/eventlet/issues/592

  However, I only seem to hit this with Python 3.8. Basically, nova
  services fail with:

   Exception ignored in: 
   Traceback (most recent call last):
     File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
       assert len(_active) == 1
   AssertionError:
   Exception ignored in: 
   Traceback (most recent call last):
     File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
       assert len(_active) == 1
   AssertionError:

  Patching nova/monkey_patch.py with the following appears to fix this:

  diff --git a/nova/monkey_patch.py b/nova/monkey_patch.py
  index a07ff91dac..bb7252c643 100644
  --- a/nova/monkey_patch.py
  +++ b/nova/monkey_patch.py
  @@ -59,6 +59,9 @@ def _monkey_patch():
       else:
           eventlet.monkey_patch()
  
  +    import __original_module_threading
  +    import threading
  +    __original_module_threading.current_thread.__globals__['_active'] = threading._active
       # NOTE(rpodolyaka): import oslo_service first, so that it makes eventlet
       # hub use a monotonic clock to avoid issues with drifts of system time (see

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863021] [NEW] eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-02-12 Thread Corey Bryant
Public bug reported:

This appears to be the same issue documented here:
https://github.com/eventlet/eventlet/issues/592

However, I only seem to hit this with Python 3.8. Basically, nova services
fail with:

 Exception ignored in: 
 Traceback (most recent call last):
   File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
     assert len(_active) == 1
 AssertionError:
 Exception ignored in: 
 Traceback (most recent call last):
   File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
     assert len(_active) == 1
 AssertionError:

Patching nova/monkey_patch.py with the following appears to fix this:

diff --git a/nova/monkey_patch.py b/nova/monkey_patch.py
index a07ff91dac..bb7252c643 100644
--- a/nova/monkey_patch.py
+++ b/nova/monkey_patch.py
@@ -59,6 +59,9 @@ def _monkey_patch():
     else:
         eventlet.monkey_patch()
 
+    import __original_module_threading
+    import threading
+    __original_module_threading.current_thread.__globals__['_active'] = threading._active
     # NOTE(rpodolyaka): import oslo_service first, so that it makes eventlet
     # hub use a monotonic clock to avoid issues with drifts of system time (see
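A minimal standalone sketch of what the patch above does (assuming eventlet
is installed and monkey patching has already saved the original threading
module under the __original_module_threading alias, as the diff relies on):

    import eventlet
    eventlet.monkey_patch()

    # eventlet keeps the un-patched threading module importable under this alias.
    import __original_module_threading
    import threading

    # Point the saved module's _active registry at the green-thread registry so
    # threading._after_fork() in a forked child sees exactly one active thread
    # and "assert len(_active) == 1" no longer fires.
    __original_module_threading.current_thread.__globals__['_active'] = threading._active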

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: High
 Status: Triaged


** Tags: py38

** Tags added: py38

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863021

Title:
  eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  This appears to be the same issue documented here:
  https://github.com/eventlet/eventlet/issues/592

  However, I only seem to hit this with Python 3.8. Basically, nova
  services fail with:

   Exception ignored in: 
   Traceback (most recent call last):
     File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
       assert len(_active) == 1
   AssertionError:
   Exception ignored in: 
   Traceback (most recent call last):
     File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
       assert len(_active) == 1
   AssertionError:

  Patching nova/monkey_patch.py with the following appears to fix this:

  diff --git a/nova/monkey_patch.py b/nova/monkey_patch.py
  index a07ff91dac..bb7252c643 100644
  --- a/nova/monkey_patch.py
  +++ b/nova/monkey_patch.py
  @@ -59,6 +59,9 @@ def _monkey_patch():
       else:
           eventlet.monkey_patch()
  
  +    import __original_module_threading
  +    import threading
  +    __original_module_threading.current_thread.__globals__['_active'] = threading._active
       # NOTE(rpodolyaka): import oslo_service first, so that it makes eventlet
       # hub use a monotonic clock to avoid issues with drifts of system time (see

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863009] [NEW] os-deferred-delete restore server API policy is allowed for everyone even policy defaults is admin_or_owner

2020-02-12 Thread Ghanshyam Mann
Public bug reported:

The os-deferred-delete restore server API policy defaults to
admin_or_owner [1], but the API is allowed for everyone.

We can see that the test calling the API with another project's context succeeds:
- https://review.opendev.org/#/c/707455/

This is because the API does not pass the server's project_id in the policy target:
- https://github.com/openstack/nova/blob/1fcd74730d343b7cee12a0a50ea537dc4ff87f65/nova/api/openstack/compute/deferred_delete.py#L38

and if no target is passed, policy.py adds the default target, which is nothing but context.project_id (so anyone who tries is allowed):
- https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

[1]
- https://github.com/openstack/nova/blob/1fcd74730d343b7cee12a0a50ea537dc4ff87f65/nova/policies/deferred_delete.py#L27
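A hedged sketch of the kind of change this points at (helper and policy
symbols below follow nova's usual patterns but are illustrative, not the
exact fix): look up the instance first and pass its project_id as the
policy target, so admin_or_owner can actually compare owner and caller.

    def restore(self, req, id, body):
        context = req.environ['nova.context']
        # Fetch the instance, then hand its project_id to the policy check
        # as the target instead of calling context.can() with no target.
        instance = common.get_instance(self.compute_api, context, id)
        context.can(dd_policies.BASE_POLICY_NAME,
                    target={'project_id': instance.project_id})
        self.compute_api.restore(context, instance)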

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863009

Title:
  os-deferred-delete restore server API policy is allowed for everyone
  even policy defaults is admin_or_owner

Status in OpenStack Compute (nova):
  New

Bug description:
  The os-deferred-delete restore server API policy defaults to
  admin_or_owner [1], but the API is allowed for everyone.

  We can see that the test calling the API with another project's context succeeds:
  - https://review.opendev.org/#/c/707455/

  This is because the API does not pass the server's project_id in the policy target:
  - https://github.com/openstack/nova/blob/1fcd74730d343b7cee12a0a50ea537dc4ff87f65/nova/api/openstack/compute/deferred_delete.py#L38

  and if no target is passed, policy.py adds the default target, which is nothing but context.project_id (so anyone who tries is allowed):
  - https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - https://github.com/openstack/nova/blob/1fcd74730d343b7cee12a0a50ea537dc4ff87f65/nova/policies/deferred_delete.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863006] [NEW] "ping" command should be correctly supported in rootwrap filters

2020-02-12 Thread Rodolfo Alonso
Public bug reported:

Some "ping" commands have failed because the rootwrap filter does not
match. Example [1]:

RuntimeError: Process ['ping', '192.178.0.2', '-W', '1', '-c', '3'] hasn't been spawned in 20 seconds. Return code: 99, stdout: , sdterr: /home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap: Unauthorized command: ip netns exec test-ed1ca152-40df-457f-95ea-bd1edd68baa9 ping 192.178.0.2 -W 1 -c 3 (no filter matched)

"ping" commands should always be invoked with the same parameters, in the
same order.

[1] https://f686e70b9699eba6880c-12f0768fe735ff9b43e4aa64f3cfd6c9.ssl.cf2.rackcdn.com/701733/33/check/neutron-functional/36f4f9c/testr_results.html
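A small sketch of that idea (the helper name is made up for illustration):
build every test ping invocation with one fixed argument order, so a single
rootwrap filter line can match all of them.

    def build_ping_cmd(address, timeout=1, count=3):
        # Always emit: ping <address> -W <timeout> -c <count>, the same shape
        # as the command in the log above, so rootwrap only needs to allow
        # one argument order.
        return ['ping', str(address), '-W', str(timeout), '-c', str(count)]

    # build_ping_cmd('192.178.0.2') -> ['ping', '192.178.0.2', '-W', '1', '-c', '3']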

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863006

Title:
  "ping" command should be correctly supported in rootwrap filters

Status in neutron:
  In Progress

Bug description:
  Some "ping" commands have failed because the rootwrap filter does not
  match. Example [1]:

  RuntimeError: Process ['ping', '192.178.0.2', '-W', '1', '-c', '3'] hasn't been spawned in 20 seconds. Return code: 99, stdout: , sdterr: /home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap: Unauthorized command: ip netns exec test-ed1ca152-40df-457f-95ea-bd1edd68baa9 ping 192.178.0.2 -W 1 -c 3 (no filter matched)

  "ping" commands should always be invoked with the same parameters, in the
  same order.

  [1] https://f686e70b9699eba6880c-12f0768fe735ff9b43e4aa64f3cfd6c9.ssl.cf2.rackcdn.com/701733/33/check/neutron-functional/36f4f9c/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862968] [NEW] [rfe] Add RBAC support for address scopes

2020-02-12 Thread Igor Malinovskiy
Public bug reported:

Currently, RBAC for address scopes is missing in Neutron, but it would be
a valuable feature for cloud administrators.

Adds "address_scopes" as a supported RBAC type:

Neutron-lib:
WIP

Neutron:
WIP

Tempest tests:
TBD

Client:
TBD

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862968

Title:
  [rfe] Add RBAC support for address scopes

Status in neutron:
  New

Bug description:
  Currently, RBAC for address scopes is missing in Neutron, but it would be
  a valuable feature for cloud administrators.

  Adds "address_scopes" as a supported RBAC type:

  Neutron-lib:
  WIP

  Neutron:
  WIP

  Tempest tests:
  TBD

  Client:
  TBD

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831986] Re: fwaas_v2 - unable to associate port with firewall (PXC strict mode)

2020-02-12 Thread James Page
** Changed in: cloud-archive/stein
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831986

Title:
  fwaas_v2 - unable to associate port with firewall (PXC strict mode)

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in neutron:
  In Progress
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-fwaas source package in Disco:
  Won't Fix
Status in neutron-fwaas source package in Eoan:
  Fix Released

Bug description:
  [Impact]
  Unable to associate ports with a firewall under FWaaS v2

  [Test Case]
  Deploy OpenStack (Stein or later) using Charms
  Create firewall policy, apply to router - failure as unable to associate port with policy in underlying DB

  [Regression Potential]
  Medium; the proposed fix has not been accepted upstream as yet (discussion ongoing due to change of database migrations).
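  A hedged sketch of the kind of migration under discussion (revision
  wiring omitted; the composite key columns are assumed from the table
  definition, not taken from the accepted fix): give the association table
  an explicit primary key so PXC strict mode accepts DML on it.

      from alembic import op

      def upgrade():
          # Make the two association columns an explicit composite primary key.
          op.create_primary_key(
              'pk_firewall_group_port_associations_v2',
              'firewall_group_port_associations_v2',
              ['firewall_group_id', 'port_id'])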

  [Original Bug Report]
  Impacts both Stein and Rocky (although Rocky does not enable v2 just yet).

  542 a9761fa9124740028d0c1d70ff7aa542] DBAPIError exception wrapped from (pymysql.err.InternalError) (1105, 'Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER') [SQL: 'DELETE FROM firewall_group_port_associations_v2 WHERE firewall_group_port_associations_v2.firewall_group_id = %(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1': '85a277d0-ebaf-4a5d-9d45-6a74b8f54372'}] (Background on this error at: http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1105, 'Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     context)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 509, in do_execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     result = self._query(query)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     conn.query(q)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     result.read()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     first_packet = self.connection._read_packet()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in _read_packet
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     packet.check_error()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in check_error
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     err.raise_mysql_exception(self._data)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     raise errorclass(errno, errval)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters pymysql.err.InternalError: (1105, 'Percona-XtraDB-Cluster prohibits use of DML command on a tab

[Yahoo-eng-team] [Bug 1862836] Re: neutron-fwaas Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_s

2020-02-12 Thread Felipe Reyes
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862836

Title:
  neutron-fwaas Percona-XtraDB-Cluster prohibits use of DML command on a
  table (neutron.firewall_group_port_associations_v2) without an
  explicit primary key with pxc_strict_mode = ENFORCING or MASTER'

Status in Ubuntu Cloud Archive:
  New
Status in neutron-fwaas package in Ubuntu:
  New

Bug description:
  Deleting a heat stack in Stein fails because neutron-server couldn't
  update firewall groups; the stack trace found in the logs is:

  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager Traceback (most recent call last):
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 197, in _notify_loop
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     callback(resource, event, trigger, **kwargs)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/fwaas_plugin_v2.py", line 307, in handle_update_port
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     {'firewall_group': {'ports': port_ids}})
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/oslo_log/helpers.py", line 67, in wrapper
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     return method(*args, **kwargs)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/fwaas_plugin_v2.py", line 369, in update_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     return self.driver.update_firewall_group(context, id, firewall_group)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/service_drivers/driver_api.py", line 211, in update_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     context, id, firewall_group_delta)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_fwaas/db/firewall/v2/firewall_db_v2.py", line 981, in update_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     self._delete_ports_in_firewall_group(context, id)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/neutron_fwaas/db/firewall/v2/firewall_db_v2.py", line 832, in _delete_ports_in_firewall_group
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     firewall_group_id=firewall_group_id).delete()
  [...]
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager     raise errorclass(errno, errval)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager oslo_db.exception.DBError: (pymysql.err.InternalError) (1105, 'Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER') [SQL: 'DELETE FROM firewall_group_port_associations_v2 WHERE firewall_group_port_associations_v2.firewall_group_id = %(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1': '8da85bcb-1e1d-4d5a-b508-25c1d4c85d50'}] (Background on this error at: http://sqlalche.me/e/2j85)
  2020-02-11 13:14:21.356 1998511 ERROR neutron_lib.callbacks.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1862836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862932] [NEW] [neutron-bgp-dragent] passive peers send wrong number of routes

2020-02-12 Thread Radu Popescu
Public bug reported:

So, we have the following setup:

3 neutron-bgp-dragents connected to the same peers, all agents assigned to the 
same speaker (HA).
Remote hardware is: Cisco Nexus 93180YC-EX, NXOS: version 7.0(3)I7(5a).

The problem:
whenever something changes with the routes, only one of the controllers
actually sends the withdrawal requests to the peers and advertises the
correct number of routes. The other two still advertise the old number.
That is not a problem by itself, because the peers use the routes sent by
the "active" controller, unless something happens to the "active"
controller. When that happens, another controller becomes "active", but
unless we restart its agent, it still advertises the old number of routes.

Here's some info that might help:

# speaker
openstack bgp speaker list
+--------------------------------------+---------+----------+------------+
| ID                                   | Name    | Local AS | IP Version |
+--------------------------------------+---------+----------+------------+
| 486cd041-fbe0-4f0a-b12f-d630923fac58 | speaker |    65007 |          4 |
+--------------------------------------+---------+----------+------------+
# number of routes
openstack bgp speaker list advertised routes speaker -f value | wc -l
93
# agents attached to the speaker
+--------------------------------------+--------------+-------+-------+
| ID                                   | Host         | State | Alive |
+--------------------------------------+--------------+-------+-------+
| 4258f572-adfe-42dc-bcd5-7bb1a380503e | controller-2 | True  | :-)   |
| 6b08e6f3-728c-4cd6-a2f8-0247b55ba49b | controller-3 | True  | :-)   |
| 7b7b2db1-5c0d-4aa3-8260-e398109ba727 | controller-1 | True  | :-)   |
+--------------------------------------+--------------+-------+-------+
# speaker peers
openstack bgp peer list
+--------------------------------------+------+-----------------+-----------+
| ID                                   | Name | Peer IP         | Remote AS |
+--------------------------------------+------+-----------------+-----------+
| 99e9d413-98cf-4985-aaf4-4d920bc72678 | sw1  | XXX.XXX.XXX.XXX | 65005     |
| b10c1ab0-ce8e-47cd-a036-a14425b9a917 | sw2  | XXX.XXX.XXX.XXX | 65005     |
+--------------------------------------+------+-----------------+-----------+
# show speaker
openstack bgp speaker show speaker
+-----------------------------------+------------------------------------------------------------------------------------+
| Field                             | Value                                                                                |
+-----------------------------------+------------------------------------------------------------------------------------+
| advertise_floating_ip_host_routes | True                                                                                 |
| advertise_tenant_networks         | True                                                                                 |
| id                                | 486cd041-fbe0-4f0a-b12f-d630923fac58                                                 |
| ip_version                        | 4                                                                                    |
| local_as                          | 65007                                                                                |
| name                              | speaker                                                                              |
| networks                          | [u'97c53e69-89f5-4cd8-ac97-1ea536797f5c']                                            |
| peers                             | [u'99e9d413-98cf-4985-aaf4-4d920bc72678', u'b10c1ab0-ce8e-47cd-a036-a14425b9a917']   |
| project_id                        | 4cdb825ea20f43cb9cde3b3686188b5a                                                     |
| tenant_id                         | 4cdb825ea20f43cb9cde3b3686188b5a                                                     |
+-----------------------------------+------------------------------------------------------------------------------------+

# route count on the other side:
XXX.XXX.XXX.XXX 4 65007 2288383 2285772   181451005d08h 92
XXX.XXX.XXX.XXX 4 65007 1494121 1490739   181451005d08h 92
XXX.XXX.XXX.XXX 4 65007 1531378 1528974   181451005d08h 93

# for the moment, controller-2 is the "active" one, sending messages like:
2020-02-11 06:44:08.639 50425 DEBUG bgpspeaker.info_base.base [-] Sending withdrawal to Peer(ip: , asn: 65005) for OutgoingRoute(path: Path(source: None, nlri: IPAddrPrefix(addr='XX.XX.XX.XX',length=32), source ver#: 1, path attrs.: OrderedDict(), nexthop: XX.XX.XX.XX, is_withdraw: True), for_route_refresh: False) _best_path_lost /usr/lib/python2.7/dist-packages/ryu/services/protocols/bgp/info_base/base.py:243
2020-02-11 06:44:08.640 50425 DEBUG bgpspeaker.info_base.base [-] Sending withdrawal to Peer(ip: , asn: 65005) for

[Yahoo-eng-team] [Bug 1862927] [NEW] "ncat" rootwrap filter is missing

2020-02-12 Thread Rodolfo Alonso
Public bug reported:

"ncat" rootwrap filter is missing, as we can see in [1].

Log:
RuntimeError: Process ['ncat', '0.0.0.0', '1234', '-l', '-k'] hasn't been spawned in 20 seconds. Return code: 99, stdout: , sdterr: /home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap: Unauthorized command: ip netns exec nc-2aefd97b-cf51-4404-804b-b61dc17ce59f ncat 0.0.0.0 1234 -l -k (no filter matched)

[1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b89/701733/28/check/neutron-functional/b89805d/testr_results.html

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862927

Title:
  "ncat" rootwrap filter is missing

Status in neutron:
  New

Bug description:
  "ncat" rootwrap filter is missing, as we can see in [1].

  Log:
  RuntimeError: Process ['ncat', '0.0.0.0', '1234', '-l', '-k'] hasn't been spawned in 20 seconds. Return code: 99, stdout: , sdterr: /home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/bin/neutron-rootwrap: Unauthorized command: ip netns exec nc-2aefd97b-cf51-4404-804b-b61dc17ce59f ncat 0.0.0.0 1234 -l -k (no filter matched)

  [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b89/701733/28/check/neutron-functional/b89805d/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1824858] Re: nova instance remnant left behind after cold migration completes

2020-02-12 Thread zhipeng liu
** Changed in: starlingx
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1824858

Title:
  nova instance remnant left behind after cold migration completes

Status in OpenStack Compute (nova):
  Fix Released
Status in StarlingX:
  Fix Released

Bug description:
  Brief Description
  -
  After cold migration to a new worker node, instance remnants are left behind

  
  Severity
  
  standard

  
  Steps to Reproduce
  --
  worker nodes compute-1 and compute-2 have the label remote-storage enabled
  1. Launch instance on compute-1
  2. cold migrate to compute-2
  3. confirm cold migration to complete

  
  Expected Behavior
  --
  Migration to compute-2 and cleanup of files on compute-1

  
  Actual Behavior
  
  At 16:35:24 cold migration for instance a416ead6-a17f-4bb9-9a96-3134b426b069 completed to compute-2, but the following path is left behind on compute-1:
  compute-1:/var/lib/nova/instances/a416ead6-a17f-4bb9-9a96-3134b426b069

  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069 _base  locks
  a416ead6-a17f-4bb9-9a96-3134b426b069_resize  compute_nodes  lost+found

  
  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069  _base  compute_nodes  locks  lost+found

  compute-1:/var/lib/nova/instances$ ls
  a416ead6-a17f-4bb9-9a96-3134b426b069  _base  compute_nodes  locks  lost+found


  2019-04-15T16:35:24.646749  clear  700.010  Instance tenant2-migration_test-1 owned by tenant2 has been cold-migrated to host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:24.482575  log    700.168  Cold-Migrate-Confirm complete for instance tenant2-migration_test-1 enabled on host compute-2  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:16.815223  log    700.163  Cold-Migrate-Confirm issued by tenant2 against instance tenant2-migration_test-1 owned by tenant2 on host compute-2  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:10.030068  clear  700.009  Instance tenant2-migration_test-1 owned by tenant2 is cold migrating from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:09.971414  set    700.010  Instance tenant2-migration_test-1 owned by tenant2 has been cold-migrated to host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:35:09.970212  log    700.162  Cold-Migrate complete for instance tenant2-migration_test-1 now enabled on host compute-2 waiting for confirmation  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.637687  set    700.009  Instance tenant2-migration_test-1 owned by tenant2 is cold migrating from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.637636  log    700.158  Cold-Migrate inprogress for instance tenant2-migration_test-1 from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:51.478442  log    700.157  Cold-Migrate issued by tenant2 against instance tenant2-migration_test-1 owned by tenant2 from host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical
  2019-04-15T16:34:20.181155  log    700.101  Instance tenant2-migration_test-1 is enabled on host compute-1  tenant=7f1d4223-3341-428a-9188-55614770e676.instance=a416ead6-a17f-4bb9-9a96-3134b426b069  critical

  
  see nova-compute.log (compute-1)
  compute-1 nova-compute log

  [instance: a416ead6-a17f-4bb9-9a96-3134b426b069 claimed and spawned
  here on compute-1]

  {"log":"2019-04-15 16:34:04,617.617 60908 INFO nova.compute.claims 
[req-f1195bbb-d5b0-4a75-a598-ff287d247643 3fd3229d3e6248cf9b5411b2ecec86e9 
7f1d42233341428a918855614770e676 - default default] [instance: 
a416ead6-a17f-4bb9-9a96-3134b426b069] Claim successful on node 
compute-1\n","stream":"stdout","time":"2019-04-15T16:34:04.617671485Z"}
  {"log":"2019-04-15 16:34:07,836.836 60908 INFO nova.virt.libvirt.driver 
[req-f1195bbb-d5b0-4a75-a598-ff287d247643 3fd322