[Yahoo-eng-team] [Bug 1834045] Re: Live-migration double binding doesn't work with OVN

2020-08-07 Thread Maciej Jozefczyk
Fix already released: https://review.opendev.org/#/c/673803/

** Changed in: networking-ovn
   Status: New => Fix Released

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1834045

Title:
  Live-migration double binding doesn't work with OVN

Status in networking-ovn:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Incomplete
Status in neutron package in Ubuntu:
  Fix Released

Bug description:
  For ml2/OVN live-migration doesn't work. After spending some time
  debugging this issue I found that it's potentially more complicated and
  not related to OVN itself.

  Here is the full story behind live-migration not working while using
  OVN on the latest u/s master.

  To speed up live-migration, double binding was introduced in neutron [1]
  and nova [2]. It implements this blueprint [3]. In short, it creates a
  double binding (ACTIVE and INACTIVE) to verify that the network binding
  can be made on the destination host before live-migration starts (to not
  waste time in case of a rollback).
  This mechanism became the default in Stein [4]. So before the actual qemu
  live-migration neutron should send 'network-vif-plugged' to nova, and
  only then is the migration run.

  While using OVN this mechanism doesn't work. The 'network-vif-plugged'
  notification is not being sent, so live-migration gets stuck at the
  beginning.

  Let's check how those notifications are sent. On every change of the
  'status' field (a sqlalchemy event) in a neutron.ports row [5], function
  [6] is executed; it is responsible for sending the
  'network-vif-unplugged' and 'network-vif-plugged' notifications.
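
  Conceptually the hook is a sqlalchemy 'set' event listener on the port
  status column, roughly like this (a simplified sketch of [5]/[6] with
  assumed names, not the literal neutron code):

  # Sketch: react whenever Port.status changes in the DB.
  from sqlalchemy import event

  def attach_status_hook(port_model, nova_notifier):
      @event.listens_for(port_model.status, 'set')
      def _port_status_changed(port, new_status, old_status, initiator):
          if new_status != old_status:
              # The notifier turns these transitions into
              # 'network-vif-plugged' / 'network-vif-unplugged' for nova.
              nova_notifier.record_port_status_changed(
                  port, new_status, old_status, initiator)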

  During the pre_live_migration tasks, two bindings and binding levels are
  created. At the end of this process commit_port_binding() is executed [7].
  At this time the neutron port status in the db is DOWN.
  I found that at the end of commit_port_binding() [8], after the
  neutron_lib.callbacks.registry notification is sent, the port status
  moves to UP. For ml2/OVN it stays DOWN. This is the first difference that
  I found between ml2/ovs and ml2/ovn.

  After a bit of digging I figured out how 'network-vif-plugged' is
  triggered in ml2/ovs.
  Let's see how this is done.

  1. In the list of registered callbacks in ml2/ovs [8] we have a callback
  configured from the class ovo_rpc._ObjectChangeHandler [9], and at the
  end of commit_port_binding() this callback is used.

  -
  neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event
  -

  2. It is responsible for pushing new port object revisions to agents,
  like:

  
  Jun 24 10:01:01 test-migrate-1 neutron-server[3685]: DEBUG 
neutron.api.rpc.handlers.resources_rpc [None 
req-1430f349-d644-4d33-8833-90fad0124dcd service neutron] Pushing event updated 
for resources: {'Port': 
['ID=3704a567-ef4c-4f6d-9557-a1191de07c4a,revision_number=10']} {{(pid=3697) 
push /opt/stack/neutron/neutron/api/rpc/handlers/resources_rpc.py:243}}
  

  3. The OVS agent consumes it and sends an RPC back to the neutron server
  saying that the port is actually UP (on the source node!):
  

  Jun 24 10:01:01 test-migrate-1 neutron-openvswitch-agent[18660]: DEBUG 
neutron.agent.resource_cache [None req-1430f349-d644-4d33-8833-90fad0124dcd 
service neutron] Resource Port 3704a567-ef4c-4f6d-9557-a1191de07c4a updated 
(revision_number 8->10). Old fields: {'status': u'ACTIVE', 'bindings': 
[PortBinding(host='test-migrate-1',port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,profile={},status='INACTIVE',vif_details={"port_filter":
 true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": 
false},vif_type='ovs',vnic_type='normal'), 
PortBinding(host='test-migrate-2',port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,profile={"migrating_to":
 "test-migrate-1"},status='ACTIVE',vif_details={"port_filter": true, 
"bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": 
false},vif_type='ovs',vnic_type='normal')], 'binding_levels': 
[PortBindingLevel(driver='openvswitch',host='test-migrate-1',level=0,port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,segment=NetworkSegment(c6866834-4577-497f-a6c8-ff9724a82e59),segment_id=c6866834-4577-497f-a6c8-ff9724a82e59),
 
PortBindingLevel(driver='openvswitch',host='test-migrate-2',level=0,port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,segment=NetworkSegment(c6866834-4577-497f-a6c8-ff9724a82e59),segment_id=c6866834-4577-497f-a6c8-ff9724a82e59)]}
 New fields: {'status': u'DOWN', 'bindings': 

[Yahoo-eng-team] [Bug 1834045] Re: Live-migration double binding doesn't work with OVN

2020-08-07 Thread Maciej Jozefczyk
Fix released: https://review.opendev.org/#/c/673803/

** Changed in: neutron (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1834045


[Yahoo-eng-team] [Bug 1889737] [NEW] [OVN] Stop using neutron.api.rpc.handlers.resources_rpc with OVN as a backend

2020-07-31 Thread Maciej Jozefczyk
Public bug reported:

I noticed that in devstack master we have a lot of logs like:

Jul 21 08:18:13.371897 ubuntu-bionic-rax-iad-0018525571 neutron-
server[6599]: DEBUG neutron.api.rpc.handlers.resources_rpc [None req-
968a9155-c80a-4fbc-9c10-bcfca6ba372d None None] Pushing event updated
for resources: {'Port': ['ID=fedef62e-
0a31-4136-a693-4b4d3bab289e,revision_number=5']} {{(pid=6988) push
/opt/stack/neutron/neutron/api/rpc/handlers/resources_rpc.py:243}}


That means we're pushing resource updates via RPC, but this is not needed
with OVN as a backend, because there is no consumer of those messages.

Let's try to stop doing it.
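
A minimal sketch of the proposed guard (the predicate is an assumption;
the real change would live around ovo_rpc._ObjectChangeHandler):

# Hypothetical sketch: skip the resources_rpc push when nothing consumes it.
class _ObjectChangeHandlerSketch:
    def __init__(self, resource_push_api, backend_has_rpc_agents):
        self._push_api = resource_push_api
        # Assumption: the plugin can tell whether any loaded mechanism
        # driver has RPC agents consuming pushed resource revisions.
        self._backend_has_rpc_agents = backend_has_rpc_agents

    def handle_event(self, resource, event, trigger, payload):
        if not self._backend_has_rpc_agents:
            return  # ml2/OVN: no consumers, save the RPC round-trip
        self._push_api.push(payload.context, [payload.latest_state], event)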

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1889737

Title:
  [OVN] Stop using neutron.api.rpc.handlers.resources_rpc with OVN as a
  backend

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1889737/+subscriptions



[Yahoo-eng-team] [Bug 1889738] [NEW] [OVN] Stop doing PgDelPortCommand on each router port update

2020-07-31 Thread Maciej Jozefczyk
Public bug reported:

I noticed that while we do a router port update, on each call there is a
command:

Jul 31 08:17:15 devstack-mjozefcz-ovn-octavia-provider neutron-
server[26580]: DEBUG ovsdbapp.backend.ovs_idl.transaction [None req-
61de5786-9d32-4d52-8a24-56af3d400ce2 None None] Running txn n=1
command(idx=2): PgDelPortCommand(port_group=neutron_pg_drop,
lsp=['7bf20b51-8b40-448e-a528-54d9727dbddc'], if_exists=False)
{{(pid=26587) do_commit /usr/local/lib/python3.6/dist-
packages/ovsdbapp/backend/ovs_idl/transaction.py:87}}


It is not needed on each call, because:
1) only when a router port is created do we need to act, and then we
shouldn't add it to the default pg_drop (we don't filter traffic on router
ports)
2) only when a normal port is being bound to a router should we drop it
from this default pg_drop

In all other cases it is not needed, and skipping it will save CPU cycles.
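
A rough sketch of that guard, assuming an ovsdbapp-style pg_del_ports
helper and simplified port dicts:

# Hypothetical sketch: only touch neutron_pg_drop when the update actually
# changes whether the port is a router port.
ROUTER_OWNERS = ('network:router_interface', 'network:router_gateway')

def maybe_del_port_from_pg_drop(nb_api, txn, old_port, new_port):
    was_router = old_port.get('device_owner') in ROUTER_OWNERS
    is_router = new_port.get('device_owner') in ROUTER_OWNERS
    if is_router and not was_router:
        # The port just became a router port: remove it from the default
        # drop port group, since we don't filter traffic on router ports.
        txn.add(nb_api.pg_del_ports('neutron_pg_drop', new_port['id']))
    # Otherwise: no PgDelPortCommand at all, saving a command per update.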

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1889738

Title:
  [OVN] Stop doing PgDelPortCommand on each router port update

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1889738/+subscriptions



[Yahoo-eng-team] [Bug 1888646] [NEW] [OVN Octavia Provider] octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener fails

2020-07-23 Thread Maciej Jozefczyk
Public bug reported:

octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener
fails

example failure:

https://e500f5844c497d7c1455-bb0af7d0ed113130252cfd767637324e.ssl.cf2.rackcdn.com/742445/4/check/ovn-octavia-provider-tempest-release/9fb114c/testr_results.html

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/api/v2/test_pool.py",
 line 86, in test_pool_create_with_listener
self._test_pool_create(has_listener=True)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/api/v2/test_pool.py",
 line 149, in _test_pool_create
CONF.load_balancer.build_timeout)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/waiters.py",
 line 96, in wait_for_status
raise exceptions.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (PoolAPITest:test_pool_create_with_listener) show_pool 
operating_status failed to update to ONLINE within the required time 300. 
Current status of show_pool: OFFLINE
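
For context, the waiter that times out here boils down to a poll loop like
this (a simplified sketch, not the exact octavia-tempest-plugin code):

# Sketch of waiters.wait_for_status: poll show_pool until operating_status
# reaches ONLINE, else raise after CONF.load_balancer.build_timeout (300s).
import time

from tempest.lib import exceptions

def wait_for_online(show_pool, pool_id, timeout=300, interval=5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if show_pool(pool_id)['operating_status'] == 'ONLINE':
            return
        time.sleep(interval)
    raise exceptions.TimeoutException(
        'show_pool operating_status failed to update to ONLINE')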

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn-octavia-provider

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888646

Title:
  [OVN Octavia Provider]
  
octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener
  fails

Status in neutron:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888646/+subscriptions



[Yahoo-eng-team] [Bug 1888489] [NEW] [OVN Octavia Provider] octavia_tempest_plugin.tests.api.v2.test_member.MemberAPITest.test_member_batch_update fails

2020-07-22 Thread Maciej Jozefczyk
Public bug reported:

octavia_tempest_plugin.tests.api.v2.test_member.MemberAPITest.test_member_batch_update
fails on:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/api/v2/test_member.py",
 line 863, in test_member_batch_update
pool_id=pool_id, members_list=batch_update_list)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/services/load_balancer/v2/member_client.py",
 line 271, in update_members
response, body = self.put(request_uri, jsonutils.dumps(obj_dict))
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 362, in put
return self.request('PUT', url, extra_headers, headers, body, chunked)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 702, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 879, in 
_error_checker
message=message)
tempest.lib.exceptions.ServerFault: Got server fault
Details: b'{"faultcode": "Server", "faultstring": "Provider \'ovn\' reports 
error: member_batch_update() takes 2 positional arguments but 3 were given", 
"debuginfo": null}'

** Affects: neutron
 Importance: High
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn-octavia-provider

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888489

Title:
  [OVN Octavia Provider]
  
octavia_tempest_plugin.tests.api.v2.test_member.MemberAPITest.test_member_batch_update
  fails

Status in neutron:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888489/+subscriptions



[Yahoo-eng-team] [Bug 1887363] [NEW] [ovn-octavia-provider] Functional tests job fails

2020-07-13 Thread Maciej Jozefczyk
Public bug reported:

Functional tests job fails on:

2020-07-13 08:22:50.145117 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_base_deps:113
 :   source 
/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/devstack/lib/ovs
2020-07-13 08:22:50.145252 | controller | 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:
 line 113: 
/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/devstack/lib/ovs: No 
such file or directory

https://9ce43a75e3387ceb8909-2b4f2fa211fea8445ec0f4a568f6056b.ssl.cf2.rackcdn.com/740625/1/check/ovn-octavia-provider-functional/714ba02/job-output.txt

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887363

Title:
  [ovn-octavia-provider] Functional tests job fails

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887363/+subscriptions



[Yahoo-eng-team] [Bug 1886962] [NEW] [OVN][QOS] NBDB qos table entries still exist even after corresponding neutron ports are deleted

2020-07-09 Thread Maciej Jozefczyk
Public bug reported:

When removing neutron ports with a qos policy applied, the corresponding
entries are not removed from the NBDB qos table.

Steps to reproduce:

openstack network qos policy create bw-limiter
openstack network qos rule create --type bandwidth-limit --max-kbps 3000 
--max-burst-kbits 2400 --egress bw-limiter
openstack network create internal_A
openstack port create vm1-port --network internal_A
openstack port set --qos-policy bw-limiter 
At this stage 'ovn-nbctl list qos' displays the corresponding qos rule of the port
openstack port delete 
Result: the port is removed from the neutron DB - this is OK. But the OVN
NBDB qos table still displays the qos entry for this port:
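
The expected cleanup would look roughly like this (a sketch with assumed
ovsdbapp-style helpers and an assumed 'neutron:port_id' external_ids key;
the real driver may track ownership differently):

# Hypothetical sketch: when a port is deleted, drop its NBDB QoS rows too.
def delete_port_qos_rows(nb_api, switch_name, port_id):
    with nb_api.transaction(check_error=True) as txn:
        for row in nb_api.qos_list(switch_name).execute(check_error=True):
            # Assumption: neutron stamps the owning port in external_ids.
            if row.external_ids.get('neutron:port_id') == port_id:
                txn.add(nb_api.db_destroy('QoS', row.uuid))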

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1886962

Title:
  [OVN][QOS] NBDB qos table entries still exist even after corresponding
  neutron ports are deleted

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1886962/+subscriptions



[Yahoo-eng-team] [Bug 1877377] Re: [OVN] neutron-ovn-tempest-ovs-master-fedora periodic job is failing

2020-06-25 Thread Maciej Jozefczyk
** Changed in: neutron
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877377

Title:
  [OVN] neutron-ovn-tempest-ovs-master-fedora periodic job is failing

Status in neutron:
  In Progress

Bug description:
  https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-
  master-fedora

  Last success:
  https://zuul.openstack.org/build/3816a88272ea413995408846a52b5366

  First failure:
  https://zuul.openstack.org/build/b12ec3ab38e6418d9580829f0e98bfd2

  Failure is on installation of kernel-devel for OVS module compilation:

  2020-04-26 06:28:30.004 | No match for argument: kernel-devel-5.5.17
  2020-04-26 06:28:30.004 | Error: Unable to find a match: kernel-devel-5.5.17
  2020-04-26 06:28:30.004 | YUM_FAILED 1

  
  Strangely, based on the logs from the last successful run, the package
kernel-devel-5.5.17 had been installed properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1877377/+subscriptions



[Yahoo-eng-team] [Bug 1884986] [NEW] [OVN] functional test failing TestNBDbMonitorOverTcp test_floatingip_mac_bindings

2020-06-24 Thread Maciej Jozefczyk
Public bug reported:

We can find random failures of the test:

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_63a/711425/11/check/neutron-functional/63ac4ca/testr_results.html

neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_floatingip_mac_bindings


ft3.1:
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_floatingip_mac_bindings
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 132, in test_floatingip_mac_bindings
macb_id = self.sb_api.db_create('MAC_Binding', datapath=dp[0]['_uuid'],
IndexError: list index out of range
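
The IndexError means the datapath lookup returned an empty list when the
test indexed dp[0]; a common remedy (a sketch, with the actual query
abstracted away) is to wait for the row to appear first:

# Sketch: poll for the row instead of assuming dp[0] exists immediately;
# the functional-test OVSDB is populated asynchronously.
import time

def wait_for_first_row(lookup, timeout=10, interval=0.5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        rows = lookup()  # e.g. the query that produced 'dp' above
        if rows:
            return rows[0]
        time.sleep(interval)
    raise AssertionError('row never appeared within %s seconds' % timeout)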

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1884986

Title:
  [OVN] functional test failing TestNBDbMonitorOverTcp
  test_floatingip_mac_bindings

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1884986/+subscriptions



[Yahoo-eng-team] [Bug 1882724] [NEW] OVN Octavia driver: driver shouldn't update status to octavia in case of VIP creation failure

2020-06-09 Thread Maciej Jozefczyk
Public bug reported:

While there is a VIP port already created and used by another LB
(Amphora), which in a different thread is going to be deleted, and a
second thread tries to re-use the same IP address as the previous VIP,
the OVN LB driver sends update_status_to_octavia() instead of failing
with a driver error.
Example:
https://github.com/openstack/networking-ovn/blob/stable/train/networking_ovn/octavia/ovn_driver.py#L1811

It causes some strange Octavia API behavior.

Example log:

http://paste.openstack.org/show/794494/


The OVN Octavia provider should do the same as the Amphora provider:
https://github.com/openstack/octavia/blob/master/octavia/api/drivers/amphora_driver/v2/driver.py#L77
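
Concretely, that pattern means raising a driver exception and letting
Octavia handle it (a sketch; octavia-lib's DriverError is what the
Amphora driver referenced above uses):

# Sketch: fail the call with a driver error instead of sending a status
# update back to Octavia when VIP creation fails.
from octavia_lib.api.drivers import exceptions as driver_exceptions

def create_vip_port_or_fail(create_vip_port, lb_id, project_id, vip_dict):
    try:
        return create_vip_port(lb_id, project_id, vip_dict)
    except Exception as e:
        raise driver_exceptions.DriverError(
            user_fault_string='Could not create port for the VIP',
            operator_fault_string=str(e))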

** Affects: neutron
 Importance: Medium
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn-octavia-provider

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Confirmed

** Tags removed: ovn
** Tags added: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1882724

Title:
  OVN Octavia driver:  driver shouldn't update status to octavia in case
  of VIP creation failure

Status in neutron:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1882724/+subscriptions



[Yahoo-eng-team] [Bug 1882060] [NEW] Neutron CI doesn't run tests that require advanced image

2020-06-04 Thread Maciej Jozefczyk
Public bug reported:

In Neutron CI we mostly run tempest jobs that inherit from 'tempest-
integrated-networking' [1]. This configuration doesn't set anything
related to ADVANCED_IMAGE.

We know that we have tempest tests that require ADVANCED_IMAGE, like
this one [2].

In neutron-tempest-plugin gates we do something else because we
explicitly define ADVANCED_IMAGE [3].

That means we don't run several tests in Neutron gates. We should
add/enable the advanced image in the Neutron gates.

[1] https://opendev.org/openstack/tempest/src/branch/master/.zuul.yaml
[2] 
https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/scenario/test_multicast.py#L139
[3] 
https://opendev.org/openstack/neutron-tempest-plugin/src/branch/master/zuul.d/base.yaml

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1882060

Title:
  Neutron CI doesn't run tests that require advanced image

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1882060/+subscriptions



[Yahoo-eng-team] [Bug 1881759] [NEW] [OVN] Virtual port type set while port has no parents

2020-06-02 Thread Maciej Jozefczyk
Public bug reported:

I noticed that one port has been set as virtual, while there is no other
port having the same ip address configured within the same network
(address pairs extension).

My example is as follows:

()[root@controller-0 flows-distribution]# ovn-nbctl list logical_switch_port 
6c788bc2-beb1-4f12-bf93-b45433241b90
_uuid   : 6c788bc2-beb1-4f12-bf93-b45433241b90
addresses   : ["fa:16:3e:23:b5:50 10.2.3.19"]
dhcpv4_options  : 59f7a3c1-9a6f-45e4-92ec-8a3263650ab0
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="10.2.3.19/24", "neutron:device_id"="", 
"neutron:device_owner"="", 
"neutron:network_name"="neutron-b55fccdf-e79e-4967-b87c-e7380d57e678", 
"neutron:port_name"="s_rally_e0292957_jPyoaKst", 
"neutron:project_id"="6e447c71b8bf480d80e008bb72015832", 
"neutron:revision_number"="3", 
"neutron:security_group_ids"="abb6d545-2647-42ca-a832-4e0945c72249"}

   
ha_chassis_group: []
name: "ec863657-2757-4636-a434-b7a67b41c9cb"
options : {requested-chassis="compute-1.redhat.local", 
virtual-ip="10.2.3.19", virtual-parents="21cf8792-2944-46cc-bc73-97c761a50f25"} 

   
parent_name : []
port_security   : ["fa:16:3e:23:b5:50 10.2.3.19"]
tag : []
tag_request : []
type: virtual
up  : true
()[root@controller-0 flows-distribution]# 


And the port that has been identified as a parent for ^:

()[root@controller-0 flows-distribution]# ovn-nbctl list logical_switch_port 
21cf8792-2944-46cc-bc73-97c761a50f25
_uuid   : b8195549-ccb4-4c4c-b228-2e2e095cf017
addresses   : ["fa:16:3e:be:07:5c 10.2.3.191"]
dhcpv4_options  : 59f7a3c1-9a6f-45e4-92ec-8a3263650ab0
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="10.2.3.191/24", "neutron:device_id"="", 
"neutron:device_owner"="", 
"neutron:network_name"="neutron-b55fccdf-e79e-4967-b87c-e7380d57e678", 
"neutron:port_name"="s_rally_e0292957_K7uVoGwV", 
"neutron:project_id"="6e447c71b8bf480d80e008bb72015832", 
"neutron:revision_number"="3", 
"neutron:security_group_ids"="abb6d545-2647-42ca-a832-4e0945c72249"}

  
ha_chassis_group: []
name: "21cf8792-2944-46cc-bc73-97c761a50f25"
options : {requested-chassis="compute-1.redhat.local"}
parent_name : []
port_security   : ["fa:16:3e:be:07:5c 10.2.3.191"]
tag : []
tag_request : []
type: ""
up  : true


In this line:
https://github.com/openstack/neutron/blob/cb55643a0695ebc5b41f50f6edb1546bcc676b71/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L209

there is a broken check, "and virtual_ip in ps". It validates a substring
match instead of comparing actual IP addresses.
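
The false positive is easy to demonstrate, since '10.2.3.19' is a
substring of '10.2.3.191'; a sketch of the broken check and a stricter
comparison:

import netaddr

ps = "fa:16:3e:be:07:5c 10.2.3.191"  # port_security entry of the other port
virtual_ip = "10.2.3.19"

# Broken: plain substring match -> True, a false positive.
print(virtual_ip in ps)

# Stricter: compare parsed IP addresses token by token.
def entry_has_ip(port_security_entry, ip):
    for token in port_security_entry.split():
        try:
            if netaddr.IPAddress(token) == netaddr.IPAddress(ip):
                return True
        except netaddr.AddrFormatError:
            continue  # skip MACs and anything unparsable
    return False

print(entry_has_ip(ps, virtual_ip))  # False, as expected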

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881759

Title:
  [OVN] Virtual port type set while port has no parents

Status in neutron:
  Confirmed


[Yahoo-eng-team] [Bug 1881558] [NEW] [OVN][Cirros 0.5.1] IPv6 hot plug tempest tests are failing with new Cirros

2020-06-01 Thread Maciej Jozefczyk
Public bug reported:

Recently merged code [1] added a few IPv6 hotplug scenarios.
In the meantime we're working on enabling the new Cirros on OVN gates [2]

After merging [1] we can find that on [2] the new tests started to fail:

neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_dhcpv6stateless
neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_slaac.

Example failure:
https://ef5d43af22af7b1c1050-17fc8f83c20e6521d7d8a3ccd8bca531.ssl.cf2.rackcdn.com/711425/10/check/neutron-ovn-tempest-ovs-release

[1] https://review.opendev.org/#/c/711931/
[2] https://review.opendev.org/#/c/711425/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Summary changed:

- [OVN][CIrros 0.5.1] IPv6 hot plug tempest tests are failing with new cirros
+ [OVN][Cirros 0.5.1] IPv6 hot plug tempest tests are failing with new Cirros

** Description changed:

- Recently merged code [1] added a IPv6 hotplug scenarios.
+ Recently merged code [1] added a few IPv6 hotplug scenarios.
  In meantime we're working on enabling new Cirros on OVN Gates [2]
  
  After merging [1] we can find that on [2] the new tests started to fail:
  
  
neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_dhcpv6stateless
  neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_slaac.
  
- 
  Example failure:
  
https://ef5d43af22af7b1c1050-17fc8f83c20e6521d7d8a3ccd8bca531.ssl.cf2.rackcdn.com/711425/10/check/neutron-ovn-tempest-ovs-release
  
- 
  [1] https://review.opendev.org/#/c/711931/
  [2] https://review.opendev.org/#/c/711425/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881558

Title:
  [OVN][Cirros 0.5.1] IPv6 hot plug tempest tests are failing with new
  Cirros

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881558/+subscriptions



[Yahoo-eng-team] [Bug 1881283] [NEW] [OVN] In stable branches we don't run neutron-tempest-plugin tests

2020-05-29 Thread Maciej Jozefczyk
Public bug reported:

We don't run neutron-tempest-plugin tests in stable branches, at least
with networking-ovn...

Example:
https://41620a559c5090d887c6-f99336677cbcb935a72dddbf18215860.ssl.cf5.rackcdn.com/731477/2/check/networking-ovn-tempest-dsvm-ovs-release/5f1b8f3/testr_results.html

~600 tests were triggered instead of ~1200...

Because of the line [1], the "INSTALL_TEMPEST" variable is False on
stable branches, so as not to mix stable code requirements with the
master tempest plugin.

neutron-tempest-plugin is not installed while testing networking-ovn on
stable branches.

Recently this change was merged [2]; it enables installation of tempest
in a separate py3-only venv, so that we can run the new tempest in stable
branches.

Maybe we could somehow install neutron-tempest-plugin in that venv
instead of globally? That could enable running the tests on gates.

[1] 
https://opendev.org/openstack/devstack/src/branch/stable/train/lib/tempest#L60
[2] 
https://github.com/openstack/tempest/commit/1c680fdb728c24a4c9a1507ad8319f0a505cef9c

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881283

Title:
  [OVN] In stable branches we don't run neutron-tempest-plugin tests

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881283/+subscriptions



[Yahoo-eng-team] [Bug 1880969] [NEW] Creating FIP takes time

2020-05-27 Thread Maciej Jozefczyk
Public bug reported:

I noticed on upstream and downstream gates that creating a FloatingIP
with an action like:

neutron floatingip-create public

For ml2/ovs and ml2/ovn this operation takes a minimum of ~4 seconds.

The same we can find on u/s gates from rally jobs [1].

While we put load on the Neutron server it normally takes more than 10
seconds.

For ML2/OVN creating a FIP doesn't end with creating a NAT entry in an
OVN NBDB row. So it's clearly only an API operation.

Maybe we can consider profiling it?
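
A quick client-side measurement to confirm the baseline (a sketch using
openstacksdk; the 'devstack' cloud name and 'public' network are
assumptions):

# Rough timing of the FIP create API call, to reproduce the ~4s baseline.
import time

import openstack

conn = openstack.connect(cloud='devstack')
net = conn.network.find_network('public')

start = time.perf_counter()
fip = conn.network.create_ip(floating_network_id=net.id)
print('POST /v2.0/floatingips took %.2f s' % (time.perf_counter() - start))
conn.network.delete_ip(fip)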

[1]
https://98a898dcf3dfb1090155-da3b599be5166de1dcb38898c60ea3c9.ssl.cf5.rackcdn.com/729588/1/check/neutron-rally-task/dd55aa7/results/report.html#/NeutronNetworks.associate_and_dissociate_floating_ips/overview

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Creating FloatingIP takes time
+ Creating FIP takes time

** Description changed:

  I noticed on upstream and downstream gates that while creating
  FloatingIP for action like:
  
  neutron floatingip-create public
  
  For ml2/ovs and ml2/ovn this operation takes minimum ~4 seconds.
  
  The same we can find on u/s gates from rally jobs [1].
  
- While we put the load on Neutron server it normally more than 10
+ While we put the load on Neutron server it normally takes more than 10
  seconds.
  
  For ML/OVN creating a FIP doesn't end with creating NAT entry in OVN
  NBDB row. So its clearly only API operation.
  
  Maybe we can consider profiling it?
  
- 
- [1] 
https://98a898dcf3dfb1090155-da3b599be5166de1dcb38898c60ea3c9.ssl.cf5.rackcdn.com/729588/1/check/neutron-rally-task/dd55aa7/results/report.html#/NeutronNetworks.associate_and_dissociate_floating_ips/overview
+ [1]
+ 
https://98a898dcf3dfb1090155-da3b599be5166de1dcb38898c60ea3c9.ssl.cf5.rackcdn.com/729588/1/check
+ /neutron-rally-
+ 
task/dd55aa7/results/report.html#/NeutronNetworks.associate_and_dissociate_floating_ips/overview

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880969

Title:
  Creating FIP takes time

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880969/+subscriptions



[Yahoo-eng-team] [Bug 1879301] Re: [OVN] networking-ovn-tempest-dsvm-ovs-release-python2 job starts to fail on tempest py2 installation

2020-05-20 Thread Maciej Jozefczyk
Fix released: https://review.opendev.org/#/c/728817/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1879301

Title:
  [OVN] networking-ovn-tempest-dsvm-ovs-release-python2 job starts to
  fail on tempest py2 installation

Status in neutron:
  Fix Released


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1879301/+subscriptions



[Yahoo-eng-team] [Bug 1879301] [NEW] [OVN] networking-ovn-tempest-dsvm-ovs-release-python2 job starts to fail on tempest py2 installation

2020-05-18 Thread Maciej Jozefczyk
Public bug reported:

The job networking-ovn-tempest-dsvm-ovs-release-python2 starts to fail
on tempest py2 installation. It's blocking stable/train and maybe other
stable branches.

2020-05-18 07:48:07.702856 | controller | Collecting oslo.concurrency===4.0.2 
(from -c https://releases.openstack.org/constraints/upper/master (line 24))
2020-05-18 07:48:07.702895 | controller |   ERROR: Could not find a version 
that satisfies the requirement oslo.concurrency===4.0.2 (from -c 
https://releases.openstack.org/constraints/upper/master (line 24)) (from 
versions: 0.1.0, 0.2.0, 0.3.0, 0.4.0, 1.4.1, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.8.1, 
1.8.2, 1.9.0, 1.10.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.6.1, 
2.7.0, 2.8.0, 3.0.0, 3.1.0, 3.2.0, 3.3.0, 3.4.0, 3.5.0, 3.6.0, 3.7.0, 3.7.1, 
3.8.0, 3.9.0, 3.10.0, 3.11.0, 3.12.0, 3.13.0, 3.14.0, 3.14.1, 3.15.0, 3.16.0, 
3.17.0, 3.18.0, 3.18.1, 3.19.0, 3.20.0, 3.21.0, 3.21.1, 3.21.2, 3.22.0, 3.23.0, 
3.24.0, 3.25.0, 3.25.1, 3.26.0, 3.27.0, 3.28.0, 3.28.1, 3.29.0, 3.29.1, 3.30.0, 
3.31.0)
2020-05-18 07:48:07.702935 | controller | ERROR: No matching distribution found 
for oslo.concurrency===4.0.2 (from -c 
https://releases.openstack.org/constraints/upper/master (line 24))
2020-05-18 07:48:07.702962 | controller | WARNING: You are using pip version 
19.2.3, however version 20.1 is available.
2020-05-18 07:48:07.702983 | controller | You should consider upgrading via the 
'pip install --upgrade pip' command.
2020-05-18 07:48:07.703003 | controller |
2020-05-18 07:48:07.703227 | controller | === 
log end 
2020-05-18 07:48:07.703702 | controller | ERROR: could not install deps 
[-chttps://releases.openstack.org/constraints/upper/master, 
-r/opt/stack/tempest/requirements.txt]; v = 
InvocationError(u'/opt/stack/tempest/.tox/tempest/bin/pip install 
-chttps://releases.openstack.org/constraints/upper/master 
-r/opt/stack/tempest/requirements.txt', 1)


https://3ba3378426f3a529e977-d1da58634df71c1c590b1ad3c3dea539.ssl.cf5.rackcdn.com/715447/10/check/networking-ovn-tempest-dsvm-ovs-release-python2/0474be3/job-output.txt

** Affects: neutron
 Importance: High
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1879301

Title:
  [OVN] networking-ovn-tempest-dsvm-ovs-release-python2 job starts to
  fail on tempest py2 installation

Status in neutron:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1

[Yahoo-eng-team] [Bug 1878160] [NEW] [OVN] Functional tests environment is using old OVN

2020-05-12 Thread Maciej Jozefczyk
Public bug reported:

The environment used for functional tests in Neutron master installs an
old OVS and OVN instead of the newest ones specified in the zuul
configuration.


2020-05-11 17:51:44.018628 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_base_deps:111
 :   source /home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs
2020-05-11 17:51:44.021258 | controller | ++ 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:source:13 :   
OVS_REPO=https://github.com/openvswitch/ovs.git
2020-05-11 17:51:44.024454 | controller | +++ 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:source:14 :   
basename https://github.com/openvswitch/ovs.git
2020-05-11 17:51:44.025329 | controller | +++ 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:source:14 :   cut 
-f1 -d.
2020-05-11 17:51:44.029281 | controller | ++ 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:source:14 :   
OVS_REPO_NAME=ovs
2020-05-11 17:51:44.031459 | controller | ++ 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:source:15 :   
OVS_BRANCH=master
2020-05-11 17:51:44.034660 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_base_deps:112
 :   remove_ovs_packages
2020-05-11 17:51:44.036794 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:remove_ovs_packages:213
 :   for package in openvswitch openvswitch-switch openvswitch-common
2020-05-11 17:51:44.039203 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:remove_ovs_packages:214
 :   is_package_installed openvswitch
2020-05-11 17:51:44.040999 | controller | + 
functions-common:is_package_installed:1338 :   [[ -z openvswitch ]]
2020-05-11 17:51:44.042712 | controller | + 
functions-common:is_package_installed:1342 :   [[ -z deb ]]
2020-05-11 17:51:44.044782 | controller | + 
functions-common:is_package_installed:1346 :   [[ deb = \d\e\b ]]
2020-05-11 17:51:44.046600 | controller | + 
functions-common:is_package_installed:1347 :   dpkg -s openvswitch
2020-05-11 17:51:44.063710 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:remove_ovs_packages:213
 :   for package in openvswitch openvswitch-switch openvswitch-common
2020-05-11 17:51:44.065874 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:remove_ovs_packages:214
 :   is_package_installed openvswitch-switch
2020-05-11 17:51:44.068487 | controller | + 
functions-common:is_package_installed:1338 :   [[ -z openvswitch-switch ]]
2020-05-11 17:51:44.070984 | controller | + 
functions-common:is_package_installed:1342 :   [[ -z deb ]]
2020-05-11 17:51:44.073079 | controller | + 
functions-common:is_package_installed:1346 :   [[ deb = \d\e\b ]]
2020-05-11 17:51:44.075080 | controller | + 
functions-common:is_package_installed:1347 :   dpkg -s openvswitch-switch
2020-05-11 17:51:44.093565 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:remove_ovs_packages:213
 :   for package in openvswitch openvswitch-switch openvswitch-common
2020-05-11 17:51:44.095882 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:remove_ovs_packages:214
 :   is_package_installed openvswitch-common
2020-05-11 17:51:44.098046 | controller | + 
functions-common:is_package_installed:1338 :   [[ -z openvswitch-common ]]
2020-05-11 17:51:44.100093 | controller | + 
functions-common:is_package_installed:1342 :   [[ -z deb ]]
2020-05-11 17:51:44.102364 | controller | + 
functions-common:is_package_installed:1346 :   [[ deb = \d\e\b ]]
2020-05-11 17:51:44.104367 | controller | + 
functions-common:is_package_installed:1347 :   dpkg -s openvswitch-common
2020-05-11 17:51:44.121001 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_base_deps:113
 :   OVS_BRANCH=v2.12.0
2020-05-11 17:51:44.123399 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_base_deps:114
 :   compile_ovs False /usr /var
2020-05-11 17:51:57.183697 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:prepare_for_compilation:43
 :   cd /home/zuul/src/opendev.org/openstack/ovs
2020-05-11 17:51:57.186069 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/devstack/lib/ovs:prepare_for_compilation:44
 :   git checkout v2.12.0
2020-05-11 17:51:57.477222 | controller | Note: checking out 'v2.12.0'.


Example log: https://0b56f229cf5dd2511d5c-6e1e7a8d8abee2ff0e137b8a66c992cf.ssl.cf2.rackcdn.com/726850/1/check/neutron-functional/2f23d61/job-output.txt

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: ovn ovn-octavia-provider

** Tags added: ovn-octavia-provider

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug 

[Yahoo-eng-team] [Bug 1877377] [NEW] [OVN] neutron-ovn-tempest-ovs-master-fedora periodic job is failing

2020-05-07 Thread Maciej Jozefczyk
Public bug reported:

https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-fedora

Last success:
https://zuul.openstack.org/build/3816a88272ea413995408846a52b5366

First failure:
https://zuul.openstack.org/build/b12ec3ab38e6418d9580829f0e98bfd2

Failure is on installation of kernel-devel for OVS module compilation:

2020-04-26 06:28:30.004 | No match for argument: kernel-devel-5.5.17
2020-04-26 06:28:30.004 | Error: Unable to find a match: kernel-devel-5.5.17
2020-04-26 06:28:30.004 | YUM_FAILED 1


Strangely, based on the logs from the last successful run, the package
kernel-devel-5.5.17 had been installed properly.

** Affects: neutron
 Importance: Medium
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877377

Title:
  [OVN] neutron-ovn-tempest-ovs-master-fedora periodic job is failing

Status in neutron:
  Confirmed

Bug description:
  https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-fedora

  Last success:
  https://zuul.openstack.org/build/3816a88272ea413995408846a52b5366

  First failure:
  https://zuul.openstack.org/build/b12ec3ab38e6418d9580829f0e98bfd2

  Failure is on installation of kernel-devel for OVS module compilation:

  2020-04-26 06:28:30.004 | No match for argument: kernel-devel-5.5.17
  2020-04-26 06:28:30.004 | Error: Unable to find a match: kernel-devel-5.5.17
  2020-04-26 06:28:30.004 | YUM_FAILED 1

  
  Strangely, based on the logs from the last successful run, the package
kernel-devel-5.5.17 had been installed properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1877377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868110] Re: [OVN] neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log randomly fails

2020-04-23 Thread Maciej Jozefczyk
We can still find some failures:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_abb/717851/2/gate
/neutron-functional/abb91cb/testr_results.html

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e2c/717083/6/check
/neutron-functional/e2c63e4/testr_results.html

** Changed in: neutron
   Status: Fix Released => In Progress

** Changed in: neutron
   Status: In Progress => Confirmed

** Changed in: neutron
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1868110

Title:
  [OVN]
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
  randomly fails

Status in neutron:
  In Progress

Bug description:
  The functional test
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
  randomly fails on our CI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1868110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1874447] [NEW] [OVN] Tempest test neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle fails randomly

2020-04-23 Thread Maciej Jozefczyk
Public bug reported:

We can see occasional failures of the following tempest test with OVN:

neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle


Traceback (most recent call last):
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 78, in wait_until_true
eventlet.sleep(sleep)
  File "/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 
36, in sleep
hub.switch()
  File "/usr/local/lib/python3.6/dist-packages/eventlet/hubs/hub.py", line 298, 
in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_trunk.py",
 line 240, in test_trunk_subport_lifecycle
self._wait_for_port(port)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_trunk.py",
 line 141, in _wait_for_port
"status {!r}.".format(port['id'], status)))
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 82, in wait_until_true
raise exception
RuntimeError: Timed out waiting for port 'cffcacde-a34e-4e1a-90ca-8d48776b9851' 
to transition to get status 'ACTIVE'.


Example failure:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8c/717851/2/check/neutron-ovn-tempest-ovs-release/d8c0282/testr_results.html
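
A rough sketch of the helper the traceback points at, reconstructed from the
frames above (neutron_tempest_plugin.common.utils.wait_until_true; the exact
body is an assumption):

    import eventlet

    def wait_until_true(predicate, timeout=60, sleep=1, exception=None):
        # Polls under an eventlet timeout; the Timeout in the traceback is
        # this 60 s expiring while the port never reaches ACTIVE.
        try:
            with eventlet.Timeout(timeout):
                while not predicate():
                    eventlet.sleep(sleep)
        except eventlet.Timeout:
            if exception is not None:
                raise exception  # -> the RuntimeError about the port status
            raise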

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: ovn tempest

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1874447

Title:
  [OVN] Tempest test
  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  fails randomly

Status in neutron:
  Confirmed

Bug description:
  We can see occasional failures of the following tempest test with OVN:

  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle

  
  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 78, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 
36, in sleep
  hub.switch()
File "/usr/local/lib/python3.6/dist-packages/eventlet/hubs/hub.py", line 
298, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_trunk.py",
 line 240, in test_trunk_subport_lifecycle
  self._wait_for_port(port)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_trunk.py",
 line 141, in _wait_for_port
  "status {!r}.".format(port['id'], status)))
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 82, in wait_until_true
  raise exception
  RuntimeError: Timed out waiting for port 
'cffcacde-a34e-4e1a-90ca-8d48776b9851' to transition to get status 'ACTIVE'.


  Example failure:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8c/717851/2/check/neutron-ovn-tempest-ovs-release/d8c0282/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1874447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871608] [NEW] [OVN] Cannot create metadata port for segmented network

2020-04-08 Thread Maciej Jozefczyk
Public bug reported:

While following the instructions for Routed Provider Networks [1] with OVN,
Neutron raises an error during creation of the second segment's subnet:


=== How to reproduce ===

sudo iniset /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges segment-1:100:102,segment-2:200:202
sudo systemctl restart devstack@q-svc

openstack network create --share --provider-physical-network segment-1 --provider-network-type vlan --provider-segment 100 public-multisegment
openstack network segment set --name segment-1 $(openstack network segment list --network public-multisegment -c ID -f value)
openstack network segment create --physical-network segment-2 --network-type vlan --segment 200 --network public-multisegment segment-2

openstack subnet create --network public-multisegment --network-segment segment-1 --ip-version 4 --subnet-range 172.24.4.0/24 --allocation-pool start=172.24.4.100,end=172.24.4.200 public-multisegment-segment-1-v4
openstack subnet create --network public-multisegment --network-segment segment-2 --ip-version 4 --subnet-range 172.24.6.0/24 --allocation-pool start=172.24.6.100,end=172.24.6.200 public-multisegment-segment-2-v4

EXCEPTION RAISED ON LAST COMMAND:
Apr 08 11:23:35 central neutron-server[10871]: DEBUG 
neutron_lib.callbacks.manager [None req-e975c78f-bb1d-449d-9517-0a9386733b13 
demo admin] Notify callbacks 
['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group
_handler--9223372036853431474'] for port, before_update {{(pid=10878) 
_notify_loop 
/usr/local/lib/python3.6/dist-packages/neutron_lib/callbacks/manager.py:193}}   

Apr 08 11:23:35 central neutron-server[10871]: ERROR
neutron.plugins.ml2.managers [None req-e975c78f-bb1d-449d-9517-0a9386733b13
demo admin] Mechanism driver 'ovn' failed in create_subnet_postcommit:
neutron.services.segments.exceptions.FixedIpsSubnetsNotOnSameSegment:
Cannot allocate addresses from different segments.

   
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers Traceback (most recent call last): 

  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers   File 
"/home/vagrant/neutron/neutron/plugins/ml2/managers.py", line 477, in 
_call_on_drivers
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers getattr(driver.obj, method_name)(context)  

  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers   File 
"/home/vagrant/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 441, in create_subnet_postcommit 
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers context.network.current)   

  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers   File 
"/home/vagrant/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 2165, in create_subnet  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers if subnet['enable_dhcp']:  

  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers   File 
"/home/vagrant/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 2356, in update_metadata_port   
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers metadata_port['id'], port)  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers   File 
"/home/vagrant/neutron/neutron/common/utils.py", line 685, in inner 
  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers return f(self, context, *args, **kwargs)   

  
Apr 08 11:23:35 central neutron-server[10871]: ERROR 
neutron.plugins.ml2.managers   File 
"/usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py", line 233, in 
wrapped  
Apr 08 
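The trace above is truncated, but it boils down to update_metadata_port()
trying to put a fixed IP from every subnet of the network on the single
metadata port. A minimal sketch of the failing call (variable names are
hypothetical; the exception is the one from the log):

    # One metadata port per network, but its subnets now span two segments.
    fixed_ips = [{'subnet_id': subnet['id']} for subnet in network_subnets]
    plugin.update_port(context, metadata_port_id,
                       {'port': {'fixed_ips': fixed_ips}})
    # -> FixedIpsSubnetsNotOnSameSegment: Cannot allocate addresses
    #    from different segments.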

[Yahoo-eng-team] [Bug 1871355] [NEW] OVN octavia provider driver should spawn long-living process in driver agent

2020-04-07 Thread Maciej Jozefczyk
Public bug reported:

The OVN Octavia provider driver in OvnProviderHelper caches attributes
[1]:

ovn_nbdb_api_for_events = None
ovn_nb_idl_for_events = None
ovn_nbdb_api = None


so that the OVN IDL used for handling events is not re-created each time.

Now we are able to use the Octavia Driver Agent [2] instead, so that
those long-living IDLs do not live in the API process.

TODO (see the sketch after the references below):
- create the driver agent and register its entry point
- while setting up the driver agent instance, start the IDL that will handle events:
https://opendev.org/openstack/ovn-octavia-provider/src/branch/master/ovn_octavia_provider/driver.py#L279
- stop caching ovn_nbdb_api, ovn_nb_idl_for_events and ovn_nbdb_api_for_events
in the OvnProviderHelper.


[1] 
https://opendev.org/openstack/ovn-octavia-provider/src/branch/master/ovn_octavia_provider/driver.py#L273
[2] 
https://docs.openstack.org/octavia/latest/contributor/guides/providers.html#provider-agent-method-invocation
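
A minimal sketch of the driver agent side (the entry point group and the
exit_event contract come from the provider agent guide [2]; every OVN-side
helper name here is hypothetical):

    # setup.cfg (sketch):
    # [entry_points]
    # octavia.driver_agent.provider_agents =
    #     ovn = ovn_octavia_provider.agent:ovn_provider_agent

    def ovn_provider_agent(exit_event):
        # Start the long-living IDL once, in the agent process, instead of
        # caching it on OvnProviderHelper in the API process.
        idl = start_ovn_nb_idl_with_event_handlers()  # hypothetical helper
        while not exit_event.is_set():
            exit_event.wait(timeout=1)  # IDL thread handles events meanwhile
        idl.stop()  # hypothetical cleanup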

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871355

Title:
  OVN octavia provider driver should spawn long-living process in driver
  agent

Status in neutron:
  New

Bug description:
  The OVN Octavia provider driver in OvnProviderHelper caches attributes
  [1]:

  ovn_nbdb_api_for_events = None
  ovn_nb_idl_for_events = None
  ovn_nbdb_api = None

  
  so that the OVN IDL used for handling events is not re-created each time.

  Now we are able to use the Octavia Driver Agent [2] instead, so that
  those long-living IDLs do not live in the API process.

  TODO:
  - create the driver agent and register its entry point
  - while setting up the driver agent instance, start the IDL that will handle
events:
  https://opendev.org/openstack/ovn-octavia-provider/src/branch/master/ovn_octavia_provider/driver.py#L279
  - stop caching ovn_nbdb_api, ovn_nb_idl_for_events and
ovn_nbdb_api_for_events in the OvnProviderHelper.

  
  [1] 
https://opendev.org/openstack/ovn-octavia-provider/src/branch/master/ovn_octavia_provider/driver.py#L273
  [2] 
https://docs.openstack.org/octavia/latest/contributor/guides/providers.html#provider-agent-method-invocation

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871032] [NEW] OVN migration scripts doesn't work with SSL-only deployment

2020-04-06 Thread Maciej Jozefczyk
Public bug reported:

Based on comments in review:

https://review.opendev.org/#/c/702247/7/tools/ovn_migration/migrate-to-ovn.yml@108

Looks like we don't support migration to OVN for TripleO deployments
when an SSL-only deployment is specified. This needs investigation.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871032

Title:
  OVN migration scripts doesn't work with SSL-only deployment

Status in neutron:
  New

Bug description:
  Based on comments in review:

  https://review.opendev.org/#/c/702247/7/tools/ovn_migration/migrate-to-ovn.yml@108

  Looks like we don't support migration to OVN for TripleO deployments
when an SSL-only deployment is specified. This needs investigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869877] [NEW] Segment doesn't exist network info

2020-03-31 Thread Maciej Jozefczyk
Public bug reported:

Each neutron network has at least one segment.

While the network has only one segment, the 'segments' key is not added
to the info returned by the API; instead, the segment fields are merged into
the network dict.

Example:
(Pdb++) pp context.current
{'admin_state_up': True,
 'availability_zone_hints': [],
 'availability_zones': [],
 'created_at': '2020-03-25T09:04:26Z',
 'description': 'test',
 'dns_domain': '',
 'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
 'ipv4_address_scope': None,
 'ipv6_address_scope': None,
 'is_default': True,
 'l2_adjacency': True,
 'mtu': 1500,
 'name': 'public',
 'port_security_enabled': True,
 'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'revision_number': 57,
 'router:external': True,
 'provider:network_type': 'flat',
 'provider:physical_network': 'public',
 'provider:segmentation_id': None,
 'shared': False,
 'status': 'ACTIVE',
 'subnets': ['40863f03-6b9a-4543-a9cb-ad122dfcde5d',
 'e5ae108b-a04b-4f23-84ff-e89db3222772'],
 'tags': [],
 'tenant_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'updated_at': '2020-03-25T13:55:38Z',
 'vlan_transparent': None}


When the network has more than one segment defined, the network info looks
as follows; the 'segments' key is present:

(Pdb++) pp context.current
{'admin_state_up': True,
 'availability_zone_hints': [],
 'availability_zones': [],
 'created_at': '2020-03-25T09:04:26Z',
 'description': 'test',
 'dns_domain': '',
 'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
 'ipv4_address_scope': None,
 'ipv6_address_scope': None,
 'is_default': True,
 'l2_adjacency': True,
 'mtu': 1500,
 'name': 'public',
 'port_security_enabled': True,
 'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'revision_number': 57,
 'router:external': True,
 'segments': [{'provider:network_type': 'flat',
   'provider:physical_network': 'public',
   'provider:segmentation_id': None},
  {'provider:network_type': 'flat',
   'provider:physical_network': 'public2',
   'provider:segmentation_id': None}],
 'shared': False,
 'status': 'ACTIVE',
 'subnets': ['40863f03-6b9a-4543-a9cb-ad122dfcde5d',
 'e5ae108b-a04b-4f23-84ff-e89db3222772'],
 'tags': [],
 'tenant_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'updated_at': '2020-03-25T13:55:38Z',
 'vlan_transparent': None}


We should make this behavior uniform - add 'segments' to the returned keys in
all cases.
The segments should also include each segment 'id' - it is required for OVN to
set up localnet ports.
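
A minimal sketch of the normalization this asks for (function name and call
site are hypothetical; the key names come from the dumps above):

    def ensure_segments_key(network):
        """Always expose per-segment data under 'segments'."""
        provider_keys = ('provider:network_type',
                         'provider:physical_network',
                         'provider:segmentation_id')
        if 'segments' not in network:
            network['segments'] = [{key: network.pop(key)
                                    for key in provider_keys}]
        # Each entry should also carry the segment 'id' (not present in
        # either dump above) so OVN can set up localnet ports.
        return network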

** Affects: neutron
 Importance: Undecided
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869877

Title:
  Segment doesn't exist network info

Status in neutron:
  Triaged

Bug description:
  Each neutron network has at least one segment.

  While the network has only one segment, the 'segments' key is not added
  to the info returned by the API; instead, the segment fields are merged into
  the network dict.

  Example:
  (Pdb++) pp context.current
  {'admin_state_up': True,
   'availability_zone_hints': [],
   'availability_zones': [],
   'created_at': '2020-03-25T09:04:26Z',
   'description': 'test',
   'dns_domain': '',
   'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
   'ipv4_address_scope': None,
   'ipv6_address_scope': None,
   'is_default': True,
   'l2_adjacency': True,
   'mtu': 1500,
   'name': 'public',
   'port_security_enabled': True,
   'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
   'revision_number': 57,
   'router:external': True,
   'provider:network_type': 'flat',
   'provider:physical_network': 'public',
   'provider:segmentation_id': None,
   'shared': False,
   'status': 'ACTIVE',
   'subnets': ['40863f03-6b9a-4543-a9cb-ad122dfcde5d',
   'e5ae108b-a04b-4f23-84ff-e89db3222772'],
   'tags': [],
   'tenant_id': '5b69f4bc9cba4b1ab38f434785e27db8',
   'updated_at': '2020-03-25T13:55:38Z',
   'vlan_transparent': None}

  
  When the network has more than one segment defined, the network info looks
  as follows; the 'segments' key is present:

  (Pdb++) pp context.current
  {'admin_state_up': True,
   'availability_zone_hints': [],
   'availability_zones': [],
   'created_at': '2020-03-25T09:04:26Z',
   'description': 'test',
   'dns_domain': '',
   'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
   'ipv4_address_scope': None,
   'ipv6_address_scope': None,
   'is_default': True,
   'l2_adjacency': True,
   'mtu': 1500,
   'name': 'public',
   'port_security_enabled': True,
   'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
   'revision_number': 57,
   'router:external': True,
   'segments': [{'provider:network_type': 'flat',
 'provider:physical_network': 'public',
 'provider:segmentation_id': None},
{'provider:network_type': 'flat',
 'provider:physical_network': 'public2',
 'provider:segmentation_id': None}],
   'shared': False,
   

[Yahoo-eng-team] [Bug 1868110] [NEW] [OVN] neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log randomly fails

2020-03-19 Thread Maciej Jozefczyk
Public bug reported:

The functional test
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
randomly fails on our CI.

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1868110

Title:
  [OVN]
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
  randomly fails

Status in neutron:
  New

Bug description:
  The functional test
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
  randomly fails on our CI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1868110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1866087] [NEW] [OVN Octavia Provider] Deleting of listener fails

2020-03-04 Thread Maciej Jozefczyk
Public bug reported:

Sometimes, while removing a listener, the command fails with the log below.

The problem has been recently found on OVN octavia provider gate.


Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbRemoveCommand(table=Load_Balancer, 
record=86c3b5dc-5ec7-48c0-9fe7-d67fc78ef084, co
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): 
LbDelCommand(lb=86c3b5dc-5ec7-48c0-9fe7-d67fc78ef084, vip=None, 
if_exists=False) {{(
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): 
DbClearCommand(table=Load_Balancer, 
record=86c3b5dc-5ec7-48c0-9fe7-d67fc78ef084, col
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
ERROR ovsdbapp.backend.ovs_idl.transaction [-] Traceback (most recent call 
last):
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]:   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 122, in run
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
txn.results.put(txn.do_commit())
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]:   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 86, in do_commit
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
command.run_idl(txn)
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]:   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/command.py", 
line 182, in run_idl
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
record = self.api.lookup(self.table, self.record)
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]:   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/__init__.py", 
line 107, in lookup
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
return self._lookup(table, record)
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]:   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/__init__.py", 
line 151, in _lookup
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
row = idlutils.row_by_value(self, rl.table, rl.column, record)
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]:   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", 
line 65, in row_by_value
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
raise RowNotFound(table=table, col=column, match=match)
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Load_Balancer with 
name=86c3b5dc-5ec7-48c0-9fe7-d67fc78ef084
Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]


Looks like in this situation the LB had multiple protocols configured (TCP and
UDP). While removing the first listener from the LB, one of the created OVN LB
rows needs to be deleted, but then the driver wants to update the vip entries
on it. That is not needed.
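
A minimal sketch of a guard (the ovsdbapp command names match the log above;
the transaction wiring and the deletes_last_listener flag are assumptions):

    with self.ovn_nbdb_api.transaction(check_error=True) as txn:
        if deletes_last_listener:
            # The row is going away: drop it and skip the vip updates that
            # would otherwise race the deletion and raise RowNotFound.
            txn.add(self.ovn_nbdb_api.lb_del(ovn_lb.uuid, if_exists=True))
        else:
            txn.add(self.ovn_nbdb_api.db_remove(
                'Load_Balancer', ovn_lb.uuid, 'vips', vip_key))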

** Affects: neutron
 Importance: High
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1866087

Title:
  [OVN Octavia Provider] Deleting of listener fails

Status in neutron:
  In Progress

Bug description:
  Sometimes, while removing a listener, the command fails with the log below.

  The problem has been recently found on OVN octavia provider gate.

  
  Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbRemoveCommand(table=Load_Balancer, 
record=86c3b5dc-5ec7-48c0-9fe7-d67fc78ef084, co
  Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): 
LbDelCommand(lb=86c3b5dc-5ec7-48c0-9fe7-d67fc78ef084, vip=None, 
if_exists=False) {{(
  Mar 04 14:44:18 mjozefcz-ovn-provider-master devstack@o-api.service[30146]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): 
DbClearComm

[Yahoo-eng-team] [Bug 1866039] [NEW] [OVN] QoS gives different bandwidth limit measures than ml2/ovs

2020-03-04 Thread Maciej Jozefczyk
Public bug reported:

There is a difference in QoS tempest tests results between ml2/ovs and
ml2/ovn.

In the change [1] that enables QoS tempest tests for OVN the test 
neutron_tempest_plugin.scenario.test_qos.QoSTest.test_qos_basic_and_update
fails on the last check [2], after the policy is updated to be configured with 
values:

max_kbps=constants.LIMIT_KILO_BITS_PER_SECOND * 3
max_burst_kbps=constants.LIMIT_KILO_BITS_PER_SECOND * 3, 

Which means:
max_kbps = 3000
max_burst_kbps = 3000

Previous QoS validations in this test passes with values (max_kbps,
max_burst_kbps): (1000, 1000) and (2000, 2000).

I added some more debug logging to the tempest test here [3], so that we can
compare the expected and measured values. The numbers below are taken from
gate test runs.


---
Expected is calculated as:
TOLERANCE_FACTOR = 1.5
constants.LIMIT_KILO_BITS_PER_SECOND = 1000
MULTIPLEXING_FACTOR = 1 or 2 or 3 depends on stage of the test

LIMIT_BYTES_SEC = (constants.LIMIT_KILO_BITS_PER_SECOND * 1024 *
   TOLERANCE_FACTOR / 8.0) * MULTIPLEXING_FACTOR
---
Results:
If measured <= expected, the test passes.

|max_kbps/max_burst_kbps|expected(bps)|ovs(bps)|ovn(bps)|linux_bridge(bps)|
|(1000, 1000)|192000|112613|141250|129124|
|(2000, 2000)|384000|311978|408886, 411005, 385152, 422114, 352903|300163|
|(3000, 3000)|576000|523677|820522, failed|459569|

As we can see, only for (3000, 3000) did the OVN test fail. For (2000, 2000)
it passed after 5 retries.
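
For reference, a worked check of the 'expected' column (constants as defined
above):

    TOLERANCE_FACTOR = 1.5
    LIMIT_KILO_BITS_PER_SECOND = 1000

    for multiplexing_factor in (1, 2, 3):
        limit_bytes_sec = (LIMIT_KILO_BITS_PER_SECOND * 1024 *
                           TOLERANCE_FACTOR / 8.0) * multiplexing_factor
        print(multiplexing_factor, int(limit_bytes_sec))
    # 1 192000
    # 2 384000
    # 3 576000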

---

So let's see how the QoS is configured with OVN nowadays:

stack@mjozefcz-devstack-qos-2:~/logs$ neutron qos-bandwidth-limit-rule-list 047f7a8c-e143-471f-979c-4a4d95cefa5e
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+-----------+--------------------------------------+----------------+----------+
| direction | id                                   | max_burst_kbps | max_kbps |
+-----------+--------------------------------------+----------------+----------+
| egress    | 9dd84dc7-f216-432f-b1aa-ec17eb488720 |           3000 |     3000 |
+-----------+--------------------------------------+----------------+----------+


Configured OVN NBDB:
stack@mjozefcz-devstack-qos-2:~/logs$ ovn-nbctl list qos
_uuid   : 1176fe8f-695d-4f79-a99f-f0df8a7b8652
action  : {}
bandwidth   : {burst=3000, rate=3000}
direction   : from-lport
external_ids: {}
match   : "inport == \"4521ef05-d139-4d84-a100-efb83fde2b47\""
priority: 2002

Configured meter on bridge:
stack@mjozefcz-devstack-qos-2:~/logs$ sudo ovs-ofctl -O OpenFlow13  dump-meters 
br-int
OFPST_METER_CONFIG reply (OF1.3) (xid=0x2):
meter=1 kbps burst stats bands=
type=drop rate=3000 burst_size=3000


Flow in bridge:
stack@mjozefcz-devstack-qos-2:~/logs$ sudo ovs-ofctl -O OpenFlow13 dump-flows 
br-int | grep meter
 cookie=0x398f0e17, duration=71156.273s, table=16, n_packets=136127, 
n_bytes=41572857, priority=2002,reg14=0x4,metadata=0x1 
actions=meter:1,resubmit(,17)


--

Questions:
* Why are the test results different compared to ml2/OVS?
* Should the burst values perhaps be configured differently?


[1] https://review.opendev.org/#/c/704833/
[2] 
https://github.com/openstack/neutron-tempest-plugin/blob/328edc882a3debf4f1b39687dfb559d7c5c385f3/neutron_tempest_plugin/scenario/test_qos.py#L271
[3] https://review.opendev.org/#/c/711048/

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New


** Tags: ovn qos

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

** Summary changed:

- [OVN] QoS gives different burst limit values
+ [OVN] QoS gives different bandwidth limit values

** Summary changed:

- [OVN] QoS gives different bandwidth limit values
+ [OVN] QoS gives different bandwidth limit measures than ml2/ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1866039

Title:
  [OVN] QoS gives different bandwidth limit measures than ml2/ovs

Status in neutron:
  New

Bug description:
  There is a difference in QoS tempest tests results between ml2/ovs and
  ml2/ovn.

  In the change [1] that enables QoS tempest tests for OVN the test 
neutron_tempest_plugin.scenario.test_qos.QoSTest.test_qos_basic_and_update
  fails on the last check [2], after the policy is updated to be configured 
with values:

  max_kbps=constants.LIMIT_KILO_BITS_PER_SECOND * 3
  max_burst_kbps=constants.LIMIT_KILO_BITS_PER_SECOND * 3, 

  Which means:
  max_kbps = 3000
  max_burst_kbps = 3000

  Previous QoS validations in this test pa

[Yahoo-eng-team] [Bug 1861500] Re: [OVN] Key error while setting QoS policies

2020-03-02 Thread Maciej Jozefczyk
This has been fixed by: https://review.opendev.org/#/c/705452/

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861500

Title:
  [OVN] Key error while setting QoS policies

Status in neutron:
  Fix Released

Bug description:
  When QoS is enabled, we found a few OVN-related errors on the gates:

  
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers [None 
req-be677605-23f9-43b8-821d-a3c931739987 tempest-QoSTest-185948236 
tempest-QoSTest-185948236] Mechanism driver 'ovn' failed in 
update_network_postcommit: KeyError: 'direction'
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 475, in 
_call_on_drivers
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 390, in update_network_postcommit
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers 
self._ovn_client.update_network(context.current)
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1705, in update_network
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers 
self._qos_driver.update_network(network)
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/services/qos/drivers/ovn/driver.py", line 154, in 
update_network
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers 
self._update_network_ports(context, network.get('id'), options)
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/services/qos/drivers/ovn/driver.py", line 143, in 
_update_network_ports
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers 
self._driver.update_port(port, qos_options=options)
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 667, in update_port
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers if_delete=True)
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 704, in _create_qos_rules
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers direction = 
'from-lport' if qos_options['direction'] ==\
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers KeyError: 'direction'
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1861500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865453] [NEW] neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly

2020-03-02 Thread Maciej Jozefczyk
Public bug reported:

Sometimes we see random failures of the test:

neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before


neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_beforetesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/mock/mock.py",
 line 1330, in patched
return func(*args, **keywargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 280, in test_virtual_port_created_before
ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 417, in assertIn
self.assertThat(haystack, Contains(needle), message)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: '88c0378b-71bd-454b-a0df-8c70b57d257a' 
not in '49043b88-554f-48d0-888d-eeaa749e752f'

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865453

Title:
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before
  fails randomly

Status in neutron:
  New

Bug description:
  Sometimes we see random failures of the test:

  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before

  
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_beforetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/mock/mock.py",
 line 1330, in patched
  return func(*args, **keywargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 280, in test_virtual_port_created_before
  ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 417, in assertIn
  self.assertThat(haystack, Contains(needle), message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 
'88c0378b-71bd-454b-a0df-8c70b57d257a' not in 
'49043b88-554f-48d0-888d-eeaa749e752f'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864833] [NEW] [OVN] Functional tests start with OVSDB binary 2.9 instead 2.12

2020-02-26 Thread Maciej Jozefczyk
Public bug reported:

In OVN functional tests we start an ovsdb-server per test. We start ovsdb
2.9 instead of 2.12, even though we compile OVS/OVN 2.12 on the gates:

2020-02-26T02:39:25.824Z|3|ovsdb_server|INFO|ovsdb-server (Open
vSwitch) 2.9.5

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864833

Title:
  [OVN] Functional tests start with OVSDB binary 2.9 instead 2.12

Status in neutron:
  New

Bug description:
  In OVN functional tests we start an ovsdb-server per test. We start ovsdb
  2.9 instead of 2.12, even though we compile OVS/OVN 2.12 on the gates:

  2020-02-26T02:39:25.824Z|3|ovsdb_server|INFO|ovsdb-server (Open
  vSwitch) 2.9.5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864833/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864639] [NEW] [OVN] UpdateLRouterPortCommand and AddLRouterPortCommand needs to specify network

2020-02-25 Thread Maciej Jozefczyk
Public bug reported:

On the tempest gates there are a few issues related to a wrong 'networks'
column value: it cannot be empty [1] ('set of 1 or more strings').


Logs:
Feb 24 23:41:49.108841 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): 
AddLRouterPortCommand(name=lrp-17de4d5e-18a4-42da-be41-897adb24629c, 
lrouter=neutron-830c8317-481b-48c0-89cd-a92ed2e654f3, may_exist=True, 
columns={'mac': 'fa:16:3e:88:43:c8', 'networks': [], 'external_ids': 
{'neutron:revision_number': '1', 'neutron:subnet_ids': '', 
'neutron:network_name': 'neutron-03f4c0b2-c9c7-4318-b1d2-85e1610e35df', 
'neutron:router_name': '830c8317-481b-48c0-89cd-a92ed2e654f3'}, 'options': {}, 
'gateway_chassis': ['39028b55-969f-4d32-bf43-a99fcf6a01ca']}) {{(pid=32265) 
do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
Feb 24 23:41:49.110213 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to write bad value to column 
networks (ovsdb error: 0 values when type requires between 1 and 
9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values when type 
requires between 1 and 9223372036854775807

Feb 24 23:42:22.181527 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): 
UpdateLRouterPortCommand(name=lrp-41627bfd-a84f-4ff0-99d2-c41d800bb97b, 
columns={'external_ids': {'neutron:revision_number': '1', 'neutron:subnet_ids': 
'', 'neutron:network_name': 'neutron-d8975d0f-780c-4c64-adf6-26e81d566b14', 
'neutron:router_name': 'd5941192-9084-4d33-8c05-9964e21749e2'}, 'options': {}, 
'networks': [], 'ipv6_ra_configs': {}}, if_exists=True) {{(pid=32270) do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
Feb 24 23:42:22.182042 ubuntu-bionic-ovh-gra1-0014784290 neutron-server[31863]: 
ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to write bad value to column 
networks (ovsdb error: 0 values when type requires between 1 and 
9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values when type 
requires between 1 and 9223372036854775807


[1] http://www.openvswitch.org/support/dist-docs/ovn-nb.5.txt
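
A minimal sketch of a guard on the neutron side (names are hypothetical; the
schema constraint is the one quoted from [1]):

    def lrp_networks_or_none(port_cidrs):
        """Return a valid 'networks' value for a Logical_Router_Port, or
        None when the write must be deferred (deferral is an assumption).

        The OVN NB schema requires a set of 1 or more strings here, so
        writing [] makes ovsdb-server reject the whole transaction.
        """
        return list(port_cidrs) or None

The caller would then skip AddLRouterPortCommand/UpdateLRouterPortCommand
until at least one CIDR is known for the port.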

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864639

Title:
  [OVN] UpdateLRouterPortCommand and AddLRouterPortCommand needs to
  specify network

Status in neutron:
  New

Bug description:
  On the tempest gates there are a few issues related to a wrong 'networks'
  column value: it cannot be empty [1] ('set of 1 or more strings').

  
  Logs:
  Feb 24 23:41:49.108841 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=1): 
AddLRouterPortCommand(name=lrp-17de4d5e-18a4-42da-be41-897adb24629c, 
lrouter=neutron-830c8317-481b-48c0-89cd-a92ed2e654f3, may_exist=True, 
columns={'mac': 'fa:16:3e:88:43:c8', 'networks': [], 'external_ids': 
{'neutron:revision_number': '1', 'neutron:subnet_ids': '', 
'neutron:network_name': 'neutron-03f4c0b2-c9c7-4318-b1d2-85e1610e35df', 
'neutron:router_name': '830c8317-481b-48c0-89cd-a92ed2e654f3'}, 'options': {}, 
'gateway_chassis': ['39028b55-969f-4d32-bf43-a99fcf6a01ca']}) {{(pid=32265) 
do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
  Feb 24 23:41:49.110213 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to 
write bad value to column networks (ovsdb error: 0 values when type requires 
between 1 and 9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values 
when type requires between 1 and 9223372036854775807

  Feb 24 23:42:22.181527 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running 
txn n=1 command(idx=1): 
UpdateLRouterPortCommand(name=lrp-41627bfd-a84f-4ff0-99d2-c41d800bb97b, 
columns={'external_ids': {'neutron:revision_number': '1', 'neutron:subnet_ids': 
'', 'neutron:network_name': 'neutron-d8975d0f-780c-4c64-adf6-26e81d566b14', 
'neutron:router_name': 'd5941192-9084-4d33-8c05-9964e21749e2'}, 'options': {}, 
'networks': [], 'ipv6_ra_configs': {}}, if_exists=True) {{(pid=32270) do_commit 
/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:84}}
  Feb 24 23:42:22.182042 ubuntu-bionic-ovh-gra1-0014784290 
neutron-server[31863]: ERROR ovsdbapp.backend.ovs_idl.vlog [-] attempting to 
write bad value to column networks (ovsdb error: 0 values when type requires 
between 1 and 9223372036854775807): ovs.db.error.Error: ovsdb error: 0 values 
when type requires between 1 and 9223372036854775807


  
  [1] 

[Yahoo-eng-team] [Bug 1864620] [NEW] [OVN] neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote often fails

2020-02-25 Thread Maciej Jozefczyk
Public bug reported:

We started to see
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote
failing on the gates.

Example failure:
https://369db73f1617af64c678-28a40d3de53ae200fe2797286e214883.ssl.cf5.rackcdn.com/709110/1/check/neutron-ovn-tempest-ovs-release/0feed71/testr_results.html


Traceback (most recent call last):
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 385, in test_multiple_ports_portrange_remote
test_ip, port)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 59, in _verify_http_connection
raise e
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 51, in _verify_http_connection
ret = utils.call_url_remote(ssh_client, url)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 128, in call_url_remote
return ssh_client.exec_command(cmd)
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 311, 
in wrapped_f
return self.call(f, *args, **kw)
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 391, 
in call
do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 338, 
in iter
return fut.result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in 
__get_result
raise self._exception
  File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 394, 
in call
result = fn(*args, **kwargs)
  File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/ssh.py", line 
178, in exec_command
return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 204, in exec_command
stderr=err_data, stdout=out_data)
neutron_tempest_plugin.common.utils.SSHExecCommandFailed: Command 'curl 
http://10.1.0.11:80 --retry 3 --connect-timeout 2' failed, exit status: 28, 
stderr:
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0Warning: Transient problem: timeout Will retry in 1 seconds. 3 retries left.

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0Warning: Transient problem: timeout Will retry in 2 seconds. 2 retries left.

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0Warning: Transient problem: timeout Will retry in 4 seconds. 1 retries left.

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 
0curl: (28) Connection timed out after 2002 milliseconds

stdout:

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864620

Title:
  [OVN]
  
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote
  often fails

Status in neutron:
  New

Bug description:
  We started to see
  
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote
  failing on the gates.

  Example failure:
  
https://369db73f1617af64c678-28a40d3de53ae200fe2797286e214883.ssl.cf5.rackcdn.com/709110/1/check/neutron-ovn-tempest-ovs-release/0feed71/testr_results.html

  
  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 385, in test_multiple_ports_portrange_remote
  test_ip, port)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 59, in _verify_http_connection
  raise e
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 51, in _verify_http_connection
  ret = utils.call_url_remote(ssh_client, url)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 128, in call_url_remote
  return ssh_client.exec_command(cmd)
File "/usr/local/lib/python3.6/dist-packages/tenacity/__init__.py", line 
311, in wrapped_f
  return self.call(f, *args, **kw)
File 

[Yahoo-eng-team] [Bug 1864027] [NEW] [OVN] DHCP doesn't work while instance has disabled port security

2020-02-20 Thread Maciej Jozefczyk
Public bug reported:

While an instance has port security disabled, it's not able to reach the DHCP
service. Looks like the change [1] introduced this regression.

Port has [unknown] address set:
root@mjozefcz-ovn-train-lb:~# ovn-nbctl list logical_switch_port 
a09a1ac7-62ad-46ad-b802-c4abf65dcf70
_uuid   : 32a741bc-a185-4291-8b36-dc9c387bb662
addresses   : [unknown]
dhcpv4_options  : 7c94ec89-3144-4920-b624-193d968c637a
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="10.2.1.134/24", 
"neutron:device_id"="9f4a705f-b438-4da1-975d-1a0cdf81e124", 
"neutron:device_owner"="compute:nova", 
"neutron:network_name"=neutron-cd1ee69d-06b6-4502-ba26-e1280fd66ad9, 
"neutron:port_fip"="172.24.4.132", "neutron:port_name"="", 
"neutron:project_id"="98b165bfeeca4efd84724f3118d84f6f", 
"neutron:revision_number"="4", "neutron:security_group_ids"=""}
ha_chassis_group: []
name: "a09a1ac7-62ad-46ad-b802-c4abf65dcf70"
options : {requested-chassis=mjozefcz-ovn-train-lb}
parent_name : []
port_security   : []
tag : []
tag_request : []
type: ""
up  : true


ovn-controller doesn't respond to DHCP requests.


It was caught by failing OVN Provider driver tempest test:
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest


[1] https://review.opendev.org/#/c/702249/
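
A minimal sketch of one possible direction for a workaround, assuming (an
assumption, not confirmed by this report) that a port with port security
disabled needs its real MAC/IP kept in 'addresses' alongside 'unknown' for
native OVN DHCP to answer it. The helper below shells out to the stock
ovn-nbctl CLI and is illustrative only, not the neutron fix:

import subprocess

def restore_dhcp_addresses(port_uuid, mac, ip):
    # 'unknown' keeps unknown-address forwarding for the unsecured port,
    # while the MAC/IP pair lets ovn-controller match its DHCP requests.
    subprocess.check_call([
        'ovn-nbctl', 'lsp-set-addresses', port_uuid,
        '%s %s' % (mac, ip), 'unknown'])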

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864027

Title:
  [OVN] DHCP doesn't work while instance has disabled port security

Status in neutron:
  New

Bug description:
  While an instance has port security disabled, it's not able to reach the DHCP service.
  It looks like the change [1] introduced this regression.

  The port has only the [unknown] address set:
  root@mjozefcz-ovn-train-lb:~# ovn-nbctl list logical_switch_port 
a09a1ac7-62ad-46ad-b802-c4abf65dcf70
  _uuid   : 32a741bc-a185-4291-8b36-dc9c387bb662
  addresses   : [unknown]
  dhcpv4_options  : 7c94ec89-3144-4920-b624-193d968c637a
  dhcpv6_options  : []
  dynamic_addresses   : []
  enabled : true
  external_ids: {"neutron:cidrs"="10.2.1.134/24", 
"neutron:device_id"="9f4a705f-b438-4da1-975d-1a0cdf81e124", 
"neutron:device_owner"="compute:nova", 
"neutron:network_name"=neutron-cd1ee69d-06b6-4502-ba26-e1280fd66ad9, 
"neutron:port_fip"="172.24.4.132", "neutron:port_name"="", 
"neutron:project_id"="98b165bfeeca4efd84724f3118d84f6f", 
"neutron:revision_number"="4", "neutron:security_group_ids"=""}
  ha_chassis_group: []
  name: "a09a1ac7-62ad-46ad-b802-c4abf65dcf70"
  options : {requested-chassis=mjozefcz-ovn-train-lb}
  parent_name : []
  port_security   : []
  tag : []
  tag_request : []
  type: ""
  up  : true

  
  ovn-controller doesn't respond to DHCP requests.

  
  It was caught by failing OVN Provider driver tempest test:
  
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest


  
  [1] https://review.opendev.org/#/c/702249/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863892] [NEW] [OVN] Provider driver IPv6 traffic doesn't work

2020-02-19 Thread Maciej Jozefczyk
Public bug reported:

An OVN LB that has IPv6 members and listeners doesn't work. Members are not
reachable via the listener address.

** Affects: neutron
 Importance: High
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: Maciej Pasternacki (maciej) => Maciej Jozefczyk 
(maciej.jozefczyk)

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863892

Title:
  [OVN] Provider driver IPv6 traffic doesn't work

Status in neutron:
  Confirmed

Bug description:
  An OVN LB that has IPv6 members and listeners doesn't work. Members are
  not reachable via the listener address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863893] [NEW] [OVN] OVN LoadBalancer VIP shouldn't have addresses set

2020-02-19 Thread Maciej Jozefczyk
Public bug reported:

While using the OVN Load Balancer, Octavia creates a VIP port named
ovn_const.LB_VIP_PORT_PREFIX-port_id.

Unfortunately commit [1] introduced a regression. When the environment has the
virtual port type available, it fails to clear the addresses field for the OVN
Octavia VIP port. It should not clear it while the port is an Octavia Amphora
VIP, which internally uses keepalived to manage the VIP.
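
A hedged sketch of the intended behaviour described above; the prefix value
and helper name are assumptions for illustration, not the real ovn_const
constant or driver code:

# Only OVN provider VIP ports (matched by name prefix) should have their
# addresses cleared; an Amphora VIP keeps its addresses for keepalived.
OVN_LB_VIP_PORT_PREFIX = 'ovn-lb-vip-'  # assumed value

def should_clear_addresses(port):
    return port['name'].startswith(OVN_LB_VIP_PORT_PREFIX)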

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863893

Title:
  [OVN] OVN LoadBalancer VIP shouldn't have addresses set

Status in neutron:
  New

Bug description:
  While using the OVN Load Balancer, Octavia creates a VIP port named
  ovn_const.LB_VIP_PORT_PREFIX-port_id.

  Unfortunately commit [1] introduced a regression. When the environment has
  the virtual port type available, it fails to clear the addresses field for
  the OVN Octavia VIP port. It should not clear it while the port is an
  Octavia Amphora VIP, which internally uses keepalived to manage the VIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862618] [NEW] [OVN] functional test test_virtual_port_delete_parents is unstable

2020-02-10 Thread Maciej Jozefczyk
Public bug reported:

Functional test:

neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_delete_parents

randomly fails with:

ft1.2: 
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_delete_parentstesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/mock/mock.py",
 line 1330, in patched
return func(*args, **keywargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 376, in test_virtual_port_delete_parents
self.assertEqual("", ovn_vport.type)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: '' != 'virtual'
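
The '' != 'virtual' mismatch looks like a timing race between the parent port
deletion and the OVN NB update. A minimal sketch of the usual cure, polling
until the port type converges instead of asserting immediately (the names are
illustrative, not the functional test helpers):

import time

def wait_for_port_type(fetch_port, expected, timeout=10, interval=0.5):
    # Re-read the logical switch port until its type matches or we give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if fetch_port().type == expected:
            return True
        time.sleep(interval)
    return False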

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: ovn

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862618

Title:
  [OVN] functional test test_virtual_port_delete_parents is unstable

Status in neutron:
  Confirmed

Bug description:
  Functional test:

  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_delete_parents

  randomly fails with:

  ft1.2: 
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_delete_parentstesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/mock/mock.py",
 line 1330, in patched
  return func(*args, **keywargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 376, in test_virtual_port_delete_parents
  self.assertEqual("", ovn_vport.type)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: '' != 'virtual'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861500] [NEW] [OVN] Key error while setting QoS policies

2020-01-31 Thread Maciej Jozefczyk
Public bug reported:

When QoS is enabled, we found a few OVN-related errors on the gates:


Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers [None 
req-be677605-23f9-43b8-821d-a3c931739987 tempest-QoSTest-185948236 
tempest-QoSTest-185948236] Mechanism driver 'ovn' failed in 
update_network_postcommit: KeyError: 'direction'
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 475, in 
_call_on_drivers
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers getattr(driver.obj, method_name)(context)
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 390, in update_network_postcommit
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers 
self._ovn_client.update_network(context.current)
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1705, in update_network
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers self._qos_driver.update_network(network)
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/services/qos/drivers/ovn/driver.py", line 154, in 
update_network
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers self._update_network_ports(context, 
network.get('id'), options)
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/services/qos/drivers/ovn/driver.py", line 143, in 
_update_network_ports
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers self._driver.update_port(port, 
qos_options=options)
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 667, in update_port
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers if_delete=True)
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 704, in _create_qos_rules
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers direction = 'from-lport' if 
qos_options['direction'] ==\
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers KeyError: 'direction'
Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.managers
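
The traceback shows qos_options being indexed directly for 'direction'. A
minimal sketch of the defensive pattern that avoids the KeyError, assuming
(illustratively) that a missing key should be treated as 'egress'; this is
not the committed fix:

def _get_ovn_direction(qos_options):
    # 'direction' may be absent from qos_options, so look it up with a
    # default instead of qos_options['direction'].
    direction = qos_options.get('direction', 'egress')
    # Neutron 'egress' maps to OVN 'from-lport'; anything else maps to
    # 'to-lport' in this sketch.
    return 'from-lport' if direction == 'egress' else 'to-lport'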

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: In Progress


** Tags: ovn

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

** Changed in: neutron
   Status: New => In Progress

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861500

Title:
  [OVN] Key error while setting QoS policies

Status in neutron:
  In Progress

Bug description:
  When QoS is enabled, we found a few OVN-related errors on the gates:

  
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers [None 
req-be677605-23f9-43b8-821d-a3c931739987 tempest-QoSTest-185948236 
tempest-QoSTest-185948236] Mechanism driver 'ovn' failed in 
update_network_postcommit: KeyError: 'direction'
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
  Jan 31 12:36:45.466010 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 475, in 
_call_on_drivers
  Jan 31 12:36:45.466010 ubunt

[Yahoo-eng-team] [Bug 1861502] [NEW] [OVN] Mechanism driver - failing to recreate floating IP

2020-01-31 Thread Maciej Jozefczyk
Public bug reported:

We observe ERROR logs on the gates like the one below:

Please investigate.

Jan 31 12:43:54.475677 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None 
req-1b9ba8f3-8770-44b1-a398-c75e1fe7c0a4 None None] Maintenance task: Fixing 
resource 
c9a56d17-e82a-4758-87b6-afb6af68ab22 (type: floatingips) at create/update 
{{(pid=378) check_for_inconsistencies 
/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:286}}
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None 
req-1b9ba8f3-8770-44b1-a398-c75e1fe7c0a4 None None] Maintenance task: Failed to 
fix resource c9a56d17-e82a-4758-87b6-afb6af68ab22 (type: floatingips): TypeError: 
create_floatingip() missing 1 required positional argument: 'floatingip'
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance Traceback 
(most recent call last):
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py"
, line 297, in check_for_inconsistencies
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._fix_create_update(admin_context, row)
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py"
, line 158, in _fix_create_update
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
res_map['ovn_create'](n_obj)
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance TypeError: 
create_floatingip() missing 1 required positional argument: 'floatingip'
Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 neutron-server[32458]: 
ERROR neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861502

Title:
  [OVN] Mechanism driver - failing to recreate floating IP

Status in neutron:
  New

Bug description:
  We observe ERROR logs on the gates like the one below:

  Please investigate.

  Jan 31 12:43:54.475677 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: DEBUG 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None 
req-1b9ba8f3-8770-44b1-a398-c75e1fe7c0a4 None None] Maintenance task: Fixing 
resource 
  c9a56d17-e82a-4758-87b6-afb6af68ab22 (type: floatingips) at create/update 
{{(pid=378) check_for_inconsistencies 
/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:286}}
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None 
req-1b9ba8f3-8770-44b1-a398-c75e1fe7c0a4 None None] Maintenance task: Failed to 
fix resource c9a56d17-e82a-4758-87b6-afb6af68ab22 (type: floatingips): TypeError: 
create_floatingip() missing 1 required positional argument: 'floatingip'
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance Traceback (most 
recent call last):
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py"
  , line 297, in check_for_inconsistencies
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._fix_create_update(admin_context, row)
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py"
  , line 158, in _fix_create_update
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
res_map['ovn_create'](n_obj)
  Jan 31 12:43:54.542343 ubuntu-bionic-rax-ord-0014258431 
neutron-server[32458]: ERROR 

[Yahoo-eng-team] [Bug 1860662] [NEW] [OVN] FIP on OVN Load balancer doesn't work if member has FIP assigned on DVR setup

2020-01-23 Thread Maciej Jozefczyk
Public bug reported:

[OVN] FIP on OVN Load balancer doesn't work if member has FIP assigned
on DVR setup

For now we don't have a solution for that; it needs further work on the
core OVN side.

For now we propose a workaround: centralize the traffic of the member FIP.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Description changed:

  [OVN] FIP on OVN Load balancer doesn't work if member has FIP assigned
  on DVR setup
  
  For now we don't have solution for that. That needs further work in
  core-ovn side.
  
- For propose a workaround - centralize traffic of member FIP.
+ For now propose a workaround - centralize traffic of member FIP.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860662

Title:
  [OVN] FIP on OVN Load balancer doesn't work if member has FIP assigned
  on DVR setup

Status in neutron:
  New

Bug description:
  [OVN] FIP on OVN Load balancer doesn't work if member has FIP assigned
  on DVR setup

  For now we don't have a solution for that; it needs further work on the
  core OVN side.

  For now we propose a workaround: centralize the traffic of the member FIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1860662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1860141] [NEW] [OVN] Provider driver fails while LB VIP port already created

2020-01-17 Thread Maciej Jozefczyk
Public bug reported:

Sometimes there is a race condition on creation of the VIP port that ends
with an exception and blocks LB stack creation.


Example error:

2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
[req-a1e64a8e-5971-4b2f-afdf-33e026f193d6 - 968cd882ee5145d4a3e30b9612b0cae0 - 
default default] Provider 'ovn' raised a driver error: An unknown driver e
rror occurred.: octavia_lib.api.drivers.exceptions.DriverError: ('An unknown 
driver error occurred.', IpAddressAlreadyAllocatedClient())
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils Traceback (most 
recent call last):
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1843, in create_vip_port
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils project_id, 
lb_id, vip_dict)['port']
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1523, in create_vip_port
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils return 
network_driver.neutron_client.create_port(port)
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 803, in 
create_port
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils return 
self.post(self.ports_path, body=body)
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 359, in 
post
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils headers=headers, 
params=params)
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 294, in 
do_request
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
self._handle_fault_response(status_code, replybody, resp)
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 269, in 
_handle_fault_response
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
exception_handler_v20(status_code, error_body)
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 93, in 
exception_handler_v20
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
request_ids=request_ids)
2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
neutronclient.common.exceptions.IpAddressAlreadyAllocatedClient: IP address 
172.30.188.22 already allocated in subnet 6cdca17f-c896-4684-9feb-6d0aa4aa3cb
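
A hedged sketch of how the driver could tolerate losing the race, reusing the
already-created port instead of failing the whole LB creation. It uses the
stock neutronclient calls visible in the traceback; the helper itself is an
assumption, not the networking_ovn code:

from neutronclient.common import exceptions as n_exc

def ensure_vip_port(neutron, body):
    # Try to create the VIP port; if another request already allocated the
    # address, look the existing port up by name and network instead.
    try:
        return neutron.create_port(body)['port']
    except n_exc.IpAddressAlreadyAllocatedClient:
        ports = neutron.list_ports(
            name=body['port']['name'],
            network_id=body['port']['network_id'])['ports']
        if ports:
            return ports[0]
        raise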

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860141

Title:
  [OVN] Provider driver fails while LB VIP port already created

Status in neutron:
  Confirmed

Bug description:
  Sometimes there is a race condition on creation of the VIP port that ends
  with an exception and blocks LB stack creation.

  
  Example error:

  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils 
[req-a1e64a8e-5971-4b2f-afdf-33e026f193d6 - 968cd882ee5145d4a3e30b9612b0cae0 - 
default default] Provider 'ovn' raised a driver error: An unknown driver e
  rror occurred.: octavia_lib.api.drivers.exceptions.DriverError: ('An unknown 
driver error occurred.', IpAddressAlreadyAllocatedClient())
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils Traceback (most 
recent call last):
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1843, in create_vip_port
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils project_id, 
lb_id, vip_dict)['port']
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
1523, in create_vip_port
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils return 
network_driver.neutron_client.create_port(port)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 803, in 
create_port
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils return 
self.post(self.ports_path, body=body)
  2020-01-17 08:57:10.331 24 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py", line 359, in 
post
  2020-

[Yahoo-eng-team] [Bug 1860140] [NEW] [OVN] Provider driver sends malformed update to Octavia

2020-01-17 Thread Maciej Jozefczyk
Public bug reported:

In some corner cases, while updating a member, the OVN provider driver sends a
malformed status update to Octavia that breaks the update operation and leaves
resources stuck in the PENDING_UPDATE state.

Example error below:

2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver [-] 
Unexpected exception in request_handler: 
octavia_lib.api.drivers.exceptions.UpdateStatusError: ('The status update had 
an unknown error.', "Error while updating the load balancer status: 'NoneType' 
object has no attribute 'update'")
 
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
   
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
442, in _update_status_to_octavia   
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
self._octavia_driver_lib.update_loadbalancer_status(status) 
 
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/octavia_lib/api/drivers/driver_lib.py", line 
126, in update_loadbalancer_status 
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
status_record=response.pop(constants.STATUS_RECORD, None))  
 
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
octavia_lib.api.drivers.exceptions.UpdateStatusError: 'NoneType' object has no 
attribute 'update'
2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver
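
octavia-lib calls .update() on each section of the status dict, so a None
section triggers exactly this failure. A minimal sketch of hardening the
update before it is sent; the section names are my assumption about the
expected dict layout, not verified against octavia-lib:

def sanitize_status(status):
    # Replace any missing/None section with an empty list so
    # update_loadbalancer_status() never sees a NoneType.
    for key in ('loadbalancers', 'listeners', 'pools',
                'members', 'healthmonitors'):
        if status.get(key) is None:
            status[key] = []
    return status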

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: Confirmed


** Tags: ovn

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860140

Title:
  [OVN] Provider driver sends malformed update to Octavia

Status in neutron:
  Confirmed

Bug description:
  In some corner cases, while updating a member, the OVN provider driver
  sends a malformed status update to Octavia that breaks the update operation
  and leaves resources stuck in the PENDING_UPDATE state.

  Example error below:

  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver [-] 
Unexpected exception in request_handler: 
octavia_lib.api.drivers.exceptions.UpdateStatusError: ('The status update had 
an unknown error.', "Error while updating the load balancer status: 'NoneType' 
object has no attribute 'update'")
 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver Traceback 
(most recent call last):
   
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 
442, in _update_status_to_octavia   
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
self._octavia_driver_lib.update_loadbalancer_status(status) 
 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver   File 
"/usr/lib/python3.6/site-packages/octavia_lib/api/drivers/driver_lib.py", line 
126, in update_loadbalancer_status 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
status_record=response.pop(constants.STATUS_RECORD, None))  
 
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver 
octavia_lib.api.drivers.exceptions.UpdateStatusError: 'NoneType' object has no 
attribute 'update'
  2020-01-17 09:04:28.792 27 ERROR networking_ovn.octavia.ovn_driver

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1860140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1859977] [NEW] [OVN] Update of floatingip creates new row in NBDB NAT table

2020-01-16 Thread Maciej Jozefczyk
Public bug reported:

While updating a FIP (for example, its description via the neutron API), the
corresponding NAT row gets duplicated in the NBDB NAT table.

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L797

In the `_create_or_update_floatingip` function, each update creates a new
row instead of updating the existing one.
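
A hedged sketch of the create-or-update pattern the function needs: look up
an existing NAT row for the FIP first and only insert when nothing matches.
All names below are illustrative, not the neutron or ovsdbapp API:

def create_or_update_nat(nbdb, fip):
    # Reuse the existing row for this external IP if there is one.
    existing = nbdb.find_nat_rule(
        external_ip=fip['floating_ip_address'], nat_type='dnat_and_snat')
    if existing:
        nbdb.update_nat_rule(existing, logical_ip=fip['fixed_ip_address'])
    else:
        nbdb.create_nat_rule(
            external_ip=fip['floating_ip_address'],
            logical_ip=fip['fixed_ip_address'], nat_type='dnat_and_snat')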

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New


** Tags: ovn

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1859977

Title:
  [OVN] Update of floatingip creates new row in NBDB NAT table

Status in neutron:
  New

Bug description:
  While updating a FIP (for example, its description via the neutron API),
  the corresponding NAT row gets duplicated in the NBDB NAT table.

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L797

  In the `_create_or_update_floatingip` function, each update creates a new
  row instead of updating the existing one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1859977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1856523] [NEW] Sometimes instance can't get public keys due to cirros metadata request failure

2019-12-15 Thread Maciej Jozefczyk
Public bug reported:

On our CI we see random failures of random jobs related to getting public keys 
from metadata.
As an example I would like to show this change [1]. In addition to the current 
implementation of tests, it adds three instances and tests security groups.

Sometimes random jobs like:
neutron-tempest-plugin-scenario-linuxbridge
neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-stein
and others fail on checking SSH connectivity to a just-created instance.

* It didn't work because the instance refused public key authentication, for 
example:

2019-12-13 14:43:48,694 31953 INFO [tempest.lib.common.ssh] Creating ssh 
connection to '172.24.5.186:22' as 'cirros' with public key authentication
2019-12-13 14:43:48,704 31953 WARNING  [tempest.lib.common.ssh] Failed to 
establish authenticated ssh connection to cirros@172.24.5.186 ([Errno None] 
Unable to connect to port 22 on 172.24.5.186). Number attempts: 1. Retry after 
2 seconds.


* While checking the instance console log we can see that the instance failed 
to get the public keys list on boot (FIP: 172.24.5.186, Instance IP: 10.1.0.10):
-
cirros-ds 'net' up at 11.67
checking http://169.254.169.254/2009-04-04/instance-id
successful after 1/20 tries: up 12.13. iid=i-003c
failed to get http://169.254.169.254/2009-04-04/meta-data/public-keys
warning: no ec2 metadata for public-keys
-

* In addition to the current Neutron logs I added more debug logging to the 
Neutron Metadata Agent in order to find out whether the response from Nova 
Metadata is empty; then I verified the Neutron Metadata logs related to this 
instance:
-
Dec 13 14:43:49.572244 ubuntu-bionic-rax-ord-0013383633 
neutron-metadata-agent[17234]: DEBUG neutron.agent.metadata.agent [-] REQUEST: 
HEADERS {'X-Forwarded-For': '10.1.0.10', 'X-Instance-ID': 
'e77a44fc-249f-4c85-8f9c-40f299534c12', 'X-Tenant-ID': 
'8975f89b119046b48f5a674fa6a296c3', 'X-Instance-ID-Signature': 
'908153d94493c68c9cb8fae8aa78fab18244a260d7fe55b5b707ed9b369f45cd'} DATA: b'' 
URL: http://10.210.224.88:8775/2009-04-04/meta-data/public-keys {{(pid=17720) 
_proxy_request /opt/stack/neutron/neutron/agent/metadata/agent.py:214}}
Dec 13 14:43:49.572451 ubuntu-bionic-rax-ord-0013383633 
neutron-metadata-agent[17234]: DEBUG neutron.agent.metadata.agent [-] RESPONSE: 
HEADERS: {'Content-Length': '32', 'Content-Type': 'text/plain; charset=UTF-8', 
'Connection': 'close'} DATA: b'0=tempest-keypair-test-231375855' {{(pid=17720) 
_proxy_request /opt/stack/neutron/neutron/agent/metadata/agent.py:217}}
Dec 13 14:43:49.572977 ubuntu-bionic-rax-ord-0013383633 
neutron-metadata-agent[17234]: INFO eventlet.wsgi.server [-] 10.1.0.10, 
"GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 200  len: 168 time: 
0.3123491
-

The response was 200 with body: '0=tempest-keypair-test-231375855'. It is
the key also used for other instances, so that part worked.


Conclusions:
1) Neutron metadata responds with 200
2) Nova metadata responds with 200 and valid data

Questions:
1) Is this a cirros issue? Why is there no retry?
2) Maybe it's a network issue and the data are not sent back (connection 
dropped during delivery)?
3) Why don't we have more logs in cirros on this request failure?

[1] https://review.opendev.org/#/c/682369/
[2] https://review.opendev.org/#/c/698001/
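
For illustration, a minimal sketch of the retry the report wishes the guest
performed: re-fetch the EC2-style metadata path a few times before giving up,
instead of failing after a single attempt:

import time
import urllib.request

URL = 'http://169.254.169.254/2009-04-04/meta-data/public-keys'

def fetch_public_keys(retries=5, delay=2):
    for _ in range(retries):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read().decode()
        except OSError:
            # Covers timeouts and connection resets; retry after a pause.
            time.sleep(delay)
    return None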

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Sometimes instance cant get public keys due to cirros
+ Sometimes instance can't get public keys due to cirros metadata request 
failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1856523

Title:
  Sometimes instance can't get public keys due to cirros metadata
  request failure

Status in neutron:
  New

Bug description:
  On our CI we see random failures of random jobs related to getting public 
keys from metadata.
  As an example I would like to show this change [1]. In addition to the 
current implementation of tests, it adds three instances and tests security 
groups.

  Sometimes random jobs like:
  neutron-tempest-plugin-scenario-linuxbridge
  neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-stein
  and others fail on checking SSH connectivity to a just-created instance.

  * It didn't work because the instance refused public key authentication, 
for example:
  

  2019-12-13 14:43:48,694 31953 INFO [tempest.lib.common.ssh] Creating ssh 

[Yahoo-eng-team] [Bug 1848213] [NEW] Do not pass port-range to backend if all ports specified in security group rule

2019-10-15 Thread Maciej Jozefczyk
Public bug reported:

If a user creates a security group rule specifying all the ports, like
below:

openstack security group rule create --protocol udp --ingress --dst-port
1:65535 47420676-21d8-4d82-b43c-73e100c5b397

the rule shouldn't be passed with ranges to the neutron ml2 backend. For
some backends, like OVN, this leads to non-optimal flow creation.

We have potentially two ways to solve this:
1) Do not accept such requests (HTTP 400)
2) Modify the rule on the fly somewhere around _validate_port_range() in 
./neutron/db/securitygroups_db.py to drop the max and min ports and accept all 
traffic for the given protocol (see the sketch below).
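
A minimal sketch of option 2, assuming a normalization step around
_validate_port_range(); the helper name and placement are illustrative:

PORT_MIN, PORT_MAX = 1, 65535

def normalize_port_range(port_range_min, port_range_max):
    # A rule covering every port equals "no port filter at all", so drop
    # both bounds and let the backend match the whole protocol.
    if (port_range_min, port_range_max) == (PORT_MIN, PORT_MAX):
        return None, None
    return port_range_min, port_range_max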

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New

** Description changed:

- If user creates a security group rule specyfing all the ports, like
+ If user creates a security group rule specifying all the ports, like
  above:
  
  openstack security group rule create --protocol udp --ingress --dst-port
  1:65535 47420676-21d8-4d82-b43c-73e100c5b397
  
  the rule shouldn't be passed with ranges to the neutron ml2 backend. For
  some backends, like OVN, this leads to not optimal flows creation.
  
  We have potentially two ways to solve this:
  1) Do not accept such kind of requests (HTTP 400)
  2) Modify the rule in-fly somewhere around _validate_port_range() in 
./neutron/db/securitygroups_db.py to drop max and min ports, and accept all 
traffic for given protocol.

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1848213

Title:
  Do not pass port-range to backend if all ports specified in security
  group rule

Status in neutron:
  New

Bug description:
  If a user creates a security group rule specifying all the ports, like
  below:

  openstack security group rule create --protocol udp --ingress --dst-
  port 1:65535 47420676-21d8-4d82-b43c-73e100c5b397

  the rule shouldn't be passed with ranges to the neutron ml2 backend.
  For some backends, like OVN, this leads to non-optimal flow creation.

  We have potentially two ways to solve this:
  1) Do not accept such requests (HTTP 400)
  2) Modify the rule on the fly somewhere around _validate_port_range() in 
./neutron/db/securitygroups_db.py to drop the max and min ports and accept all 
traffic for the given protocol.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1848213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1834045] [NEW] Live-migration double binding doesn't work with OVN

2019-06-24 Thread Maciej Jozefczyk
Public bug reported:

For ml2/OVN live-migration doesn't work. After spending some time
debugging this issue I found that it's potentially more complicated and
not related to OVN itself.

Here is the full story behind not working live-migration while using OVN
in latest u/s master.

To speed up live-migration, double binding was introduced in neutron [1] and nova 
[2]. It implements this blueprint [3]. In short, it creates a double binding 
(ACTIVE and INACTIVE) to verify whether the network binding can be done on the 
destination host and then starts live-migration (to not waste time in case of 
rollback).
This mechanism became the default in Stein [4]. So before the actual qemu 
live-migration neutron should send 'network-vif-plugged' to nova and then the 
migration is run.

While using OVN this mechanism doesn't work. The 'network-vif-plugged'
notification is not being sent, so live-migration is stuck at the beginning.

Let's check how those notifications are sent. On every change of the 'status'
field (sqlalchemy event) in a neutron.ports row [5], function [6] is
executed, and it is responsible for sending the 'network-vif-unplugged' and
'network-vif-plugged' notifications.

During pre_live_migration tasks two bindings and binding levels are created. 
At the end of this process I found that commit_port_binding() is executed [7]. 
At this time the neutron port status in the db is DOWN. 
I found that at the end of commit_port_binding() [8], after the 
neutron_lib.callbacks.registry notification is sent, the port status moves to 
UP. For ml2/OVN it stays DOWN. This is the first difference that I found 
between ml2/ovs and ml2/ovn.

After a bit of digging I figured out how 'network-vif-plugged' is triggered in 
ml2/ovs.
Let's see how this is done.

1. In the list of registered callbacks in ml2/ovs [8] we have a configured
callback from the class ovo_rpc._ObjectChangeHandler [9], and at the end of
commit_port_binding() this callback is used.

-
neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event
-

2. It is responsible for pushing new port object revisions to agents,
like:


Jun 24 10:01:01 test-migrate-1 neutron-server[3685]: DEBUG 
neutron.api.rpc.handlers.resources_rpc [None 
req-1430f349-d644-4d33-8833-90fad0124dcd service neutron] Pushing event updated 
for resources: {'Port': 
['ID=3704a567-ef4c-4f6d-9557-a1191de07c4a,revision_number=10']} {{(pid=3697) 
push /opt/stack/neutron/neutron/api/rpc/handlers/resources_rpc.py:243}}


3. The OVS agent consumes it and sends an RPC back to the neutron server that 
the port is actually UP (on the source node!):

Jun 24 10:01:01 test-migrate-1 neutron-openvswitch-agent[18660]: DEBUG 
neutron.agent.resource_cache [None req-1430f349-d644-4d33-8833-90fad0124dcd 
service neutron] Resource Port 3704a567-ef4c-4f6d-9557-a1191de07c4a updated 
(revision_number 8->10). Old fields: {'status': u'ACTIVE', 'bindings': 
[PortBinding(host='test-migrate-1',port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,profile={},status='INACTIVE',vif_details={"port_filter":
 true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": 
false},vif_type='ovs',vnic_type='normal'), 
PortBinding(host='test-migrate-2',port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,profile={"migrating_to":
 "test-migrate-1"},status='ACTIVE',vif_details={"port_filter": true, 
"bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": 
false},vif_type='ovs',vnic_type='normal')], 'binding_levels': 
[PortBindingLevel(driver='openvswitch',host='test-migrate-1',level=0,port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,segment=NetworkSegment(c6866834-4577-497f-a6c8-ff9724a82e59),segment_id=c6866834-4577-497f-a6c8-ff9724a82e59),
 
PortBindingLevel(driver='openvswitch',host='test-migrate-2',level=0,port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,segment=NetworkSegment(c6866834-4577-497f-a6c8-ff9724a82e59),segment_id=c6866834-4577-497f-a6c8-ff9724a82e59)]}
 New fields: {'status': u'DOWN', 'bindings': 
[PortBinding(host='test-migrate-1',port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,profile={},status='ACTIVE',vif_details={"port_filter":
 true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": 
false},vif_type='ovs',vnic_type='normal'), 
PortBinding(host='test-migrate-2',port_id=3704a567-ef4c-4f6d-9557-a1191de07c4a,profile={"migrating_to":
 
"test-migrate-1"},status='INACTIVE',vif_details=None,vif_type='unbound',vnic_type='normal')],
 'binding_levels': 

[Yahoo-eng-team] [Bug 1821303] [NEW] Online data migration bases on hit count rather than total count

2019-03-22 Thread Maciej Jozefczyk
Public bug reported:

Imagine the online data migration script reported 50 matched rows but no
executed migrations, like:

Running batches of 50 until complete
50 rows matched query fake_migration, 50 migrated
50 rows matched query fake_migration, 40 migrated
50 rows matched query fake_migration, 0 migrated
+----------------+--------------+-----------+
|   Migration    | Total Needed | Completed |
+----------------+--------------+-----------+
| fake_migration |     150      |        90 |
+----------------+--------------+-----------+

After the last run, online data migration will not step to the next batch,
even though there are still rows that should be checked/migrated.

This is because the condition that decides whether the migration is done looks 
at the 'completed' counter instead of the 'total needed' counter.
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L733
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L744

For some of the online data migration scripts, like:
https://github.com/openstack/nova/blob/master/nova/objects/virtual_interface.py#L154

the operator could be misled, because the migration ends but in fact there are
still rows that need to be checked (see the sketch below).
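
A minimal sketch (not the nova code) of why the loop can stop early: each
batch returns (rows matched, rows migrated), and the stop condition below
only looks at the migrated count:

def run_migration(fake_migration, batch_size=50):
    total, done = 0, 0
    while True:
        matched, migrated = fake_migration(batch_size)
        total += matched
        done += migrated
        # Buggy condition: stop as soon as a batch migrates 0 rows, even
        # though 'matched' says rows still need attention.
        if migrated == 0:
            break
        # A condition based on the matched count would keep going:
        # if matched == 0: break
    return total, done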

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821303

Title:
  Online data migration bases on hit count rather than total count

Status in OpenStack Compute (nova):
  New

Bug description:
  Imagine the online data migration script reported 50 matched rows but no
  executed migrations, like:

  Running batches of 50 until complete
  50 rows matched query fake_migration, 50 migrated
  50 rows matched query fake_migration, 40 migrated
  50 rows matched query fake_migration, 0 migrated
  +----------------+--------------+-----------+
  |   Migration    | Total Needed | Completed |
  +----------------+--------------+-----------+
  | fake_migration |     150      |        90 |
  +----------------+--------------+-----------+

  After the last run, online data migration will not step to the next batch,
  even though there are still rows that should be checked/migrated.

  This is because the condition that decides whether the migration is done 
looks at the 'completed' counter instead of the 'total needed' counter.
  https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L733
  https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L744

  For some of the online data migration scripts, like:
  
https://github.com/openstack/nova/blob/master/nova/objects/virtual_interface.py#L154

  the operator could be misled, because the migration ends but in fact
  there are still rows that need to be checked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1821303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1796854] [NEW] Neutron doesn't respect advsvc role while creating port

2018-10-09 Thread Maciej Jozefczyk
Public bug reported:

Neutron doesn't allow a user with the 'advsvc' role to add a port in another 
tenant's network.
The introduced change:
https://review.openstack.org/#/c/101281/10
should allow that, but in fact neutron-lib has no validation for the advsvc 
role:
https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/attributes.py#L28

Error:
Specifying 'project_id' or 'tenant_id' other than the authenticated project in 
request requires admin privileges



Version

Devstack master.



How to reproduce


1. Setup devstack master, add new project and user to this project with role 
advsvc
source devstack/openrc admin demo

openstack project create advsvc-project
openstack user create --project advsvc-project --password test 
advsvc-project-user
openstack role create advsvc
openstack role add --user advsvc-project-user --project advsvc-project advsvc
openstack role add --user advsvc-project-user --project advsvc-project member


2. Create network in other project.
openstack project create test-project
openstack user create --project test-project --password test test-project-user
openstack role add --user test-project-user --project test-project member

neutron net-create private-net-test-user --provider:network_type=vxlan
--provider:segmentation_id=1234 --project-id [[ test-project-id ]]

neutron subnet-create private-net-test-user --name private-subnet-test-
user --allocation-pool start=10.13.12.100,end=10.13.12.130 10.13.12.0/24
--dns-nameserver 8.8.8.8 --project-id [[ test-project-id ]]

3. Create a port in test-project tenant by user with advsvc role:

stack@mjozefcz-devstack:~$ neutron port-create --tenant-id 
865073224f7b4e9d9fdd4a446e3a4af4 private-net-test-user
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Specifying 'project_id' or 'tenant_id' other than the authenticated project in 
request requires admin privileges
Neutron server returns request_ids: ['req-e841edb1-2cf2-47b6-a493-11a56114a323']
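
A hedged sketch of the missing check the report points at: a cross-tenant
project_id should be allowed when the caller context carries the 'advsvc'
role, not only when it is admin. The names are assumptions, not the
neutron-lib code:

def can_set_foreign_project_id(context):
    # Admins may always set a foreign project_id; 'advsvc' should be
    # allowed to as well, per the referenced change.
    return context.is_admin or 'advsvc' in getattr(context, 'roles', [])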

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1796854

Title:
  Neutron doesn't respect advsvc role while creating port

Status in neutron:
  New

Bug description:
  Neutron doesn't allow a user with the 'advsvc' role to add a port in 
another tenant's network.
  The introduced change:
  https://review.openstack.org/#/c/101281/10
  should allow that, but in fact neutron-lib has no validation for the advsvc 
role:
  
https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/attributes.py#L28

  Error:
  Specifying 'project_id' or 'tenant_id' other than the authenticated project 
in request requires admin privileges


  
  Version
  
  Devstack master.

  
  
  How to reproduce
  

  1. Setup devstack master, add new project and user to this project with role 
advsvc
  source devstack/openrc admin demo

  openstack project create advsvc-project
  openstack user create --project advsvc-project --password test 
advsvc-project-user
  openstack role create advsvc
  openstack role add --user advsvc-project-user --project advsvc-project advsvc
  openstack role add --user advsvc-project-user --project advsvc-project member

  
  2. Create network in other project.
  openstack project create test-project
  openstack user create --project test-project --password test test-project-user
  openstack role add --user test-project-user --project test-project member

  neutron net-create private-net-test-user --provider:network_type=vxlan
  --provider:segmentation_id=1234 --project-id [[ test-project-id ]]

  neutron subnet-create private-net-test-user --name private-subnet-
  test-user --allocation-pool start=10.13.12.100,end=10.13.12.130
  10.13.12.0/24 --dns-nameserver 8.8.8.8 --project-id [[ test-project-id
  ]]

  3. Create a port in test-project tenant by user with advsvc role:

  stack@mjozefcz-devstack:~$ neutron port-create --tenant-id 
865073224f7b4e9d9fdd4a446e3a4af4 private-net-test-user
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Specifying 'project_id' or 'tenant_id' other than the authenticated project 
in request requires admin privileges
  Neutron server returns request_ids: 
['req-e841edb1-2cf2-47b6-a493-11a56114a323']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1796854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1751923] [NEW] _heal_instance_info_cache periodic task bases on port list from memory, not from neutron server

2018-02-26 Thread Maciej Jozefczyk
ot;: [], "cidr": 
"fdda:5d77:e18e::/64", "gateway": {"meta": {}, "version": 6, "type": "gateway", 
"address": "fdda:5d77:e18e::1"}}, {"ips": [{"meta": {}, "version": 4, "type": 
"fixed", "floating_ips": [], "address": "10.0.0.5"}], "version": 4, "meta": 
{"dhcp_server": "10.0.0.2"}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", 
"gateway": {"meta": {}, "version": 4, "type": "gateway", "address": 
"10.0.0.1"}}], "meta": {"injected": false, "tenant_id": 
"0314943f52014a5b9bc56b73bec475e6", "mtu": 1450}, "id": 
"96343d33-5dd2-4289-b0cc-e6c664c2ddd9", "label": "private"}, "devname": 
"tapb89d6863-fb", "vnic_type": "normal", "qbh_params": null, "meta": {}, 
"details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": 
true}, "address": "fa:16:3e:53:23:1c", "active": true, "type": "ovs", "id": 
"b89d6863-fb4c-405c-89f9-698bd9773ad6", "qbg_params": null}, {"profile": {}, 
"ovs_interfaceid": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", 
"preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": 
[{"ips": [{"meta": {}, "version": 6, "type": "fixed", "floating_ips": [], 
"address": "2001:db8::e"}], "version": 6, "meta": {}, "dns": [], "routes": [], 
"cidr": "2001:db8::/64", "gateway": {"meta": {}, "version": 6, "type": 
"gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [], "address": "172.24.4.15"}], "version": 4, 
"meta": {}, "dns": [], "routes": [], "cidr": "172.24.4.0/24", "gateway": 
{"meta": {}, "version": 4, "type": "gateway", "address": "172.24.4.1"}}], 
"meta": {"injected": false, "tenant_id": "9c6f74dab29f4c738e82320075fa1f57", 
"mtu": 1500}, "id": "9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, 
"devname": "tapa74c9ee8-c4", "vnic_type": "normal", "qbh_params": null, "meta": 
{}, "details": {"port_filter": true, "datapath_type": "system", 
"ovs_hybrid_plug": true}, "address": "fa:16:3e:cf:0c:e0", "active": true, 
"type": "ovs", "id": "a74c9ee8-c426-48ef-890f-3988ecbe95ff", "qbg_params": 
null}, {"profile": {}, "ovs_interfaceid": 
"71e6c6ad-8016-450f-93f2-75e7e014084d", "preserve_on_delete": false, "network": 
{"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 6, "type": 
"fixed", "floating_ips": [], "address": "2001:db8::c"}], "version": 6, "meta": 
{}, "dns": [], "routes": [], "cidr": "2001:db8::/64", "gateway": {"meta": {}, 
"version": 6, "type": "gateway", "address": "2001:db8::2"}}, {"ips": [{"meta": 
{}, "version": 4, "type": "fixed", "floating_ips": [], "address": 
"172.24.4.16"}], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": 
"172.24.4.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "172.24.4.1"}}], "meta": {"injected": false, "tenant_id": 
"9c6f74dab29f4c738e82320075fa1f57", "mtu": 1500}, "id": 
"9e702a96-2744-40a2-a649-33f935d83ad3", "label": "public"}, "devname": 
"tap71e6c6ad-80", "vnic_type": "normal", "qbh_params": null, "meta&qu

[Yahoo-eng-team] [Bug 1747437] [NEW] DHCP TestDeviceManager tests fail when IPv6 is not enabled on testing host

2018-02-05 Thread Maciej Jozefczyk
Public bug reported:

When the testing host does not have IPv6 enabled, the listed tests fail
on their expected-calls assertions:

networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup_device_is_ready
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup_ipv6


Expected:
[call.get_device_name({'fixed_ips': [{'subnet_id': 
'---', 'subnet': {'network_id': 
'12345678-1234-5678-1234567890ab', 'enable_dhcp': True, 'tenant_id': 
'---', 'ip_version': 4, 'id': 
'---', 'allocation_pools': {'start': '172.9.9.2', 
'id': '', 'end': '172.9.9.254'}, 'name': '', 'host_routes': [], 
'dns_nameservers': [], 'gateway_ip': '172.9.9.1', 'ipv6_address_mode': None, 
'cidr': '172.9.9.0/24', 'ipv6_ra_mode': None}, 'ip_address': '172.9.9.9'}], 
'device_id': 'dhcp-12345678-1234--1234567890ab', 'network_id': 
'12345678-1234-5678-1234567890ab', 'device_owner': '', 'mac_address': 
'aa:bb:cc:dd:ee:ff', 'id': '12345678-1234--1234567890ab', 
'allocation_pools': {'start': '172.9.9.2', 'id': '', 'end': '172.9.9.254'}}),
 call.configure_ipv6_ra('qdhcp-12345678-1234-5678-1234567890ab', 'default', 
0),
 call.plug('12345678-1234-5678-1234567890ab', 
'12345678-1234--1234567890ab', 'tap12345678-12', 'aa:bb:cc:dd:ee:ff', 
mtu=None, namespace='qdhcp-12345678-1234-5678-1234567890ab'),
 call.init_l3('tap12345678-12', ['172.9.9.9/24', '169.254.169.254/16'], 
namespace='qdhcp-12345678-1234-5678-1234567890ab')]


Actual:
[call.get_device_name({'fixed_ips': [{'subnet_id': 
'---', 'subnet': {'network_id': 
'12345678-1234-5678-1234567890ab', 'enable_dhcp': True, 'tenant_id': 
'---', 'ip_version': 4, 'id': 
'---', 'allocation_pools': {'start': '172.9.9.2', 
'id': '', 'end': '172.9.9.254'}, 'name': '', 'host_routes': [], 
'dns_nameservers': [], 'gateway_ip': '172.9.9.1', 'ipv6_address_mode': None, 
'cidr': '172.9.9.0/24', 'ipv6_ra_mode': None}, 'ip_address': '172.9.9.9'}], 
'device_id': 'dhcp-12345678-1234--1234567890ab', 'network_id': 
'12345678-1234-5678-1234567890ab', 'device_owner': '', 'mac_address': 
'aa:bb:cc:dd:ee:ff', 'id': '12345678-1234--1234567890ab', 
'allocation_pools': {'start': '172.9.9.2', 'id': '', 'end': '172.9.9.254'}}),
 call.plug('12345678-1234-5678-1234567890ab', 
'12345678-1234--1234567890ab', 'tap12345678-12', 'aa:bb:cc:dd:ee:ff', 
mtu=None, namespace='qdhcp-12345678-1234-5678-1234567890ab'),
 call.init_l3('tap12345678-12', ['172.9.9.9/24', '169.254.169.254/16'], 
namespace='qdhcp-12345678-1234-5678-1234567890ab')]


The problem occurs because
neutron.common.ipv6_utils.is_enabled_and_bind_by_default() is not
mocked.
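
A minimal sketch of the fix the report implies: patch the IPv6 probe so the
expected calls no longer depend on the testing host's networking
(illustrative test scaffolding, not the actual patch):

from unittest import mock

with mock.patch(
        'neutron.common.ipv6_utils.is_enabled_and_bind_by_default',
        return_value=True):
    # Run the TestDeviceManager setup under test here; with the probe
    # forced to True, configure_ipv6_ra() shows up in the call list again.
    pass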

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747437

Title:
  DHCP TestDeviceManager tests fail when IPv6 is not enabled on testing
  host

Status in neutron:
  New

Bug description:
  When the testing host does not have IPv6 enabled, the listed tests fail
  because their expected-call assertions no longer match:

  
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup
  
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup_device_is_ready
  
networking_ovh.tests.unit.agent.linux.test_dhcp_agent.TestDeviceManager.test_setup_ipv6

  
  Expected:
  [call.get_device_name({'fixed_ips': [{'subnet_id': 
'---', 'subnet': {'network_id': 
'12345678-1234-5678-1234567890ab', 'enable_dhcp': True, 'tenant_id': 
'---', 'ip_version': 4, 'id': 
'---', 'allocation_pools': {'start': '172.9.9.2', 
'id': '', 'end': '172.9.9.254'}, 'name': '', 'host_routes': [], 
'dns_nameservers': [], 'gateway_ip': '172.9.9.1', 'ipv6_address_mode': None, 
'cidr': '172.9.9.0/24', 'ipv6_ra_mode': None}, 'ip_address': '172.9.9.9'}], 
'device_id': 'dhcp-12345678-1234--1234567890ab', 'network_id': 
'12345678-1234-5678-1234567890ab', 'device_owner': '', 'mac_address': 
'aa:bb:cc:dd:ee:ff', 'id': '12345678-1234--1234567890ab', 
'allocation_pools': {'start': '172.9.9.2', 'id': '', 'end': '172.9.9.254'}}),
   call.configure_ipv6_ra('qdhcp-12345678-1234-5678-1234567890ab', 
'default', 0),
   call.plug('12345678-1234-5678-1234567890ab', 
'12345678-1234--1234567890ab', 'tap12345678-12', 'aa:bb:cc:dd:ee:ff', 
mtu=None, namespace='qdhcp-12345678-1234-5678-1234567890ab'),
   call.init_l3('tap12345678-12', ['172.9.9.9/24', '169.254.169.254/16'], 
namespace='qdhcp-12345678-1234-5678-1234567890ab')]

  
  Actual:
  [call.get_device_name({'fixed_ips': [{'subnet_id': 
'---', 'subnet': {'network_id': 

[Yahoo-eng-team] [Bug 1742747] [NEW] RT overrides default allocation_ratios for ram cpu and disk

2018-01-11 Thread Maciej Jozefczyk
 
 615 compute_node.update_from_virt_driver(resources)

 
(Pdb++) self.cpu_allocation_ratio
0.0

self.cpu_allocation_ratio comes directly from config:
https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L397
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L148


Environment
===
Latest master


How to reproduce
===
1. Spawn devstack
2. Leave configuration files untouched
3. Observe overrides in 
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py?utf8=✓#L611
4. Watch RT send the ratios to placement, which responds with 400 Bad Request.

** Affects: nova
     Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742747

Title:
  RT overrides default allocation_ratios for ram cpu and disk

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Resource tracker overrides the default allocation ratio values with values
  from configuration files without checking whether those values are valid.

  Allocation ratio values are taken directly from the configuration files. This 
is a good approach unless the ratios in the configuration file are set to 0.0, 
and the default configuration sets exactly that:
  https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L397
  So if an allocation ratio is set to 0.0 (or not set at all, since 0.0 is the 
default), sending that ratio to placement with the RT update fails.
  *BUT here comes the solution*:
  
https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L198

  When we read the ComputeNode object from the DB we also check whether the
  ratios are 0.0; if they are, we override them (CPU: 16x, RAM: 1.5x, DISK: 1x).
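
  A minimal sketch of that defaulting step, with the constants taken from the
  values quoted above (the real implementation lives in nova's ComputeNode
  object and may differ):

      # Hedged sketch: treat a 0.0 (or missing) ratio as "unset" and fall
      # back to the defaults instead of persisting a value that placement
      # will later reject.
      DEFAULT_RATIOS = {
          'cpu_allocation_ratio': 16.0,
          'ram_allocation_ratio': 1.5,
          'disk_allocation_ratio': 1.0,
      }

      def apply_default_ratios(compute_node_dict):
          for field, default in DEFAULT_RATIOS.items():
              if not compute_node_dict.get(field):
                  compute_node_dict[field] = default
          return compute_node_dict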

  But just after the initialization of the ComputeNode object here:
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py?utf8=✓#L539
  we copy the actual resources into it (via _copy_resources).

  We override the allocations on ComputeNode with those taken from the
  configuration file - and that is fine: if an operator wants to change the
  ratios, they edit the config file and restart the service.

  But what if the operator leaves those parameters untouched in the config? Here 
comes the problem: those params stay at 0.0, which the placement API rejects by 
raising:
  InvalidInventoryCapacity: Invalid inventory for 'VCPU' on resource provider 
'52559824-5fb1-424b-a4cf-79da9199447d'. The reserved value is greater than or 
equal to total.
  The exception is raised here:
  
https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L228


  
  Some code around the problem:
  Code:
  > /opt/stack/nova/nova/compute/resource_tracker.py(610)
   602     def _copy_resources(self, compute_node, resources):
   603         """Copy resource values to supplied compute_node."""
   604         # purge old stats and init with anything passed in by the driver
   605         self.stats.clear()
   606         self.stats.digest_stats(resources.get('stats'))
   607         compute_node.stats = copy.deepcopy(self.stats)
   608

[Yahoo-eng-team] [Bug 1729621] [NEW] Inconsistent value for vcpu_used

2017-11-02 Thread Maciej Jozefczyk
Public bug reported:

Description
===

Nova updates hypervisor resources using the function
./nova/compute/resource_tracker.py:update_available_resource().

In the case of *shut-down* instances this can produce inconsistent values
for resources like vcpu_used.

Resources are taken from the function self.driver.get_available_resource():
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/compute/resource_tracker.py#L617
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/virt/libvirt/driver.py#L5766

This function calculates allocated vCPUs using _get_vcpu_total().
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/virt/libvirt/driver.py#L5352

As we can see, _get_vcpu_total() calls *self._host.list_guests()* without
the "only_running=False" parameter, so it does not count shut-down
instances.
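
A minimal sketch of the difference (the per-guest 'vcpus' attribute is
assumed for illustration; nova's real accounting differs):

    # Hedged sketch: with only_running=True (the default) shut-off domains
    # are skipped, which makes vcpus_used drop while the update is in
    # flight; only_running=False keeps the count stable.
    def count_vcpus(host, include_stopped):
        guests = host.list_guests(only_running=not include_stopped)
        return sum(guest.vcpus for guest in guests)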

At the end of the resource update process the function
_update_available_resource() is being called:
> /opt/stack/nova/nova/compute/resource_tracker.py(733)

 677     @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE)
 678     def _update_available_resource(self, context, resources):
 679
 681         # initialize the compute node object, creating it
 682         # if it does not already exist.
 683         self._init_compute_node(context, resources)

It initializes the compute node object with resources calculated without
shut-down instances. If the compute node object already exists, it
*UPDATES* its fields - *so for a while nova-api reports resource values
that differ from reality.*

 731         # update the compute_node
 732         self._update(context, cn)

The inconsistency is fixed automatically later in the code path:
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/compute/resource_tracker.py#L709

But on heavily loaded hypervisors (say 100 active instances and 30
shut-down instances) the wrong values sit in the nova database for about
4-5 seconds (in my use case), which can cause other problems such as
scheduling onto an already full hypervisor (because the scheduler has
wrong information about hypervisor usage).

Steps to reproduce
==

1) Start devstack
2) Create 120 instances
3) Stop some instances
4) Watch the values flap in nova hypervisor-show:
nova hypervisor-show e6dfc16b-7914-48fb-a235-6fe3a41bb6db

Expected result
===
The returned values should stay constant for the duration of the test.

Actual result
=
while true; do echo -n "$(date) "; echo "select hypervisor_hostname, vcpus_used 
from compute_nodes where hypervisor_hostname='example.compute.node.com';" | 
mysql nova_cell1; sleep 0.3; done

Thu Nov  2 14:50:09 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:10 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:10 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:10 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:12 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:12 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:12 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:13 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:13 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:13 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:14 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:14 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:14 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:15 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:15 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:15 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:16 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:16 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:16 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:17 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:17 UTC 2017 example.compute.node.com  117
Thu Nov  2 14:50:17 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:17 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:18 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:18 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:18 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:19 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:19 UTC 2017 example.compute.node.com  120
Thu Nov  2 14:50:19 UTC 2017 example.compute.node.com  120

Bad values were stored in the nova DB for about 5 seconds. During this time
nova-scheduler could place new instances on this host.

Environment
===
Devstack master (f974e3c3566f379211d7fdc790d07b5680925584).
Releases at least as far back as Newton are certainly affected.

** Affects: nova
 Importance: Undecided
 Status: New

** 

[Yahoo-eng-team] [Bug 1285000] Re: instance data resides on destination node when vm is deleted during live-migration

2017-08-08 Thread Maciej Jozefczyk
CONFIRMED FOR: Newton

** Changed in: nova
   Status: Expired => In Progress

** Changed in: nova
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285000

Title:
  instance data resides on destination node when vm is deleted during
  live-migration

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  If the VM is deleted during the live-migration process, there is a
possibility that the instance data residing on the destination compute node
is not deleted.
  Please refer to http://paste.openstack.org/show/69730/ to reproduce the
issue.

  IMO, one possible solution is to restrict the user from deleting the VM
  while live-migration is in progress.
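
  A minimal sketch of that restriction (the 'migrating' task-state value is
  assumed from nova's conventions; this is not the actual patch):

      # Hedged sketch: refuse the delete while a live-migration task is
      # still running, instead of cleaning up under a migrating instance.
      class InstanceBusy(Exception):
          pass

      def ensure_deletable(instance):
          if getattr(instance, 'task_state', None) == 'migrating':
              raise InstanceBusy(
                  'Instance %s is live-migrating; delete is not allowed '
                  'until the migration finishes or aborts.' % instance.uuid)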

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1285000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708465] [NEW] Neutron duplicated provider rule for ICMPv6 Router Advertisements

2017-08-03 Thread Maciej Jozefczyk
Public bug reported:

Change https://review.openstack.org/#/c/432506/ introduced a new way of
providing provider rules to the SG agent. ICMPv6 RA rule generation was
moved to neutron/db/securitygroups_rpc_base.py, but it was not removed from
neutron/agent/linux/iptables_firewall.py.

As a result, each time an SG rule is updated there is a warning about rule
duplication in the neutron logs:

2017-08-03 10:41:12.873 28184 WARNING
neutron.agent.linux.iptables_manager [-] Duplicate iptables rule
detected. This may indicate a bug in the the iptables rule generation
code. Line: -A neutron-openvswi-PREROUTING -i gwbf6069f7-2cc -j CT
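
A minimal sketch of why the warning fires (logic reduced to the essentials;
this is not neutron's actual implementation): the same RA rule now arrives
both from the RPC-provided provider rules and from the agent's local
generation, and the repeat is spotted when the lists are merged.

    # Hedged sketch: merging the two rule sources; a repeated rule
    # triggers the duplicate warning quoted above.
    def merge_rules(provider_rules, agent_rules, warn):
        merged = list(provider_rules)
        for rule in agent_rules:
            if rule in merged:
                warn('Duplicate iptables rule detected: %s' % rule)
                continue
            merged.append(rule)
        return merged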


=== How to reproduce ===
1. Spawn devstack.
2. Boot VM
3. Add new rule to SG which this VM uses.
4. Observe neutron-openvswitch-agent logs.


=== Environment ===
Upstream master devstack.

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708465

Title:
  Neutron duplicated provider rule for ICMPv6 Router Advertisements

Status in neutron:
  New

Bug description:
  Change https://review.openstack.org/#/c/432506/ introduced a new way of
  providing provider rules to the SG agent. ICMPv6 RA rule generation was
  moved to neutron/db/securitygroups_rpc_base.py, but it was not removed
  from neutron/agent/linux/iptables_firewall.py.

  As a result, each time an SG rule is updated there is a warning about
  rule duplication in the neutron logs:

  2017-08-03 10:41:12.873 28184 WARNING
  neutron.agent.linux.iptables_manager [-] Duplicate iptables rule
  detected. This may indicate a bug in the the iptables rule generation
  code. Line: -A neutron-openvswi-PREROUTING -i gwbf6069f7-2cc -j CT

  
  === How to reproduce ===
  1. Spawn devstack.
  2. Boot VM
  3. Add new rule to SG which this VM uses.
  4. Observe neutron-openvswitch-agent logs.

  
  === Environment ===
  Upstream master devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1695888] [NEW] Test are unable to run because of old tox version

2017-06-05 Thread Maciej Jozefczyk
Public bug reported:

After the latest changes to tox.ini:
http://git.openstack.org/cgit/openstack/neutron/commit/?id=0479f0f9d2bf8fb857b2c683b5d7310e6dd9bf15

the configuration file cannot be parsed; it fails with this error:

# tox -e py27
Traceback (most recent call last):
  File "/usr/local/bin/tox", line 11, in 
sys.exit(cmdline())
  File "/usr/local/lib/python2.7/dist-packages/tox/session.py", line 38, in main
config = prepare(args)
  File "/usr/local/lib/python2.7/dist-packages/tox/session.py", line 26, in 
prepare
config = parseconfig(args)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 229, in 
parseconfig
parseini(config, inipath)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 729, in 
__init__
self.make_envconfig(name, section, reader._subs, config)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 760, in 
make_envconfig
res = meth(env_attr.name, env_attr.default)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 929, in 
getargvlist
return _ArgvlistReader.getargvlist(self, s)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1097, in 
getargvlist
replaced = reader._replace(current_command)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 973, in 
_replace
return Replacer(self, crossonly=crossonly).do_replace(value)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 992, in 
do_replace
return self.RE_ITEM_REF.sub(self._replace_match, x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1021, in 
_replace_match
return self._replace_substitution(match)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1067, in 
_replace_substitution
val = self._substitute_from_other_section(sub_key)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1058, in 
_substitute_from_other_section
crossonly=self.crossonly)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 973, in 
_replace
return Replacer(self, crossonly=crossonly).do_replace(value)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 992, in 
do_replace
return self.RE_ITEM_REF.sub(self._replace_match, x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1021, in 
_replace_match
return self._replace_substitution(match)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1067, in 
_replace_substitution
val = self._substitute_from_other_section(sub_key)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 1061, in 
_substitute_from_other_section
"substitution key %r not found" % key)
tox.ConfigError: ConfigError: substitution key 'posargs' not found


It is directly connected to the (already solved) #issue279 described here:
https://tox.readthedocs.io/en/latest/changelog.html


We need to raise the minimum tox version (minversion in tox.ini) from 2.3.1
to 2.3.2.
Gates are passing because they use tox from pip (tox-2.7.0) instead of the
Ubuntu system package.
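
A minimal sketch of checking the installed tox version up front
(pkg_resources ships with setuptools; this is illustrative only, the real
fix is simply raising minversion in tox.ini):

    # Hedged sketch: fail fast when the system tox predates 2.3.2, the
    # first release carrying the posargs substitution fix mentioned above.
    import pkg_resources

    def check_tox(minimum='2.3.2'):
        found = pkg_resources.get_distribution('tox').version
        if (pkg_resources.parse_version(found)
                < pkg_resources.parse_version(minimum)):
            raise SystemExit(
                'tox >= %s required, found %s' % (minimum, found))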


*Reproduce Steps
** git clone https://github.com/openstack/neutron.git
** pip install tox==2.3.1
** tox -e py27


* Versions
** Version: master upstream neutron
** Tox version: 2.3.1

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1695888

Title:
  Test are unable to run because of old tox version

Status in neutron:
  In Progress

Bug description:
  After latest changes on tox.ini:
  
http://git.openstack.org/cgit/openstack/neutron/commit/?id=0479f0f9d2bf8fb857b2c683b5d7310e6dd9bf15

  configuration file is unable to be parsed, because of error:

  # tox -e py27
  Traceback (most recent call last):
File "/usr/local/bin/tox", line 11, in 
  sys.exit(cmdline())
File "/usr/local/lib/python2.7/dist-packages/tox/session.py", line 38, in 
main
  config = prepare(args)
File "/usr/local/lib/python2.7/dist-packages/tox/session.py", line 26, in 
prepare
  config = parseconfig(args)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 229, in 
parseconfig
  parseini(config, inipath)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 729, in 
__init__
  self.make_envconfig(name, section, reader._subs, config)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 760, in 
make_envconfig
  res = meth(env_attr.

[Yahoo-eng-team] [Bug 1692513] Re: Nova uses bad URL for noVNC

2017-05-23 Thread Maciej Jozefczyk
Good point, Mateusz. I have checked the Ubuntu packages; the latest is
1:0.4+dfsg+1+20131010+gitf68af8af3d-4, and it is also provided for Ubuntu
Artful. So for now the problem doesn't exist.
The only change needed is for Devstack to clone from a tag (e.g. the newest
one, v0.6.2) rather than from master.


** No longer affects: horizon

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1692513

Title:
  Nova uses bad URL for noVNC

Status in devstack:
  In Progress

Bug description:
  Description
  ===
  Two weeks ago the noVNC project renamed the file vnc_auto.html to
vnc_lite.html:
  https://github.com/novnc/noVNC - commit
53f41f969228a47201ffe533f1ee550ff28d9041
  It looks the same as the old one.

  Because of that we need to change this URL in Nova as well. Currently
  nova-novncproxy always fails with a 404 error while trying to load
  vnc_auto.html from the filesystem.

  Steps to reproduce
  ==
  1. Spawn latest devstack.
  2. Create VM.
  3. Issue nova get-vnc-console INSTANCE_UUID novnc.

  Expected result
  ===
  noVNC console will be shown.

  Actual result
  =
  Given link is broken (HTTP 404).

  
  Environment
  ===
  nova - master branch

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1692513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1692513] Re: Nova uses bad URL for noVNC

2017-05-22 Thread Maciej Jozefczyk
PR Nova:
https://review.openstack.org/#/c/466710/2
PR Horizon:
https://review.openstack.org/#/c/466712/1
PR Devstack:
https://review.openstack.org/#/c/466714/1

** Also affects: devstack
   Importance: Undecided
   Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

** Changed in: devstack
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

** Changed in: horizon
   Status: New => In Progress

** Changed in: devstack
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1692513

Title:
  Nova uses bad URL for noVNC

Status in devstack:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  Two weeks ago the noVNC project renamed the file vnc_auto.html to
vnc_lite.html:
  https://github.com/novnc/noVNC - commit
53f41f969228a47201ffe533f1ee550ff28d9041
  It looks the same as the old one.

  Because of that we need to change this URL in Nova as well. Currently
  nova-novncproxy always fails with a 404 error while trying to load
  vnc_auto.html from the filesystem.

  Steps to reproduce
  ==
  1. Spawn latest devstack.
  2. Create VM.
  3. Issue nova get-vnc-console INSTANCE_UUID novnc.

  Expected result
  ===
  noVNC console will be shown.

  Actual result
  =
  Given link is broken (HTTP 404).

  
  Environment
  ===
  nova - master branch

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1692513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1692513] [NEW] Nova uses bad URL for noVNC

2017-05-22 Thread Maciej Jozefczyk
Public bug reported:

Description
===
Two weeks ago the noVNC project renamed the file vnc_auto.html to vnc_lite.html:
https://github.com/novnc/noVNC - commit 53f41f969228a47201ffe533f1ee550ff28d9041
It looks the same as the old one.

Because of that we need to change this URL in Nova as well. Currently
nova-novncproxy always fails with a 404 error while trying to load
vnc_auto.html from the filesystem.
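
A minimal sketch of a tolerant lookup (file names are from this report;
nova's actual change may simply switch the hard-coded name):

    # Hedged sketch: prefer the renamed vnc_lite.html but fall back to the
    # legacy vnc_auto.html so both old and new noVNC checkouts keep working.
    import os

    def find_vnc_page(web_root):
        for name in ('vnc_lite.html', 'vnc_auto.html'):
            candidate = os.path.join(web_root, name)
            if os.path.exists(candidate):
                return candidate
        raise IOError('no noVNC landing page found in %s' % web_root)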

Steps to reproduce
==
1. Spawn latest devstack.
2. Create VM.
3. Issue nova get-vnc-console INSTANCE_UUID novnc.

Expected result
===
noVNC console will be shown.

Actual result
=
Given link is broken (HTTP 404).


Environment
===
nova - master branch

** Affects: nova
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1692513

Title:
  Nova uses bad URL for noVNC

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Two weeks ago the noVNC project renamed the file vnc_auto.html to
vnc_lite.html:
  https://github.com/novnc/noVNC - commit
53f41f969228a47201ffe533f1ee550ff28d9041
  It looks the same as the old one.

  Because of that we need to change this URL in Nova as well. Currently
  nova-novncproxy always fails with a 404 error while trying to load
  vnc_auto.html from the filesystem.

  Steps to reproduce
  ==
  1. Spawn latest devstack.
  2. Create VM.
  3. Issue nova get-vnc-console INSTANCE_UUID novnc.

  Expected result
  ===
  noVNC console will be shown.

  Actual result
  =
  Given link is broken (HTTP 404).

  
  Environment
  ===
  nova - master branch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1692513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666831] [NEW] Nova recreates instance directory after migration/resize

2017-02-22 Thread Maciej Jozefczyk
Public bug reported:

Description
===
Nova recreates the instance directory on the source host after a successful
migration/resize when using QEMU Qcow2 file-based disks.


After a migration Nova executes the method driver.confirm_migration().
This method cleans up the instance directory (the instance directory with the
suffix _resize):

nova/virt/libvirt/driver.py
1115         if os.path.exists(target):
1116             # Deletion can fail over NFS, so retry the deletion as required.
1117             # Set maximum attempt as 5, most test can remove the directory
1118             # for the second time.
1119             utils.execute('rm', '-rf', target, delay_on_retry=True,
1120                           attempts=5)

After that Nova executes:
1122 root_disk = self.image_backend.by_name(instance, 'disk')

root_disk is used to remove rbd snapshots, but during the execution of
self.image_backend.by_name() Nova recreates the instance directory.

Flow:

driver.confirm_migration()->self._cleanup_resize()->self.image_backend.by_name()
-> (nova/virt/libvirt/imagebackend.py)
image_backend.by_name()->Qcow2.__init__()->Qcow2.resolve_driver_format().

Qcow2.resolve_driver_format():
 344         if self.disk_info_path is not None:
 345             fileutils.ensure_tree(os.path.dirname(self.disk_info_path))
 346             write_to_disk_info_file()
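
A minimal sketch of a guard against the recreation (helper names are taken
from the flow above; the real fix may instead avoid instantiating the image
backend during cleanup):

    # Hedged sketch: skip recreating the instance directory when it has
    # already been removed by _cleanup_resize().
    import os

    def resolve_driver_format_safely(disk_info_path, write_to_disk_info_file):
        if disk_info_path is None:
            return
        target_dir = os.path.dirname(disk_info_path)
        if not os.path.isdir(target_dir):
            # The directory was deliberately deleted during cleanup;
            # recreating it here would leave a stale instance dir behind.
            return
        write_to_disk_info_file()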


Steps to reproduce
==

- spawn instance
- migrate/resize instance
- check that the instance dir on the old host still exists (example:
/home/instances//disk.info)


Expected result
===
After the migration the directory /home/instances/ and the file
/home/instances/ should not exist.

Actual result
=
Nova leaves instance directory after migration/resize.


Environment
===
1. Openstack Newton (it seems master is affected too).

2. Libvirt + KVM

3. Qcow2 file images on local disk.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666831

Title:
  Nova recreates instance directory after migration/resize

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Nova recreates the instance directory on the source host after a
successful migration/resize when using QEMU Qcow2 file-based disks.

  
  After a migration Nova executes the method driver.confirm_migration().
  This method cleans up the instance directory (the instance directory with
  the suffix _resize):

  nova/virt/libvirt/driver.py
  1115         if os.path.exists(target):
  1116             # Deletion can fail over NFS, so retry the deletion as required.
  1117             # Set maximum attempt as 5, most test can remove the directory
  1118             # for the second time.
  1119             utils.execute('rm', '-rf', target, delay_on_retry=True,
  1120                           attempts=5)

  After that Nova executes:
  1122 root_disk = self.image_backend.by_name(instance, 'disk')

  root_disk is used to remove rbd snapshots, but during the execution of
  self.image_backend.by_name() Nova recreates the instance directory.

  Flow:

  
driver.confirm_migration()->self._cleanup_resize()->self.image_backend.by_name()
  -> (nova/virt/libvirt/imagebackend.py)
  image_backend.by_name()->Qcow2.__init__()->Qcow2.resolve_driver_format().

  Qcow2.resolve_driver_format():
   344         if self.disk_info_path is not None:
   345             fileutils.ensure_tree(os.path.dirname(self.disk_info_path))
   346             write_to_disk_info_file()

  
  Steps to reproduce
  ==

  - spawn instance
  - migrate/resize instance
  - check that the instance dir on the old host still exists (example:
/home/instances//disk.info)

  
  Expected result
  ===
  After the migration the directory /home/instances/ and the file
/home/instances/ should not exist.

  Actual result
  =
  Nova leaves instance directory after migration/resize.

  
  Environment
  ===
  1. Openstack Newton (it seems master is affected too).

  2. Libvirt + KVM

  3. Qcow2 file images on local disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp