[Yahoo-eng-team] [Bug 1375625] Re: Problem in l3-agent tenant-network interface would cause split-brain in HA router

2017-07-12 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375625

Title:
  Problem in l3-agent tenant-network interface would cause split-brain
  in HA router

Status in neutron:
  Fix Released
Status in openstack-manuals:
  Confirmed

Bug description:
  Assume each l3-agent has one NIC (e.g. eth0) assigned to tenant-network
(tunnel) traffic and another (e.g. eth1) assigned to the external network.
  Disconnecting eth0 would stop keepalived reports and trigger one of the
slaves to become master. However, since the failure is outside the router
namespace, the original master is unaware of it and would not enter the "fault"
state. Instead it will continue to receive traffic on the still-active external
network interface, eth1.
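
  For illustration, here is a minimal sketch (not part of neutron) of a health
  check that a keepalived vrrp_script/track_script could run to detect the
  tenant-NIC failure from outside the router namespace; the "eth0" name is an
  assumption taken from the scenario above.

  #!/usr/bin/env python
  # Hedged sketch: exit non-zero when the tenant-network NIC loses carrier,
  # so keepalived can lower its priority / enter the FAULT state.
  import sys

  def nic_is_up(dev):
      try:
          # /sys/class/net/<dev>/carrier reads "1" while the link is up;
          # reading it fails when the interface is administratively down.
          with open('/sys/class/net/%s/carrier' % dev) as f:
              return f.read().strip() == '1'
      except (IOError, OSError):
          return False

  if __name__ == '__main__':
      sys.exit(0 if nic_is_up('eth0') else 1)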

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1375625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433172] Re: L3 HA routers master state flapping between nodes after router updates or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

2017-07-12 Thread Ihar Hrachyshka
The bug is in keepalived, not neutron, so moving to Won't Fix.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433172

Title:
  L3 HA routers master state flapping between nodes after router updates
  or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

Status in neutron:
  Won't Fix
Status in openstack-ansible:
  Fix Released

Bug description:
  keepalived 1.2.14 introduced a regression when running it in no-preempt
  mode. More details are in a thread I started on the keepalived-devel list:
  http://sourceforge.net/p/keepalived/mailman/message/33604497/

  A fix was backported to 1.2.15-6, and is present in 1.2.16.

  Current status (Updated on the 30th of April, 2015):
  Fedora 20, 21 and 22 have 1.2.16.
  CentOS and RHEL are on 1.2.13

  Ubuntu is using 1.2.10 or older.
  Debian is using 1.2.13.

  In summary, as long as you're not using 1.2.14 or 1.2.15 (Excluding
  1.2.15-6), you're OK, which should be the case if you're using the
  latest keepalived packaged for your distro.
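
  For a quick local check, here is a minimal sketch that flags the affected
  versions (1.2.14, and 1.2.15 unless the distro build carries the 1.2.15-6
  backport). The output parsing is an assumption; keepalived typically prints
  "Keepalived vX.Y.Z ..." when asked for its version.

  import re
  import subprocess

  def keepalived_version():
      # keepalived prints its version banner on stderr on most builds, so
      # merge stderr into the captured output before parsing it.
      out = subprocess.check_output(['keepalived', '--version'],
                                    stderr=subprocess.STDOUT,
                                    universal_newlines=True)
      m = re.search(r'v(\d+)\.(\d+)\.(\d+)', out)
      return tuple(int(x) for x in m.groups()) if m else None

  version = keepalived_version()
  if version in ((1, 2, 14), (1, 2, 15)):
      print('keepalived %s may flap in no-preempt mode; check for the '
            '1.2.15-6 fix or upgrade to >= 1.2.16'
            % '.'.join(map(str, version)))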

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7

2017-07-12 Thread Ihar Hrachyshka
It's a keepalived issue, and supported platforms like CentOS/RHEL or
Xenial already ship fixed packages. We also documented the issue in the
networking guide. There seems to be nothing more we can do on the neutron
side, so moving the bug to Won't Fix.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272

Title:
  L3 HA: Unstable rescheduling time for keepalived v1.2.7

Status in neutron:
  Won't Fix
Status in openstack-ansible:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1 compute
(Kilo) with this simple scenario:
  1) ping a VM by floating IP
  2) disable the master l3-agent (the one whose ha_state is active)
  3) wait for pings to resume and another agent to become active
  4) count the packets that were lost

  My results are the following:
  1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
  2) When max_l3_agents_per_router=3 or 0 (meaning the router will be scheduled
on every agent), 10 to 70 packets were lost.

  I should mention that in both cases there was only one HA router.

  It was expected that fewer packets would be lost when
  max_l3_agents_per_router=3 (or 0).
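
  For reference, a rough sketch of how such a measurement could be scripted
  (this is not the tooling actually used): ping the floating IP once per
  second and count the lost packets while the master l3-agent is disabled in
  another shell. FLOATING_IP is a placeholder.

  import subprocess
  import time

  FLOATING_IP = '203.0.113.10'  # placeholder

  lost = 0
  for _ in range(120):  # observe a two-minute window around the failover
      rc = subprocess.call(['ping', '-c', '1', '-W', '1', FLOATING_IP],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
      if rc != 0:
          lost += 1
      time.sleep(1)
  print('packets lost during the window: %d' % lost)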

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640701] Re: _notify_l3_agent_ha_port_update failed for stable/mitaka

2017-07-12 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640701

Title:
  _notify_l3_agent_ha_port_update failed for stable/mitaka

Status in neutron:
  Fix Released

Bug description:
  Backport https://review.openstack.org/#/c/364407/ brings
  _notify_l3_agent_ha_port_update to the Mitaka code with several changes.
  This code is giving constant errors in neutron-server logs:
  http://paste.openstack.org/show/588382/.

  Newton and later versions are not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696094] Re: CI: ovb-ha promotion job fails with 504 gateway timeout, neutron-server create-subnet timing out

2017-07-12 Thread Ihar Hrachyshka
It was not a neutron bug but an eventlet/dns issue, so marking the bug as
Invalid for neutron.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696094

Title:
  CI: ovb-ha promotion job fails with 504 gateway timeout, neutron-
  server create-subnet timing out

Status in neutron:
  Invalid
Status in tripleo:
  Fix Released

Bug description:
  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-
  ha/2ea94ab/console.html#_2017-06-05_23_52_38_539282

  2017-06-05 23:50:34.148537 | 
+---+--+
  2017-06-05 23:50:35.545475 | neutron CLI is deprecated and will be removed in 
the future. Use openstack CLI instead.
  2017-06-05 23:52:38.539282 | 504 Gateway Time-out
  2017-06-05 23:52:38.539408 | The server didn't respond in time.
  2017-06-05 23:52:38.539437 | 

  It happens at the point where subnet creation should occur.
  I see an ovs-vsctl failure in the logs, but I am not sure it isn't a red herring.

  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-ha/2ea94ab/logs/controller-1-tripleo-
  ci-b-bar/var/log/messages

  Jun  5 23:48:22 localhost ovs-vsctl: ovs|1|vsctl|INFO|Called as 
/bin/ovs-vsctl --timeout=5 --id=@manager -- create Manager 
"target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
  Jun  5 23:48:22 localhost ovs-vsctl: ovs|2|db_ctl_base|ERR|transaction 
error: {"details":"Transaction causes multiple rows in \"Manager\" table to 
have identical values (\"ptcp:6640:127.0.0.1\") for index on column \"target\". 
 First row, with UUID 7e2b866a-40d5-4f9c-9e08-0be3bb34b199, existed in the 
database before this transaction and was not modified by the transaction.  
Second row, with UUID 49488cff-271a-457a-b1e7-e6ca3da6f069, was inserted by 
this transaction.","error":"constraint violation"}
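
  The transaction error above is a duplicate-row insert: a Manager row for
  ptcp:6640:127.0.0.1 already exists, and "create Manager" unconditionally
  adds another. For illustration only (this is not the actual CI code), the
  setup could be made idempotent with the standard get-manager/set-manager
  commands, since set-manager replaces the manager list instead of inserting
  a new row:

  import subprocess

  TARGET = 'ptcp:6640:127.0.0.1'

  # Only touch the database if the target is not already configured.
  current = subprocess.check_output(['ovs-vsctl', 'get-manager'],
                                    universal_newlines=True).split()
  if TARGET not in current:
      subprocess.check_call(['ovs-vsctl', '--timeout=5',
                             'set-manager', TARGET])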

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703917] [NEW] Sometimes test_update_user_password fails with Unauthorized

2017-07-12 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/51/473751/11/gate/gate-tempest-dsvm-neutron-
dvr-ubuntu-xenial/aeb2743/console.html

2017-07-12 09:30:35.693828 | Traceback (most recent call last):
2017-07-12 09:30:35.693890 |   File 
"tempest/api/identity/admin/v3/test_users.py", line 89, in 
test_update_user_password
2017-07-12 09:30:35.693932 | password=new_password).response
2017-07-12 09:30:35.693989 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 132, in auth
2017-07-12 09:30:35.694037 | resp, body = self.post(self.auth_url, 
body=body)
2017-07-12 09:30:35.694088 |   File "tempest/lib/common/rest_client.py", 
line 270, in post
2017-07-12 09:30:35.694143 | return self.request('POST', url, 
extra_headers, headers, body, chunked)
2017-07-12 09:30:35.694201 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 161, in request
2017-07-12 09:30:35.694254 | raise 
exceptions.Unauthorized(resp_body['error']['message'])
2017-07-12 09:30:35.694298 | tempest.lib.exceptions.Unauthorized: 
Unauthorized
2017-07-12 09:30:35.694348 | Details: The request you have made requires 
authentication.

Logstash:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_update_user_password%5C%22

20 hits in 7 days.
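
One plausible explanation (an assumption on my part, not a confirmed
diagnosis) is a race: authenticating with a just-changed password can
transiently return 401 while revocation or caching catches up, so a tolerant
client would retry briefly. The sketch below uses the standard Keystone v3
password-auth request; AUTH_URL and the credentials are placeholders.

import time
import requests

AUTH_URL = 'http://keystone.example.com/identity/v3/auth/tokens'  # placeholder

def authenticate(user_id, password, retries=5, delay=1):
    # Standard Keystone v3 password authentication payload.
    body = {'auth': {'identity': {
        'methods': ['password'],
        'password': {'user': {'id': user_id, 'password': password}}}}}
    for attempt in range(retries):
        resp = requests.post(AUTH_URL, json=body)
        if resp.status_code != 401:
            return resp
        time.sleep(delay)  # transient Unauthorized: retry
    return resp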

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1703917

Title:
  Sometimes test_update_user_password fails with Unauthorized

Status in OpenStack Identity (keystone):
  New

Bug description:
  http://logs.openstack.org/51/473751/11/gate/gate-tempest-dsvm-neutron-
  dvr-ubuntu-xenial/aeb2743/console.html

  2017-07-12 09:30:35.693828 | Traceback (most recent call last):
  2017-07-12 09:30:35.693890 |   File 
"tempest/api/identity/admin/v3/test_users.py", line 89, in 
test_update_user_password
  2017-07-12 09:30:35.693932 | password=new_password).response
  2017-07-12 09:30:35.693989 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 132, in auth
  2017-07-12 09:30:35.694037 | resp, body = self.post(self.auth_url, 
body=body)
  2017-07-12 09:30:35.694088 |   File "tempest/lib/common/rest_client.py", 
line 270, in post
  2017-07-12 09:30:35.694143 | return self.request('POST', url, 
extra_headers, headers, body, chunked)
  2017-07-12 09:30:35.694201 |   File 
"tempest/lib/services/identity/v3/token_client.py", line 161, in request
  2017-07-12 09:30:35.694254 | raise 
exceptions.Unauthorized(resp_body['error']['message'])
  2017-07-12 09:30:35.694298 | tempest.lib.exceptions.Unauthorized: 
Unauthorized
  2017-07-12 09:30:35.694348 | Details: The request you have made requires 
authentication.

  Logstash:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_update_user_password%5C%22

  20 hits in 7 days.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1703917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681866] Re: Bad response code while validating token: 502

2017-07-03 Thread Ihar Hrachyshka
Happening in Ocata too: http://logs.openstack.org/86/474286/1/gate/gate-
tempest-dsvm-neutron-dvr-ubuntu-
xenial/2b247e0/logs/screen-q-svc.txt.gz#_2017-06-26_21_16_20_582

** Changed in: tempest
   Importance: Medium => High

** Also affects: devstack
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1681866

Title:
  Bad response code while validating token: 502

Status in devstack:
  New
Status in OpenStack Identity (keystone):
  New
Status in tempest:
  Confirmed

Bug description:
  Found this while investigating a gate failure [1].

  Tempest logs say "2017-04-11 10:07:02,765 23082 INFO
  [tempest.lib.common.rest_client] Request
  (TestSecurityGroupsBasicOps:_run_cleanups): 503 DELETE
  https://198.72.124.138:8774/v2.1/servers/f736a878-2ac4-4c37-b6a8-e5cd8df5a7fd
  0.018s"

  That 503 looks suspicious, so I go to the nova-api logs, which give:

  2017-04-11 10:07:02.762 32191 ERROR keystonemiddleware.auth_token [...] Bad 
response code while validating token: 502
  2017-04-11 10:07:02.763 32191 WARNING keystonemiddleware.auth_token [...] 
Identity response: 
  
  502 Proxy Error

  The proxy server received an invalid response from an upstream server.
  The proxy server could not handle the request GET /identity_admin/v3/auth/tokens.
  Reason: Error reading from remote server

  Apache/2.4.18 (Ubuntu) Server at 198.72.124.138 Port 443

  2017-04-11 10:07:02.763 32191 CRITICAL keystonemiddleware.auth_token
  [...] Unable to validate token: Failed to fetch token data from
  identity server

  So Apache is complaining about some network connection issue related to
  proxying. So I open "logs/apache/tls-proxy_error.txt.gz" and find

  [Tue Apr 11 10:07:02.761420 2017] [proxy_http:error] [pid 7136:tid 
140090189690624] (20014)Internal error (specific information not available): 
[client 198.72.124.138:38722] [frontend 198.72.124.138:443] AH01102: error 
reading status line from remote server 198.72.124.138:80
  [Tue Apr 11 10:07:02.761454 2017] [proxy:error] [pid 7136:tid 
140090189690624] [client 198.72.124.138:38722] [frontend 198.72.124.138:443] 
AH00898: Error reading from remote server returned by 
/identity_admin/v3/auth/tokens

  Interesting. Google says that adding "proxy-initial-not-pooled" to the
  apache2 vhost config could help.
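
  As a client-side illustration only (this is not keystonemiddleware's actual
  code): the call that fails here is Keystone's standard GET /v3/auth/tokens
  token validation, and a tolerant client could retry it when the proxy
  returns a transient 502. IDENTITY_URL and the tokens are placeholders;
  verify=False mirrors the gate's self-signed TLS proxy.

  import time
  import requests

  IDENTITY_URL = 'https://198.72.124.138/identity_admin/v3/auth/tokens'

  def validate_token(admin_token, subject_token, retries=3):
      # X-Auth-Token authenticates the caller; X-Subject-Token is the
      # token being validated (standard Keystone v3 validation call).
      headers = {'X-Auth-Token': admin_token,
                 'X-Subject-Token': subject_token}
      for attempt in range(retries):
          resp = requests.get(IDENTITY_URL, headers=headers, verify=False)
          if resp.status_code != 502:
              return resp
          time.sleep(2 ** attempt)  # back off before retrying the proxy
      return resp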

  Anyway, a good elasticsearch query for this is

  message:"Bad response code while validating token: 502"

  8 hits, no worries.


  [1] : http://logs.openstack.org/03/455303/2/check/gate-tempest-dsvm-
  neutron-full-ubuntu-xenial/aa8c7fd/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1681866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1698355] Re: py35 dsvm job failing with RemoteDisconnected error

2017-06-16 Thread Ihar Hrachyshka
Neutron fix: https://review.openstack.org/#/c/474575/

Also a fix for the oslo.serialization issue that resulted in memory exhaustion:
https://review.openstack.org/#/c/475052/

** Also affects: oslo.serialization
   Importance: Undecided
   Status: New

** Changed in: oslo.serialization
   Importance: Undecided => Critical

** Changed in: oslo.serialization
   Status: New => Confirmed

** Changed in: heat
   Status: New => Confirmed

** Changed in: oslo.serialization
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1698355

Title:
  py35 dsvm job failing with RemoteDisconnected error

Status in heat:
  Confirmed
Status in neutron:
  In Progress
Status in oslo.serialization:
  Confirmed

Bug description:
  traceback:

  2017-06-16 10:24:47.339195 | 2017-06-16 10:24:47.338 | 
  2017-06-16 10:24:47.340517 | 2017-06-16 10:24:47.340 | 
heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
  2017-06-16 10:24:47.342125 | 2017-06-16 10:24:47.341 | 

  2017-06-16 10:24:47.343471 | 2017-06-16 10:24:47.343 | 
  2017-06-16 10:24:47.344919 | 2017-06-16 10:24:47.344 | Captured traceback:
  2017-06-16 10:24:47.346272 | 2017-06-16 10:24:47.346 | ~~~
  2017-06-16 10:24:47.347614 | 2017-06-16 10:24:47.347 | b'Traceback (most 
recent call last):'
  2017-06-16 10:24:47.348873 | 2017-06-16 10:24:47.348 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 376, in 
_stack_delete'
  2017-06-16 10:24:47.350049 | 2017-06-16 10:24:47.349 | b'
success_on_not_found=True)'
  2017-06-16 10:24:47.351627 | 2017-06-16 10:24:47.351 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 357, in 
_wait_for_stack_status'
  2017-06-16 10:24:47.352791 | 2017-06-16 10:24:47.352 | b'
fail_regexp):'
  2017-06-16 10:24:47.353977 | 2017-06-16 10:24:47.353 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 321, in 
_verify_status'
  2017-06-16 10:24:47.355411 | 2017-06-16 10:24:47.355 | b'
stack_status_reason=stack.stack_status_reason)'
  2017-06-16 10:24:47.356920 | 2017-06-16 10:24:47.356 | 
b"heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerv2Test-1133164547 is in DELETE_FAILED status due to 
'Resource DELETE failed: ConnectFailure: resources.sec_group: Unable to 
establish connection to 
http://10.1.43.45:9696/v2.0/security-group-rules/8d33f0cf-d473-455a-8fe2-978c64af5e0d:
 ('Connection aborted.', RemoteDisconnected('Remote end closed connection 
without response',))'"
  2017-06-16 10:24:47.358227 | 2017-06-16 10:24:47.357 | b''

  http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-
  convg-mysql-lbaasv2-py35-ubuntu-
  xenial/e07f32f/console.html#_2017-06-16_10_24_47_356920

  
  heat engine log:

  http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-
  convg-mysql-lbaasv2-py35-ubuntu-
  xenial/e07f32f/logs/screen-h-eng.txt.gz?level=INFO#_Jun_16_10_24_22_312023

  
  In the same job nova is failing to connect to neutron with the same error

  
  
http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial/e07f32f/logs/screen-n-api.txt.gz?level=ERROR#_Jun_16_10_24_22_633655

  
  It seems to be happening for security-groups and ports/floating-ip stuff 
mostly.

  
  Not sure if this is a neutronclient/urllib3 issue (I see a new openstacksdk
  release [1]) or something specific to changes merged recently to neutron [2].

  
  [1] 
https://github.com/openstack/requirements/commit/1b30d517efd442867888359e4619d822f13a3cf2

  [2] https://review.openstack.org/#/q/topic:bp/push-notifications

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1698355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632540] Re: l3-agent prints the ERROR log to the l3 log file continuously, eventually filling the file space and crashing the l3-agent service

2017-06-14 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: neutron-proactive-backport-potential

** Tags removed: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632540

Title:
  l3-agent prints the ERROR log to the l3 log file continuously, eventually
  filling the file space and crashing the l3-agent service

Status in neutron:
  Fix Released

Bug description:
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
[req-5d499217-05b6-4a56-a3b7-5681adb53d6c - d2b95803757641b6bc55f6309c12c6e9 - 
- -] Failed to process compatible router 'da82aeb4-07a4-45ca-ae7a-570aec69df29'
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 501, in 
_process_router_update
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 438, in 
_process_router_if_compatible
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self._process_added_router(router)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 446, in 
_process_added_router
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent ri.process(self)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 
488, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
super(DvrLocalRouter, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_router_base.py", line 
30, in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
super(DvrRouterBase, self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 386, in 
process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 385, in call
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent self.logger(e)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self.force_reraise()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 382, in call
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent return func(*args, 
**kwargs)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 964, 
in process
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self.process_address_scope()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_edge_router.py", line 
239, in process_address_scope
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
self.snat_iptables_manager, ports_scopemark)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent self.gen.next()
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 461, in defer_apply
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent raise 
n_exc.IpTablesApplyException(msg)
  2016-10-12 10:04:38.587 25667 ERROR neutron.agent.l3.agent 
IpTablesApplyException: Failure applying iptables rules

  For example, this ERROR information will fill the l3-agent log file
  continuously until the problem is solved, eventually exhausting the log
  file space.

  This happens because we resync the failed update into the queue when the
  update is not handled successfully; the greenthread in the l3-agent then
  processes the update periodically, printing the log each time. Once the
  l3-agent has dealt with this update, we should delete it.
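
  A generic sketch of the mitigation (this is not the actual l3-agent queue
  code): back off between retries and drop the update after a bounded number
  of failures, instead of requeueing it and logging the same traceback
  forever.

  import logging
  import time

  LOG = logging.getLogger(__name__)
  MAX_RETRIES = 5

  def process_with_backoff(update, process):
      for attempt in range(MAX_RETRIES):
          try:
              return process(update)
          except Exception:
              LOG.exception('Failed to process router update %s '
                            '(attempt %d/%d)',
                            update, attempt + 1, MAX_RETRIES)
              time.sleep(min(2 ** attempt, 60))  # exponential backoff
      LOG.error('Giving up on router update %s after %d attempts',
                update, MAX_RETRIES)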

  To reproduce: we could disable the l3-agent on a network node in HA mode,
  then create a router, then restart the

[Yahoo-eng-team] [Bug 1697533] [NEW] test_install_flood_to_tun failed with: 'tun_id=0x378' not in u' unchanged'

2017-06-12 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/59/471059/2/check/gate-neutron-dsvm-
functional-ubuntu-xenial/cd6149b/testr_results.html.gz

Traceback (most recent call last):
  File "neutron/tests/base.py", line 118, in func
return f(self, *args, **kwargs)
  File "neutron/tests/common/helpers.py", line 241, in check_ovs_and_skip
return f(test)
  File "neutron/tests/functional/agent/test_ovs_flows.py", line 445, in 
test_install_flood_to_tun
self.assertIn(("tun_id=0x%(tun_id)x" % kwargs), trace["Final flow"])
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
self.assertThat(haystack, Contains(needle), message)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'tun_id=0x378' not in u' unchanged'

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: functional-tests gate-failure ovs

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: functional-tests gate-failure ovs

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697533

Title:
  test_install_flood_to_tun failed with: 'tun_id=0x378' not in u'
  unchanged'

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/59/471059/2/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/cd6149b/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 118, in func
  return f(self, *args, **kwargs)
File "neutron/tests/common/helpers.py", line 241, in check_ovs_and_skip
  return f(test)
File "neutron/tests/functional/agent/test_ovs_flows.py", line 445, in 
test_install_flood_to_tun
  self.assertIn(("tun_id=0x%(tun_id)x" % kwargs), trace["Final flow"])
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
  self.assertThat(haystack, Contains(needle), message)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 'tun_id=0x378' not in u' unchanged'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696537] [NEW] test_keepalived_multiple_sighups_does_not_forfeit_mastership fails when neutron-server tries to bind with Linuxbridge driver (agent not enabled)

2017-06-07 Thread Ihar Hrachyshka
Public bug reported:

This happens locally and in gate. Gate example:
http://logs.openstack.org/59/471059/2/check/gate-neutron-dsvm-fullstack-
ubuntu-xenial/df11b90/testr_results.html.gz

Traceback (most recent call last):
  File "neutron/tests/base.py", line 118, in func
return f(self, *args, **kwargs)
  File "neutron/tests/fullstack/test_l3_agent.py", line 252, in 
test_keepalived_multiple_sighups_does_not_forfeit_mastership
tenant_id, '13.37.0.0/24', network['id'], router['id'])
  File "neutron/tests/fullstack/test_l3_agent.py", line 61, in 
_create_and_attach_subnet
router_interface_info['port_id'])
  File "neutron/tests/fullstack/test_l3_agent.py", line 51, in 
block_until_port_status_active
common_utils.wait_until_true(lambda: is_port_status_active(), sleep=1)
  File "neutron/common/utils.py", line 685, in wait_until_true
raise WaitTimeout("Timed out after %d seconds" % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds

This is not a 100% failure rate; it depends on which driver the server
picks to bind ports: ovs or linuxbridge. If the latter, it just spins
attempting to bind with it over and over until it bails out. It never
tries to switch to ovs.

In server log, we see this: http://logs.openstack.org/59/471059/2/check
/gate-neutron-dsvm-fullstack-ubuntu-xenial/df11b90/logs/dsvm-fullstack-
logs/TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_mastership
/neutron-server--2017-06-05--
21-41-34-957535.txt.gz#_2017-06-05_21_42_13_400

2017-06-05 21:42:13.400 12566 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-6618e950-5260-404d-a511-e314408542f5 - - - - -] Port 
4f8dcf10-6f91-4860-b239-6b04460244a3 on network 
155ebfd5-20cf-44bc-9cb5-bc885b8d2eae not bound, no agent of type Linux bridge 
agent registered on host host-745fd526 bind_port 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/mech_agent.py:103
2017-06-05 21:42:13.401 12566 ERROR neutron.plugins.ml2.managers 
[req-6618e950-5260-404d-a511-e314408542f5 - - - - -] Failed to bind port 
4f8dcf10-6f91-4860-b239-6b04460244a3 on host host-745fd526 for vnic_type normal 
using segments []
2017-06-05 21:42:13.401 12566 INFO neutron.plugins.ml2.plugin 
[req-6618e950-5260-404d-a511-e314408542f5 - - - - -] Attempt 2 to bind port 
4f8dcf10-6f91-4860-b239-6b04460244a3
...
2017-06-05 21:42:13.822 12566 ERROR neutron.plugins.ml2.managers 
[req-6618e950-5260-404d-a511-e314408542f5 - - - - -] Failed to bind port 
4f8dcf10-6f91-4860-b239-6b04460244a3 on host host-745fd526 for vnic_type normal 
using segments []

The fullstack test case configures both ml2 drivers.
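
A hypothetical illustration of the failure mode (this is not neutron's actual
ml2 code): the retry loop keeps offering the port to a driver whose agent is
absent on the host, instead of falling through to the other configured driver
that could bind.

# Both drivers are configured by the fullstack test; only ovs has an
# agent registered on the host in this scenario.
drivers = ['linuxbridge', 'openvswitch']
agents_on_host = {'openvswitch'}

def try_bind(port, max_attempts=10):
    for attempt in range(1, max_attempts + 1):
        for driver in drivers:
            if driver in agents_on_host:
                return '%s bound by %s on attempt %d' % (port, driver, attempt)
            # The reported behavior is equivalent to this break: once the
            # first driver fails, the server retries from scratch instead
            # of trying the next driver that could actually bind.
            break
    return '%s binding failed after %d attempts' % (port, max_attempts)

print(try_bind('4f8dcf10'))  # -> binding failed: stuck on linuxbridge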

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696537

Title:
  test_keepalived_multiple_sighups_does_not_forfeit_mastership fails
  when neutron-server tries to bind with Linuxbridge driver (agent not
  enabled)

Status in neutron:
  Confirmed

Bug description:
  This happens locally and in gate. Gate example:
  http://logs.openstack.org/59/471059/2/check/gate-neutron-dsvm-
  fullstack-ubuntu-xenial/df11b90/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 118, in func
  return f(self, *args, **kwargs)
File "neutron/tests/fullstack/test_l3_agent.py", line 252, in 
test_keepalived_multiple_sighups_does_not_forfeit_mastership
  tenant_id, '13.37.0.0/24', network['id'], router['id'])
File "neutron/tests/fullstack/test_l3_agent.py", line 61, in 
_create_and_attach_subnet
  router_interface_info['port_id'])
File "neutron/tests/fullstack/test_l3_agent.py", line 51, in 
block_until_port_status_active
  common_utils.wait_until_true(lambda: is_port_status_active(), sleep=1)
File "neutron/common/utils.py", line 685, in wait_until_true
  raise WaitTimeout("Timed out after %d seconds" % timeout)
  neutron.common.utils.WaitTimeout: Timed out after 60 seconds

  This is not a 100% failure rate; it depends on which driver the server
  picks to bind ports: ovs or linuxbridge. If the latter, it just spins
  attempting to bind with it over and over until it bails out. It never
  tries to switch to ovs.

  In server log, we see this:
  http://logs.openstack.org/59/471059/2/check/gate-neutron-dsvm-
  fullstack-ubuntu-xenial/df11b90/logs/dsvm-fullstack-
  
logs/TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_mastership
  /neutron-server--2017-06-05--
  21-41-34-957535.txt.gz#_2017-06-05_21_42_13_400

  2017-06-05 21:42:13.400 12566 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-6618e950-5260-404d-a511-e314408542f5 - - - - -] Port 

[Yahoo-eng-team] [Bug 1694772] Re: test_keepalived_multiple_sighups_does_not_forfeit_mastership fails because of "AttributeError: 'module' object has no attribute 'LINUX_DEV_LEN'"

2017-06-06 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694772

Title:
  test_keepalived_multiple_sighups_does_not_forfeit_mastership fails
  because of "AttributeError: 'module' object has no attribute
  'LINUX_DEV_LEN'"

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/82/465182/1/check/gate-neutron-dsvm-
  fullstack-ubuntu-xenial/a0a5ab4/logs/dsvm-fullstack-
  
logs/TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_mastership/neutron-l3-agent
  --2017-05-31--17-35-57-322793.txt.gz?level=TRACE

  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
[req-2ccd326a-186e-4c00-85cc-17ed633b93dc - - - - -] 'module' object has no 
attribute 'LINUX_DEV_LEN'
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 287, in call
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 1094, in process
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
self.process_external(agent)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 869, in 
process_external
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
self._process_external_gateway(ex_gw_port, agent.pd)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 750, in 
_process_external_gateway
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
interface_name = self.get_external_device_name(ex_gw_port_id)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/tests/common/agents/l3_agent.py", line 96, in 
get_external_device_name
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
[:iptables_firewall.LINUX_DEV_LEN])
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
AttributeError: 'module' object has no attribute 'LINUX_DEV_LEN'

  This is probably Newton only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694614] Re: Docs job on jenkins failing

2017-05-31 Thread Ihar Hrachyshka
Still failing on pyroute2 (but how did it pass the gate for the first
patch?)

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694614

Title:
  Docs job on jenkins failing

Status in neutron:
  Confirmed

Bug description:
  I see failed gate-neutron-docs-ubuntu-xenial in many jobs since
  yesterday. It always fails with error like e.g. in
  http://logs.openstack.org/60/469260/2/check/gate-neutron-docs-ubuntu-
  xenial/e55b579/console.html#_2017-05-30_22_18_04_277430

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694769] Re: Nova fails to plug port because of missing ipset when calling iptables-restore

2017-05-31 Thread Ihar Hrachyshka
Backports here:
https://review.openstack.org/#/q/I19d62a8ac730aba2586b9f8eb08e153746ec2bcb,n,z

** Also affects: os-vif
   Importance: Undecided
   Status: New

** Changed in: os-vif
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694769

Title:
  Nova fails to plug port because of missing ipset when calling
  iptables-restore

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Confirmed

Bug description:
  This is Ocata, linuxbridge.

  http://logs.openstack.org/95/466395/3/gate/gate-tempest-dsvm-neutron-
  linuxbridge-ubuntu-xenial/e5923b4/logs/testr_results.html.gz

File "tempest/common/compute.py", line 188, in create_test_server
  clients.servers_client, server['id'], wait_until)
File "tempest/common/waiters.py", line 76, in wait_for_server_status
  server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d failed to build and is in ERROR status
  Details: {u'created': u'2017-05-27T03:00:23Z', u'code': 500, u'message': u'No 
valid host was found. There are not enough hosts available.'}

  The failure in nova-cpu log:
  http://logs.openstack.org/95/466395/3/gate/gate-tempest-dsvm-neutron-
  linuxbridge-ubuntu-
  xenial/e5923b4/logs/screen-n-cpu.txt.gz#_2017-05-27_03_00_21_716

  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager 
[req-06c29149-80d9-4923-b9c4-54591a3f5e7e 
tempest-ServerActionsTestJSON-1792219232 
tempest-ServerActionsTestJSON-1792219232] [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] Instance failed to spawn
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] Traceback (most recent call last):
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2124, in _build_resources
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] yield resources
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1930, in 
_build_and_run_instance
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] block_device_info=block_device_info)
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2698, in spawn
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] destroy_disks_on_failure=True)
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5114, in 
_create_domain_and_network
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] destroy_disks_on_failure)
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self.force_reraise()
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] six.reraise(self.type_, self.value, 
self.tb)
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5077, in 
_create_domain_and_network
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self.plug_vifs(instance, network_info)
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 749, in plug_vifs
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self.vif_driver.plug(instance, vif)
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 786, in plug
  2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1694772] [NEW] test_keepalived_multiple_sighups_does_not_forfeit_mastership fails because of "AttributeError: 'module' object has no attribute 'LINUX_DEV_LEN'"

2017-05-31 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/82/465182/1/check/gate-neutron-dsvm-fullstack-
ubuntu-xenial/a0a5ab4/logs/dsvm-fullstack-
logs/TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_mastership/neutron-l3-agent
--2017-05-31--17-35-57-322793.txt.gz?level=TRACE

2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
[req-2ccd326a-186e-4c00-85cc-17ed633b93dc - - - - -] 'module' object has no 
attribute 'LINUX_DEV_LEN'
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 287, in call
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 1094, in process
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
self.process_external(agent)
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 869, in 
process_external
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
self._process_external_gateway(ex_gw_port, agent.pd)
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 750, in 
_process_external_gateway
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
interface_name = self.get_external_device_name(ex_gw_port_id)
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/tests/common/agents/l3_agent.py", line 96, in 
get_external_device_name
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
[:iptables_firewall.LINUX_DEV_LEN])
2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
AttributeError: 'module' object has no attribute 'LINUX_DEV_LEN'

This is probably Newton only.

** Affects: neutron
     Importance: High
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: fullstack gate-failure l3-ha

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Tags added: fullstack gate-failure l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694772

Title:
  test_keepalived_multiple_sighups_does_not_forfeit_mastership fails
  because of "AttributeError: 'module' object has no attribute
  'LINUX_DEV_LEN'"

Status in neutron:
  In Progress

Bug description:
  http://logs.openstack.org/82/465182/1/check/gate-neutron-dsvm-
  fullstack-ubuntu-xenial/a0a5ab4/logs/dsvm-fullstack-
  
logs/TestHAL3Agent.test_keepalived_multiple_sighups_does_not_forfeit_mastership/neutron-l3-agent
  --2017-05-31--17-35-57-322793.txt.gz?level=TRACE

  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
[req-2ccd326a-186e-4c00-85cc-17ed633b93dc - - - - -] 'module' object has no 
attribute 'LINUX_DEV_LEN'
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 287, in call
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 1094, in process
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
self.process_external(agent)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 869, in 
process_external
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
self._process_external_gateway(ex_gw_port, agent.pd)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 750, in 
_process_external_gateway
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
interface_name = self.get_external_device_name(ex_gw_port_id)
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/tests/common/agents/l3_agent.py", line 96, in 
get_external_device_name
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
[:iptables_firewall.LINUX_DEV_LEN])
  2017-05-31 17:36:06.311 32260 ERROR neutron.agent.l3.router_info 
Attribute

[Yahoo-eng-team] [Bug 1694769] [NEW] Nova fails to plug port because of missing ipset when calling iptables-restore

2017-05-31 Thread Ihar Hrachyshka
Public bug reported:

This is Ocata, linuxbridge.

http://logs.openstack.org/95/466395/3/gate/gate-tempest-dsvm-neutron-
linuxbridge-ubuntu-xenial/e5923b4/logs/testr_results.html.gz

  File "tempest/common/compute.py", line 188, in create_test_server
clients.servers_client, server['id'], wait_until)
  File "tempest/common/waiters.py", line 76, in wait_for_server_status
server_id=server_id)
tempest.exceptions.BuildErrorException: Server 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d failed to build and is in ERROR status
Details: {u'created': u'2017-05-27T03:00:23Z', u'code': 500, u'message': u'No 
valid host was found. There are not enough hosts available.'}

The failure in nova-cpu log: http://logs.openstack.org/95/466395/3/gate
/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-
xenial/e5923b4/logs/screen-n-cpu.txt.gz#_2017-05-27_03_00_21_716

2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager 
[req-06c29149-80d9-4923-b9c4-54591a3f5e7e 
tempest-ServerActionsTestJSON-1792219232 
tempest-ServerActionsTestJSON-1792219232] [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] Instance failed to spawn
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] Traceback (most recent call last):
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2124, in _build_resources
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] yield resources
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1930, in 
_build_and_run_instance
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] block_device_info=block_device_info)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2698, in spawn
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] destroy_disks_on_failure=True)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5114, in 
_create_domain_and_network
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] destroy_disks_on_failure)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self.force_reraise()
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] six.reraise(self.type_, self.value, 
self.tb)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5077, in 
_create_domain_and_network
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self.plug_vifs(instance, network_info)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 749, in plug_vifs
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self.vif_driver.plug(instance, vif)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 786, in plug
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] self._plug_os_vif(instance, vif_obj)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/vif.py", line 766, in _plug_os_vif
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] raise exception.InternalError(msg)
2017-05-27 03:00:21.716 1385 ERROR nova.compute.manager [instance: 
2a04ac11-2ec6-4a0d-a8f5-c89d129e881d] InternalError: Failure running os_vif 
plugin plug method: Failed to plug VIF 

[Yahoo-eng-team] [Bug 1694764] [NEW] test_metadata_proxy_respawned failed to spawn metadata proxy

2017-05-31 Thread Ihar Hrachyshka
Public bug reported:

This is on Newton.

http://logs.openstack.org/82/465182/1/gate/gate-neutron-dsvm-functional-
ubuntu-xenial/2eae399/testr_results.html.gz

Traceback (most recent call last):
  File "neutron/tests/functional/agent/test_dhcp_agent.py", line 322, in 
test_metadata_proxy_respawned
exception=RuntimeError("Metadata proxy didn't respawn"))
  File "neutron/common/utils.py", line 821, in wait_until_true
eventlet.sleep(sleep)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
hub.switch()
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
return self.greenlet.switch()
RuntimeError: Metadata proxy didn't respawn

The proxy process is started here:
http://logs.openstack.org/82/465182/1/gate/gate-neutron-dsvm-functional-
ubuntu-xenial/2eae399/logs/dsvm-functional-
logs/neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_metadata_proxy_respawned.txt.gz#_2017-05-27_16_55_06_897

Then later (not sure if related) we see this:

2017-05-27 16:55:12.768 11565 DEBUG neutron.agent.linux.external_process
[req-4c03cff5-0c93-422e-a542-423c54d67807 - - - - -] Process for
bcaf27b6-7bdc-4569-93f0-1a4d51e21040 pid 27762 is stale, ignoring signal
9 disable neutron/agent/linux/external_process.py:121

Nothing interesting can be found in syslog.
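
For context, a generic sketch of the wait pattern the failing test relies on
(the real test uses neutron.common.utils.wait_until_true on eventlet): poll
until a predicate holds, e.g. until a new metadata-proxy PID appears, and
raise the supplied exception if it never does.

import time

def wait_until_true(predicate, timeout=60, sleep=1, exception=None):
    # Poll the predicate until it returns True or the deadline passes.
    deadline = time.time() + timeout
    while not predicate():
        if time.time() > deadline:
            raise exception or RuntimeError(
                'Timed out after %d seconds' % timeout)
        time.sleep(sleep)

# Hypothetical usage mirroring the test (current_proxy_pid is a placeholder):
# wait_until_true(lambda: current_proxy_pid() not in (None, old_pid),
#                 exception=RuntimeError("Metadata proxy didn't respawn"))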

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: functional-tests gate-failure

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694764

Title:
  test_metadata_proxy_respawned failed to spawn metadata proxy

Status in neutron:
  Confirmed

Bug description:
  This is on Newton.

  http://logs.openstack.org/82/465182/1/gate/gate-neutron-dsvm-
  functional-ubuntu-xenial/2eae399/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_dhcp_agent.py", line 322, in 
test_metadata_proxy_respawned
  exception=RuntimeError("Metadata proxy didn't respawn"))
File "neutron/common/utils.py", line 821, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
  hub.switch()
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
  return self.greenlet.switch()
  RuntimeError: Metadata proxy didn't respawn

  The proxy process is started here:
  http://logs.openstack.org/82/465182/1/gate/gate-neutron-dsvm-
  functional-ubuntu-xenial/2eae399/logs/dsvm-functional-
  
logs/neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_metadata_proxy_respawned.txt.gz#_2017-05-27_16_55_06_897

  Then later (not sure if related) we see this:

  2017-05-27 16:55:12.768 11565 DEBUG
  neutron.agent.linux.external_process [req-
  4c03cff5-0c93-422e-a542-423c54d67807 - - - - -] Process for
  bcaf27b6-7bdc-4569-93f0-1a4d51e21040 pid 27762 is stale, ignoring
  signal 9 disable neutron/agent/linux/external_process.py:121

  Nothing interesting can be found in syslog.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646107] Re: Functional job times out on xenial

2017-05-31 Thread Ihar Hrachyshka
I think this is not happening anymore. I will close the bug.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1646107

Title:
  Functional job times out on xenial

Status in neutron:
  Fix Released

Bug description:
  logstash-querry:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3A%5C
  %22gate-neutron-dsvm-functional-ubuntu-
  
xenial%5C%22%20AND%20build_status%3A%5C%22FAILURE%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20message%3A%5C%22Killed%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20timeout%20-s%209%20%5C%22

  9 hits in 2 days

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1646107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691805] Re: functional pecan tests fail with old Routes (< 2.3.0)

2017-05-31 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691805

Title:
  functional pecan tests fail with old Routes (< 2.3.0)

Status in neutron:
  Fix Released

Bug description:
  Several neutron functional tests are failing when Routes is <2.3.0.
  Specifically the pecan_wsgi.test_controllers.

  Tests:
  
neutron.tests.functional.pecan_wsgi.test_controllers.TestRouterController.test_methods
  
neutron.tests.functional.pecan_wsgi.test_controllers.TestV2Controller.test_get_no_trailing_slash
  
neutron.tests.functional.pecan_wsgi.test_controllers.TestResourceController.test_methods

  Failure:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_middleware/catch_errors.py", 
line 40, in __call__
  response = req.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1316, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1280, in 
call_application
  app_iter = application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 145, in __call__
  return resp(environ, start_response)
File "/usr/lib/python2.7/site-packages/routes/middleware.py", line 80, in 
__call__
  config.environ = environ
File "/usr/lib/python2.7/site-packages/routes/__init__.py", line 22, in 
__setattr__
  self.load_wsgi_environ(value)
File "/usr/lib/python2.7/site-packages/routes/__init__.py", line 51, in 
load_wsgi_environ
  result = mapper.routematch(path)
File "/usr/lib/python2.7/site-packages/routes/mapper.py", line 688, in 
routematch
  raise RoutesException('URL or environ must be provided')
  RoutesException: URL or environ must be provided
  }}}

  Traceback (most recent call last):
File "neutron/tests/base.py", line 115, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/pecan_wsgi/test_controllers.py", line 110, 
in test_get_no_trailing_slash
  self.assertEqual(response.status_int, 404)
File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 350, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 435, in 
assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 500 != 404

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694753] [NEW] QosPolicyDbObjectTestCase.test_update_objects failed because it couldn't find a matching object

2017-05-31 Thread Ihar Hrachyshka
Public bug reported:

2017-05-31 15:22:20.901842 | FAIL: 
neutron.tests.unit.objects.qos.test_policy.QosPolicyDbObjectTestCase.test_update_objects
2017-05-31 15:22:20.901856 | tags: worker-4
2017-05-31 15:22:20.901894 | 
--
2017-05-31 15:22:20.901912 | Traceback (most recent call last):
2017-05-31 15:22:20.901946 |   File "neutron/tests/base.py", line 115, in func
2017-05-31 15:22:20.901964 | return f(self, *args, **kwargs)
2017-05-31 15:22:20.901992 |   File "neutron/tests/unit/objects/test_base.py", 
line 1747, in test_update_objects
2017-05-31 15:22:20.902013 | self.assertEqual(1, len(new_objs))
2017-05-31 15:22:20.902066 |   File 
"/home/jenkins/workspace/neutron-coverage-ubuntu-xenial/.tox/cover/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
2017-05-31 15:22:20.902093 | self.assertThat(observed, matcher, message)
2017-05-31 15:22:20.902139 |   File 
"/home/jenkins/workspace/neutron-coverage-ubuntu-xenial/.tox/cover/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
2017-05-31 15:22:20.902155 | raise mismatch_error
2017-05-31 15:22:20.902175 | testtools.matchers._impl.MismatchError: 1 != 0

http://logs.openstack.org/31/469231/2/check/neutron-coverage-ubuntu-
xenial/98cd310/console.html

Of all the filters used to match the object, the only one that could
change in flight is revision_number. The others cannot change between
updates and fetches.
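A minimal sketch of the suspected race (obj_cls and ctx are
placeholders; illustrative only):

    # Illustrative only: a filter that pins the pre-update
    # revision_number matches nothing if a concurrent writer bumps the
    # revision between the update and the filtered fetch.
    obj_cls.update_objects(ctx, {'name': 'new-name'}, revision_number=5)
    # ... concurrent bump of revision_number to 6 happens here ...
    new_objs = obj_cls.get_objects(ctx, revision_number=5)
    assert len(new_objs) == 1  # fails as 1 != 0, like the traceback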

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694753

Title:
  QosPolicyDbObjectTestCase.test_update_objects failed because it
  couldn't find a matching object

Status in neutron:
  New

Bug description:
  2017-05-31 15:22:20.901842 | FAIL: 
neutron.tests.unit.objects.qos.test_policy.QosPolicyDbObjectTestCase.test_update_objects
  2017-05-31 15:22:20.901856 | tags: worker-4
  2017-05-31 15:22:20.901894 | 
--
  2017-05-31 15:22:20.901912 | Traceback (most recent call last):
  2017-05-31 15:22:20.901946 |   File "neutron/tests/base.py", line 115, in func
  2017-05-31 15:22:20.901964 | return f(self, *args, **kwargs)
  2017-05-31 15:22:20.901992 |   File 
"neutron/tests/unit/objects/test_base.py", line 1747, in test_update_objects
  2017-05-31 15:22:20.902013 | self.assertEqual(1, len(new_objs))
  2017-05-31 15:22:20.902066 |   File 
"/home/jenkins/workspace/neutron-coverage-ubuntu-xenial/.tox/cover/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2017-05-31 15:22:20.902093 | self.assertThat(observed, matcher, message)
  2017-05-31 15:22:20.902139 |   File 
"/home/jenkins/workspace/neutron-coverage-ubuntu-xenial/.tox/cover/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2017-05-31 15:22:20.902155 | raise mismatch_error
  2017-05-31 15:22:20.902175 | testtools.matchers._impl.MismatchError: 1 != 0

  http://logs.openstack.org/31/469231/2/check/neutron-coverage-ubuntu-
  xenial/98cd310/console.html

  Of all the filters used to match the object, the only one that could
  change in flight is revision_number. The others cannot change between
  updates and fetches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694524] [NEW] Neutron OVS agent fails to start when neutron-server is not available

2017-05-30 Thread Ihar Hrachyshka
Public bug reported:

2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-34115ed3-3043-4fcb-ba3f-ab0e4eb0e83c - - - - -] Agent main thread died of 
an exception
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2166, in main
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 180, in __init__
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.setup_rpc()
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 362, in setup_rpc
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.plugin_rpc = OVSPluginApi(topics.PLUGIN)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 182, in __init__
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.remote_resource_cache = create_cache_for_l2_agent()
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 174, in 
create_cache_for_l2_agent
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
rcache.bulk_flood_cache()
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/resource_cache.py", line 55, in 
bulk_flood_cache
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
for resource in puller.bulk_pull(context, rtype):
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 48, in wrapper
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return method(*args, **kwargs)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py", 
line 109, in bulk_pull
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
version=resource_type_cls.VERSION, filter_kwargs=filter_kwargs)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 174, in call
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
time.sleep(wait)
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.force_reraise()
2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 

[Yahoo-eng-team] [Bug 1693931] [NEW] functional test_next_port_closed test case failed with ProcessExecutionError when killing netcat

2017-05-26 Thread Ihar Hrachyshka
Public bug reported:

It's Newton.

http://logs.openstack.org/27/467427/1/check/gate-neutron-dsvm-
functional-ubuntu-xenial/2860749/testr_results.html.gz


Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 125, in cleanUp
return self._cleanups(raise_errors=raise_first)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 89, in __call__
reraise(error[0], error[1], error[2])
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 83, in __call__
cleanup(*args, **kwargs)
  File "neutron/tests/common/conn_testers.py", line 103, in cleanup
nc.stop_processes()
  File "neutron/tests/common/net_helpers.py", line 508, in stop_processes
proc.kill()
  File "neutron/tests/common/net_helpers.py", line 261, in kill
utils.execute(['kill', '-%d' % sig, pid], run_as_root=True)
  File "neutron/agent/linux/utils.py", line 148, in execute
raise ProcessExecutionError(msg, returncode=returncode)
neutron.agent.linux.utils.ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: 
; Stderr:

In test log, we see:

2017-05-24 01:18:15.766 12435 DEBUG neutron.agent.linux.utils 
[req-110582f4-e8e8-4df0-a516-38de7479450b - - - - -] Running command (rootwrap 
daemon): ['kill', '-9', '31781'] execute_rootwrap_daemon 
neutron/agent/linux/utils.py:105
2017-05-24 01:18:15.783 12435 ERROR neutron.agent.linux.utils 
[req-110582f4-e8e8-4df0-a516-38de7479450b - - - - -] Exit code: 1; Stdin: ; 
Stdout: ; Stderr: 

The PID is not mentioned in syslog.
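The cleanup could tolerate a process that already exited between spawn
and kill, which is what the missing PID suggests; a minimal sketch (the
helper name is hypothetical):

    from neutron.agent.linux import utils

    def kill_if_running(pid, sig=9):
        # Sketch only: 'kill' exits 1 when the PID no longer exists;
        # treat that as success for cleanup purposes.
        try:
            utils.execute(['kill', '-%d' % sig, pid], run_as_root=True)
        except utils.ProcessExecutionError:
            pass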

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: functional-tests gate-failure

** Tags added: gate-failure

** Tags added: functional-tests

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: High => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1693931

Title:
  functional test_next_port_closed test case failed with
  ProcessExecutionError when killing netcat

Status in neutron:
  Confirmed

Bug description:
  It's Newton.

  http://logs.openstack.org/27/467427/1/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/2860749/testr_results.html.gz

  
  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 125, in cleanUp
  return self._cleanups(raise_errors=raise_first)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 89, in __call__
  reraise(error[0], error[1], error[2])
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/callmany.py",
 line 83, in __call__
  cleanup(*args, **kwargs)
File "neutron/tests/common/conn_testers.py", line 103, in cleanup
  nc.stop_processes()
File "neutron/tests/common/net_helpers.py", line 508, in stop_processes
  proc.kill()
File "neutron/tests/common/net_helpers.py", line 261, in kill
  utils.execute(['kill', '-%d' % sig, pid], run_as_root=True)
File "neutron/agent/linux/utils.py", line 148, in execute
  raise ProcessExecutionError(msg, returncode=returncode)
  neutron.agent.linux.utils.ProcessExecutionError: Exit code: 1; Stdin: ; 
Stdout: ; Stderr:

  In test log, we see:

  2017-05-24 01:18:15.766 12435 DEBUG neutron.agent.linux.utils 
[req-110582f4-e8e8-4df0-a516-38de7479450b - - - - -] Running command (rootwrap 
daemon): ['kill', '-9', '31781'] execute_rootwrap_daemon 
neutron/agent/linux/utils.py:105
  2017-05-24 01:18:15.783 12435 ERROR neutron.agent.linux.utils 
[req-110582f4-e8e8-4df0-a516-38de7479450b - - - - -] Exit code: 1; Stdin: ; 
Stdout: ; Stderr: 

  The PID is not mentioned in syslog.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1693931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1693917] [NEW] test_user_account_lockout failed in gate because authN attempts took longer than usual

2017-05-26 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/99/460399/2/check/gate-tempest-dsvm-neutron-
full-ubuntu-xenial/f7eb334/logs/testr_results.html.gz

ft1.2: 
tempest.api.identity.v3.test_users.IdentityV3UsersTest.test_user_account_lockout[id-a7ad8bbf-2cff-4520-8c1d-96332e151658]_StringException:
 pythonlogging:'': {{{
2017-05-24 21:05:50,147 32293 INFO [tempest.lib.common.rest_client] Request 
(IdentityV3UsersTest:test_user_account_lockout): 201 POST 
https://15.184.66.148/identity/v3/auth/tokens
2017-05-24 21:05:50,147 32293 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'Content-Type': 'application/json'}
Body: 
Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '344', u'x-openstack-request-id': 
'req-11e47cfa-6b25-47d4-977a-94f3e6d95665', 'status': '201', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:50 GMT', 
u'x-subject-token': '', u'content-type': 'application/json'}
Body: {"token": {"issued_at": "2017-05-24T21:05:50.00Z", 
"audit_ids": ["GQR0RZcDSWC_bslZSUzpGg"], "methods": ["password"], "expires_at": 
"2017-05-24T22:05:50.00Z", "user": {"password_expires_at": null, "domain": 
{"id": "default", "name": "Default"}, "id": "415e3f0e215f44a586bdf62e7ea6e02d", 
"name": "tempest-IdentityV3UsersTest-343470382"}}}
2017-05-24 21:05:50,237 32293 INFO [tempest.lib.common.rest_client] Request 
(IdentityV3UsersTest:test_user_account_lockout): 401 POST 
https://15.184.66.148/identity/v3/auth/tokens
2017-05-24 21:05:50,238 32293 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'Content-Type': 'application/json'}
Body: 
Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '114', u'x-openstack-request-id': 
'req-0a45b9b8-4c7c-409c-9c8d-f6b2661c234f', 'status': '401', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:50 GMT', 
u'content-type': 'application/json', u'www-authenticate': 'Keystone 
uri="https://15.184.66.148/identity;'}
Body: {"error": {"message": "The request you have made requires 
authentication.", "code": 401, "title": "Unauthorized"}}
2017-05-24 21:05:54,909 32293 INFO [tempest.lib.common.rest_client] Request 
(IdentityV3UsersTest:test_user_account_lockout): 401 POST 
https://15.184.66.148/identity/v3/auth/tokens
2017-05-24 21:05:54,910 32293 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'Content-Type': 'application/json'}
Body: 
Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '114', u'x-openstack-request-id': 
'req-3dbd065f-826b-497d-86bc-2bc78a0de997', 'status': '401', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:50 GMT', 
u'content-type': 'application/json', u'www-authenticate': 'Keystone 
uri="https://15.184.66.148/identity;'}
Body: {"error": {"message": "The request you have made requires 
authentication.", "code": 401, "title": "Unauthorized"}}
2017-05-24 21:05:55,106 32293 INFO [tempest.lib.common.rest_client] Request 
(IdentityV3UsersTest:test_user_account_lockout): 201 POST 
https://15.184.66.148/identity/v3/auth/tokens
2017-05-24 21:05:55,106 32293 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'Content-Type': 'application/json'}
Body: 
Response - Headers: {u'vary': 'X-Auth-Token', 'content-location': 
'https://15.184.66.148/identity/v3/auth/tokens', u'connection': 'close', 
u'content-length': '344', u'x-openstack-request-id': 
'req-1d367c81-2ffa-4812-904a-16be33d12fc0', 'status': '201', u'server': 
'Apache/2.4.18 (Ubuntu)', u'date': 'Wed, 24 May 2017 21:05:54 GMT', 
u'x-subject-token': '', u'content-type': 'application/json'}
Body: {"token": {"issued_at": "2017-05-24T21:05:55.00Z", 
"audit_ids": ["qlWnVS-MShm4hcBujHTL1g"], "methods": ["password"], "expires_at": 
"2017-05-24T22:05:55.00Z", "user": {"password_expires_at": null, "domain": 
{"id": "default", "name": "Default"}, "id": "415e3f0e215f44a586bdf62e7ea6e02d", 
"name": "tempest-IdentityV3UsersTest-343470382"}}}
}}}

Traceback (most recent call last):
  File "tempest/api/identity/v3/test_users.py", line 154, in 
test_user_account_lockout
password=password)
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: > returned {u'token': 

[Yahoo-eng-team] [Bug 1577488] Re: [RFE]"Fast exit" for compute node egress flows when using DVR

2017-05-25 Thread Ihar Hrachyshka
We landed a revert, so I am reopening the bug.

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577488

Title:
  [RFE]"Fast exit" for compute node egress flows when using DVR

Status in neutron:
  Confirmed

Bug description:
  In its current state, distributed north-south flows with DVR can only
  be achieved when a floating IP is bound to a fixed IP. Without a
  floating IP associated, the north-south flows are steered through the
  centralized SNAT node, even if you are directly routing the tenant
  network without any SNAT. When DVR is combined with either BGP or IPv6
  proxy neighbor discovery, it becomes possible to route traffic
  directly to a fixed IP by advertising the FIP gateway port on a
  compute as the next-hop.  For packets egressing the compute node, we
  need the ability to bypass re-direction of packets to the central SNAT
  node in cases where no floating IP is associated with a fixed IP. By
  enabling this data flow on egress from a compute node, it leaves the
  operator with the option of not running any SNAT nodes. Distributed
  SNAT is not a consideration as the targeted use cases involve
  scenarios where the operator does not want to use any SNAT.

  It is important to note that the use cases this would support are use
  cases where the operator has no need for SNAT. In the scenarios that
  would be supported by this RFE, the operator intends to run a routing
  protocol or IPv6 proxy neighbor discovery to directly route the fixed
  IP's of their tenants. It is also important to note that this RFE does
  not specify what technology the operator would use for routing their
  north-south DVR flows. The intent is simply to enable operators who
  have the infrastructure in place to handle north-south flows in a
  distributed fashion for their tenants.

  To enable this functionality, we have the following options:

  1. The semantics surrounding the "enable_snat" flag when set to
  "False" on a distributed router could use some refinement. We could
  use this flag to enable SNAT node bypass (fast-exit). This approach
  has the benefit of cleaning up some semantics that seem loosely
  defined, and allows us to piggyback on an existing attribute without
  extending the model. The drawback is that this field is exposed to
  tenants who most likely are not aware of how their network traffic is
  routed by the provider network. Tenants probably don't need to be made
  aware that they are receiving "fast exit" treatment through the API, and it may
  not make sense to place the burden on them to set this flag
  appropriately.

  2. Add a new L3 agent mode called "dvr_fast_exit". When the L3 agent
  is run in this mode, all router instances hosted on an L3 agent will
  send egress traffic directly out through the FIP namespace and out to
  the gateway, completely disabling SNAT support on all routers hosted
  on the agent. This approach involves a simple change to skip
  programming the "steal" rule that sends traffic to the SNAT node
  when run in this mode. This is likely the least invasive change, but
  also has some drawbacks in that upgrading to using this flag requires
  an agent restart and all agents should be run in this mode. This
  approach would be well suited to green-field deployments, but doesn't
  work well with brown-field deployments.

  3. There could be a third option I haven't considered yet. It could be
  hashed out in a spec.

  In addition to the work discussed above, we need to be able to
  instantiate the FIP namespace and gateway port immediately when a
  router gateway is created instead of waiting for the first floating IP
  association on a node.

  Related WIP patches
  - https://review.openstack.org/#/c/297468/
  - https://review.openstack.org/#/c/283757/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1577488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1693539] [NEW] Logstash is filled with server ERROR messages in _delete_interface_route_in_fip_ns

2017-05-25 Thread Ihar Hrachyshka
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in
force_reraise
May 24 16:56:24.236969 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info 
six.reraise(self.type_, self.value, self.tb)
May 24 16:56:24.237046 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 703, in 
_run_as_root_detect_device_not_found
May 24 16:56:24.237465 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info return 
self._as_root(*args, **kwargs)
May 24 16:56:24.237549 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 367, in _as_root
May 24 16:56:24.237623 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
May 24 16:56:24.237704 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 100, in _as_root
May 24 16:56:24.237785 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
May 24 16:56:24.237871 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 109, in _execute
May 24 16:56:24.237957 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info 
log_fail_as_error=log_fail_as_error)
May 24 16:56:24.238037 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 151, in execute
May 24 16:56:24.238218 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info raise 
ProcessExecutionError(msg, returncode=returncode)
May 24 16:56:24.238306 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info 
ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK 
answers: No such process

** Affects: neutron
 Importance: Critical
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: gate-failure l3-dvr-backlog

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1693539

Title:
  Logstash is filled with server ERROR messages in
  _delete_interface_route_in_fip_ns

Status in neutron:
  In Progress

Bug description:
  Example: http://logs.openstack.org/02/466902/8/check/gate-tempest-
  dsvm-neutron-dvr-ubuntu-
  xenial/4fdfddd/logs/screen-q-l3.txt.gz?level=INFO (be ware, it's
  huge!)

  May 24 16:56:24.233922 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: ; Stdout: ; Stderr: RTNETLINK answers: No such process
  May 24 16:56:24.235058 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info [-] Exit code: 2; 
Stdin: ; Stdout: ; Stderr: RTNETLINK answers: No such process
  May 24 16:56:24.235237 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info Traceback (most 
recent call last):
  May 24 16:56:24.235314 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 183, in call
  May 24 16:56:24.235388 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  May 24 16:56:24.235460 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line , in process
  May 24 16:56:24.235548 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info 
self._process_internal_ports()
  May 24 16:56:24.235631 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron.agent.l3.router_info   File 
"/opt/stack/new/neutron/neutron/agent/l3/router_info.py", line 545, in 
_process_internal_ports
  May 24 16:56:24.235708 ubuntu-xenial-osic-cloud1-s3500-8971260 
neutron-l3-agent[20130]: ERROR neutron

[Yahoo-eng-team] [Bug 1692984] [NEW] VM boot fails with "TCG doesn't support requested feature: CPUID.01H:ECX.vmx"

2017-05-23 Thread Ihar Hrachyshka
Public bug reported:

Example: http://logs.openstack.org/83/466983/1/check/gate-tempest-dsvm-
neutron-linuxbridge-ubuntu-xenial/bd69e6b/console.html

2017-05-23 03:28:58.049054 | Captured traceback-1:
2017-05-23 03:28:58.049065 | ~
2017-05-23 03:28:58.049079 | Traceback (most recent call last):
2017-05-23 03:28:58.049103 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 58, in tearDown
2017-05-23 03:28:58.049118 | self.server_check_teardown()
2017-05-23 03:28:58.049141 |   File "tempest/api/compute/base.py", line 
168, in server_check_teardown
2017-05-23 03:28:58.049153 | cls.server_id, 'ACTIVE')
2017-05-23 03:28:58.049187 |   File "tempest/common/waiters.py", line 76, 
in wait_for_server_status
2017-05-23 03:28:58.049199 | server_id=server_id)
2017-05-23 03:28:58.049232 | tempest.exceptions.BuildErrorException: Server 
c5817b9a-a6bc-4305-9b69-0a83da64a58e failed to build and is in ERROR status
2017-05-23 03:28:58.049409 | Details: {u'created': u'2017-05-23T03:04:30Z', 
u'message': u"internal error: process exited while connecting to monitor: 
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 
5]\n2017-05-23T03:04:29.489168Z qemu-system-x86_64: terminating on signal 15 
from pid 30399 (/usr/sbin/libvirtd)", u'code': 500}

In nova-compute log, we see:

May 23 03:04:30.104411 ubuntu-xenial-osic-cloud1-s3500-8933127 
nova-compute[22285]: ERROR oslo_messaging.rpc.server libvirtError: internal 
error: process exited while connecting to monitor: warning: TCG doesn't support 
requested feature: CPUID.01H:ECX.vmx [bit 5]
May 23 03:04:30.104503 ubuntu-xenial-osic-cloud1-s3500-8933127 
nova-compute[22285]: ERROR oslo_messaging.rpc.server 
2017-05-23T03:04:29.489168Z qemu-system-x86_64: terminating on signal 15 from 
pid 30399 (/usr/sbin/libvirtd)

Nothing interesting in syslog.

In libvirt log:

2017-05-23 03:04:29.489+: 30399: debug : qemuMonitorJSONIOProcessLine:191 : 
Line [{"QMP": {"version": {"qemu": {"micro": 0, "minor": 8, "major": 2}, 
"package": "(Debian 1:2.8+dfsg-3ubuntu2~cloud0)"}, "capabilities": []}}]
2017-05-23 03:04:29.489+: 30399: debug : qemuMonitorJSONIOProcess:260 : 
Total used 140 bytes out of 140 available in buffer
2017-05-23 03:04:29.489+: 30399: info : qemuMonitorIOWrite:534 : 
QEMU_MONITOR_IO_WRITE: mon=0x7f57f400d170 
buf={"execute":"qmp_capabilities","id":"libvirt-1"}
 len=49 ret=49 errno=0
2017-05-23 03:04:29.506+: 30399: debug : virNetlinkEventCallback:641 : 
dispatching to max 0 clients, called from event watch 7
2017-05-23 03:04:29.506+: 30399: debug : virNetlinkEventCallback:654 : 
event not handled.
2017-05-23 03:04:29.507+: 30399: debug : virNetlinkEventCallback:641 : 
dispatching to max 0 clients, called from event watch 7
2017-05-23 03:04:29.507+: 30399: debug : virNetlinkEventCallback:654 : 
event not handled.
2017-05-23 03:04:29.507+: 30399: debug : virNetlinkEventCallback:641 : 
dispatching to max 0 clients, called from event watch 7
2017-05-23 03:04:29.507+: 30399: debug : virNetlinkEventCallback:654 : 
event not handled.
2017-05-23 03:04:29.530+: 30399: error : qemuMonitorIORead:586 : Unable to 
read from monitor: Connection reset by peer
2017-05-23 03:04:29.530+: 30399: debug : qemuDomainLogContextRead:4166 : 
Context read 0x7f57f4021320 manager=0x7f57f4003a60 inode=5758332 pos=4109
2017-05-23 03:04:29.530+: 30399: error : qemuProcessReportLogError:1802 : 
internal error: qemu unexpectedly closed the monitor: warning: TCG doesn't 
support requested feature: CPUID.01H:ECX.vmx [bit 5]
2017-05-23T03:04:29.489168Z qemu-system-x86_64: terminating on signal 15 from 
pid 30399 (/usr/sbin/libvirtd)
2017-05-23 03:04:29.530+: 30399: debug : qemuMonitorIO:743 : Error on 
monitor internal error: qemu unexpectedly closed the monitor: warning: TCG 
doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
2017-05-23T03:04:29.489168Z qemu-system-x86_64: terminating on signal 15 from 
pid 30399 (/usr/sbin/libvirtd)
2017-05-23 03:04:29.530+: 30399: debug : qemuMonitorIO:774 : Triggering 
error callback
2017-05-23 03:04:29.530+: 30399: debug : qemuProcessHandleMonitorError:337 
: Received error on 0x7f57dc00aca0 'instance-0054'
2017-05-23 03:04:29.530+: 30401: debug : qemuMonitorSend:1021 : Send 
command resulted in error internal error: qemu unexpectedly closed the monitor: 
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
2017-05-23T03:04:29.489168Z qemu-system-x86_64: terminating on signal 15 from 
pid 30399 (/usr/sbin/libvirtd)
2017-05-23 03:04:29.530+: 30401: debug : qemuMonitorJSONCommandWithFd:301 : 
Receive command reply ret=-1 rxObject=(nil)
2017-05-23 03:04:29.685+: 31667: debug : qemuDomainObjBeginJobInternal:3268 
: Starting job: destroy (vm=0x7f57dc00aca0 name=instance-0054, current 
job=async nested async=start)
2017-05-23 03:04:29.685+: 31667: debug : 

[Yahoo-eng-team] [Bug 1691805] [NEW] functional pecan tests fail with old Routes (< 2.3.0)

2017-05-18 Thread Ihar Hrachyshka
Public bug reported:

Several neutron functional tests are failing when Routes is <2.3.0.
Specifically the pecan_wsgi.test_controllers.

Tests:
neutron.tests.functional.pecan_wsgi.test_controllers.TestRouterController.test_methods
neutron.tests.functional.pecan_wsgi.test_controllers.TestV2Controller.test_get_no_trailing_slash
neutron.tests.functional.pecan_wsgi.test_controllers.TestResourceController.test_methods

Failure:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_middleware/catch_errors.py", line 
40, in __call__
response = req.get_response(self.application)
  File "/usr/lib/python2.7/site-packages/webob/request.py", line 1316, in send
application, catch_exc_info=False)
  File "/usr/lib/python2.7/site-packages/webob/request.py", line 1280, in 
call_application
app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/site-packages/webob/dec.py", line 145, in __call__
return resp(environ, start_response)
  File "/usr/lib/python2.7/site-packages/routes/middleware.py", line 80, in 
__call__
config.environ = environ
  File "/usr/lib/python2.7/site-packages/routes/__init__.py", line 22, in 
__setattr__
self.load_wsgi_environ(value)
  File "/usr/lib/python2.7/site-packages/routes/__init__.py", line 51, in 
load_wsgi_environ
result = mapper.routematch(path)
  File "/usr/lib/python2.7/site-packages/routes/mapper.py", line 688, in 
routematch
raise RoutesException('URL or environ must be provided')
RoutesException: URL or environ must be provided
}}}

Traceback (most recent call last):
  File "neutron/tests/base.py", line 115, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/pecan_wsgi/test_controllers.py", line 110, in 
test_get_no_trailing_slash
self.assertEqual(response.status_int, 404)
  File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 350, in 
assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/lib/python2.7/site-packages/testtools/testcase.py", line 435, in 
assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 500 != 404
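Until the minimum requirement is bumped, the affected tests could guard
on the installed Routes version; a minimal sketch of such a skip, as it
might appear inside a test method (illustrative only):

    import pkg_resources

    # Sketch only: skip pecan_wsgi tests on Routes releases older than
    # 2.3.0, where routes.middleware trips over the request environ.
    routes_version = pkg_resources.get_distribution('Routes').parsed_version
    if routes_version < pkg_resources.parse_version('2.3.0'):
        self.skipTest('Routes < 2.3.0 breaks pecan_wsgi controllers')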

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: functional-tests

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: New => In Progress

** Tags added: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691805

Title:
  functional pecan tests fail with old Routes (< 2.3.0)

Status in neutron:
  In Progress

Bug description:
  Several neutron functional tests are failing when Routes is <2.3.0.
  Specifically the pecan_wsgi.test_controllers.

  Tests:
  
neutron.tests.functional.pecan_wsgi.test_controllers.TestRouterController.test_methods
  
neutron.tests.functional.pecan_wsgi.test_controllers.TestV2Controller.test_get_no_trailing_slash
  
neutron.tests.functional.pecan_wsgi.test_controllers.TestResourceController.test_methods

  Failure:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_middleware/catch_errors.py", 
line 40, in __call__
  response = req.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1316, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1280, in 
call_application
  app_iter = application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 145, in __call__
  return resp(environ, start_response)
File "/usr/lib/python2.7/site-packages/routes/middleware.py", line 80, in 
__call__
  config.environ = environ
File "/usr/lib/python2.7/site-packages/routes/__init__.py", line 22, in 
__setattr__
  self.load_wsgi_environ(value)
File "/usr/lib/python2.7/site-packages/routes/__init__.py", line 51, in 
load_wsgi_environ
  result = mapper.routematch(path)
File "/usr/lib/python2.7/site-packages/routes/mapper.py", line 688, in 
routematch
  raise RoutesException('URL or environ must be provided')
  RoutesException: URL or environ must be provided
  }}}

  Traceback (most recent call last):
File "neutron/tests/base.py", line 115, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/pecan_wsgi/test_controllers.py", line 110, 
in test_get_no_trailing_slash
  self.assertEqual(response.status_int, 404)
File "/usr/lib/python2.7/site-packages/testtools/testcase.py

[Yahoo-eng-team] [Bug 1691248] [NEW] test_dhcpv6_64_subnets failed with "u'IpAddressAlreadyAllocated', u'message': u'IP address 2003::XXX already allocated in subnet"

2017-05-16 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/20/464020/5/check/gate-tempest-dsvm-neutron-
linuxbridge-ubuntu-xenial/a4d35a8/logs/testr_results.html.gz

Traceback (most recent call last):
  File "tempest/api/network/test_dhcp_ipv6.py", line 230, in 
test_dhcpv6_64_subnets
subnet_slaac = self.create_subnet(self.network, **kwargs)
  File "tempest/api/network/base.py", line 180, in create_subnet
**kwargs)
  File "tempest/lib/services/network/subnets_client.py", line 27, in 
create_subnet
return self.create_resource(uri, post_data)
  File "tempest/lib/services/network/base.py", line 60, in create_resource
resp, body = self.post(req_uri, req_post_data)
  File "tempest/lib/common/rest_client.py", line 270, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 659, in request
self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 780, in _error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: An object with that identifier already exists
Details: {u'detail': u'', u'type': u'IpAddressAlreadyAllocated', u'message': 
u'IP address 2003::f816:3eff:fe20:97dd already allocated in subnet 
9fd8d6ef-eb85-468c-8453-6c532c61a433'}

The request that failed is req-e628a3d3-3c24-4ced-81b0-ec28993ef6f8.

In neutron-server log:

May 16 17:43:04.748624 ubuntu-xenial-osic-cloud1-s3700-8840984 
neutron-server[16388]: DEBUG neutron.db.api 
[req-e628a3d3-3c24-4ced-81b0-ec28993ef6f8 tempest-NetworksTestDHCPv6-363802631 
tempest-NetworksTestDHCPv6-363802631] Retry wrapper got retriable exception: 
UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 
were matched. {{(pid=16472) wrapped 
/opt/stack/new/neutron/neutron/db/api.py:129}}
May 16 17:43:04.749082 ubuntu-xenial-osic-cloud1-s3700-8840984 
neutron-server[16388]: DEBUG oslo_db.api 
[req-e628a3d3-3c24-4ced-81b0-ec28993ef6f8 tempest-NetworksTestDHCPv6-363802631 
tempest-NetworksTestDHCPv6-363802631] Performing DB retry for function 
neutron.db.db_base_plugin_v2._create_subnet_postcommit {{(pid=16472) wrapper 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py:152}}

May 16 17:43:04.998299 ubuntu-xenial-osic-cloud1-s3700-8840984 
neutron-server[16388]: DEBUG neutron.db.api 
[req-e628a3d3-3c24-4ced-81b0-ec28993ef6f8 tempest-NetworksTestDHCPv6-363802631 
tempest-NetworksTestDHCPv6-363802631] Retry wrapper got retriable exception: 
Failed to create a duplicate IpamAllocation: for attribute(s) ['PRIMARY'] with 
value(s) 2003::f816:3eff:fe20:97dd-a561d8cf-6822-4fb7-8b33-499041cefac4 
{{(pid=16472) wrapped /opt/stack/new/neutron/neutron/db/api.py:129}}
May 16 17:43:04.998410 ubuntu-xenial-osic-cloud1-s3700-8840984 
neutron-server[16388]: DEBUG oslo_db.api 
[req-e628a3d3-3c24-4ced-81b0-ec28993ef6f8 tempest-NetworksTestDHCPv6-363802631 
tempest-NetworksTestDHCPv6-363802631] Performing DB retry for function 
neutron.db.db_base_plugin_v2._create_subnet_postcommit {{(pid=16472) wrapper 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py:152}}

These look like different reasons for retries. Is this some unfortunate
scenario where a mix of different errors depleted the retry attempts?
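A toy model of how a single shared retry budget can be depleted by a
mix of different retriable errors (illustrative only; the real wrapper
lives in neutron.db.api on top of oslo_db):

    class RetriableError(Exception):
        """Hypothetical stand-in for oslo_db's retriable exceptions."""

    MAX_RETRIES = 10

    def retry_wrapper(func):
        # Illustrative only: one attempt counter is shared across all
        # retriable failures, so alternating duplicate-entry and
        # stale-update errors exhaust the budget just as fast as
        # repeats of a single error.
        def wrapped(*args, **kwargs):
            for attempt in range(MAX_RETRIES):
                try:
                    return func(*args, **kwargs)
                except RetriableError:
                    if attempt == MAX_RETRIES - 1:
                        raise
        return wrapped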

** Affects: neutron
 Importance: High
 Status: New


** Tags: api db gate-failure l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => High

** Tags added: api db l3-ipam-dhcp

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691248

Title:
  test_dhcpv6_64_subnets failed with "u'IpAddressAlreadyAllocated',
  u'message': u'IP address 2003::XXX already allocated in subnet"

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/20/464020/5/check/gate-tempest-dsvm-neutron-
  linuxbridge-ubuntu-xenial/a4d35a8/logs/testr_results.html.gz

  Traceback (most recent call last):
File "tempest/api/network/test_dhcp_ipv6.py", line 230, in 
test_dhcpv6_64_subnets
  subnet_slaac = self.create_subnet(self.network, **kwargs)
File "tempest/api/network/base.py", line 180, in create_subnet
  **kwargs)
File "tempest/lib/services/network/subnets_client.py", line 27, in 
create_subnet
  return self.create_resource(uri, post_data)
File "tempest/lib/services/network/base.py", line 60, in create_resource
  resp, body = self.post(req_uri, req_post_data)
File "tempest/lib/common/rest_client.py", line 270, in post
  return self.request('POST', url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 659, in request
  self._error_checker(resp, resp_body)
File "tempest/lib/common/rest_client.py", line 780, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'detail': u'', u'type': 

[Yahoo-eng-team] [Bug 1689482] Re: coverage job failure

2017-05-15 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1689482

Title:
  coverage job failure

Status in neutron:
  Fix Released

Bug description:
  neutron-coverage-ubuntu-xenial is failing with timeout,
  probably after the recent coverage update. (4.3.4 -> 4.4)

  succeed: 
http://logs.openstack.org/38/463238/1/gate/neutron-coverage-ubuntu-xenial/f92036d/
  failed: 
http://logs.openstack.org/75/437775/9/check/neutron-coverage-ubuntu-xenial/716a43a/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1689482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690165] [NEW] Gratuitous ARP updates sent by Neutron L3 agent may be ignored by Linux peers

2017-05-11 Thread Ihar Hrachyshka
Public bug reported:

An unfortunate scenario in the Linux kernel, explained in
https://patchwork.ozlabs.org/patch/760372/, may result in no gARP being
honoured by Linux network peers. To work around the kernel bug, we may
want to spread updates out more, so as not to hit the default kernel
locktime, which is 1s.
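A minimal sketch of the spreading idea (illustrative only; the agent
really shells out to arping, and the interface and address values here
are placeholders):

    import subprocess
    import time

    def send_garps(iface, address, count=3, interval=2):
        # Sketch only: keep announcements more than the kernel's
        # default arp locktime (1s) apart so peers honour each update.
        for _ in range(count):
            subprocess.check_call(
                ['arping', '-A', '-c', '1', '-I', iface, address])
            time.sleep(interval)  # > 1s locktime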

** Affects: neutron
 Importance: High
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: l3-ha l3-ipam-dhcp

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Importance: Undecided => High

** Tags added: l3-ha l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690165

Title:
  Gratuitous ARP updates sent by Neutron L3 agent may be ignored by
  Linux peers

Status in neutron:
  In Progress

Bug description:
  An unfortunate scenario in the Linux kernel, explained in
  https://patchwork.ozlabs.org/patch/760372/, may result in no gARP being
  honoured by Linux network peers. To work around the kernel bug, we may
  want to spread updates out more, so as not to hit the default kernel
  locktime, which is 1s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680183] Re: neutron-keepalived-state-change fails with "AssertionError: do not call blocking functions from the mainloop"

2017-04-28 Thread Ihar Hrachyshka
We still hit the issue, though the trace is a bit different now:

http://logs.openstack.org/38/284738/69/check/gate-neutron-dsvm-
fullstack-ubuntu-xenial/2e022c5/logs/syslog.txt.gz

Apr 28 17:24:20 ubuntu-xenial-rax-ord-8648308 
neutron-keepalived-state-change[21615]: 2017-04-28 17:24:20.423 21615 CRITICAL 
neutron [-] AssertionError: do not call blocking functions from the mainloop

  2017-04-28 17:24:20.423 21615 ERROR neutron Traceback (most recent call 
last):

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/bin/neutron-keepalived-state-change",
 line 10, in 

  2017-04-28 17:24:20.423 21615 ERROR neutron sys.exit(main())

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/cmd/keepalived_state_change.py", line 19, in 
main

  2017-04-28 17:24:20.423 21615 ERROR neutron 
keepalived_state_change.main()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/l3/keepalived_state_change.py", line 156, 
in main

  2017-04-28 17:24:20.423 21615 ERROR neutron 
cfg.CONF.monitor_cidr).start()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/linux/daemon.py", line 253, in start

  2017-04-28 17:24:20.423 21615 ERROR neutron self.run()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/l3/keepalived_state_change.py", line 69, 
in run

  2017-04-28 17:24:20.423 21615 ERROR neutron for iterable in 
self.monitor:

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/linux/async_process.py", line 261, in 
_iter_queue

  2017-04-28 17:24:20.423 21615 ERROR neutron yield 
queue.get(block=block)

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/queue.py",
 line 313, in get

  2017-04-28 17:24:20.423 21615 ERROR neutron return waiter.wait()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/queue.py",
 line 141, in wait

  2017-04-28 17:24:20.423 21615 ERROR neutron return get_hub().switch()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch

  2017-04-28 17:24:20.423 21615 ERROR neutron return 
self.greenlet.switch()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run

  2017-04-28 17:24:20.423 21615 ERROR neutron self.wait(sleep_time)

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait

  2017-04-28 17:24:20.423 21615 ERROR neutron presult = 
self.do_poll(seconds)
 

[Yahoo-eng-team] [Bug 1687086] [NEW] nova fails to rescue an instance because ramdisk file doesn't exist

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/67/457467/4/gate/gate-tempest-dsvm-neutron-
dvr-ubuntu-
xenial/4d6be0a/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-04-20_16_18_59_065

2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager 
[req-26543eff-dd70-4526-bec6-fc977ea734dc 
tempest-ServerRescueNegativeTestJSON-295821689 
tempest-ServerRescueNegativeTestJSON-295821689] [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] Error trying to Rescue Instance
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] Traceback (most recent call last):
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3370, in rescue_instance
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] rescue_image_meta, admin_password)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2636, in rescue
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] self._create_domain(xml, 
post_xml_callback=gen_confdrive)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5002, in _create_domain
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] guest.launch(pause=pause)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 145, in launch
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] self._encoded_xml, errors='ignore')
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] self.force_reraise()
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] six.reraise(self.type_, self.value, 
self.tb)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 140, in launch
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] return 
self._domain.createWithFlags(flags)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] rv = execute(f, *args, **kwargs)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] six.reraise(c, e, tb)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] rv = meth(*args, **kwargs)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1065, in 
createWithFlags
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] libvirtError: unable to stat: 
/opt/stack/data/nova/instances/6e63ccaa-f174-4371-a169-d5303db821eb/ramdisk.rescue:
 No such file or directory
2017-04-20 

[Yahoo-eng-team] [Bug 1687074] [NEW] Sometimes ovsdb fails with "tcp:127.0.0.1:6640: error parsing stream"

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

Example (Ocata): http://logs.openstack.org/67/460867/1/check/gate-
neutron-dsvm-functional-ubuntu-xenial/382d800/logs/dsvm-functional-
logs/neutron.tests.functional.agent.ovsdb.test_impl_idl.ImplIdlTestCase.test_post_commit_vswitchd_completed_no_failures.txt.gz

2017-04-28 07:59:01.430 11929 WARNING neutron.agent.ovsdb.native.vlog [-] 
tcp:127.0.0.1:6640: error parsing stream: line 0, column 1, byte 1: syntax 
error at beginning of input
2017-04-28 07:59:01.431 11929 DEBUG neutron.agent.ovsdb.impl_idl [-] Running 
txn command(idx=0): AddBridgeCommand(name=test-brc6de03bf, may_exist=True, 
datapath_type=None) do_commit neutron/agent/ovsdb/impl_idl.py:100
2017-04-28 07:59:01.433 11929 DEBUG neutron.agent.ovsdb.impl_idl [-] OVSDB 
transaction returned TRY_AGAIN, retrying do_commit 
neutron/agent/ovsdb/impl_idl.py:111
2017-04-28 07:59:01.433 11929 WARNING neutron.agent.ovsdb.native.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Protocol error)

If we look at logstash here:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22tcp%3A127.0.0.1%3A6640%3A%20error%20parsing%20stream%5C%22

We see some interesting data points: sometimes it actually logs what's
in the buffer, and I see instances of:

2017-04-27 19:02:51.755
[neutron.tests.functional.tests.common.exclusive_resources.test_port.TestExclusivePort.test_port]
3300 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 1355, byte 1355: invalid keyword 'id'

2017-04-27 14:22:02.294
[neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_get_events_native_]
3433 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'new'

2017-04-27 04:46:17.667
[neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_bad_address_allocation]
4136 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'ace'

2017-04-26 18:04:59.110
[neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_correct_protection_allowed_address_pairs]
3477 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'err'

2017-04-25 19:00:01.452
[neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_agent_mtu_set_on_interface_driver]
4251 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'set'

2017-04-25 16:34:11.355
[neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_fails_incorrect_mac_protection]
3332 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 5, byte 5: invalid keyword 'tatus'

2017-04-25 03:28:25.858
[neutron.tests.functional.agent.ovsdb.test_impl_idl.ImplIdlTestCase.test_post_commit_vswitchd_completed_no_failures]
4112 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'set'

2017-04-24 21:59:39.094
[neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_protection_port_security_disabled]
3682 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 5, byte 5: invalid keyword 'rsion'

Terry says it doesn't resemble the protocol, but rather some random
garbage, potentially from some arbitrary place in memory (SCARY!)
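
For what it's worth, the "invalid keyword" wording is exactly what the
ovs JSON tokenizer emits when it is handed a buffer that starts
mid-message. A minimal sketch against the openvswitch Python bindings
(assuming the Parser semantics of ovs/json.py; this says nothing about
where the corruption comes from):

  from ovs import json as ovs_json

  # feed bytes that begin mid-reply, e.g. a '..."status": ...' message
  # with its leading bytes lost
  parser = ovs_json.Parser()
  parser.feed('tatus": {}}')
  print(parser.finish())
  # error string shaped like:
  #   line 0, column 5, byte 5: invalid keyword 'tatus'

The 'tatus'/'rsion' samples above fit that mid-message pattern.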

** Affects: neutron
 Importance: High
 Status: Confirmed

** Affects: ovsdbapp
 Importance: Undecided
 Status: Confirmed


** Tags: fullstack functional-tests gate-failure ovs

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Also affects: ovsdbapp
   Importance: Undecided
   Status: New

** Changed in: ovsdbapp
   Status: New => Confirmed

** Tags added: fullstack functional-tests gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687074

Title:
  Sometimes ovsdb fails with "tcp:127.0.0.1:6640: error parsing stream"

Status in neutron:
  Confirmed
Status in ovsdbapp:
  Confirmed

Bug description:
  Example (Ocata): http://logs.openstack.org/67/460867/1/check/gate-
  neutron-dsvm-functional-ubuntu-xenial/382d800/logs/dsvm-functional-
  
logs/neutron.tests.functional.agent.ovsdb.test_impl_idl.ImplIdlTestCase.test_post_commit_vswitchd_completed_no_failures.txt.gz

  2017-04-28 07:59:01.430 11929 WARNING neutron.agent.ovsdb.native.vlog [-] 
tcp:127.0.0.1:6640: error parsing stream: line 0, column 1, byte 1: syntax 
error at beginning of input
  2017-04-28 07:59:01.431 11929 DEBUG neutron.agent.ovsdb.impl_idl [-] Running 

[Yahoo-eng-team] [Bug 1627106] Re: TimeoutException while executing tests adding bridge using OVSDB native

2017-04-28 Thread Ihar Hrachyshka
This still happens. At least once in Ocata functional job:
http://logs.openstack.org/67/460867/1/check/gate-neutron-dsvm-
functional-ubuntu-xenial/382d800/console.html

Also, logstash shows 45 hits overall for message:"exceeded timeout 10
seconds", almost all of them in fullstack now.

** Changed in: neutron
   Status: Fix Released => Confirmed

** Changed in: neutron
Milestone: pike-1 => pike-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627106

Title:
  TimeoutException while executing tests adding bridge using OVSDB
  native

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/91/366291/12/check/gate-neutron-dsvm-
  functional-ubuntu-trusty/a23c816/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 62, in 
test_post_commit_vswitchd_completed_no_failures
  self._add_br_and_test()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 56, in 
_add_br_and_test
  self._add_br()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 52, in 
_add_br
  tr.add(ovsdb.add_br(self.brname))
File "neutron/agent/ovsdb/api.py", line 76, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 72, in commit
  'timeout': self.timeout})
  neutron.agent.ovsdb.api.TimeoutException: Commands 
[AddBridgeCommand(name=test-br6925d8e2, datapath_type=None, may_exist=True)] 
exceeded timeout 10 seconds

  
  I suspect this one may hit us because we finally made the timeout work
  with Icd745514adc14730b9179fa7a6dd5c115f5e87a5.
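
  For context, that timeout enforcement boils down to reading the
  transaction result with a deadline; a simplified sketch of the
  mechanic (assumed shape, not the exact impl_idl code):

    import Queue  # Python 2, matching the dist-packages paths above

    def commit(results, commands, timeout=10):
        try:
            return results.get(timeout=timeout)
        except Queue.Empty:
            raise RuntimeError('Commands %s exceeded timeout %d seconds'
                               % (commands, timeout))

  Any transaction whose result is not delivered within 10 seconds now
  surfaces as the TimeoutException above instead of hanging forever.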

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687065] [NEW] functional tests are filled with POLLIN messages from ovs even when it's not using ovs itself

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

Example: http://logs.openstack.org/27/451527/5/check/gate-neutron-dsvm-
functional-ubuntu-xenial/da67f5f/logs/dsvm-functional-
logs/neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_protection_update.txt.gz

This test has nothing to do with ovs, but it's still trashed with
POLLIN messages, probably because some previous test case in the worker
initialized ovslib and got a logging thread spun up.

Ideally, we would not have the thread running in non-ovs scope, meaning
we would need some way to kill/disable it when not needed. Maybe a
fixture in ovsdbapp (or the ovs lib itself) that restores the state to
pre-init could help. Then we could use the fixture in our base test
classes.
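
Roughly what such a fixture could look like (all names are
hypothetical; neither ovsdbapp nor the ovs lib exposes a teardown hook
today, which is exactly the ask here):

  import fixtures

  class OvsVlogThreadFixture(fixtures.Fixture):
      """Restore ovs logging state to pre-init on cleanup."""

      def _setUp(self):
          self.addCleanup(self._reset)

      def _reset(self):
          # placeholder: would call whatever stop/reset entry point
          # ovsdbapp (or the ovs lib) grows for its logging thread
          pass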

** Affects: neutron
 Importance: Low
 Status: Confirmed

** Affects: ovsdbapp
 Importance: Undecided
 Status: Confirmed


** Tags: functional-tests usability

** Changed in: ovsdbapp
   Status: New => Confirmed

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: functional-tests usability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687065

Title:
  functional tests are filled with POLLIN messages from ovs even when
  it's not using ovs itself

Status in neutron:
  Confirmed
Status in ovsdbapp:
  Confirmed

Bug description:
  Example: http://logs.openstack.org/27/451527/5/check/gate-neutron-
  dsvm-functional-ubuntu-xenial/da67f5f/logs/dsvm-functional-
  
logs/neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_protection_update.txt.gz

  This test has nothing to do with ovs, but it's still trashed with
  POLLIN messages, probably because some previous test case in the
  worker initialized ovslib and got a logging thread spun up.

  Ideally, we would not have the thread running in non-ovs scope,
  meaning we would need some way to kill/disable it when not needed.
  Maybe a fixture in ovsdbapp (or the ovs lib itself) that restores the
  state to pre-init could help. Then we could use the fixture in our
  base test classes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687064] [NEW] ovs logs are trashed with healthcheck messages from ovslib

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

Those messages are all over the place:

2017-04-28 14:34:06.478 16259 DEBUG ovsdbapp.backend.ovs_idl.vlog [-]
[POLLIN] on fd 14 __log_wakeup /usr/local/lib/python2.7/dist-
packages/ovs/poller.py:246

We should probably suppress them; they don't seem to carry any value. If
there is value in knowing when something stopped working, maybe consider
erroring in that failure mode instead of logging in the happy path.
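
In the meantime, a low-tech way to suppress them, assuming the messages
travel through the standard logging logger named in the lines above:

  import logging

  class DropPollin(logging.Filter):
      def filter(self, record):
          return '[POLLIN]' not in record.getMessage()

  logging.getLogger('ovsdbapp.backend.ovs_idl.vlog').addFilter(
      DropPollin())

That keeps the rest of the vlog output and drops only the wakeup noise.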

** Affects: neutron
 Importance: Low
 Status: Confirmed

** Affects: ovsdbapp
 Importance: Undecided
 Status: New


** Tags: ovs usability

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: ovs usability

** Also affects: ovsdbapp
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687064

Title:
  ovs logs are trashed with healthcheck messages from ovslib

Status in neutron:
  Confirmed
Status in ovsdbapp:
  New

Bug description:
  Those messages are all over the place:

  2017-04-28 14:34:06.478 16259 DEBUG ovsdbapp.backend.ovs_idl.vlog [-]
  [POLLIN] on fd 14 __log_wakeup /usr/local/lib/python2.7/dist-
  packages/ovs/poller.py:246

  We should probably suppress them; they don't seem to carry any value.
  If there is value in knowing when something stopped working, maybe
  consider erroring in that failure mode instead of logging in the
  happy path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630664] Re: Intermittent failure in n-api connecting to neutron to list ports after TLS was enabled in CI

2017-04-28 Thread Ihar Hrachyshka
I am still seeing this happen with keystone token fetches. It just hit
this Ocata patch: https://review.openstack.org/#/c/460909/

In http://logs.openstack.org/09/460909/2/check/gate-tempest-dsvm-
neutron-linuxbridge-ubuntu-xenial/67904c9/logs/apache/tls-
proxy_error.txt.gz we see:

[Fri Apr 28 12:46:47.763965 2017] [proxy_http:error] [pid 30068:tid 
140271090042624] (20014)Internal error (specific information not available): 
[client 104.130.119.120:50002] [frontend 104.130.119.120:443] AH01102: error 
reading status line from remote server 104.130.119.120:80
[Fri Apr 28 12:46:47.764003 2017] [proxy:error] [pid 30068:tid 140271090042624] 
[client 104.130.119.120:50002] [frontend 104.130.119.120:443] AH00898: Error 
reading from remote server returned by /identity_admin/v3/auth/tokens

The request that triggered the failure doesn't seem to show up in the
keystone log.

** Project changed: nova => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630664

Title:
  Intermittent failure in n-api connecting to neutron to list ports
  after TLS was enabled in CI

Status in devstack:
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/00/382000/2/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/07e5243/logs/screen-n-api.txt.gz?level=TRACE#_2016-10-05_14_35_04_333

  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack 
[req-c1bbc78f-89e4-4de2-956d-9b71f8ad1a87 
tempest-TestNetworkAdvancedServerOps-960076899 
tempest-TestNetworkAdvancedServerOps-960076899] Caught error: Unable to 
establish connection to 
https://127.0.0.1:9696/v2.0/ports.json?device_id=bf9a5908-ebdd-4f67-aae4-a0a3e0cf0d09
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack Traceback (most recent 
call last):
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 89, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
req.get_response(self.application)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in 
call_application
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 323, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack response = 
req.get_response(self._app)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in 
call_application
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 141, in 
__call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack response = 
self.app(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 

[Yahoo-eng-team] [Bug 1684338] Re: tempest jobs failing with midonet-cluster complaining about keystone

2017-04-26 Thread Ihar Hrachyshka
It's not clear why it's a Neutron issue and not Midonet, so I changed
the component to networking-midonet for now. Feel free to move back or
add neutron to the list of affected projects if you have more
information that points to neutron.

** Project changed: neutron => networking-midonet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684338

Title:
  tempest jobs failing with midonet-cluster complaining about keystone

Status in networking-midonet:
  In Progress

Bug description:
  eg. http://logs.openstack.org/11/458011/1/check/gate-tempest-dsvm-
  networking-midonet-ml2-ubuntu-xenial/86d989d/logs/midonet-
  cluster.txt.gz

  2017.04.19 10:50:50.132 ERROR [rest-api-55] auth Login authorization error 
occurred for user null
  java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) 
~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 ~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
~[na:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 
~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:538) ~[na:1.8.0_121]
at sun.net.NetworkClient.doConnect(NetworkClient.java:180) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.New(HttpClient.java:308) ~[na:1.8.0_121]
at sun.net.www.http.HttpClient.New(HttpClient.java:326) ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966) 
~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
 ~[na:1.8.0_121]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler$1$1.getOutputStream(URLConnectionClientHandler.java:238)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.CommittingOutputStream.commitStream(CommittingOutputStream.java:117)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.CommittingOutputStream.write(CommittingOutputStream.java:89)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.filter.LoggingFilter$LoggingOutputStream.write(LoggingFilter.java:110)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1848)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1041)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:854) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:650) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.RequestWriter.writeRequestEntity(RequestWriter.java:300)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:217)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
... 39 common frames omitted
  Wrapped by: com.sun.jersey.api.client.ClientHandlerException: 
java.net.ConnectException: Connection refused (Connection refused)
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.filter.LoggingFilter.handle(LoggingFilter.java:217) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at com.sun.jersey.api.client.Client.handle(Client.java:652) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 

[Yahoo-eng-team] [Bug 1669900] Re: ovs-vswitchd crashed in functional test with segmentation fault

2017-04-26 Thread Ihar Hrachyshka
We switched to UCA, which should deliver a new openvswitch (2.5.2) to
us. Let's close the bug and monitor whether it happens again. If it
does, let's reopen.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1669900

Title:
  ovs-vswitchd crashed in functional test with segmentation fault

Status in neutron:
  Fix Released

Bug description:
  2017-03-03T18:39:35.095Z|00107|connmgr|INFO|test-br368b7744<->unix: 1 
flow_mods in the last 0 s (1 adds)
  2017-03-03T18:39:35.144Z|00108|connmgr|INFO|br-tunb76d9d9d9<->unix: 9 
flow_mods in the last 0 s (9 adds)
  2017-03-03T18:39:35.148Z|00109|connmgr|INFO|br-tunb76d9d9d9<->unix: 1 
flow_mods in the last 0 s (1 adds)
  2017-03-03T18:39:35.255Z|3|daemon_unix(monitor)|WARN|2 crashes: pid 7753 
died, killed (Segmentation fault), waiting until 10 seconds since last restart
  2017-03-03T18:39:43.255Z|4|daemon_unix(monitor)|ERR|2 crashes: pid 7753 
died, killed (Segmentation fault), restarting
  2017-03-03T18:39:43.256Z|5|ovs_numa|INFO|Discovered 4 CPU cores on NUMA 
node 0
  2017-03-03T18:39:43.256Z|6|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 
CPU cores
  2017-03-03T18:39:43.256Z|7|memory|INFO|8172 kB peak resident set size 
after 694.6 seconds
  
2017-03-03T18:39:43.256Z|8|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connecting...
  
2017-03-03T18:39:43.256Z|9|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connected

  
  
http://logs.openstack.org/73/441273/1/check/gate-neutron-dsvm-functional-ubuntu-xenial/82f5446/logs/openvswitch/ovs-vswitchd.txt.gz

  This triggered functional test failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1669900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1671548] Re: Updating mac_address of port doesn't update its autoconfigured IPv6 address

2017-04-19 Thread Ihar Hrachyshka
We are reverting the patch.

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671548

Title:
  Updating mac_address of port doesn't update its autoconfigured IPv6
  address

Status in neutron:
  Confirmed

Bug description:
  PUT /v2.0/ports/d38564ff-8a98-4a21-a162-9b2841c78ebc.json HTTP/1.1
  ...
  {"port": {"mac_address": "fa:16:3e:d2:03:61"}}

  
  This updates the port's MAC address but doesn't update the IP address.
  When using slaac or stateless address mode it should, as the IP address
  is derived from the MAC address.
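
  (For context on the expected behaviour: under slaac/stateless the
  host bits are the EUI-64 expansion of the MAC, so a MAC change
  implies an address change. A quick netaddr illustration, with a
  made-up documentation prefix:

    import netaddr

    prefix = netaddr.IPNetwork('2001:db8::/64')
    for mac in ('fa:16:3e:d2:03:61', 'fa:16:3e:ab:cd:ef'):
        print('%s -> %s' % (mac, netaddr.EUI(mac).ipv6(prefix.value)))

  Two different MACs yield two different addresses in the same subnet.)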

  Version - Master from 20170127

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675363] Re: Bump default quotas for ports, subnets, and networks

2017-04-18 Thread Ihar Hrachyshka
https://review.openstack.org/457684

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
 Assignee: Ihar Hrachyshka (ihar-hrachyshka) => (unassigned)

** Changed in: openstack-manuals
   Status: New => In Progress

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1675363

Title:
  Bump default quotas for ports, subnets, and networks

Status in neutron:
  Invalid
Status in openstack-manuals:
  In Progress

Bug description:
  https://review.openstack.org/444030
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 95f621f717b2e9fe0c89f7188f6d1668200475c8
  Author: Ihar Hrachyshka <ihrac...@redhat.com>
  Date:   Mon Mar 6 17:03:33 2017 +

  Bump default quotas for ports, subnets, and networks
  
  It's probably not very realistic to expect power users to be happy with
  the default quotas (10 networks, 50 ports, 10 subnets). I believe that
  larger defaults would be more realistic. This patch bumps existing
  quotas for the aforementioned neutron resources x10 times.
  
  DocImpact change default quotas in documentation if used in examples
anywhere.
  UpgradeImpact operators may need to revisit quotas they use.
  Closes-Bug: #1674787
  Change-Id: I04993934627d2d663a1bfccd7467ac4fbfbf1434
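
  For operators who want to keep the old defaults after upgrading, the
  equivalent neutron.conf override would be (standard [quotas] option
  names; values are the pre-bump defaults quoted above):

    [quotas]
    quota_network = 10
    quota_subnet = 10
    quota_port = 50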

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1675363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604115] Re: test_cleanup_stale_devices functional test sporadic failures

2017-04-18 Thread Ihar Hrachyshka
No hits in 7 days. Closing, please reopen if we see it again.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604115

Title:
  test_cleanup_stale_devices functional test sporadic failures

Status in neutron:
  Fix Released

Bug description:
  19 hits in the last 7 days

  build_status:"FAILURE" AND message:", in test_cleanup_stale_devices"
  AND build_name:"gate-neutron-dsvm-functional"

  Example TRACE failure:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/console.html#_2016-07-18_16_42_45_219041

  Example log from testrunner:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651126] Re: Update MTU on existing devices

2017-04-18 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Ihar Hrachyshka (ihar-hrachyshka) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1651126

Title:
  Update MTU on existing devices

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/405532
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5c8dffa7fb6c95a04a7b50c7d6e63c9a2729231b
  Author: Ihar Hrachyshka <ihrac...@redhat.com>
  Date:   Tue Nov 29 22:24:29 2016 +

  Update MTU on existing devices
  
  This patch makes OVS and Linuxbridge interface drivers to set MTU on
  plug() attempt if the device already exists. This helps when network MTU
  changes (which happens after some configuration file changes).
  
  This will allow to update MTU values on agent restart, without the need
  to bind all ports to new nodes, that would involve migrating agents. It
  will also help in case when you have no other nodes to migrate to (in
  single node mode).
  
  Both OVS and Linuxbridge interface drivers are updated.
  
  Other drivers (in-tree IVS as well as 3party drivers) will use the
  default set_mtu implementation, that only warns about the missing
  feature (once per process startup).
  
  DocImpact suggest to restart agents after MTU config changes instead of
rewiring router/DHCP ports.
  
  Related: If438e4816b425e6c5021a55567dcaaa77d1f
  Related: If09eda334cddc74910dda7a4fb498b7987714be3
  Closes-Bug: #1649845
  Change-Id: I3c6d6cb55c5808facec38f87114c2ddf548f05f1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1651126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1683249] Re: neutron dynamic routing api ocata job broken

2017-04-18 Thread Ihar Hrachyshka
Fixed with
https://review.openstack.org/#/q/I15275e82b03f87a4c4e13d3790db01973c3843cb,n,z

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683249

Title:
  neutron dynamic routing api ocata job broken

Status in neutron:
  Fix Released

Bug description:
  Looks like all the tests are being skipped in the dynamic routing ocata job:

  http://logs.openstack.org/39/443839/1/check/gate-neutron-dynamic-
  routing-dsvm-tempest-api/85fa064/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1683249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667579] Re: swift-proxy-server fails to start with Python 3.5

2017-04-17 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Ihar Hrachyshka (ihar-hrachyshka) => (unassigned)

** Changed in: devstack
 Assignee: Ihar Hrachyshka (ihar-hrachyshka) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667579

Title:
  swift-proxy-server fails to start with Python 3.5

Status in devstack:
  In Progress
Status in neutron:
  Invalid
Status in OpenStack Object Storage (swift):
  Confirmed

Bug description:
  Traceback (most recent call last):
File "/usr/local/bin/swift-proxy-server", line 6, in 
  exec(compile(open(__file__).read(), __file__, 'exec'))
File "/opt/stack/new/swift/bin/swift-proxy-server", line 23, in 
  sys.exit(run_wsgi(conf_file, 'proxy-server', **options))
File "/opt/stack/new/swift/swift/common/wsgi.py", line 905, in run_wsgi
  loadapp(conf_path, global_conf=global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 389, in loadapp
  ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 373, in loadcontext
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 296, in loadcontext
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 320, in _loadconfig
  return loader.get_context(object_type, name, global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 450, in get_context
  global_additions=global_additions)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 562, in _pipeline_app_context
  for name in pipeline[:-1]]
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
 line 562, in <listcomp>
  for name in pipeline[:-1]]
File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 454, in get_context
  section)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 476, in _context_from_use
  object_type, name=use, global_conf=global_conf)
File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
  object_type, name=name, global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 406, in get_context
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 296, in loadcontext
  global_conf=global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 328, in _loadegg
  return loader.get_context(object_type, name, global_conf)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 620, in get_context
  object_type, name=name)
File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", 
line 646, in find_egg_entry_point
  possible.append((entry.load(), protocol, entry.name))
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", 
line 2302, in load
  return self.resolve()
File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", 
line 2308, in resolve
  module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/opt/stack/new/swift/swift/common/middleware/slo.py", line 799
  def is_small_segment((seg_dict, start_byte, end_byte)):
   ^
  SyntaxError: invalid syntax
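
  The failing line is Python 2 "tuple parameter unpacking", which PEP
  3113 removed in Python 3. A py3-compatible rewrite unpacks inside the
  body instead, something like (the actual swift fix may differ):

    def is_small_segment(seg):
        # unpack in the body rather than in the signature (PEP 3113)
        seg_dict, start_byte, end_byte = seg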

  http://logs.openstack.org/14/437514/3/check/gate-rally-dsvm-py35
  -neutron-neutron-ubuntu-xenial/3221186/logs/screen-s-proxy.txt.gz

  This currently blocks the neutron gate, where we have a voting py3
  tempest job. Swift is deployed with Python 3.5 there because devstack
  special-cases the service to deploy with Python 3:

  http://git.openstack.org/cgit/openstack-
  dev/devstack/tree/inc/python#n167

  The short-term solution is to disable the special casing. Swift should
  then work on fixing the code and gate on Python 3 (preferably with the
  same job neutron has).

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1667579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1683369] [NEW] TagDbObjectTestCase.test_objects_exist_validate_filters_false may fail because of non-unique id for standardattributes

2017-04-17 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/00/425800/5/gate/gate-neutron-
python35/325d0d1/testr_results.html.gz

Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/objects/test_tag.py",
 line 33, in setUp
lambda: self._create_test_standard_attribute_id()
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/objects/test_base.py",
 line 576, in update_obj_fields
val = v() if callable(v) else v
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/objects/test_tag.py",
 line 33, in <lambda>
lambda: self._create_test_standard_attribute_id()
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/objects/test_base.py",
 line 1381, in _create_test_standard_attribute_id
self.context, standard_attr.StandardAttribute, attrs)['id']
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/objects/db/api.py", line 
61, in create_object
context.session.add(db_obj)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 567, in __exit__
self.rollback()
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py",
 line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/compat.py",
 line 187, in reraise
raise value
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 564, in __exit__
self.commit()
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 461, in commit
self._prepare_impl()
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 430, in _prepare_impl
self.session.dispatch.before_commit(self.session)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/event/attr.py",
 line 218, in __call__
fn(*args, **kw)
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/db/api.py", line 
283, in load_one_to_manys
session.flush()
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 2139, in flush
self._flush(objects)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 2259, in _flush
transaction.rollback(_capture_exception=True)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py",
 line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/compat.py",
 line 187, in reraise
raise value
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 2223, in _flush
flush_context.execute()
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/unitofwork.py",
 line 389, in execute
rec.execute(self)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/unitofwork.py",
 line 548, in execute
uow
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py",
 line 181, in save_obj
mapper, table, insert)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py",
 line 799, in _emit_insert_statements
execute(statement, multiparams)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py",
 line 945, in execute
return meth(self, multiparams, params)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/sql/elements.py",
 line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py",
 line 1053, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py",
 line 1189, in _execute_context
context)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py",
 line 1398, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 

[Yahoo-eng-team] [Bug 1683365] [NEW] test_rule_update_forbidden_for_regular_tenants_own_policy fails with NotFound

2017-04-17 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/06/440806/5/gate/gate-neutron-dsvm-api-ubuntu-
xenial/6d9b1d2/testr_results.html.gz

pythonlogging:'': {{{
2017-04-17 07:24:06,647 21614 INFO [tempest.lib.common.rest_client] Request 
(QosBandwidthLimitRuleTestJSON:test_rule_update_forbidden_for_regular_tenants_own_policy):
 201 POST http://15.184.66.59:9696/v2.0/qos/policies 0.156s
2017-04-17 07:24:06,647 21614 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'X-Auth-Token': '', 
'Content-Type': 'application/json'}
Body: {"policy": {"description": "test policy", "shared": false, 
"tenant_id": "dae0db2239d040a7a24a2548e7745010", "name": "test-policy"}}
Response - Headers: {u'connection': 'close', 'status': '201', 
u'x-openstack-request-id': 'req-f18fe708-dcc6-4e0a-9ec8-ea1f1e93e409', 
u'content-type': 'application/json', 'content-location': 
'http://15.184.66.59:9696/v2.0/qos/policies', u'content-length': '318', 
u'date': 'Mon, 17 Apr 2017 07:24:06 GMT'}
Body: 
{"policy":{"name":"test-policy","rules":[],"tenant_id":"dae0db2239d040a7a24a2548e7745010","created_at":"2017-04-17T07:24:06Z","updated_at":"2017-04-17T07:24:06Z","revision_number":1,"shared":false,"project_id":"dae0db2239d040a7a24a2548e7745010","id":"275ede54-e091-4615-8feb-1171c76c9f86","description":"test
 policy"}}
2017-04-17 07:24:07,162 21614 INFO [tempest.lib.common.rest_client] Request 
(QosBandwidthLimitRuleTestJSON:test_rule_update_forbidden_for_regular_tenants_own_policy):
 201 POST 
http://15.184.66.59:9696/v2.0/qos/policies/275ede54-e091-4615-8feb-1171c76c9f86/bandwidth_limit_rules
 0.514s
2017-04-17 07:24:07,163 21614 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'X-Auth-Token': '', 
'Content-Type': 'application/json'}
Body: {"bandwidth_limit_rule": {"max_burst_kbps": 1, "max_kbps": 1}}
Response - Headers: {u'connection': 'close', 'status': '201', 
u'x-openstack-request-id': 'req-a81448a4-580f-430d-8612-5b9f49279bde', 
u'content-type': 'application/json', 'content-location': 
'http://15.184.66.59:9696/v2.0/qos/policies/275ede54-e091-4615-8feb-1171c76c9f86/bandwidth_limit_rules',
 u'content-length': '102', u'date': 'Mon, 17 Apr 2017 07:24:07 GMT'}
Body: 
{"bandwidth_limit_rule":{"max_kbps":1,"id":"404613bf-d473-48e1-a8b0-9334afe6bf68","max_burst_kbps":1}}
2017-04-17 07:24:07,411 21614 INFO [tempest.lib.common.rest_client] Request 
(QosBandwidthLimitRuleTestJSON:test_rule_update_forbidden_for_regular_tenants_own_policy):
 404 PUT 
http://15.184.66.59:9696/v2.0/qos/policies/275ede54-e091-4615-8feb-1171c76c9f86/bandwidth_limit_rules/404613bf-d473-48e1-a8b0-9334afe6bf68
 0.247s
2017-04-17 07:24:07,412 21614 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'X-Auth-Token': '', 
'Content-Type': 'application/json'}
Body: {"bandwidth_limit_rule": {"max_burst_kbps": 4, "max_kbps": 2}}
Response - Headers: {u'connection': 'close', 'status': '404', 
u'x-openstack-request-id': 'req-d02feb71-801a-4554-8d19-dfbfedd1e23b', 
u'content-type': 'application/json', 'content-location': 
'http://15.184.66.59:9696/v2.0/qos/policies/275ede54-e091-4615-8feb-1171c76c9f86/bandwidth_limit_rules/404613bf-d473-48e1-a8b0-9334afe6bf68',
 u'content-length': '103', u'date': 'Mon, 17 Apr 2017 07:24:07 GMT'}
Body: {"NeutronError": {"message": "The resource could not be found.", 
"type": "HTTPNotFound", "detail": ""}}
}}}

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_qos.py", line 
478, in test_rule_update_forbidden_for_regular_tenants_own_policy
policy['id'], rule['id'], max_kbps=2, max_burst_kbps=4)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
485, in assertRaises
self.assertThat(our_callable, matcher)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
496, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
547, in _matchHelper
mismatch = matcher.match(matchee)
  File 
"/usr/local/lib/python2.7/dist-packages/testtools/matchers/_exception.py", line 
108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
"/usr/local/lib/python2.7/dist-packages/testtools/matchers/_higherorder.py", 
line 62, in match
mismatch = matcher.match(matchee)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
475, in match
reraise(*matchee)
  File 
"/usr/local/lib/python2.7/dist-packages/testtools/matchers/_exception.py", line 
101, in match
result = matchee()
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
1049, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File 
"/opt/stack/new/neutron/neutron/tests/tempest/services/network/json/network_client.py",
 

[Yahoo-eng-team] [Bug 1683227] [NEW] test_connection_from_same_address_scope failed with: Cannot find device "qg-74372988-7f"

2017-04-16 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/84/355284/2/check/gate-neutron-dsvm-
functional-ubuntu-xenial/d7635bd/testr_results.html.gz


Traceback (most recent call last):
  File "neutron/tests/base.py", line 113, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/agent/l3/test_legacy_router.py", line 341, in 
test_connection_from_same_address_scope
'scope1', 'scope1')
  File "neutron/tests/functional/agent/l3/test_legacy_router.py", line 318, in 
_setup_address_scope
router = self.manage_router(self.agent, router_info)
  File "neutron/tests/functional/agent/l3/framework.py", line 319, in 
manage_router
agent._process_added_router(router)
  File "neutron/agent/l3/agent.py", line 455, in _process_added_router
ri.process()
  File "neutron/common/utils.py", line 186, in call
self.logger(e)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "neutron/common/utils.py", line 183, in call
return func(*args, **kwargs)
  File "neutron/agent/l3/router_info.py", line 1114, in process
self.process_external()
  File "neutron/agent/l3/router_info.py", line 889, in process_external
self._process_external_gateway(ex_gw_port)
  File "neutron/agent/l3/router_info.py", line 772, in _process_external_gateway
self.external_gateway_added(ex_gw_port, interface_name)
  File "neutron/agent/l3/router_info.py", line 722, in external_gateway_added
ex_gw_port, interface_name, self.ns_name, preserve_ips)
  File "neutron/agent/l3/router_info.py", line 671, in _external_gateway_added
self._plug_external_gateway(ex_gw_port, interface_name, ns_name)
  File "neutron/agent/l3/router_info.py", line 621, in _plug_external_gateway
mtu=ex_gw_port.get('mtu'))
  File "neutron/agent/linux/interface.py", line 266, in plug
bridge, namespace, prefix, mtu)
  File "neutron/agent/linux/interface.py", line 380, in plug_new
namespace_obj.add_device_to_namespace(ns_dev)
  File "neutron/agent/linux/ip_lib.py", line 231, in add_device_to_namespace
device.link.set_netns(self.namespace)
  File "neutron/agent/linux/ip_lib.py", line 519, in set_netns
self._as_root([], ('set', self.name, 'netns', namespace))
  File "neutron/agent/linux/ip_lib.py", line 367, in _as_root
use_root_namespace=use_root_namespace)
  File "neutron/agent/linux/ip_lib.py", line 99, in _as_root
log_fail_as_error=self.log_fail_as_error)
  File "neutron/agent/linux/ip_lib.py", line 108, in _execute
log_fail_as_error=log_fail_as_error)
  File "neutron/agent/linux/utils.py", line 151, in execute
raise ProcessExecutionError(msg, returncode=returncode)
neutron.agent.linux.utils.ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: 
; Stderr: Cannot find device "qg-74372988-7f"

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests gate-failure l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683227

Title:
  test_connection_from_same_address_scope failed with: Cannot find
  device "qg-74372988-7f"

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/84/355284/2/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/d7635bd/testr_results.html.gz

  
  Traceback (most recent call last):
File "neutron/tests/base.py", line 113, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/l3/test_legacy_router.py", line 341, 
in test_connection_from_same_address_scope
  'scope1', 'scope1')
File "neutron/tests/functional/agent/l3/test_legacy_router.py", line 318, 
in _setup_address_scope
  router = self.manage_router(self.agent, router_info)
File "neutron/tests/functional/agent/l3/framework.py", line 319, in 
manage_router
  agent._process_added_router(router)
File "neutron/agent/l3/agent.py", line 455, in _process_added_router
  ri.process()
File "neutron/common/utils.py", line 186, in call
  self.logger(e)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "neutron/common/utils.py", line 183, in call
  return func(*args, **kwargs)
File "neutron/agent/l3/router_info.py", line 1114, in process
  self.process_external()
File "neutron/agent/l3/router_info.py", line 889, in process_external
  

[Yahoo-eng-team] [Bug 1681440] Re: QoS policy object can't be suitable with 1.2 version of object

2017-04-13 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681440

Title:
  QoS policy object can't be suitable with 1.2 version of object

Status in neutron:
  Won't Fix

Bug description:
  In
  
https://github.com/openstack/neutron/blob/master/neutron/objects/qos/policy.py#L220
  there is no function to make QoS policy object compatible with version
  1.2 and higher (append QoSMinimumBandwidthLimit rules to policy)
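
  For reference, the usual oslo.versionedobjects pattern here is an
  obj_make_compatible hook on the policy object that strips rule types
  unknown to the target version; a sketch only (the OVO name of the
  minimum-bandwidth rule is assumed, and the real filtering logic may
  differ):

    from oslo_utils import versionutils

    class QosPolicy(object):  # method sketch, not the real object
        def obj_make_compatible(self, primitive, target_version):
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 2):
                primitive['rules'] = [
                    r for r in primitive.get('rules', [])
                    if r['versioned_object.name'] !=
                    'QosMinimumBandwidthRule']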

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682202] [NEW] enable_connection_uri fails with constraint violation due to duplicate manager

2017-04-12 Thread Ihar Hrachyshka
Public bug reported:

When multiple agents interact with the same local ovsdb (which is a
reasonable use case, e.g. when both the ovs and dhcp agents are
running), we get the following error in one of the agent logs:

2017-03-16 16:06:22.790 14431 ERROR neutron.agent.ovsdb.impl_vsctl [req-
1d2c2121-ef13-45a6-81ec-9cfeda8221a7 - - - - -] Unable to execute ['ovs-
vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--
id=@manager', 'create', 'Manager', 'target="ptcp:6640:127.0.0.1"', '--',
'add', 'Open_vSwitch', '.', 'manager_options', '@manager']. Exception:
Exit code: 1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: transaction error:
{"details":"Transaction causes multiple rows in \"Manager\" table to
have identical values (\"ptcp:6640:127.0.0.1\") for index on column
\"target\".  First row, with UUID 60a3bbc6-2904-4e1f-8f92-387a315e1142,
existed in the database before this transaction and was not modified by
the transaction.  Second row, with UUID c7bc4f11-5fbc-42e1-b8ec-
44324439cace, was inserted by this transaction.","error":"constraint
violation"}

Apparently, two agents clash on manager creation. They should atomically
append instead.

An example of failure at:

http://logs.openstack.org/98/446598/1/check/gate-neutron-dsvm-fullstack-
ubuntu-xenial/2e0f93e/logs/dsvm-fullstack-
logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop
.test_controller_timeout_does_not_break_connectivity_sigkill_GRE-and-
l2pop,openflow-native_ovsdb-cli_/neutron-openvswitch-agent--2017-03-16--
16-06-05-730632.txt.gz?#_2017-03-16_16_06_22_790

It's not clear whether it results in any user-visible problem.
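
A sketch of a less clash-prone approach (the command layout mirrors the
failing call above; the read-before-add guard is the assumed part, and
it narrows the race rather than closing it, since two agents can still
pass the check simultaneously):

  import subprocess

  def ensure_manager(target='ptcp:6640:127.0.0.1'):
      current = subprocess.check_output(
          ['ovs-vsctl', '--timeout=10', 'get-manager']).decode().split()
      if target not in current:
          subprocess.check_call(
              ['ovs-vsctl', '--timeout=10', '--id=@m', 'create', 'Manager',
               'target="%s"' % target,
               '--', 'add', 'Open_vSwitch', '.', 'manager_options', '@m'])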

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: ovs

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Importance: High => Medium

** Changed in: neutron
   Status: New => Confirmed

** Tags added: ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1682202

Title:
  enable_connection_uri fails with constraint violation due to duplicate
  manager

Status in neutron:
  Confirmed

Bug description:
  When multiple agents interact with the same local ovsdb (which is a
  reasonable use case, e.g. when both the ovs and dhcp agents are
  running), we get the following error in one of the agent logs:

  2017-03-16 16:06:22.790 14431 ERROR neutron.agent.ovsdb.impl_vsctl
  [req-1d2c2121-ef13-45a6-81ec-9cfeda8221a7 - - - - -] Unable to execute
  ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--
  id=@manager', 'create', 'Manager', 'target="ptcp:6640:127.0.0.1"',
  '--', 'add', 'Open_vSwitch', '.', 'manager_options', '@manager'].
  Exception: Exit code: 1; Stdin: ; Stdout: ; Stderr: ovs-vsctl:
  transaction error: {"details":"Transaction causes multiple rows in
  \"Manager\" table to have identical values (\"ptcp:6640:127.0.0.1\")
  for index on column \"target\".  First row, with UUID 60a3bbc6-2904
  -4e1f-8f92-387a315e1142, existed in the database before this
  transaction and was not modified by the transaction.  Second row, with
  UUID c7bc4f11-5fbc-42e1-b8ec-44324439cace, was inserted by this
  transaction.","error":"constraint violation"}

  Apparently, two agents clash on manager creation. They should
  atomically append instead.

  An example of failure at:

  http://logs.openstack.org/98/446598/1/check/gate-neutron-dsvm-
  fullstack-ubuntu-xenial/2e0f93e/logs/dsvm-fullstack-
  logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop
  .test_controller_timeout_does_not_break_connectivity_sigkill_GRE-and-
  l2pop,openflow-native_ovsdb-cli_/neutron-openvswitch-agent--2017-03-16
  --16-06-05-730632.txt.gz?#_2017-03-16_16_06_22_790

  It's not clear whether it results in any user-visible problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1682202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604115] Re: test_cleanup_stale_devices functional test sporadic failures

2017-04-07 Thread Ihar Hrachyshka
It still happens:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_status%3A%5C%22FAILURE%5C%22%20AND%20message%3A%5C%22%2C%20in%20test_cleanup_stale_devices%5C%22%20AND%20build_name%3A%5C
%22gate-neutron-dsvm-functional-ubuntu-xenial%5C%22

3 hits in 7 days, both newton and master.

** Changed in: neutron
   Status: Fix Released => Confirmed

** Changed in: neutron
 Assignee: Armando Migliaccio (armando-migliaccio) => Ihar Hrachyshka 
(ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604115

Title:
  test_cleanup_stale_devices functional test sporadic failures

Status in neutron:
  Confirmed

Bug description:
  19 hits in the last 7 days

  build_status:"FAILURE" AND message:", in test_cleanup_stale_devices"
  AND build_name:"gate-neutron-dsvm-functional"

  Example TRACE failure:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/console.html#_2016-07-18_16_42_45_219041

  Example log from testrunner:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680912] [NEW] Sometimes grenade job fails with NetworkNotFound because a network delete request took too long

2017-04-07 Thread Ihar Hrachyshka
lling delete_port for
b5ff06a6-4084-4961-b239-9f29aa42c68a owned by network:dhcp delete_port
/opt/stack/old/neutron/neutron/plugins/ml2/plugin.py:1637

I suspect this is related to the following patch, where we first caught
the situation but landed the change nevertheless:
https://review.openstack.org/#/q/I924fa7e36ea9e45bf0ef3480972341a851bda86c,n,z

We may want to revert those patches. We may also want to cut a new
Newton release, because the patch got into 9.3.0.

** Affects: neutron
 Importance: Critical
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: db gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680912

Title:
  Sometimes grenade job fails with NetworkNotFound because a network
  delete request took too long

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/38/453838/2/gate/gate-grenade-dsvm-neutron-
  ubuntu-xenial/84cf5aa/logs/grenade.sh.txt.gz#_2017-04-07_03_45_19_220

  The DELETE request in question is:
  2017-04-07 03:45:19.220 | 2017-04-07 03:41:31,143 18539 WARNING  
[urllib3.connectionpool] Retrying (Retry(total=9, connect=None, read=None, 
redirect=5)) after connection broken by 
'ReadTimeoutError("HTTPConnectionPool(host='149.202.181.79', port=9696): Read 
timed out. (read timeout=60)",)': 
/v2.0/networks/46b0776a-3917-440d-9b90-4ab02a735188
  2017-04-07 03:45:19.220 | 2017-04-07 03:41:34,053 18539 INFO 
[tempest.lib.common.rest_client] Request (NetworksIpV6Test:_run_cleanups): 404 
DELETE 
http://149.202.181.79:9696/v2.0/networks/46b0776a-3917-440d-9b90-4ab02a735188 
62.919s
  2017-04-07 03:45:19.220 | 2017-04-07 03:41:34,053 18539 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  2017-04-07 03:45:19.220 | Body: None
  2017-04-07 03:45:19.221 | Response - Headers: {u'content-length': 
'138', u'content-type': 'application/json', u'x-openstack-request-id': 
'req-d1dfda92-9b9b-421f-b45f-7ef4746e43c6', u'date': 'Fri, 07 Apr 2017 03:41:34 
GMT', u'connection': 'close', 'content-location': 
'http://149.202.181.79:9696/v2.0/networks/46b0776a-3917-440d-9b90-4ab02a735188',
 'status': '404'}
  2017-04-07 03:45:19.221 | Body: {"NeutronError": {"message": 
"Network 46b0776a-3917-440d-9b90-4ab02a735188 could not be found.", "type": 
"NetworkNotFound", "detail": ""}}

  What we see is that the first attempt to delete the network failed
  after 60 seconds, so the client retried the DELETE, at which point the
  network was no longer there.
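
  As an aside, this is why clients generally treat a 404 on a retried
  DELETE as success. A rough sketch of that pattern (using plain
  requests instead of tempest's rest_client; all names here are
  illustrative, not the tempest implementation):

    import requests

    def delete_network(endpoint, net_id, token, attempts=2):
        url = '%s/v2.0/networks/%s' % (endpoint, net_id)
        for _ in range(attempts):
            try:
                resp = requests.delete(
                    url, headers={'X-Auth-Token': token}, timeout=60)
            except requests.Timeout:
                # The server may still be processing the first attempt.
                continue
            if resp.status_code in (204, 404):
                # A 404 on a retry means the earlier attempt succeeded.
                return
            resp.raise_for_status()
        raise RuntimeError('network %s not deleted' % net_id)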

  In the neutron-server log, we see that the first DELETE attempt was
  received with req_id req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d at
  03:40:31.344 but only completed at 03:41:33.951 (probably because the
  second DELETE attempt triggered something that unblocked the first
  request).

  In the logs handling the first DELETE request, we see a loop de-
  allocating ports:

  2017-04-07 03:40:34.227 8785 DEBUG neutron.plugins.ml2.plugin 
[req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d tempest-NetworksIpV6Test-1590417059 
tempest-NetworksIpV6Test-1590417059] Ports to auto-deallocate: 
set([u'b5ff06a6-4084-4961-b239-9f29aa42c68a']) delete_subnet 
/opt/stack/old/neutron/neutron/plugins/ml2/plugin.py:1093
  2017-04-07 03:40:34.231 8785 DEBUG neutron.callbacks.manager 
[req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d tempest-NetworksIpV6Test-1590417059 
tempest-NetworksIpV6Test-1590417059] Notify callbacks [] for subnet, 
before_delete _notify_loop 
/opt/stack/old/neutron/neutron/callbacks/manager.py:142
  2017-04-07 03:40:34.338 8785 DEBUG neutron.plugins.ml2.plugin 
[req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d tempest-NetworksIpV6Test-1590417059 
tempest-NetworksIpV6Test-1590417059] Ports to auto-deallocate: 
set([u'b5ff06a6-4084-4961-b239-9f29aa42c68a']) delete_subnet 
/opt/stack/old/neutron/neutron/plugins/ml2/plugin.py:1093
  2017-04-07 03:40:34.340 8785 DEBUG neutron.callbacks.manager 
[req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d tempest-NetworksIpV6Test-1590417059 
tempest-NetworksIpV6Test-1590417059] Notify callbacks [] for subnet, 
before_delete _notify_loop 
/opt/stack/old/neutron/neutron/callbacks/manager.py:142

  It goes on like that until:

  2017-04-07 03:41:32.644 8785 DEBUG neutron.plugins.ml2.plugin 
[req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d tempest-NetworksIpV6Test-1590417059 
tempest-NetworksIpV6Test-1590417059] Deleting subnet record delete_subnet 
/opt/stack/old/neutron/neutron/plugins/ml2/plugin.py:1145
  2017-04-07 03:41:32.644 8785 DEBUG neutron.ipam.driver 
[req-933bd8b3-cd8f-4d2f-8f0c-505f85347b9d tempest-NetworksIpV6Test-1590417059 
tempest-NetworksIpV6Test-1590417059]

[Yahoo-eng-team] [Bug 1680619] [NEW] Move neutron rally plugin into rally repo

2017-04-06 Thread Ihar Hrachyshka
Public bug reported:

Rally is branchless and may break us. If the plugin lives in rally
itself, we can no longer be broken by, e.g., a new validation added to
rally, because rally will then gate on our plugin too.

This work involves moving code, enabling the plugin in rally gate, etc.

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680619

Title:
  Move neutron rally plugin into rally repo

Status in neutron:
  New

Bug description:
  Rally is branchless and may break us. If the plugin lives in rally
  itself, we can no longer be broken by, e.g., a new validation added to
  rally, because rally will then gate on our plugin too.

  This work involves moving code, enabling the plugin in rally gate,
  etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680580] [NEW] rally job broken because of new validations failing on NeutronTrunks.create_and_list_trunk_subports

2017-04-06 Thread Ihar Hrachyshka
Public bug reported:

2017-04-06 16:36:04.330721 | 2017-04-06 16:36:04.330 | Subtask 
NeutronTrunks.create_and_list_trunk_subports[0] has wrong configuration
2017-04-06 16:36:04.332035 | 2017-04-06 16:36:04.331 | Subtask configuration:
2017-04-06 16:36:04.333765 | 2017-04-06 16:36:04.333 | {'runner': {"type": 
"constant", "times": 1, "concurrency": 4}, 'args': {"subport_count": 250}, 
'context': {"users": {"tenants": 1, "users_per_tenant": 1}, "quotas": 
{"neutron": {"network": -1, "port": 1000
2017-04-06 16:36:04.334810 | 2017-04-06 16:36:04.334 | 
2017-04-06 16:36:04.335942 | 2017-04-06 16:36:04.335 | Reason(s):
2017-04-06 16:36:04.337233 | 2017-04-06 16:36:04.337 |  Parameter 'concurrency' 
means a number of parallel executionsof iterations. Parameter 'times' means 
total number of iteration executions. It is redundant (and restricted) to have 
number of parallel iterations bigger then total number of iterations.`

http://logs.openstack.org/40/453740/3/check/gate-rally-dsvm-neutron-
neutron-ubuntu-xenial/249fa37/console.html#_2017-04-06_16_36_04_328580

This affects stable branches too.
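
A likely fix (my assumption, going by the validation message) is to keep
'times' at least as large as 'concurrency' in the task definition, e.g.:

    {"runner": {"type": "constant", "times": 4, "concurrency": 4},
     "args": {"subport_count": 250}}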

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680580

Title:
  rally job broken because of new validations failing on
  NeutronTrunks.create_and_list_trunk_subports

Status in neutron:
  In Progress

Bug description:
  2017-04-06 16:36:04.330721 | 2017-04-06 16:36:04.330 | Subtask 
NeutronTrunks.create_and_list_trunk_subports[0] has wrong configuration
  2017-04-06 16:36:04.332035 | 2017-04-06 16:36:04.331 | Subtask configuration:
  2017-04-06 16:36:04.333765 | 2017-04-06 16:36:04.333 | {'runner': {"type": 
"constant", "times": 1, "concurrency": 4}, 'args': {"subport_count": 250}, 
'context': {"users": {"tenants": 1, "users_per_tenant": 1}, "quotas": 
{"neutron": {"network": -1, "port": 1000
  2017-04-06 16:36:04.334810 | 2017-04-06 16:36:04.334 | 
  2017-04-06 16:36:04.335942 | 2017-04-06 16:36:04.335 | Reason(s):
  2017-04-06 16:36:04.337233 | 2017-04-06 16:36:04.337 |  Parameter 
'concurrency' means a number of parallel executionsof iterations. Parameter 
'times' means total number of iteration executions. It is redundant (and 
restricted) to have number of parallel iterations bigger then total number of 
iterations.`

  http://logs.openstack.org/40/453740/3/check/gate-rally-dsvm-neutron-
  neutron-ubuntu-xenial/249fa37/console.html#_2017-04-06_16_36_04_328580

  This affects stable branches too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680525] [NEW] keystone-manage fails with "ImportError: No module named 'memcache'"

2017-04-06 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/91/452691/2/check/gate-tempest-dsvm-py35
-ubuntu-xenial/ef52695/logs/devstacklog.txt.gz#_2017-04-06_10_56_26_001

2017-04-06 10:56:24.491 | + lib/keystone:bootstrap_keystone:657  :   
/usr/local/bin/keystone-manage bootstrap --bootstrap-username admin 
--bootstrap-password secretadmin --bootstrap-project-name admin 
--bootstrap-role-name admin --bootstrap-service-name keystone 
--bootstrap-region-id RegionOne --bootstrap-admin-url 
http://198.72.124.56/identity_admin --bootstrap-public-url 
http://198.72.124.56/identity
2017-04-06 10:56:26.000 | 2017-04-06 10:56:25.998 12948 CRITICAL keystone [-] 
ImportError: No module named 'memcache'
2017-04-06 10:56:26.000 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
Traceback (most recent call last):
2017-04-06 10:56:26.000 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/bin/keystone-manage", line 10, in 
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
sys.exit(main())
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/cmd/manage.py", line 45, in main
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
cli.main(argv=sys.argv, config_files=config_files)
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/cmd/cli.py", line 1331, in main
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
CONF.command.cmd_class.main()
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/cmd/cli.py", line 380, in main
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
klass = cls()
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/cmd/cli.py", line 67, in __init__
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
self.load_backends()
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/cmd/cli.py", line 130, in load_backends
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
drivers = backends.load_backends()
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/server/backends.py", line 32, in load_backends
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
cache.configure_cache()
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/opt/stack/new/keystone/keystone/common/cache/core.py", line 124, in 
configure_cache
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
cache.configure_cache_region(CONF, region)
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/oslo_cache/core.py", line 200, in 
configure_cache_region
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
'%s.' % conf.cache.config_prefix)
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/region.py", line 552, in 
configure_from_config
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
"%swrap" % prefix, None),
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/region.py", line 417, in 
configure
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
_config_prefix
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/api.py", line 81, in 
from_config_dict
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone for 
key in config_dict
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", 
line 208, in __init__
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
super(MemcacheArgs, self).__init__(arguments)
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", 
line 108, in __init__
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
self._imports()
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", 
line 287, in _imports
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
import memcache  # noqa
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 
ImportError: No module named 'memcache'
2017-04-06 10:56:26.001 | 2017-04-06 10:56:25.998 12948 TRACE keystone 

49 hits in the last day:
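
For what it's worth, the 'memcache' module is provided by the
python-memcached package, so a likely remediation (an assumption; the
proper fix may be a requirements change in devstack or keystone) is to
get it installed into the py35 environment:

    pip3 install python-memcached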

[Yahoo-eng-team] [Bug 1680183] [NEW] neutron-keepalived-state-change fails with "AssertionError: do not call blocking functions from the mainloop" in functional tests

2017-04-05 Thread Ihar Hrachyshka
Public bug reported:

17:39:17.802 6173 CRITICAL neutron [-] AssertionError: do not call blocking 
functions from the mainloop
17:39:17.802 6173 ERROR neutron Traceback (most recent call last):
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/bin/neutron-keepalived-state-change", 
line 10, in 
17:39:17.802 6173 ERROR neutron sys.exit(main())
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/keepalived_state_change.py", line 19, in main
17:39:17.802 6173 ERROR neutron keepalived_state_change.main()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 157, in 
main
17:39:17.802 6173 ERROR neutron cfg.CONF.monitor_cidr).start()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/daemon.py", line 249, in start
17:39:17.802 6173 ERROR neutron self.run()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 70, in 
run
17:39:17.802 6173 ERROR neutron for iterable in self.monitor:
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/async_process.py", line 256, in 
_iter_queue
17:39:17.802 6173 ERROR neutron yield queue.get(block=block)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/queue.py",
 line 313, in get
17:39:17.802 6173 ERROR neutron return waiter.wait()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/queue.py",
 line 141, in wait
17:39:17.802 6173 ERROR neutron return get_hub().switch()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
17:39:17.802 6173 ERROR neutron return self.greenlet.switch()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run
17:39:17.802 6173 ERROR neutron self.wait(sleep_time)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait
17:39:17.802 6173 ERROR neutron presult = self.do_poll(seconds)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
 line 62, in do_poll
17:39:17.802 6173 ERROR neutron return self.poll.poll(seconds)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 134, in 
handle_sigterm
17:39:17.802 6173 ERROR neutron self._kill_monitor()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 131, in 
_kill_monitor
17:39:17.802 6173 ERROR neutron run_as_root=True)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 221, in kill_process
17:39:17.802 6173 ERROR neutron execute(['kill', '-%d' % signal, pid], 
run_as_root=run_as_root)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 155, in execute
17:39:17.802 6173 ERROR neutron greenthread.sleep(0)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
17:39:17.802 6173 ERROR neutron assert hub.greenlet is not current, 'do not 
call blocking functions from the mainloop'
17:39:17.802 6173 ERROR neutron AssertionError: do not call blocking functions 
from the mainloop
17:39:17.802 6173 ERROR neutron

This is what I see when running the fullstack l3ha tests once I enable
syslog logging for the helper process.
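
The assertion suggests the SIGTERM handler runs inside the hub greenlet
and then calls blocking helpers. A generic sketch (not the actual fix;
handler and helper names are illustrative) of deferring the blocking
work to a worker greenthread instead:

    import signal

    import eventlet

    def _cleanup():
        # Blocking calls (e.g. spawning 'kill' via utils.execute) are
        # fine here: this runs in a regular greenthread, not the hub.
        pass

    def handle_sigterm(signum, frame):
        # Signal handlers may fire while the hub's mainloop is running,
        # so only schedule work here instead of doing it inline.
        eventlet.spawn_n(_cleanup)

    signal.signal(signal.SIGTERM, handle_sigterm)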

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests l3-ha

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680183

Title:
  neutron-keepalived-state-change fails with "AssertionError: do not
  call blocking functions from the mainloop" in functional tests

Status in neutron:
  New

Bug description:
  17:39:17.802 6173 CRITICAL neutron [-] AssertionError: do not call blocking 
functions from the mainloop
  17:39:17.802 6173 ERROR neutron Traceback (most recent call last):
  17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/bin/neutron-keepalived-state-change", 
line 10, in 
  17:39:17.802 6173 ERROR neutron sys.exit(main())
  17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/keepalived_state_change.py", line 19, in main
  17:39:17.802 6173 ERROR neutron keepalived_state_change.main()
  17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", 

[Yahoo-eng-team] [Bug 1679815] [NEW] test_router_interface_ops_bump_router fails with "AssertionError: 5 not greater than 5"

2017-04-04 Thread Ihar Hrachyshka
Public bug reported:

Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
return f(self, *args, **kwargs)
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/services/revisions/test_revision_plugin.py",
 line 162, in test_router_interface_ops_bump_router
router['revision_number'])
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 1233, in assertGreater
self.fail(self._formatMessage(msg, standardMsg))
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: 5 not greater than 5

http://logs.openstack.org/88/360488/3/gate/gate-neutron-
python35/1b33db5/testr_results.html.gz

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure unittest

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1679815

Title:
  test_router_interface_ops_bump_router fails with "AssertionError: 5
  not greater than 5"

Status in neutron:
  Confirmed

Bug description:
  Traceback (most recent call last):
File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
  return f(self, *args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/services/revisions/test_revision_plugin.py",
 line 162, in test_router_interface_ops_bump_router
  router['revision_number'])
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 1233, in assertGreater
  self.fail(self._formatMessage(msg, standardMsg))
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: 5 not greater than 5

  http://logs.openstack.org/88/360488/3/gate/gate-neutron-
  python35/1b33db5/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1679815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1679775] [NEW] test_network_list_queries_constant fails with testtools.matchers._impl.MismatchError: 28 != 13

2017-04-04 Thread Ihar Hrachyshka
Public bug reported:

Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
return f(self, *args, **kwargs)
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/plugins/ml2/test_plugin.py",
 line 595, in test_network_list_queries_constant
'networks')
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/db/test_db_base_plugin_v2.py",
 line 6578, in _assert_object_list_queries_constant
self.assertEqual(before_count, after_count)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 28 != 13

logs.openstack.org/12/453212/1/check/gate-neutron-
python35/53142ce/testr_results.html.gz

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure unittest

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1679775

Title:
  test_network_list_queries_constant fails with
  testtools.matchers._impl.MismatchError: 28 != 13

Status in neutron:
  Confirmed

Bug description:
  Traceback (most recent call last):
File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
  return f(self, *args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/plugins/ml2/test_plugin.py",
 line 595, in test_network_list_queries_constant
  'networks')
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/db/test_db_base_plugin_v2.py",
 line 6578, in _assert_object_list_queries_constant
  self.assertEqual(before_count, after_count)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 28 != 13

  logs.openstack.org/12/453212/1/check/gate-neutron-
  python35/53142ce/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1679775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504527] Re: network_device_mtu not documented in agent config files

2017-04-04 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

** No longer affects: neutron/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504527

Title:
  network_device_mtu not documented in agent config files

Status in neutron:
  Fix Released

Bug description:
  There is no mention of network_device_mtu in the agent config files,
  even though it is a supported and useful option.

  -bash-4.2$ grep network_device_mtu -r etc/
  -bash-4.2$
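
  For illustration, documenting the option would mean adding a commented
  sample entry to the agent config files, something like this (the help
  text and default shown are my guesses):

    [DEFAULT]
    # MTU setting for device. (integer value)
    # network_device_mtu = <None>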

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677784] Re: quota unit test failing because of missing setup

2017-03-30 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1677636 ***
https://bugs.launchpad.net/bugs/1677636

** This bug is no longer a duplicate of bug 1677676
   TestTrackedResource unit tests failing, plugin not initialized
** This bug has been marked a duplicate of bug 1677636
   TestTrackedResource.test_resync failed with AttributeError: 'NoneType' 
object has no attribute '_get_collection_query'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677784

Title:
  quota unit test failing because of missing setup

Status in neutron:
  New

Bug description:
  It looks like the quota unit tests might be depending on other unit
tests for setup.
  Failure spotted in py35 on https://review.openstack.org/#/c/451208/5

  http://logs.openstack.org/08/451208/5/check/gate-neutron-
  python35/023b962/testr_results.html.gz

  
  ft694.12: 
neutron.tests.unit.quota.test_resource.TestTrackedResource.test_count_with_dirty_false_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
  return f(self, *args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/quota/test_resource.py",
 line 147, in test_count_with_dirty_false
  res = self._test_count()
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/quota/test_resource.py",
 line 142, in _test_count
  self.context, res.name, self.tenant_id, in_use=0)
File "/home/jenkins/workspace/gate-neutron-python35/neutron/db/api.py", 
line 163, in wrapped
  return method(*args, **kwargs)
File "/home/jenkins/workspace/gate-neutron-python35/neutron/db/api.py", 
line 93, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/six.py",
 line 686, in reraise
  raise value
File "/home/jenkins/workspace/gate-neutron-python35/neutron/db/api.py", 
line 89, in wrapped
  return f(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_db/api.py",
 line 151, in wrapper
  ectxt.value = e.inner_exc
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/six.py",
 line 686, in reraise
  raise value
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_db/api.py",
 line 139, in wrapper
  return f(*args, **kwargs)
File "/home/jenkins/workspace/gate-neutron-python35/neutron/db/api.py", 
line 128, in wrapped
  LOG.debug("Retry wrapper got retriable exception: %s", e)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/six.py",
 line 686, in reraise
  raise value
File "/home/jenkins/workspace/gate-neutron-python35/neutron/db/api.py", 
line 124, in wrapped
  return f(*dup_args, **dup_kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/db/quota/api.py", line 
90, in set_quota_usage
  context, resource=resource, project_id=tenant_id)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/objects/base.py", line 
425, in get_object
  **cls.modify_fields_to_db(kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/objects/db/api.py", line 
32, in get_object
  return _get_filter_query(context, model, **kwargs).first()
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/objects/db/api.py", line 
27, in _get_filter_query
  query = plugin._get_collection_query(context, model, filters)
  AttributeError: 'NoneType' object has no attribute '_get_collection_query'

To manage notifications about this 

[Yahoo-eng-team] [Bug 1677636] [NEW] TestTrackedResource.test_resync failed with AttributeError: 'NoneType' object has no attribute '_get_collection_query'

2017-03-30 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/18/442518/11/gate/neutron-coverage-ubuntu-
xenial/e0c4f27/console.html

2017-03-30 11:30:54.119979 | Traceback (most recent call last):
2017-03-30 11:30:54.120013 |   File "neutron/tests/base.py", line 116, in func
2017-03-30 11:30:54.120036 | return f(self, *args, **kwargs)
2017-03-30 11:30:54.120070 |   File 
"neutron/tests/unit/quota/test_resource.py", line 255, in test_resync
2017-03-30 11:30:54.120091 | res.mark_dirty(self.context)
2017-03-30 11:30:54.120120 |   File "neutron/quota/resource.py", line 190, in 
mark_dirty
2017-03-30 11:30:54.120151 | quota_api.set_quota_usage_dirty(context, 
self.name, tenant_id)
2017-03-30 11:30:54.120177 |   File "neutron/db/api.py", line 163, in wrapped
2017-03-30 11:30:54.120199 | return method(*args, **kwargs)
2017-03-30 11:30:54.120266 |   File 
"/home/jenkins/workspace/neutron-coverage-ubuntu-xenial/.tox/cover/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 963, in wrapper
2017-03-30 11:30:54.120302 | return fn(*args, **kwargs)
2017-03-30 11:30:54.120338 |   File "neutron/db/quota/api.py", line 119, in 
set_quota_usage_dirty
2017-03-30 11:30:54.120366 | context, resource=resource, 
project_id=tenant_id)
2017-03-30 11:30:54.120394 |   File "neutron/objects/base.py", line 425, in 
get_object
2017-03-30 11:30:54.120417 | **cls.modify_fields_to_db(kwargs)
2017-03-30 11:30:54.120446 |   File "neutron/objects/db/api.py", line 32, in 
get_object
2017-03-30 11:30:54.120476 | return _get_filter_query(context, model, 
**kwargs).first()
2017-03-30 11:30:54.120506 |   File "neutron/objects/db/api.py", line 27, in 
_get_filter_query
2017-03-30 11:30:54.120542 | query = plugin._get_collection_query(context, 
model, filters)
2017-03-30 11:30:54.120580 | AttributeError: 'NoneType' object has no attribute 
'_get_collection_query'
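
The traceback shows that the plugin lookup (presumably via
directory.get_plugin()) returned None, i.e. no core plugin was registered
when the test ran in isolation. A sketch of the kind of fix this usually
needs, assuming the neutron test base class exposes setup_coreplugin as
other unit tests use (names illustrative):

    from neutron.tests import base

    class TestTrackedResource(base.BaseTestCase):

        def setUp(self):
            super(TestTrackedResource, self).setUp()
            # Register a core plugin so that directory.get_plugin()
            # does not return None when this test runs on its own.
            self.setup_coreplugin('ml2')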

** Affects: neutron
 Importance: High
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: gate-failure unittest

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Tags added: gate-failure unittest

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677636

Title:
  TestTrackedResource.test_resync failed with AttributeError: 'NoneType'
  object has no attribute '_get_collection_query'

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/18/442518/11/gate/neutron-coverage-ubuntu-
  xenial/e0c4f27/console.html

  2017-03-30 11:30:54.119979 | Traceback (most recent call last):
  2017-03-30 11:30:54.120013 |   File "neutron/tests/base.py", line 116, in func
  2017-03-30 11:30:54.120036 | return f(self, *args, **kwargs)
  2017-03-30 11:30:54.120070 |   File 
"neutron/tests/unit/quota/test_resource.py", line 255, in test_resync
  2017-03-30 11:30:54.120091 | res.mark_dirty(self.context)
  2017-03-30 11:30:54.120120 |   File "neutron/quota/resource.py", line 190, in 
mark_dirty
  2017-03-30 11:30:54.120151 | quota_api.set_quota_usage_dirty(context, 
self.name, tenant_id)
  2017-03-30 11:30:54.120177 |   File "neutron/db/api.py", line 163, in wrapped
  2017-03-30 11:30:54.120199 | return method(*args, **kwargs)
  2017-03-30 11:30:54.120266 |   File 
"/home/jenkins/workspace/neutron-coverage-ubuntu-xenial/.tox/cover/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 963, in wrapper
  2017-03-30 11:30:54.120302 | return fn(*args, **kwargs)
  2017-03-30 11:30:54.120338 |   File "neutron/db/quota/api.py", line 119, in 
set_quota_usage_dirty
  2017-03-30 11:30:54.120366 | context, resource=resource, 
project_id=tenant_id)
  2017-03-30 11:30:54.120394 |   File "neutron/objects/base.py", line 425, in 
get_object
  2017-03-30 11:30:54.120417 | **cls.modify_fields_to_db(kwargs)
  2017-03-30 11:30:54.120446 |   File "neutron/objects/db/api.py", line 32, in 
get_object
  2017-03-30 11:30:54.120476 | return _get_filter_query(context, model, 
**kwargs).first()
  2017-03-30 11:30:54.120506 |   File "neutron/objects/db/api.py", line 27, in 
_get_filter_query
  2017-03-30 11:30:54.120542 | query = 
plugin._get_collection_query(context, model, filters)
  2017-03-30 11:30:54.120580 | AttributeError: 'NoneType' object has no 
attribute '_get_collection_query'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1677636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677020] [NEW] test_filtering_shared_subnets failed in -api job with MismatchError comparing subnets

2017-03-28 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1674517 ***
https://bugs.launchpad.net/bugs/1674517

Public bug reported:

http://logs.openstack.org/99/407099/48/check/gate-neutron-dsvm-api-
ubuntu-xenial/a1969ce/testr_results.html.gz

pythonlogging:'': {{{
2017-03-28 19:25:18,166 7692 INFO [tempest.lib.common.rest_client] Request 
(SharedNetworksTest:test_filtering_shared_subnets): 201 POST 
http://15.184.69.59:9696/v2.0/networks 0.879s
2017-03-28 19:25:18,167 7692 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
Body: {"network": {"name": "tempest-test-network--1848706369"}}
Response - Headers: {u'connection': 'close', 'status': '201', 
u'x-openstack-request-id': 'req-052b7e7e-7b14-4675-a354-c07f3a9f125c', 
u'content-length': '587', u'content-type': 'application/json; charset=UTF-8', 
'content-location': 'http://15.184.69.59:9696/v2.0/networks', u'date': 'Tue, 28 
Mar 2017 19:25:18 GMT'}
Body: 
{"network":{"ipv6_address_scope":null,"dns_domain":"","revision_number":3,"port_security_enabled":true,"id":"2a656de1-7782-4205-9561-d9bb8e50c8c0","router:external":false,"availability_zone_hints":[],"availability_zones":[],"ipv4_address_scope":null,"shared":false,"project_id":"1f33d41e817c42b38d1a920b6eb5842e","status":"ACTIVE","subnets":[],"description":"","tags":[],"updated_at":"2017-03-28T19:25:17Z","name":"tempest-test-network--1848706369","qos_policy_id":null,"admin_state_up":true,"tenant_id":"1f33d41e817c42b38d1a920b6eb5842e","created_at":"2017-03-28T19:25:17Z","mtu":1450}}
2017-03-28 19:25:19,087 7692 INFO [tempest.lib.common.rest_client] Request 
(SharedNetworksTest:test_filtering_shared_subnets): 201 POST 
http://15.184.69.59:9696/v2.0/subnets 0.919s
2017-03-28 19:25:19,088 7692 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
Body: {"subnet": {"cidr": "10.1.0.0/28", "ip_version": 4, "gateway_ip": 
"10.1.0.1", "network_id": "2a656de1-7782-4205-9561-d9bb8e50c8c0"}}
Response - Headers: {u'connection': 'close', 'status': '201', 
u'x-openstack-request-id': 'req-e3c6dc37-7d8c-4f11-8c5a-1cabeae75931', 
u'content-length': '594', u'content-type': 'application/json; charset=UTF-8', 
'content-location': 'http://15.184.69.59:9696/v2.0/subnets', u'date': 'Tue, 28 
Mar 2017 19:25:19 GMT'}
Body: 
{"subnet":{"service_types":[],"description":"","enable_dhcp":true,"tags":[],"network_id":"2a656de1-7782-4205-9561-d9bb8e50c8c0","tenant_id":"1f33d41e817c42b38d1a920b6eb5842e","created_at":"2017-03-28T19:25:18Z","dns_nameservers":[],"updated_at":"2017-03-28T19:25:18Z","gateway_ip":"10.1.0.1","ipv6_ra_mode":null,"allocation_pools":[{"start":"10.1.0.2","end":"10.1.0.14"}],"host_routes":[],"revision_number":2,"ip_version":4,"ipv6_address_mode":null,"cidr":"10.1.0.0/28","project_id":"1f33d41e817c42b38d1a920b6eb5842e","id":"34750ed3-99c0-4de3-9971-076ac992ea1a","subnetpool_id":null,"name":""}}
2017-03-28 19:25:20,438 7692 INFO [tempest.lib.common.rest_client] Request 
(SharedNetworksTest:test_filtering_shared_subnets): 201 POST 
http://15.184.69.59:9696/v2.0/subnets 1.349s
2017-03-28 19:25:20,438 7692 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
Body: {"subnet": {"cidr": "10.1.0.0/28", "ip_version": 4, "gateway_ip": 
"10.1.0.1", "network_id": "30e2c6c3-0e6f-4c47-a516-6f0f6f3a2b1d"}}
Response - Headers: {u'connection': 'close', 'status': '201', 
u'x-openstack-request-id': 'req-46a771ec-b3f9-4ed8-918d-edf3fdb4bac2', 
u'content-length': '594', u'content-type': 'application/json; charset=UTF-8', 
'content-location': 'http://15.184.69.59:9696/v2.0/subnets', u'date': 'Tue, 28 
Mar 2017 19:25:20 GMT'}
Body: 
{"subnet":{"service_types":[],"description":"","enable_dhcp":true,"tags":[],"network_id":"30e2c6c3-0e6f-4c47-a516-6f0f6f3a2b1d","tenant_id":"7671eeda385745399b668bf68c000c53","created_at":"2017-03-28T19:25:19Z","dns_nameservers":[],"updated_at":"2017-03-28T19:25:19Z","gateway_ip":"10.1.0.1","ipv6_ra_mode":null,"allocation_pools":[{"start":"10.1.0.2","end":"10.1.0.14"}],"host_routes":[],"revision_number":2,"ip_version":4,"ipv6_address_mode":null,"cidr":"10.1.0.0/28","project_id":"7671eeda385745399b668bf68c000c53","id":"9d97a397-c27d-4b35-9c20-151930e40362","subnetpool_id":null,"name":""}}
2017-03-28 19:25:20,796 7692 INFO [tempest.lib.common.rest_client] Request 
(SharedNetworksTest:test_filtering_shared_subnets): 200 GET 
http://15.184.69.59:9696/v2.0/subnets?shared=True 0.357s
2017-03-28 19:25:20,797 7692 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
Body: None
Response - Headers: {u'connection': 'close', 'status': '200', 
u'x-openstack-request-id': 

[Yahoo-eng-team] [Bug 1677008] Re: Stop using os-cloud-config

2017-03-28 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
Milestone: None => pike-1

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
Milestone: pike-1 => None

** Tags added: needs-attention

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677008

Title:
  Stop using os-cloud-config

Status in tripleo:
  Triaged

Bug description:
  os-cloud-config is deprecated in Ocata and will be removed in the future.
  TripleO doesn't use it anymore. Only Neutron Client functional tests are 
using it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1677008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1676966] [NEW] TrunkManagerTestCase.test_connectivity failed to spawn ping

2017-03-28 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/99/407099/48/check/gate-neutron-dsvm-
functional-ubuntu-xenial/bb1c3ee/testr_results.html.gz

Traceback (most recent call last):
  File "neutron/tests/base.py", line 116, in func
return f(self, *args, **kwargs)
  File 
"neutron/tests/functional/services/trunk/drivers/openvswitch/agent/test_trunk_manager.py",
 line 216, in test_connectivity
self.tester.wait_for_sub_port_connectivity(self.tester.INGRESS)
  File "neutron/tests/common/conn_testers.py", line 494, in 
wait_for_sub_port_connectivity
"can't get through." % (src_ns, dst_ip)))
  File "neutron/common/utils.py", line 688, in wait_until_true
while not predicate():
  File "neutron/tests/common/conn_testers.py", line 60, in all_replied
sent, received = _get_packets_sent_received(src_ns, dst_ip, count)
  File "neutron/tests/common/conn_testers.py", line 54, in 
_get_packets_sent_received
pinger.start()
  File "neutron/tests/common/net_helpers.py", line 387, in start
self.proc = RootHelperProcess(cmd, namespace=self.namespace)
  File "neutron/tests/common/net_helpers.py", line 280, in __init__
self._wait_for_child_process()
  File "neutron/tests/common/net_helpers.py", line 315, in 
_wait_for_child_process
"in %d seconds" % (self.cmd, timeout)))
  File "neutron/common/utils.py", line 693, in wait_until_true
raise exception
RuntimeError: Process ['ping', '192.168.0.1', '-W', '1', '-c', '3'] hasn't been 
spawned in 20 seconds

http://logs.openstack.org/99/407099/48/check/gate-neutron-dsvm-
functional-ubuntu-xenial/bb1c3ee/logs/dsvm-functional-
logs/neutron.tests.functional.services.trunk.drivers.openvswitch.agent.test_trunk_manager.TrunkManagerTestCase.test_connectivity.txt.gz

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Tags added: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1676966

Title:
  TrunkManagerTestCase.test_connectivity failed to spawn ping

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/99/407099/48/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/bb1c3ee/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 116, in func
  return f(self, *args, **kwargs)
File 
"neutron/tests/functional/services/trunk/drivers/openvswitch/agent/test_trunk_manager.py",
 line 216, in test_connectivity
  self.tester.wait_for_sub_port_connectivity(self.tester.INGRESS)
File "neutron/tests/common/conn_testers.py", line 494, in 
wait_for_sub_port_connectivity
  "can't get through." % (src_ns, dst_ip)))
File "neutron/common/utils.py", line 688, in wait_until_true
  while not predicate():
File "neutron/tests/common/conn_testers.py", line 60, in all_replied
  sent, received = _get_packets_sent_received(src_ns, dst_ip, count)
File "neutron/tests/common/conn_testers.py", line 54, in 
_get_packets_sent_received
  pinger.start()
File "neutron/tests/common/net_helpers.py", line 387, in start
  self.proc = RootHelperProcess(cmd, namespace=self.namespace)
File "neutron/tests/common/net_helpers.py", line 280, in __init__
  self._wait_for_child_process()
File "neutron/tests/common/net_helpers.py", line 315, in 
_wait_for_child_process
  "in %d seconds" % (self.cmd, timeout)))
File "neutron/common/utils.py", line 693, in wait_until_true
  raise exception
  RuntimeError: Process ['ping', '192.168.0.1', '-W', '1', '-c', '3'] hasn't 
been spawned in 20 seconds

  http://logs.openstack.org/99/407099/48/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/bb1c3ee/logs/dsvm-functional-
  
logs/neutron.tests.functional.services.trunk.drivers.openvswitch.agent.test_trunk_manager.TrunkManagerTestCase.test_connectivity.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1676966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645810] Re: neutron api update port and agent rpc update port timestamp may cause db deadlock

2017-03-28 Thread Ihar Hrachyshka
Also, we don't lock records anymore.

** Tags removed: needs-attention

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Status: Fix Released => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645810

Title:
  neutron  api update port and agent rpc  update port timestamp  may
  cause db deadlock

Status in neutron:
  Opinion

Bug description:
  The test scenario proceeds as follows:

  1. The server API updates some attributes of a port, such as the host ID:
  (1) The ML2 plugin calls update_port > db.get_locked_port_and_binding.
  (2) The server API thread acquires the port DB update lock and then
  waits for the timestamp lock to update the timestamp.

  2. The OVS agent receives an RPC message and handles a port status update:
  (1) The agent RPC method update_port_status is called.
  (2) This method first flushes the session, which acquires the timestamp
  lock for updating the port.

  3. At this point:
  (1) The agent RPC thread holds the timestamp lock to update the timestamp.
  (2) The server API thread holds the port lock to update the port.
  (3) The agent's update_port_status waits for the port lock, while the
  server's db.get_locked_port_and_binding waits for the timestamp lock,
  which causes a deadlock. A generic illustration follows below.
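
  A classic illustration of the usual remedy (a generic sketch with
  thread locks, not neutron code or actual DB row locks): make every
  code path take the two locks in the same order, so neither thread can
  hold one lock while waiting for the other.

    import threading

    port_lock = threading.Lock()
    timestamp_lock = threading.Lock()

    def update_port_and_timestamp():
        # Both the API path and the RPC path must use this same order;
        # the deadlock above happens when one path takes port_lock
        # first while the other takes timestamp_lock first.
        with port_lock:
            with timestamp_lock:
                pass  # update the port row, then its timestamp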

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602081] Re: Use oslo.context's policy dict

2017-03-27 Thread Ihar Hrachyshka
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602081

Title:
  Use oslo.context's policy dict

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a cross project goal to standardize the values available to
  policy writers and to improve the basic oslo.context object. It is
  part of the follow up work to bug #1577996 and bug #968696.

  There has been an ongoing problem with how we define the 'admin' role.
  Because tokens are project scoped, having the 'admin' role on any
  project granted you the 'admin' role on all of OpenStack. As a
  solution to this, keystone defined an is_admin_project field so that
  keystone designates a single project that your token must be scoped to
  in order to perform admin operations. This has been implemented.

  The next phase of this is to make all the projects understand the
  X-Is-Admin-Project header from keystonemiddleware and pass it to
  oslo_policy. However, this pattern (keystone changes something and then
  goes to every project to fix it) has been repeated a number of times
  now, and we would like to make it much more automatic.

  Ongoing work has enhanced the base oslo.context object to include both
  the load_from_environ and to_policy_values methods. The
  load_from_environ classmethod takes an environment dict with all the
  standard auth_token and oslo middleware headers and loads them into
  their standard place on the context object.

  The to_policy_values() then creates a standard credentials dictionary
  with all the information that should be required to enforce policy
  from the context. The combination of these two methods means in future
  when authentication information needs to be passed to policy it can be
  handled entirely by oslo.context and does not require changes in each
  individual service.

  Note that in future a similar pattern will hopefully be employed to
  simplify passing authentication information over RPC to solve the
  timeout issues. This is a prerequisite for that work.

  There are a few common problems in services that are required to make
  this work:

  1. Most service context.__init__ functions take and discard **kwargs.
  This is so that, if context.from_dict receives arguments it doesn't
  know how to handle (possibly because new things have been added to the
  base to_dict), it ignores them. Unfortunately, to make the
  load_from_environ method work we need to pass parameters to __init__
  that are handled by the base class.

  To make this work we simply have to do a better job of using
  from_dict. Instead of passing everything to __init__ and ignoring what
  we don't know, we have from_dict extract only the parameters that the
  context knows how to use and call __init__ with those.

  2. The parameters passed to the base context.__init__ are old.
  Typically they are user and tenant where most services expect user_id
  and project_id. There is ongoing work to improve this in oslo.context
  but for now we have to ensure that the subclass correctly sets and
  uses the right variable names.

  3. Some services provide additional information to the policy
  enforcement method. To continue to make this function we will simply
  override the to_policy_values method in the subclasses.
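
  A rough sketch of points 1 and 3 above (illustrative only; it assumes
  a recent oslo.context whose __init__ accepts user_id/project_id, and
  the extra keys shown are placeholders for service-specific ones):

    from oslo_context import context

    class RequestContext(context.RequestContext):

        @classmethod
        def from_dict(cls, values):
            # Extract only the keys we know how to handle instead of
            # swallowing everything with **kwargs in __init__ (point 1).
            return cls(
                user_id=values.get('user_id'),
                project_id=values.get('project_id'),
                request_id=values.get('request_id'),
            )

        def to_policy_values(self):
            # Start from the standard credentials dict and append
            # service-specific entries (point 3).
            values = super(RequestContext, self).to_policy_values()
            values['is_admin'] = getattr(self, 'is_admin', False)
            return values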

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1602081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2017-03-27 Thread Ihar Hrachyshka
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Barbican:
  In Progress
Status in Cinder:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in Karbor:
  Fix Released
Status in kolla:
  Fix Released
Status in kuryr:
  Fix Released
Status in kuryr-libnetwork:
  Fix Released
Status in Mistral:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in osprofiler:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  Won't Fix
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.
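
  For reference, the wrapper lives in oslo.utils these days, and usage
  is a drop-in replacement:

    from oslo_utils import uuidutils

    # instead of: str(uuid.uuid4())
    new_id = uuidutils.generate_uuid()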

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674889] Re: Fix get_schema_helper bug in some case

2017-03-27 Thread Ihar Hrachyshka
Is port 6642 something specific used by OVN?

** Tags added: needs-attention

** Changed in: neutron
   Status: In Progress => Incomplete

** Tags added: ovs

** Also affects: ovsdbapp
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674889

Title:
  Fix get_schema_helper bug in some case

Status in networking-ovn:
  New
Status in neutron:
  Incomplete
Status in ovsdbapp:
  New

Bug description:
  When the OVN ovsdb-server is not running and neutron-server is started
  or restarted, a manager gets set on the local Open vSwitch database.
  Neutron-server can then no longer connect to the OVN ovsdb, because
  port 6642 is occupied, unless the manager is deleted manually:
  Manager "ptcp:6642:10.157.0.159"
  Bridge br-int
  fail_mode: secure

  This bug does not always occur; it was introduced by a recent neutron
  modification. I will port idlutils.get_schema_helper from ovsdbapp to
  neutron and, in networking-ovn, will pass try_add_manager=False.

  I also submitted a bug[1] in neutron a few days ago, now it is merged
  into this one.

  [1] https://bugs.launchpad.net/neutron/+bug/1672590

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1674889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675151] Re: Switch to haproxy fails dhcp request

2017-03-27 Thread Ihar Hrachyshka
Yes, Daniel is right, you need fresh haproxy.

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Invalid

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1675151

Title:
  Switch to haproxy fails dhcp request

Status in neutron:
  Invalid

Bug description:
  Looks like the patch to Switch ns-metadata-proxy to haproxy is breaking our 
Dell Ironic CI builds.
  haproxy doesn't like the config file. 

  
  The patch https://review.openstack.org/#/c/431691/ broke our Ironic CI 
builds; can you help look at the logs and help me fix it?
  Full logs
  
https://stash.opencrowbar.org/logs/71/446571/12/check/dell-hw-tempest-dsvm-ironic-pxe_ipmitool/3a6e628/
   
  f6ecb-068d-4c89-8fe9-c4c14f0228ee.pid get_value_from_file 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:252
  2017-03-22 16:30:10.365 15528 DEBUG neutron.agent.metadata.driver 
[req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] haproxy_cfg =
  global
  log /dev/log local0 debug
  user    stack
  group   stack
  maxconn 1024
  pidfile 
/opt/stack/data/neutron/external/pids/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.pid
  daemon
   
  defaults
  log global
  mode http
  option httplog
  option dontlognull
  option http-server-close
  option forwardfor
  retries 3
  timeout http-request30s
  timeout connect 30s
  timeout client  32s
  timeout server  32s
  timeout http-keep-alive 30s
   
  listen listener
  bind 0.0.0.0:80
  server metadata /opt/stack/data/neutron/metadata_proxy
  http-request add-header X-Neutron-Network-ID 
075f6ecb-068d-4c89-8fe9-c4c14f0228ee
  create_config_file /opt/stack/new/neutron/neutron/agent/metadata/driver.py:126
  2017-03-22 16:30:10.366 15528 DEBUG neutron.agent.linux.utils 
[req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Running command (rootwrap 
daemon): ['ip', 'netns', 'exec'
  , 'qdhcp-075f6ecb-068d-4c89-8fe9-c4c14f0228ee', 'haproxy', '-f', 
'/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf']
 execute_rootwr
  ap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:108
  2017-03-22 16:30:10.440 15528 ERROR neutron.agent.linux.utils 
[req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Exit code: 1; Stdin: ; Stdout: ; 
Stderr: [ALERT] 080/1630
  10 (16482) : parsing 
[/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf:26]
 : Unknown host in '/opt/stack/data/neutron/metadata_prox
  y'
  [ALERT] 080/163010 (16482) : parsing 
[/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf:27]:
 unknown parameter 'X-Neutron-Network-ID
  ', expects 'allow', 'deny', 'auth'.
  [ALERT] 080/163010 (16482) : Error(s) found in configuration file : 
/opt/stack/data/neutron/ns-metadata-proxy/075f6ecb-068d-4c89-8fe9-c4c14f0228ee.conf
   
  2017-03-22 16:30:10.440 15528 DEBUG oslo_concurrency.lockutils 
[req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Releasing semaphore 
"dhcp-agent-network-lock-075f6ecb-06
  8d-4c89-8fe9-c4c14f0228ee" lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server 
[req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Exception during message handling
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/o
  017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server 
[req-19ee385c-6764-4d9f-bf50-eab47d4bab38 - -] Exception during message handling
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
157, in _process_inco
  ming
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
213, in dispatch
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _do_dispa
  tch
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2017-03-22 16:30:10.441 15528 ERROR oslo_messaging.rpc.server   File 

[Yahoo-eng-team] [Bug 1663458] Re: brutal stop of ovs-agent doesn't kill ryu controller

2017-03-22 Thread Ihar Hrachyshka
The agent now reduces timeout for RPC requests. It doesn't affect
existing requests, and so there is still an issue, but that should be
(first) solved in oslo.messaging, for which bug 1672836 was reported. We
may revisit how we interrupt RPC communication in the future when we
have support for that in oslo. For now, let's close the bug.

** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: needs-attention

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663458

Title:
  brutal stop of ovs-agent doesn't kill ryu controller

Status in neutron:
  Fix Released
Status in tripleo:
  Triaged

Bug description:
  It seems like when we kill neutron-ovs-agent and start it again, the
  ryu controller fails to start because the previous instance (in
  eventlet) is still running.

  (... ovs agent is failing to start and is brutally killed)

  Trying to start the process 5 minutes later:
  INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 
10.0.0.0rc2.dev33
  INFO ryu.base.app_manager [-] loading app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
  INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
  INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
  INFO ryu.base.app_manager [-] instantiating app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of 
OVSNeutronAgentRyuApp
  INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of 
OFPHandler
  INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of 
OfctlService
  ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call 
last):
File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 54, in _launch
  return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
97, in __call__
  self.ofp_ssl_listen_port)
File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
120, in server_loop
  datapath_connection_factory)
File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 117, in 
__init__
  self.server = eventlet.listen(listen_info)
File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 43, 
in listen
  sock.bind(addr)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
  error: [Errno 98] Address already in use
  INFO neutron.agent.ovsdb.native.vlog [-] tcp:127.0.0.1:6640: connecting...
  INFO neutron.agent.ovsdb.native.vlog [-] tcp:127.0.0.1:6640: connected
  INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[-] Bridge br-int has datapath-ID badb62a6184f
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[-] Switch connection timeout

  I haven't figured out yet how the previous instance of ovs agent was
  killed (my theory is that Puppet killed it but I don't have the
  killing code yet, I'll update the bug asap).
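
  A quick way to confirm the theory (a sketch; the host and port below
  are only assumed to match the agent's defaults) is to probe whether
  the OpenFlow listen port is still bound by the leftover eventlet
  listener before starting a new agent:

    import socket

    def of_listen_port_free(host='0.0.0.0', port=6633):
        # A badly killed agent leaves the ryu eventlet listener bound;
        # bind() fails with EADDRINUSE in exactly that case.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind((host, port))
            return True
        except socket.error:
            return False
        finally:
            sock.close()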

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1663458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674787] [NEW] Bump default quotas for core resources

2017-03-21 Thread Ihar Hrachyshka
Public bug reported:

Default quotas for ports, networks, and subnets are quite low, so most
operators bump them, which doesn't make for a great experience. Nor does
it help testing in the gate, where we need to bump quotas to
successfully execute our own tempest suite.

I suggest we bump quotas for those resources, to remove another knob
everyone needs to bump in any realistic environment (at least in gate).

** Affects: neutron
 Importance: Wishlist
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: usability

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674787

Title:
  Bump default quotas for core resources

Status in neutron:
  In Progress

Bug description:
  Default quotas for ports, networks, and subnets are quite low, so
  most operators bump them, which doesn't make for a great experience.
  Nor does it help testing in the gate, where we need to bump quotas to
  successfully execute our own tempest suite.

  I suggest we bump quotas for those resources, to remove another knob
  everyone needs to bump in any realistic environment (at least in
  gate).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1674787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674780] [NEW] L3 HA: check scripts are written after keepalived is (re)started

2017-03-21 Thread Ihar Hrachyshka
Public bug reported:

Code inspection showed that the L3 HA implementation writes the config
file for keepalived, then (re)starts the daemon, and only then attempts
to write the check scripts. This is a race condition vector that would
show up if the agent took longer to write the check scripts, or failed
to write them at all due to some other bug; in that case, the daemon and
the router may have fallen back to the backup state.

We should first prepare all files, then (re)start keepalived.

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674780

Title:
  L3 HA: check scripts are written after keepalived is (re)started

Status in neutron:
  In Progress

Bug description:
  Code inspection showed that the L3 HA implementation writes the
  config file for keepalived, then (re)starts the daemon, and only then
  attempts to write the check scripts. This is a race condition vector
  that would show up if the agent took longer to write the check
  scripts, or failed to write them at all due to some other bug; in
  that case, the daemon and the router may have fallen back to the
  backup state.

  We should first prepare all files, then (re)start keepalived.
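
  In sketch form (the injected callables stand in for the agent's
  actual helpers, which are named differently), the fix is a plain
  reordering:

    def start_keepalived(write_config, write_check_scripts,
                         restart_daemon):
        # Prepare everything keepalived will reference first...
        write_config()
        write_check_scripts()  # previously done after the restart
        # ...and only then (re)start the daemon, so it never runs
        # against a half-prepared state directory.
        restart_daemon()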

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1674780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632290] Re: RuntimeError: Metadata proxy didn't spawn

2017-03-21 Thread Ihar Hrachyshka
Metadata proxy is haproxy-based now, so this should not happen. If it still
does, please reopen.

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632290

Title:
  RuntimeError: Metadata proxy didn't spawn

Status in neutron:
  Fix Released

Bug description:
  Logstash started seeing this failure October 3rd. Not seeing any
  occurrences before that. It's very intermittent:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22RuntimeError%3A%20Metadata%20proxy%20didn't%20spawn%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1632290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627106] Re: TimeoutException while executing tests adding bridge using OVSDB native

2017-03-21 Thread Ihar Hrachyshka
0 hits in 10 days, closing the bug.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627106

Title:
  TimeoutException while executing tests adding bridge using OVSDB
  native

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/91/366291/12/check/gate-neutron-dsvm-
  functional-ubuntu-trusty/a23c816/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 62, in 
test_post_commit_vswitchd_completed_no_failures
  self._add_br_and_test()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 56, in 
_add_br_and_test
  self._add_br()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 52, in 
_add_br
  tr.add(ovsdb.add_br(self.brname))
File "neutron/agent/ovsdb/api.py", line 76, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 72, in commit
  'timeout': self.timeout})
  neutron.agent.ovsdb.api.TimeoutException: Commands 
[AddBridgeCommand(name=test-br6925d8e2, datapath_type=None, may_exist=True)] 
exceeded timeout 10 seconds

  
  I suspect this one may hit us because we finally made the timeout work with 
Icd745514adc14730b9179fa7a6dd5c115f5e87a5.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674557] [NEW] eventlet 0.20.x breaks functional tests due to missing select.poll()

2017-03-20 Thread Ihar Hrachyshka
Public bug reported:

eventlet was bumped from 0.19.x to 0.20.1 in the gate, and it broke the
functional job:

http://logs.openstack.org/78/385178/18/check/gate-neutron-dsvm-
functional-ubuntu-xenial/184b97c/testr_results.html.gz

Traceback (most recent call last):
  File "neutron/tests/base.py", line 116, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/agent/l3/test_legacy_router.py", line 155, in 
test_conntrack_disassociate_fip_legacy_router
self._test_conntrack_disassociate_fip(ha=False)
  File "neutron/tests/functional/agent/l3/framework.py", line 159, in 
_test_conntrack_disassociate_fip
self.assertTrue(netcat.test_connectivity())
  File "neutron/tests/common/net_helpers.py", line 505, in test_connectivity
message = self.server_process.read_stdout(READ_TIMEOUT).strip()
  File "neutron/tests/common/net_helpers.py", line 287, in read_stdout
return self._read_stream(self.stdout, timeout)
  File "neutron/tests/common/net_helpers.py", line 292, in _read_stream
poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'

That's because, as per the release notes for 0.20.0, select.poll was removed:

http://eventlet.net/doc/changelog.html#id2

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Confirmed

** Tags added: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674557

Title:
  eventlet 0.20.x breaks functional tests due to missing select.poll()

Status in neutron:
  Confirmed

Bug description:
  eventlet was bumped from 0.19.x to 0.20.1 in the gate, and it broke
  the functional job:

  http://logs.openstack.org/78/385178/18/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/184b97c/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 116, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/l3/test_legacy_router.py", line 155, 
in test_conntrack_disassociate_fip_legacy_router
  self._test_conntrack_disassociate_fip(ha=False)
File "neutron/tests/functional/agent/l3/framework.py", line 159, in 
_test_conntrack_disassociate_fip
  self.assertTrue(netcat.test_connectivity())
File "neutron/tests/common/net_helpers.py", line 505, in test_connectivity
  message = self.server_process.read_stdout(READ_TIMEOUT).strip()
File "neutron/tests/common/net_helpers.py", line 287, in read_stdout
  return self._read_stream(self.stdout, timeout)
File "neutron/tests/common/net_helpers.py", line 292, in _read_stream
  poller = select.poll()
  AttributeError: 'module' object has no attribute 'poll'

  That's because, as per the release notes for 0.20.0, select.poll was
  removed:

  http://eventlet.net/doc/changelog.html#id2
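
  A minimal guard for code like this (a sketch, not the actual
  net_helpers fix) is to fall back to select.select() when poll() is
  missing under eventlet 0.20.x:

    import select

    def wait_readable(fd, timeout):
        # eventlet 0.20.x dropped select.poll() from its green select
        # module, so only use it when it is actually present.
        if hasattr(select, 'poll'):
            poller = select.poll()
            poller.register(fd, select.POLLIN)
            return bool(poller.poll(timeout * 1000))  # wants ms
        readable, _w, _x = select.select([fd], [], [], timeout)
        return bool(readable)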

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1674557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674434] [NEW] RouterL3AgentBindingDbObjTestCase may fail with NeutronDbObjectDuplicateEntry

2017-03-20 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/08/360908/51/gate/gate-neutron-
python35/8473cfb/testr_results.html.gz

Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
return f(self, *args, **kwargs)
  File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/objects/test_base.py",
 line 1640, in test_count_validate_filters_false
self._make_object(fields).create()
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/objects/base.py", 
line 212, in decorator
res = func(self, *args, **kwargs)
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/objects/base.py", 
line 592, in create
object_class=self.__class__, db_exception=db_exc)
neutron.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to create a 
duplicate RouterL3AgentBinding: for attribute(s) ['router_id', 'binding_index'] 
with value(s) None

This is triggered by https://review.openstack.org/#/c/360908/

** Affects: neutron
     Importance: Medium
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: gate-failure unittest

** Tags added: gate-failure unittest

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674434

Title:
  RouterL3AgentBindingDbObjTestCase may fail with
  NeutronDbObjectDuplicateEntry

Status in neutron:
  In Progress

Bug description:
  http://logs.openstack.org/08/360908/51/gate/gate-neutron-
  python35/8473cfb/testr_results.html.gz

  Traceback (most recent call last):
File "/home/jenkins/workspace/gate-neutron-python35/neutron/tests/base.py", 
line 116, in func
  return f(self, *args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/tests/unit/objects/test_base.py",
 line 1640, in test_count_validate_filters_false
  self._make_object(fields).create()
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/objects/base.py", line 
212, in decorator
  res = func(self, *args, **kwargs)
File 
"/home/jenkins/workspace/gate-neutron-python35/neutron/objects/base.py", line 
592, in create
  object_class=self.__class__, db_exception=db_exc)
  neutron.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to create a 
duplicate RouterL3AgentBinding: for attribute(s) ['router_id', 'binding_index'] 
with value(s) None

  This is triggered by https://review.openstack.org/#/c/360908/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1674434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673427] Re: mysql OOM ( Out of sort memory, consider increasing server sort buffer size )

2017-03-16 Thread Ihar Hrachyshka
Fixed by https://review.openstack.org/#/c/446196/

** Changed in: devstack
   Status: New => Fix Released

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1673427

Title:
  mysql OOM ( Out of sort memory, consider increasing server sort buffer
  size )

Status in devstack:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  Neutron fails with:

  http://logs.openstack.org/84/445884/2/gate/gate-neutron-dsvm-api-
  ubuntu-xenial/800e806/logs/screen-q-svc.txt.gz

  2017-03-15 21:02:19.418 30853 ERROR neutron.api.v2.resource DBError: 
(pymysql.err.InternalError) (1038, u'Out of sort memory, consider increasing 
server sort buffer size') [SQL: u'SELECT networks.id AS network_id, subnets.id 
AS subnet_id, subnets.cidr AS subnets_cidr, subnets.ip_version AS 
subnets_ip_version, networks.name AS network_name, networks.project_id AS 
networks_project_id, subnets.name AS subnet_name, 
count(ipallocations.subnet_id) AS used_ips \nFROM networks LEFT OUTER JOIN 
subnets ON networks.id = subnets.network_id LEFT OUTER JOIN ipallocations ON 
subnets.id = ipallocations.subnet_id GROUP BY networks.id, subnets.id, 
subnets.cidr, subnets.ip_version, networks.name, networks.project_id, 
subnets.name']
  2017-03-15 21:02:19.418 30853 ERROR neutron.api.v2.resource 

  
  So may we consider changing the sort buffer size?
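
  For anyone reproducing this locally, the knob can also be raised at
  runtime before rerunning the failing query (the value and credentials
  are illustrative; the actual gate fix landed in devstack):

    import pymysql

    conn = pymysql.connect(host='127.0.0.1', user='root',
                           password='secretdbpass')
    with conn.cursor() as cur:
        # The MySQL default is 256K; bump the server-wide value to 2M
        # so the big GROUP BY above fits in the sort buffer.
        cur.execute('SET GLOBAL sort_buffer_size = 2097152')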

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1673427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673124] [NEW] test_admin_create_network_keystone_v3 failed with KeyError: 'project_id'

2017-03-15 Thread Ihar Hrachyshka
Public bug reported:

ft1.1: 
neutron.tests.tempest.api.admin.test_networks.NetworksTestAdmin.test_admin_create_network_keystone_v3[id-d3c76044-d067-4cb0-ae47-8cdd875c7f67]_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2017-03-15 05:06:00,443 8618 INFO [tempest.lib.common.rest_client] Request 
(NetworksTestAdmin:test_admin_create_network_keystone_v3): 201 POST 
http://158.69.89.86/identity/v3/auth/tokens
2017-03-15 05:06:00,444 8618 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Accept': 'application/json', 'Content-Type': 'application/json'}
Body: 
Response - Headers: {'content-location': 
'http://158.69.89.86/identity/v3/auth/tokens', u'content-length': '3667', 
u'x-openstack-request-id': 'req-87ad6c4d-f31c-4b0f-9d09-f0dfea7967ef', u'vary': 
'X-Auth-Token', u'date': 'Wed, 15 Mar 2017 05:06:00 GMT', u'connection': 
'close', u'content-type': 'application/json', u'server': 'Apache/2.4.18 
(Ubuntu)', 'status': '201', u'x-subject-token': ''}
Body: {"token": {"is_domain": false, "methods": ["password"], "roles": 
[{"id": "49586c062a164a0387b2c81ef73f28c2", "name": "admin"}, {"id": 
"9cca5926cfa14c84a060bf4926657c07", "name": "Member"}], "expires_at": 
"2017-03-15T06:06:00.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "0d567c4487d049988d49a79acd159b42", "name": 
"tempest-NetworksTestAdmin-1042551292"}, "catalog": [{"endpoints": [{"url": 
"http://158.69.89.86:8080;, "interface": "admin", "region": "RegionOne", 
"region_id": "RegionOne", "id": "0134e566b4a3417bac59232f7581214b"}, {"url": 
"http://158.69.89.86:8080/v1/AUTH_0d567c4487d049988d49a79acd159b42;, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"62fed52298d6447f81a6caa4aab41254"}], "type": "object-store", "id": 
"0b9611b345d04326a6fd47661adfd998", "name": "swift"}, {"endpoints": [{"url": 
"http://158.69.89.86/placement;, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "12e21ad49
 605441ca3c2b197523905f1"}], "type": "placement", "id": 
"2cd7d15b20794d21b920e8636dbf590a", "name": "placement"}, {"endpoints": 
[{"url": "http://158.69.89.86:8776/v1/0d567c4487d049988d49a79acd159b42;, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"c064e8594368493b968a8f3ba91df6ab"}], "type": "volume", "id": 
"2ce6191d34104ac39017cb0ccc8fbb97", "name": "cinder"}, {"endpoints": [{"url": 
"http://158.69.89.86:8774/v2.1;, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "e2d7023e14c54fa693919d9bcdcca435"}], "type": 
"compute", "id": "2cfbccc3ee9848aebcbc32aef3ec351d", "name": "nova"}, 
{"endpoints": [{"url": 
"http://158.69.89.86:8774/v2/0d567c4487d049988d49a79acd159b42;, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"3c848f675f1047608ceac0477ab3f3da"}], "type": "compute_legacy", "id": 
"b8c84ebaf06442a0af3e25fcfe60c962", "name": "nova_legacy"}, {"endpoints": 
[{"url": "http://158.69.89.86:9292;, "inter
 face": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"1b0f3fbd80f344049cb4c3edb7392a18"}], "type": "image", "id": 
"c759a7968ff74612998aca5343fd8306", "name": "glance"}, {"endpoints": [{"url": 
"http://158.69.89.86/identity_admin;, "interface": "admin", "region": 
"RegionOne", "region_id": "RegionOne", "id": 
"01c25a55e10b4c0aab6d6aca140ad66b"}, {"url": "http://158.69.89.86/identity;, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"78397ca9d2214f91b45294f5938b4609"}], "type": "identity", "id": 
"cf274a9a95c740028d152ae31d904649", "name": "keystone"}, {"endpoints": [{"url": 
"http://158.69.89.86:8776/v2/0d567c4487d049988d49a79acd159b42;, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"68a27567b8f543c5b4281d2e187a68ec"}], "type": "volumev2", "id": 
"deaacb06abc94de49358b11acafc90e6", "name": "cinderv2"}, {"endpoints": [{"url": 
"http://158.69.89.86:8776/v3/0d567c4487d049988d49a79acd159b42;, "interface": 
"public",
  "region": "RegionOne", "region_id": "RegionOne", "id": 
"6f91491ad9b645babd22633e1b6d389b"}], "type": "volumev3", "id": 
"efa1876fbcd74c65959456057ca6d4a9", "name": "cinderv3"}, {"endpoints": [{"url": 
"http://158.69.89.86:9696/;, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "a5f18abdddf949219731ff2e7da0d2b4"}], "type": 
"network", "id": "f830c6fe0f5f4e4190b292f45aa212ce", "name": "neutron"}], 
"user": {"password_expires_at": null, "domain": {"id": "default", "name": 
"Default"}, "id": "4db7f3cf9e0945bfbbe64e277e3cb563", "name": 
"tempest-NetworksTestAdmin-1042551292"}, "audit_ids": 
["FOAqC8pzQC-V1EcUiggviw"], "issued_at": "2017-03-15T05:06:00.00Z"}}
2017-03-15 05:06:00,947 8618 INFO [tempest.lib.common.rest_client] Request 
(NetworksTestAdmin:test_admin_create_network_keystone_v3): 201 POST 
http://158.69.89.86:9696/v2.0/networks 0.503s
2017-03-15 05:06:00,948 8618 DEBUG

[Yahoo-eng-team] [Bug 1673086] [NEW] test_filter_router_tags fails with testtools.matchers._impl.MismatchError: set(['tag-res2', 'tag-res3']) != set([u'tag-res3'])

2017-03-15 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/53/319353/4/check/gate-neutron-dsvm-api-
ubuntu-xenial/bafaa60/testr_results.html.gz

ft1.1: 
neutron.tests.tempest.api.test_tag.TagFilterRouterTestJSON.test_filter_router_tags[id-cdd3f3ea-073d-4435-a6cb-826a4064193d,smoke]_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2017-03-15 05:06:28,543 8715 INFO [tempest.lib.common.rest_client] Request 
(TagFilterRouterTestJSON:test_filter_router_tags): 200 GET 
http://15.184.66.234:9696/v2.0/routers?tags=red 0.300s
2017-03-15 05:06:28,543 8715 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'X-Auth-Token': '', 
'Accept': 'application/json'}
Body: None
Response - Headers: {'content-location': 
'http://15.184.66.234:9696/v2.0/routers?tags=red', 'status': '200', 
u'x-openstack-request-id': 'req-dd2c112c-7e0b-4c1f-a3e9-67a993cb3765', 
u'content-type': 'application/json', u'content-length': '1431', u'date': 'Wed, 
15 Mar 2017 05:06:28 GMT', u'connection': 'close'}
Body: {"routers": [{"status": "ACTIVE", "external_gateway_info": null, 
"availability_zone_hints": [], "availability_zones": [], "description": "", 
"tags": ["blue", "red", "green"], "tenant_id": 
"a74304310e6a4a669dae814a4f644503", "created_at": "2017-03-15T05:06:25Z", 
"admin_state_up": false, "updated_at": "2017-03-15T05:06:27Z", "flavor_id": 
null, "revision_number": 6, "routes": [], "project_id": 
"a74304310e6a4a669dae814a4f644503", "id": 
"388e0638-9e6d-4658-b9b3-ac664fbbc96b", "name": "tag-res3"}, {"status": 
"ACTIVE", "external_gateway_info": null, "availability_zone_hints": [], 
"availability_zones": [], "description": "", "tags": ["red"], "tenant_id": 
"a74304310e6a4a669dae814a4f644503", "created_at": "2017-03-15T05:06:25Z", 
"admin_state_up": false, "updated_at": "2017-03-15T05:06:27Z", "flavor_id": 
null, "revision_number": 6, "routes": [], "project_id": 
"a74304310e6a4a669dae814a4f644503", "id": 
"74dca3b4-9cf1-493e-8acb-1845da14f2f5", "name": "tag-res2"}, {"status": 
"ACTIVE",
  "external_gateway_info": null, "availability_zone_hints": [], 
"availability_zones": [], "description": "", "tags": ["red"], "tenant_id": 
"a74304310e6a4a669dae814a4f644503", "created_at": "2017-03-15T05:06:24Z", 
"admin_state_up": false, "updated_at": "2017-03-15T05:06:27Z", "flavor_id": 
null, "revision_number": 4, "routes": [], "project_id": 
"a74304310e6a4a669dae814a4f644503", "id": 
"baf41f14-fac6-4c3e-98db-10dd6de963ca", "name": "tag-res1"}]}
2017-03-15 05:06:28,728 8715 INFO [tempest.lib.common.rest_client] Request 
(TagFilterRouterTestJSON:test_filter_router_tags): 200 GET 
http://15.184.66.234:9696/v2.0/routers?tags=red%2Cblue 0.184s
2017-03-15 05:06:28,729 8715 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'Content-Type': 'application/json', 'X-Auth-Token': '', 
'Accept': 'application/json'}
Body: None
Response - Headers: {'content-location': 
'http://15.184.66.234:9696/v2.0/routers?tags=red%2Cblue', 'status': '200', 
u'x-openstack-request-id': 'req-c17048b2-6a89-438a-8ba9-98cabf92dc6d', 
u'content-type': 'application/json', u'content-length': '497', u'date': 'Wed, 
15 Mar 2017 05:06:28 GMT', u'connection': 'close'}
Body: {"routers": [{"status": "ACTIVE", "external_gateway_info": null, 
"availability_zone_hints": [], "availability_zones": [], "description": "", 
"tags": ["green", "blue", "red"], "tenant_id": 
"a74304310e6a4a669dae814a4f644503", "created_at": "2017-03-15T05:06:25Z", 
"admin_state_up": false, "updated_at": "2017-03-15T05:06:27Z", "flavor_id": 
null, "revision_number": 6, "routes": [], "project_id": 
"a74304310e6a4a669dae814a4f644503", "id": 
"388e0638-9e6d-4658-b9b3-ac664fbbc96b", "name": "tag-res3"}]}
}}}

Traceback (most recent call last):
  File "tempest/test.py", line 121, in wrapper
return func(*func_args, **func_kwargs)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_tag.py", line 
317, in test_filter_router_tags
self._test_filter_tags()
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_tag.py", line 
190, in _test_filter_tags
self._assertEqualResources(['tag-res2', 'tag-res3'], res)
  File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_tag.py", line 
179, in _assertEqualResources
self.assertEqual(set(expected), set(actual))
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
411, in assertEqual
self.assertThat(observed, matcher, message)
  File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: set(['tag-res2', 'tag-res3']) != 
set([u'tag-res3'])

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure tempest

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure tempest

-- 
You received 

[Yahoo-eng-team] [Bug 1672852] [NEW] [RFE] Make controllers with different list of supported API extensions to behave identically

2017-03-14 Thread Ihar Hrachyshka
Public bug reported:

The idea is to make controllers behave identically at the API layer,
irrespective of whether they, due to different major versions or
different configuration files, support different lists of API
extensions.

The primary use case here is when controllers are upgraded in rolling
mode, when you have different major versions running and probably
serving API requests in round-robin implemented by a frontend load
balancer. If version N exposes extensions A,B,C,D, while N+1 exposes
A,B,C,D,E, then during upgrade when both versions are running, API
/extensions/ endpoint should return [A,B,C,D]. After all controllers get
to the new major version, they can switch to [A,B,C,D,E].

This proposal implies there is mutual awareness of controller services
about each other and their lists of supported extensions that will be
achieved by storing lists in a new servers table, similar to agents
tables we have.

On service startup, controllers will discover information about other
controllers from the table and load only those extensions that are
supported by all controller peers. We may also introduce a mechanism
where a signal triggers reload of extensions based on current table info
state, or a periodic reloading thread that will look at the table e.g.
every 60 seconds. (An alternative could be discovering that info on each
API request, but that would be too expensive.)

This proposal does not handle the case where we drop an extension in the
span of a single cycle (like replacing the timestamp extension with
timestamp_core). We may need to handle those cases by some other means
(the easiest being not allowing such drastic in-place replacement of
attribute format).

** Affects: neutron
 Importance: Wishlist
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672852

Title:
  [RFE] Make controllers with different list of supported API extensions
  to behave identically

Status in neutron:
  New

Bug description:
  The idea is to make controllers behave identically at the API layer,
  irrespective of whether they, due to different major versions or
  different configuration files, support different lists of API
  extensions.

  The primary use case here is when controllers are upgraded in rolling
  mode, when you have different major versions running and probably
  serving API requests in round-robin implemented by a frontend load
  balancer. If version N exposes extensions A,B,C,D, while N+1 exposes
  A,B,C,D,E, then during upgrade when both versions are running, API
  /extensions/ endpoint should return [A,B,C,D]. After all controllers
  get to the new major version, they can switch to [A,B,C,D,E].

  This proposal implies there is mutual awareness of controller services
  about each other and their lists of supported extensions that will be
  achieved by storing lists in a new servers table, similar to agents
  tables we have.

  On service startup, controllers will discover information about other
  controllers from the table and load only those extensions that are
  supported by all controller peers. We may also introduce a mechanism
  where a signal triggers reload of extensions based on current table
  info state, or a periodic reloading thread that will look at the table
  e.g. every 60 seconds. (An alternative could be discovering that info
  on each API request, but that would be too expensive.)

  This proposal does not handle the case where we drop an extension in
  the span of a single cycle (like replacing the timestamp extension with
  timestamp_core). We may need to handle those cases by some other means
  (the easiest being not allowing such drastic in-place replacement of
  attribute format).
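
  The core of the mechanism is just a set intersection over whatever
  the peers report (a sketch; the servers table accessor and row layout
  are hypothetical at this point):

    def common_extensions(local_extensions, peer_extension_lists):
        # peer_extension_lists comes from the proposed servers table,
        # one list of supported extension aliases per controller.
        supported = set(local_extensions)
        for peer in peer_extension_lists:
            supported &= set(peer)
        # Load only what every controller can serve, so a round-robin
        # load balancer never exposes an alias some peer lacks.
        return sorted(supported)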

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672607] [NEW] test_arp_spoof_doesnt_block_normal_traffic fails with AttributeError: 'NoneType' object has no attribute 'splitlines'

2017-03-13 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/03/444603/5/check/gate-neutron-dsvm-
functional-ubuntu-xenial/52405e2/testr_results.html.gz

Traceback (most recent call last):
  File "neutron/tests/base.py", line 116, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/agent/test_ovs_flows.py", line 182, in 
test_arp_spoof_doesnt_block_normal_traffic
self._setup_arp_spoof_for_port(self.src_p.name, [self.src_addr])
  File "neutron/tests/functional/agent/test_ovs_flows.py", line 309, in 
_setup_arp_spoof_for_port
self.br_int, vif, details)
  File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 916, in setup_arp_spoofing_protection
bridge.set_allowed_macs_for_port(vif.ofport, mac_addresses)
  File 
"neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_int.py", 
line 150, in set_allowed_macs_for_port
in_port=port).splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure ovs

** Changed in: neutron
   Importance: Undecided => High

** Tags added: functional-tests gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672607

Title:
  test_arp_spoof_doesnt_block_normal_traffic fails with AttributeError:
  'NoneType' object has no attribute 'splitlines'

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/03/444603/5/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/52405e2/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 116, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/test_ovs_flows.py", line 182, in 
test_arp_spoof_doesnt_block_normal_traffic
  self._setup_arp_spoof_for_port(self.src_p.name, [self.src_addr])
File "neutron/tests/functional/agent/test_ovs_flows.py", line 309, in 
_setup_arp_spoof_for_port
  self.br_int, vif, details)
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 916, in setup_arp_spoofing_protection
  bridge.set_allowed_macs_for_port(vif.ofport, mac_addresses)
File 
"neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_int.py", 
line 150, in set_allowed_macs_for_port
  in_port=port).splitlines()
  AttributeError: 'NoneType' object has no attribute 'splitlines'
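
  A defensive sketch of the failing pattern (not necessarily the final
  upstream fix; bridge is assumed to expose the ovs_lib dump_flows_for
  helper): treat a None flow dump as an empty flow list instead of
  calling splitlines() on it:

    def flows_for_port(bridge, port):
        # The ovs-ofctl wrappers can return None when the command
        # produces no output; normalize that before splitting.
        dump = bridge.dump_flows_for(in_port=port)
        return dump.splitlines() if dump else []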

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617282] Re: functional gate failed with git clone timeout on fetching ovs from github

2017-03-10 Thread Ihar Hrachyshka
Fixed by https://review.openstack.org/#/c/437041

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617282

Title:
  functional gate failed with git clone timeout on fetching ovs from
  github

Status in neutron:
  Fix Released
Status in openvswitch package in Ubuntu:
  Confirmed

Bug description:
  http://logs.openstack.org/68/351368/23/check/gate-neutron-dsvm-
  functional/0d68031/console.html

  2016-08-25 10:06:34.915685 | fatal: unable to access 
'https://github.com/openvswitch/ovs.git/': Failed to connect to github.com port 
443: Connection timed out
  2016-08-25 10:06:34.920456 | + functions-common:git_timed:603   :   
[[ 128 -ne 124 ]]
  2016-08-25 10:06:34.921769 | + functions-common:git_timed:604   :   
die 604 'git call failed: [git clone' https://github.com/openvswitch/ovs.git 
'/opt/stack/new/ovs]'
  2016-08-25 10:06:34.922982 | + functions-common:die:186 :   
local exitcode=0
  2016-08-25 10:06:34.924373 | + functions-common:die:187 :   
set +o xtrace
  2016-08-25 10:06:34.924404 | [Call Trace]
  2016-08-25 10:06:34.924430 | 
/opt/stack/new/neutron/neutron/tests/contrib/gate_hook.sh:53:compile_ovs
  2016-08-25 10:06:34.924447 | 
/opt/stack/new/neutron/devstack/lib/ovs:57:git_timed
  2016-08-25 10:06:34.924463 | /opt/stack/new/devstack/functions-common:604:die
  2016-08-25 10:06:34.926689 | [ERROR] 
/opt/stack/new/devstack/functions-common:604 git call failed: [git clone 
https://github.com/openvswitch/ovs.git /opt/stack/new/ovs]

  I guess we should stop pulling OVS from github. Instead, we could use
  the Xenial platform, which already provides ovs == 2.5 via .deb
  packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628377] Re: test_stack_update_replace_with_ip_rollback filure

2017-03-10 Thread Ihar Hrachyshka
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628377

Title:
  test_stack_update_replace_with_ip_rollback filure

Status in heat:
  New

Bug description:
  The heat integration test test_stack_update_replace_with_ip_rollback
  failed with the error below. Though there are no previous occurrences
  of this error, I can see DB errors in the neutron logs[1].

  http://logs.openstack.org/39/377439/2/gate/gate-heat-dsvm-functional-
  convg-mysql-lbaasv2/8b93a55/console.html

  2016-09-28 04:06:13.600724 | 2016-09-28 04:06:13.600 | Captured traceback:
  2016-09-28 04:06:13.601965 | 2016-09-28 04:06:13.601 | ~~~
  2016-09-28 04:06:13.603681 | 2016-09-28 04:06:13.603 | Traceback (most 
recent call last):
  2016-09-28 04:06:13.605168 | 2016-09-28 04:06:13.604 |   File 
"/opt/stack/new/heat/heat_integrationtests/functional/test_create_update_neutron_port.py",
 line 119, in test_stack_update_replace_with_ip_rollback
  2016-09-28 04:06:13.607511 | 2016-09-28 04:06:13.606 | 
self.assertEqual(_id, new_id)
  2016-09-28 04:06:13.608660 | 2016-09-28 04:06:13.608 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
  2016-09-28 04:06:13.611349 | 2016-09-28 04:06:13.611 | 
self.assertThat(observed, matcher, message)
  2016-09-28 04:06:13.613880 | 2016-09-28 04:06:13.613 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 498, in 
assertThat
  2016-09-28 04:06:13.616079 | 2016-09-28 04:06:13.615 | raise 
mismatch_error
  2016-09-28 04:06:13.617343 | 2016-09-28 04:06:13.617 | 
testtools.matchers._impl.MismatchError: !=:
  2016-09-28 04:06:13.619135 | 2016-09-28 04:06:13.618 | reference = 
u'04c2e178-5c96-4f22-9072-faea92fa6560'
  2016-09-28 04:06:13.620321 | 2016-09-28 04:06:13.620 | actual= 
u'0255770f-e6a5-45de-b604-10c06f12d42c'

  [1]

  http://logs.openstack.org/39/377439/2/gate/gate-heat-dsvm-functional-
  convg-mysql-
  lbaasv2/8b93a55/logs/screen-q-svc.txt.gz#_2016-09-28_04_06_01_173

  2016-09-28 04:06:01.173 4912 DEBUG neutron.callbacks.manager 
[req-f70ab80e-2870-4bbe-aad0-1841203355d8 demo -] Notify callbacks 
[('neutron.db.l3_db._notify_routers_callback-8748586454487', ), 
('neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI._native_event_send_dhcp_notification--9223372036829987228',
 >), 
('neutron.db.l3_dvrscheduler_db._notify_port_delete-8748585805241', )] for port, after_delete _notify_loop 
/opt/stack/new/neutron/neutron/callbacks/manager.py:142
  2016-09-28 04:06:01.173 4914 DEBUG neutron.db.api 
[req-dfcdfd65-63b9-45cd-913c-43f6287e2e37 - -] Retry wrapper got retriable 
exception: Traceback (most recent call last):
    File "/opt/stack/new/neutron/neutron/db/api.py", line 119, in wrapped
  return f(*dup_args, **dup_kwargs)
    File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1733, in 
update_port_status
  context.session.flush()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", 
line 2019, in flush
  self._flush(objects)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", 
line 2137, in _flush
  transaction.rollback(_capture_exception=True)
    File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
  compat.reraise(exc_type, exc_value, exc_tb)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", 
line 2101, in _flush
  flush_context.execute()
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", 
line 373, in execute
  rec.execute(self)
    File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", 
line 532, in execute
  uow
    File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
170, in save_obj
  mapper, table, update)
    File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
728, in _emit_update_statements
  (table.description, len(records), rows))
  StaleDataError: UPDATE statement on table 'standardattributes' expected to 
update 1 row(s); 0 were matched.
   wrapped /opt/stack/new/neutron/neutron/db/api.py:124

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1628377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630257] Re: DBDeadlock occurs during test_dualnet_multi_prefix_dhcpv6_stateless

2017-03-10 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1509004 ***
https://bugs.launchpad.net/bugs/1509004

** This bug is no longer a duplicate of bug 1533194
   Gate failures for neutron in TestGettingAddress
** This bug has been marked a duplicate of bug 1509004
   "test_dualnet_dhcp6_stateless_from_os" failures seen in the gate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630257

Title:
  DBDeadlock occurs during test_dualnet_multi_prefix_dhcpv6_stateless

Status in neutron:
  New

Bug description:
  The test test_dualnet_multi_prefix_dhcpv6_stateless
  
(tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless[compute
  ,id-cf1c4425-766b-45b8-be35-e2959728eb00,network]) failed with an
  error on the python-openstackclient gates:

  2016-10-04 13:59:59.811433 | Captured traceback:
  2016-10-04 13:59:59.811450 | ~~~
  2016-10-04 13:59:59.811471 | Traceback (most recent call last):
  2016-10-04 13:59:59.811513 |   File "tempest/test.py", line 107, in 
wrapper
  2016-10-04 13:59:59.811558 | return f(self, *func_args, **func_kwargs)
  2016-10-04 13:59:59.811616 |   File 
"tempest/scenario/test_network_v6.py", line 256, in 
test_dualnet_multi_prefix_dhcpv6_stateless
  2016-10-04 13:59:59.811640 | dualnet=True)
  2016-10-04 13:59:59.811672 |   File 
"tempest/scenario/test_network_v6.py", line 203, in _prepare_and_test
  2016-10-04 13:59:59.811696 | self.subnets_v6[i]['gateway_ip'])
  2016-10-04 13:59:59.811728 |   File 
"tempest/scenario/test_network_v6.py", line 213, in _check_connectivity
  2016-10-04 13:59:59.811751 | (dest, source.ssh_client.host)
  2016-10-04 13:59:59.811794 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
  2016-10-04 13:59:59.811818 | raise self.failureException(msg)
  2016-10-04 13:59:59.811857 | AssertionError: False is not true : Timed 
out waiting for 2003::1 to become reachable from 172.24.5.14

  http://logs.openstack.org/11/376311/3/check/gate-tempest-dsvm-neutron-
  src-python-openstackclient/04dabcd/console.html

  At this time in neutron-server logs DBDeadlock occurs:

  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/managers.py", line 433, in 
_call_on_drivers
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/mech_agent.py", line 60, in 
create_port_precommit
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self._insert_provisioning_block(context)
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/mech_agent.py", line 83, in 
_insert_provisioning_block
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
provisioning_blocks.L2_AGENT_ENTITY)
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 159, in wrapped
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/opt/stack/new/neutron/neutron/db/provisioning_blocks.py", line 74, in 
add_provisioning_component
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
context.session.add(record)
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 490, 
in __exit__
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self.rollback()
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
compat.reraise(exc_type, exc_value, exc_tb)
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 487, 
in __exit__
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self.commit()
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 392, 
in commit
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers 
self._prepare_impl()
  2016-10-04 13:34:51.346 21473 ERROR neutron.plugins.ml2.managers   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", 

[Yahoo-eng-team] [Bug 1640319] Re: AttributeError: 'module' object has no attribute 'convert_to_boolean'

2017-03-10 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640319

Title:
  AttributeError: 'module' object has no attribute 'convert_to_boolean'

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in vmware-nsx:
  Fix Committed

Bug description:
  With the latest neutron master code, the neutron service q-svc could not 
start due to the following error:
  2016-11-08 21:54:39.435 DEBUG oslo_concurrency.lockutils [-] Lock "manager" 
released by "neutron.manager._create_instance" :: held 1.467s from (pid=18534) 
inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
  2016-11-08 21:54:39.435 ERROR neutron.service [-] Unrecoverable error: please 
check log for details.
  2016-11-08 21:54:39.435 TRACE neutron.service Traceback (most recent call 
last):
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 87, in serve_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service service.start()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 63, in start
  2016-11-08 21:54:39.435 TRACE neutron.service self.wsgi_app = 
_run_wsgi(self.app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/service.py", line 289, in _run_wsgi
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
config.load_paste_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/common/config.py", line 125, in load_paste_app
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.load_app(app_name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/wsgi.py", line 353, in 
load_app
  2016-11-08 21:54:39.435 TRACE neutron.service return 
deploy.loadapp("config:%s" % self.config_path, name=name)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2016-11-08 21:54:39.435 TRACE neutron.service return loadobj(APP, uri, 
name=name, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2016-11-08 21:54:39.435 TRACE neutron.service return context.create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2016-11-08 21:54:39.435 TRACE neutron.service return 
self.object_type.invoke(self)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in 
urlmap_factory
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.get_app(app_name, global_conf=global_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-11-08 21:54:39.435 TRACE neutron.service name=name, 
global_conf=global_conf).create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2016-11-08 21:54:39.435 TRACE neutron.service return 
self.object_type.invoke(self)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2016-11-08 21:54:39.435 TRACE neutron.service **context.local_conf)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in 
fix_call
  2016-11-08 21:54:39.435 TRACE neutron.service val = callable(*args, **kw)
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/opt/stack/neutron/neutron/auth.py", line 71, in pipeline_factory
  2016-11-08 21:54:39.435 TRACE neutron.service app = 
loader.get_app(pipeline[-1])
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-11-08 21:54:39.435 TRACE neutron.service name=name, 
global_conf=global_conf).create()
  2016-11-08 21:54:39.435 TRACE neutron.service   File 
"/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in 

[Yahoo-eng-team] [Bug 1540983] Re: Gate failures for neutron in test_dualnet_multi_prefix_slaac

2017-03-10 Thread Ihar Hrachyshka
** Changed in: neutron
 Assignee: Oleg Bondarev (obondarev) => (unassigned)

** Changed in: openstack-gate
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540983

Title:
  Gate failures for neutron in test_dualnet_multi_prefix_slaac

Status in neutron:
  Incomplete
Status in OpenStack-Gate:
  Incomplete

Bug description:
  24 hits in 7 days for logstash query: message:"in
  test_dualnet_multi_prefix_slaac" AND voting:1

  2016-02-02 14:35:49.054 | Captured traceback:
  2016-02-02 14:35:49.054 | ~~~
  2016-02-02 14:35:49.054 | Traceback (most recent call last):
  2016-02-02 14:35:49.054 |   File "tempest/test.py", line 113, in wrapper
  2016-02-02 14:35:49.055 | return f(self, *func_args, **func_kwargs)
  2016-02-02 14:35:49.055 |   File "tempest/scenario/test_network_v6.py", 
line 246, in test_dualnet_multi_prefix_slaac
  2016-02-02 14:35:49.055 | dualnet=True)
  2016-02-02 14:35:49.055 |   File "tempest/scenario/test_network_v6.py", 
line 155, in _prepare_and_test
  2016-02-02 14:35:49.055 | sshv4_1, ips_from_api_1, sid1 = 
self.prepare_server(networks=net_list)
  2016-02-02 14:35:49.055 |   File "tempest/scenario/test_network_v6.py", 
line 128, in prepare_server
  2016-02-02 14:35:49.055 | username=username)
  2016-02-02 14:35:49.056 |   File "tempest/scenario/manager.py", line 390, 
in get_remote_client
  2016-02-02 14:35:49.056 | linux_client.validate_authentication()
  2016-02-02 14:35:49.056 |   File 
"tempest/common/utils/linux/remote_client.py", line 63, in 
validate_authentication
  2016-02-02 14:35:49.056 | self.ssh_client.test_connection_auth()
  2016-02-02 14:35:49.056 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
 line 172, in test_connection_auth
  2016-02-02 14:35:49.056 | connection = self._get_ssh_connection()
  2016-02-02 14:35:49.056 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
 line 87, in _get_ssh_connection
  2016-02-02 14:35:49.057 | password=self.password)
  2016-02-02 14:35:49.057 | tempest_lib.exceptions.SSHTimeout: Connection 
to the 172.24.5.141 via SSH timed out.
  2016-02-02 14:35:49.057 | User: cirros, Password: None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1671634] [NEW] Allow to set MTU for networks

2017-03-09 Thread Ihar Hrachyshka
Public bug reported:

ATM neutron does allow configuring MTU for networks to reflect the
underlying infrastructure, but only for operators, and only by changing
configuration options.

Ideally, users would be allowed to modify MTU for networks (to simplify
matters, on creation only, though we can also look at resource updates)
to accommodate custom workloads relying on specific MTUs. Or maybe
sometimes users want consistent MTUs across all their networks instead
of different MTUs based on the network type backing each network. Both
of those use cases would be served by allowing PUT for the 'mtu' network
attribute.

I guess it will require a fake extension to signal the change in
behavior, even while the implementation may still live in existing
plugin code handling MTUs.
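
For illustration, a hedged sketch of what the user-facing change could
look like once 'mtu' accepts PUT; the endpoint, token and network UUID
are placeholders, and the payload follows the existing network update
conventions:

    # sketch only: update a network's MTU via the proposed writable
    # attribute; assumes a valid keystone token is at hand
    import json
    import requests

    NEUTRON_ENDPOINT = 'http://controller:9696/v2.0'  # placeholder
    TOKEN = 'KEYSTONE_TOKEN'                          # placeholder
    NET_ID = 'NETWORK_UUID'                           # placeholder

    resp = requests.put(
        '%s/networks/%s' % (NEUTRON_ENDPOINT, NET_ID),
        headers={'X-Auth-Token': TOKEN,
                 'Content-Type': 'application/json'},
        data=json.dumps({'network': {'mtu': 1450}}))
    resp.raise_for_status()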

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671634

Title:
  Allow to set MTU for networks

Status in neutron:
  New

Bug description:
  ATM neutron does allow to configure MTU for networks to reflect
  underlying infrastructure, but only to operators, and only by changing
  configuration options.

  Ideally, users would be allowed to modify MTU for networks (to
  simplify matters, on creation only, though we can also look at
  resource updates) to accommodate for custom workloads relying on
  specific MTUs. Or maybe sometimes users want to get consistent MTUs
  across all their networks instead of having different MTUs based on
  network type backing their networks. Both of those use cases would be
  served by allowing PUT for 'mtu' network attribute.

  I guess it will require a fake extension to signal the change in
  behavior, even while the implementation may still lay in existing
  plugin code handling MTUs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522102] Re: [RFE] use oslo-versioned-objects to help with dealing with upgrades

2017-03-09 Thread Ihar Hrachyshka
Agreed with Artur; we ended up with a BP instead of a bug because bugs
bump owners back and forth. I closed the bug as Invalid; the BP is still
targeted for Pike.

** Changed in: neutron
   Status: Triaged => In Progress

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Justin Hammond (justin-hammond) => (unassigned)

** Changed in: neutron
Milestone: pike-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522102

Title:
  [RFE] use oslo-versioned-objects to help with dealing with upgrades

Status in neutron:
  Invalid

Bug description:
  This is a rework and re-submission of the old blueprint:
  https://blueprints.launchpad.net/neutron/+spec/versioned-objects

  We are looking to improve the way we deal with versioning (of all sorts:
  db/rpc/rest/templates/plugins). Nova has come up with the idea of
  versioned objects, which Ironic has also now adopted. This has been
  proposed as an oslo library: https://review.openstack.org/#/c/127532/
  And it has been accepted: https://github.com/openstack/oslo.versionedobjects
  Heat's versioned objects are also based on oslo.versionedobjects.

  Versioned-objects will help us deal with DB schema being at a
  different version than the code expects. This will allow Neutron to be
  operated safely during upgrades.

  Looking forward, as we pass more and more data over RPC, we can make
  use of versioned objects to ensure upgrades happen without spreading
  version-dependent code across the code base.
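
  A minimal sketch of the pattern, assuming oslo.versionedobjects as
  published; the Network object and its fields are illustrative, not
  actual neutron code:

    # illustrative only: a versioned object that can downgrade its
    # serialized form for peers still running an older version
    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields as ovo_fields

    @ovo_base.VersionedObjectRegistry.register
    class Network(ovo_base.VersionedObject):
        # 1.0: initial version; 1.1: added 'mtu'
        VERSION = '1.1'

        fields = {
            'id': ovo_fields.UUIDField(),
            'name': ovo_fields.StringField(),
            'mtu': ovo_fields.IntegerField(nullable=True),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(Network, self).obj_make_compatible(
                primitive, target_version)
            # drop fields unknown to 1.0 peers, e.g. not yet upgraded
            # agents receiving this object over RPC
            if target_version == '1.0':
                primitive.pop('mtu', None)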

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360011] Re: SSH Auth fails in AdvancedNetworkOps scenario

2017-03-06 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1668958 ***
https://bugs.launchpad.net/bugs/1668958

** This bug is no longer a duplicate of bug 1349617
   SSHException: Error reading SSH protocol banner[Errno 104] Connection reset 
by peer
** This bug has been marked a duplicate of bug 1668958
   metadata service occasionally not returning keys

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360011

Title:
  SSH Auth fails in AdvancedNetworkOps scenario

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Affects all neutron full jobs and check-grenade-dsvm-partial-ncpu.
  The latter runs nova network.

  In the past 7 days:
  105 hits (12 in gate)
  grenade: 30
  neutron-standard: 1
  neutron-full: 74

  in the past 36 hours:
  72 hits (8 in gate)
  grenade: 0
  neutron-standard: 1
  neutron-full: 71

  Something apparently has fixed the issue in the grenade test but
  broke the neutron tests.

  Logstash query (from console, as there is no clue in logs) available
  at [1]

  
  The issue manifests as a failure to authenticate to the server (the
  SSH server responds). Then paramiko starts returning errors like [2],
  until the timeout expires.

  [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVFJBQ0VcIiBBTkQgbWVzc2FnZTpcIlNTSEV4Y2VwdGlvbjogRXJyb3IgcmVhZGluZyBTU0ggcHJvdG9jb2wgYmFubmVyW0Vycm5vIDEwNF0gQ29ubmVjdGlvbiByZXNldCBieSBwZWVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0yMFQxMTo1NDoyMCswMDowMCIsInRvIjoiMjAxNC0wOC0yMVQyMzo1NDoyMCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4NjY1MjkzODA2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9
  [2] 
http://logs.openstack.org/10/98010/5/gate/gate-tempest-dsvm-neutron-full/aca3f89/console.html#_2014-08-21_08_36_14_931

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366149] Re: neutron-dsvm-full test_server_connectivity_stop_start test fails

2017-03-06 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1668958 ***
https://bugs.launchpad.net/bugs/1668958

** This bug is no longer a duplicate of bug 1349617
   SSHException: Error reading SSH protocol banner[Errno 104] Connection reset 
by peer
** This bug has been marked a duplicate of bug 1668958
   metadata service occasionally not returning keys

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366149

Title:
  neutron-dsvm-full test_server_connectivity_stop_start test fails

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Failure in gate neutron-dsvm-full:
  
http://logs.openstack.org/98/117898/2/gate/gate-tempest-dsvm-neutron-full/40cf18a/console.html#_2014-09-05_12_25_05_730

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349617] Re: SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer

2017-03-06 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1668958 ***
https://bugs.launchpad.net/bugs/1668958

** This bug has been marked a duplicate of bug 1668958
   metadata service occasionally not returning keys

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349617

Title:
  SSHException: Error reading SSH protocol banner[Errno 104] Connection
  reset by peer

Status in grenade:
  Invalid
Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Noticed a drop in categorized bugs on grenade jobs, so looking at
  latest I see this:

  http://logs.openstack.org/63/108363/5/gate/gate-grenade-dsvm-partial-
  ncpu/1458072/console.html

  Running this query:

  message:"Failed to establish authenticated ssh connection to cirros@"
  AND message:"(Error reading SSH protocol banner[Errno 104] Connection
  reset by peer). Number attempts: 18. Retry after 19 seconds." AND
  tags:"console"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGVzdGFibGlzaCBhdXRoZW50aWNhdGVkIHNzaCBjb25uZWN0aW9uIHRvIGNpcnJvc0BcIiBBTkQgbWVzc2FnZTpcIihFcnJvciByZWFkaW5nIFNTSCBwcm90b2NvbCBiYW5uZXJbRXJybm8gMTA0XSBDb25uZWN0aW9uIHJlc2V0IGJ5IHBlZXIpLiBOdW1iZXIgYXR0ZW1wdHM6IDE4LiBSZXRyeSBhZnRlciAxOSBzZWNvbmRzLlwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA2NTkwMTEwMzMyLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  I get 28 hits in 7 days, and it seems to be very particular to grenade
  jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1349617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669900] [NEW] ovs-vswitchd crashed in functional test with segmentation fault

2017-03-03 Thread Ihar Hrachyshka
Public bug reported:

2017-03-03T18:39:35.095Z|00107|connmgr|INFO|test-br368b7744<->unix: 1 flow_mods 
in the last 0 s (1 adds)
2017-03-03T18:39:35.144Z|00108|connmgr|INFO|br-tunb76d9d9d9<->unix: 9 flow_mods 
in the last 0 s (9 adds)
2017-03-03T18:39:35.148Z|00109|connmgr|INFO|br-tunb76d9d9d9<->unix: 1 flow_mods 
in the last 0 s (1 adds)
2017-03-03T18:39:35.255Z|3|daemon_unix(monitor)|WARN|2 crashes: pid 7753 
died, killed (Segmentation fault), waiting until 10 seconds since last restart
2017-03-03T18:39:43.255Z|4|daemon_unix(monitor)|ERR|2 crashes: pid 7753 
died, killed (Segmentation fault), restarting
2017-03-03T18:39:43.256Z|5|ovs_numa|INFO|Discovered 4 CPU cores on NUMA 
node 0
2017-03-03T18:39:43.256Z|6|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 CPU 
cores
2017-03-03T18:39:43.256Z|7|memory|INFO|8172 kB peak resident set size after 
694.6 seconds
2017-03-03T18:39:43.256Z|8|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connecting...
2017-03-03T18:39:43.256Z|9|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connected


http://logs.openstack.org/73/441273/1/check/gate-neutron-dsvm-functional-ubuntu-xenial/82f5446/logs/openvswitch/ovs-vswitchd.txt.gz

This triggered a functional test failure.

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure ovs

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: functional-tests gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1669900

Title:
  ovs-vswitchd crashed in functional test with segmentation fault

Status in neutron:
  Confirmed

Bug description:
  2017-03-03T18:39:35.095Z|00107|connmgr|INFO|test-br368b7744<->unix: 1 
flow_mods in the last 0 s (1 adds)
  2017-03-03T18:39:35.144Z|00108|connmgr|INFO|br-tunb76d9d9d9<->unix: 9 
flow_mods in the last 0 s (9 adds)
  2017-03-03T18:39:35.148Z|00109|connmgr|INFO|br-tunb76d9d9d9<->unix: 1 
flow_mods in the last 0 s (1 adds)
  2017-03-03T18:39:35.255Z|3|daemon_unix(monitor)|WARN|2 crashes: pid 7753 
died, killed (Segmentation fault), waiting until 10 seconds since last restart
  2017-03-03T18:39:43.255Z|4|daemon_unix(monitor)|ERR|2 crashes: pid 7753 
died, killed (Segmentation fault), restarting
  2017-03-03T18:39:43.256Z|5|ovs_numa|INFO|Discovered 4 CPU cores on NUMA 
node 0
  2017-03-03T18:39:43.256Z|6|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 
CPU cores
  2017-03-03T18:39:43.256Z|7|memory|INFO|8172 kB peak resident set size 
after 694.6 seconds
  
2017-03-03T18:39:43.256Z|8|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connecting...
  
2017-03-03T18:39:43.256Z|9|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connected

  
  
http://logs.openstack.org/73/441273/1/check/gate-neutron-dsvm-functional-ubuntu-xenial/82f5446/logs/openvswitch/ovs-vswitchd.txt.gz

  This triggered a functional test failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1669900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669893] [NEW] ovsdb-monitor fails to connect to tcp port when vsctl driver is used

2017-03-03 Thread Ihar Hrachyshka
Public bug reported:

It can be spotted in some functional test runs where vsctl flavours of
tests are triggered before any native-flavoured test enables the manager
port for TCP. For example:

http://logs.openstack.org/18/441218/1/check/gate-neutron-dsvm-
functional-ubuntu-xenial/08292fe/logs/dsvm-functional-
logs/neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_policy_rule_delete_vsctl_.txt.gz#_2017-03-03_16_53_30_358

This is because of https://review.openstack.org/#/c/407813/, which
assumed that we use the native driver.

** Affects: neutron
 Importance: High
 Assignee: Terry Wilson (otherwiseguy)
 Status: Confirmed


** Tags: functional-tests gate-failure ovs

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Terry Wilson (otherwiseguy)

** Tags added: functional-tests gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1669893

Title:
  ovsdb-monitor fails to connect to tcp port when vsctl driver is used

Status in neutron:
  Confirmed

Bug description:
  It can be spotted in some functional test runs where vsctl flavours of
  tests are triggered before any native-flavoured test enables the
  manager port for TCP. For example:

  http://logs.openstack.org/18/441218/1/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/08292fe/logs/dsvm-functional-
  
logs/neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_policy_rule_delete_vsctl_.txt.gz#_2017-03-03_16_53_30_358

  This is because of https://review.openstack.org/#/c/407813/, which
  assumed that we use the native driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1669893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667500] Re: OpenStack adds 'default' security group to a VM when attaching new interface to new network even if the VM has customized secgroup

2017-02-28 Thread Ihar Hrachyshka
This is behaviour as designed; security groups are per port. We may ask
nova whether they want to modify the port create request to apply
identical security groups to all VIFs, but that's out of scope for
Neutron.
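
A common workaround on the user side is to pre-create the port with only
the desired groups and attach it by port id, so nova never picks the
groups itself. A hedged sketch using the python clients; all ids and
credentials are placeholders:

    # sketch: attach an interface without inheriting the 'default' group
    from keystoneauth1.identity import v3
    from keystoneauth1 import session as ks_session
    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='http://controller:5000/v3',  # placeholder
                       username='demo', password='secret',
                       project_name='demo', user_domain_id='default',
                       project_domain_id='default')
    sess = ks_session.Session(auth=auth)
    neutron = neutron_client.Client(session=sess)
    nova = nova_client.Client('2.1', session=sess)

    # create the port with only the customized group, then attach by id
    port = neutron.create_port({'port': {
        'network_id': 'NET_UUID',                    # placeholder
        'security_groups': ['CUSTOMIZED_SG_UUID'],   # placeholder
    }})['port']
    nova.servers.interface_attach('VM_UUID', port['id'], None, None)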

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667500

Title:
  OpenStack adds 'default' security group to a VM when attaching a new
  interface to a new network even if the VM has a customized secgroup

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  New

Bug description:
  
  I am not sure if it is the design intention, but OpenStack adds the
  'default' security group to a VM when attaching a new interface to that
  VM, even if the VM has a customized secgroup.

  For many deployments, users create and add a customized security group
  to their VMs. When attaching a new network interface to such a VM,
  OpenStack keeps the customized secgroup but in addition adds 'default',
  which is not good, as 'default' should not leave all security ports
  open by default.

  Liberty,


  Before attaching the VM to the new network (nova show):

  | security_groups | customized |

  After attaching the VM to the new network (nova show):

  | security_groups | customized, default |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667579] [NEW] swift-proxy-server fails to start with Python 3.5

2017-02-23 Thread Ihar Hrachyshka
Public bug reported:

Traceback (most recent call last):
  File "/usr/local/bin/swift-proxy-server", line 6, in 
exec(compile(open(__file__).read(), __file__, 'exec'))
  File "/opt/stack/new/swift/bin/swift-proxy-server", line 23, in 
sys.exit(run_wsgi(conf_file, 'proxy-server', **options))
  File "/opt/stack/new/swift/swift/common/wsgi.py", line 905, in run_wsgi
loadapp(conf_path, global_conf=global_conf)
  File "/opt/stack/new/swift/swift/common/wsgi.py", line 389, in loadapp
ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
  File "/opt/stack/new/swift/swift/common/wsgi.py", line 373, in loadcontext
global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
296, in loadcontext
global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
320, in _loadconfig
return loader.get_context(object_type, name, global_conf)
  File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
object_type, name=name, global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
450, in get_context
global_additions=global_additions)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
562, in _pipeline_app_context
for name in pipeline[:-1]]
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
562, in <listcomp>
for name in pipeline[:-1]]
  File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
object_type, name=name, global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
454, in get_context
section)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
476, in _context_from_use
object_type, name=use, global_conf=global_conf)
  File "/opt/stack/new/swift/swift/common/wsgi.py", line 66, in get_context
object_type, name=name, global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
406, in get_context
global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
296, in loadcontext
global_conf=global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
328, in _loadegg
return loader.get_context(object_type, name, global_conf)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
620, in get_context
object_type, name=name)
  File "/usr/local/lib/python3.5/dist-packages/paste/deploy/loadwsgi.py", line 
646, in find_egg_entry_point
possible.append((entry.load(), protocol, entry.name))
  File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 
2302, in load
return self.resolve()
  File "/usr/local/lib/python3.5/dist-packages/pkg_resources/__init__.py", line 
2308, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/opt/stack/new/swift/swift/common/middleware/slo.py", line 799
def is_small_segment((seg_dict, start_byte, end_byte)):
 ^
SyntaxError: invalid syntax

http://logs.openstack.org/14/437514/3/check/gate-rally-dsvm-py35
-neutron-neutron-ubuntu-xenial/3221186/logs/screen-s-proxy.txt.gz

This currently blocks the neutron gate, where we have a voting py3
tempest job. Swift is deployed with Python 3.5 there because we
special-case it in devstack to deploy the service with Python 3:

http://git.openstack.org/cgit/openstack-
dev/devstack/tree/inc/python#n167

The short-term solution is to disable the special-casing. Swift should
then work on fixing the code, and gate on Python 3 (preferably with the
same job neutron has).
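
The SyntaxError itself is Python 2's tuple parameter unpacking, which
PEP 3113 removed from Python 3. The usual mechanical rewrite, sketched
here with the original body elided:

    # Python 2 only -- rejected by the Python 3 parser:
    #     def is_small_segment((seg_dict, start_byte, end_byte)):
    #         ...

    # Python 2/3 compatible rewrite: take one argument, unpack inside
    def is_small_segment(seg):
        seg_dict, start_byte, end_byte = seg
        pass  # original body elided; it uses the unpacked names as before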

** Affects: devstack
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

** Affects: neutron
 Importance: Critical
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed

** Affects: swift
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: devstack
   Status: New => Confirmed

** Changed in: devstack
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.

[Yahoo-eng-team] [Bug 1666779] [NEW] Expose neutron API via a WSGI script

2017-02-21 Thread Ihar Hrachyshka
Public bug reported:

As per the Pike goal [1], we should expose the neutron API via a WSGI
script, and make the devstack installation use a web server for the
default deployment. This bug is an RFE/tracker for the feature.

[1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-
wsgi.html
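
A minimal sketch of such a script, assuming the paste pipeline remains
the application loader; the file name and argument handling here are
illustrative, not the final entry point:

    # neutron-api.wsgi -- sketch only
    import sys

    from neutron.common import config

    config.init(sys.argv[1:])  # or an explicit --config-file list
    config.setup_logging()
    application = config.load_paste_app('neutron')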

** Affects: neutron
 Importance: Wishlist
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: api

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => In Progress

** Tags added: api

** Changed in: neutron
Milestone: None => pike-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666779

Title:
  Expose neutron API via a WSGI script

Status in neutron:
  In Progress

Bug description:
  As per the Pike goal [1], we should expose the neutron API via a WSGI
  script, and make the devstack installation use a web server for the
  default deployment. This bug is an RFE/tracker for the feature.

  [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-
  wsgi.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617282] Re: functional gate failed with git clone timeout on fetching ovs from github

2017-02-16 Thread Ihar Hrachyshka
The ovs fix we need for functional job stability:
https://mail.openvswitch.org/pipermail/ovs-git/2016-March/017804.html

** Also affects: openvswitch (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617282

Title:
  functional gate failed with git clone timeout on fetching ovs from
  github

Status in neutron:
  Confirmed
Status in openvswitch package in Ubuntu:
  New

Bug description:
  http://logs.openstack.org/68/351368/23/check/gate-neutron-dsvm-
  functional/0d68031/console.html

  2016-08-25 10:06:34.915685 | fatal: unable to access 
'https://github.com/openvswitch/ovs.git/': Failed to connect to github.com port 
443: Connection timed out
  2016-08-25 10:06:34.920456 | + functions-common:git_timed:603   :   
[[ 128 -ne 124 ]]
  2016-08-25 10:06:34.921769 | + functions-common:git_timed:604   :   
die 604 'git call failed: [git clone' https://github.com/openvswitch/ovs.git 
'/opt/stack/new/ovs]'
  2016-08-25 10:06:34.922982 | + functions-common:die:186 :   
local exitcode=0
  2016-08-25 10:06:34.924373 | + functions-common:die:187 :   
set +o xtrace
  2016-08-25 10:06:34.924404 | [Call Trace]
  2016-08-25 10:06:34.924430 | 
/opt/stack/new/neutron/neutron/tests/contrib/gate_hook.sh:53:compile_ovs
  2016-08-25 10:06:34.924447 | 
/opt/stack/new/neutron/devstack/lib/ovs:57:git_timed
  2016-08-25 10:06:34.924463 | /opt/stack/new/devstack/functions-common:604:die
  2016-08-25 10:06:34.926689 | [ERROR] 
/opt/stack/new/devstack/functions-common:604 git call failed: [git clone 
https://github.com/openvswitch/ovs.git /opt/stack/new/ovs]

  I guess we should stop pulling OVS from github. Instead, we could use
  the Xenial platform, which already provides ovs == 2.5 via .deb
  packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1664347] [NEW] test_volume_boot_pattern failed to get an instance into ACTIVE state

2017-02-13 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/53/431753/2/check/gate-grenade-dsvm-neutron-
dvr-multinode-ubuntu-
xenial/8cf8895/logs/grenade.sh.txt.gz#_2017-02-11_01_24_50_334

2017-02-11 01:24:50.334 | Captured traceback:
2017-02-11 01:24:50.334 | ~~~
2017-02-11 01:24:50.334 | Traceback (most recent call last):
2017-02-11 01:24:50.334 |   File "tempest/test.py", line 99, in wrapper
2017-02-11 01:24:50.334 | return f(self, *func_args, **func_kwargs)
2017-02-11 01:24:50.334 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 168, in 
test_volume_boot_pattern
2017-02-11 01:24:50.334 | security_group=security_group))
2017-02-11 01:24:50.334 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 72, in 
_boot_instance_from_resource
2017-02-11 01:24:50.334 | return self.create_server(image_id='', 
**create_kwargs)
2017-02-11 01:24:50.334 |   File "tempest/scenario/manager.py", line 208, 
in create_server
2017-02-11 01:24:50.334 | image_id=image_id, **kwargs)
2017-02-11 01:24:50.334 |   File "tempest/common/compute.py", line 182, in 
create_test_server
2017-02-11 01:24:50.334 | server['id'])
2017-02-11 01:24:50.334 |   File 
"/opt/stack/old/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
2017-02-11 01:24:50.334 | self.force_reraise()
2017-02-11 01:24:50.334 |   File 
"/opt/stack/old/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2017-02-11 01:24:50.334 | six.reraise(self.type_, self.value, self.tb)
2017-02-11 01:24:50.334 |   File "tempest/common/compute.py", line 164, in 
create_test_server
2017-02-11 01:24:50.334 | clients.servers_client, server['id'], 
wait_until)
2017-02-11 01:24:50.334 |   File "tempest/common/waiters.py", line 96, in 
wait_for_server_status
2017-02-11 01:24:50.334 | raise lib_exc.TimeoutException(message)
2017-02-11 01:24:50.334 | tempest.lib.exceptions.TimeoutException: Request 
timed out
2017-02-11 01:24:50.334 | Details: 
(TestVolumeBootPatternV2:test_volume_boot_pattern) Server 
8a628dec-ebe5-401a-9ba9-72358eabca06 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: BUILD. Current 
task state: spawning.

The instance was created at:
2017-02-11 01:24:50.304 | 2017-02-11 01:17:58,462 26423 INFO 
[tempest.lib.common.rest_client] Request 
(TestVolumeBootPatternV2:test_volume_boot_pattern): 202 POST 
http://158.69.83.100:8774/v2.1/servers 2.300s

Then the test spins on the instance waiting for it to transition into ACTIVE:
2017-02-11 01:24:50.305 | 2017-02-11 01:17:58,802 26423 INFO 
[tempest.lib.common.rest_client] Request 
(TestVolumeBootPatternV2:test_volume_boot_pattern): 200 GET 
http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06 
0.333s
2017-02-11 01:24:50.305 | 2017-02-11 01:17:58,802 26423 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2017-02-11 01:24:50.305 | Body: None
2017-02-11 01:24:50.305 | Response - Headers: {u'connection': 'close', 
u'vary': 'X-OpenStack-Nova-API-Version', u'content-length': '1093', 
u'x-openstack-nova-api-version': '2.1', u'openstack-api-version': 'compute 
2.1', 'status': '200', 'content-location': 
'http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06', 
u'content-type': 'application/json', u'x-compute-request-id': 
'req-3abcfd5a-7e3d-4333-b66b-547e4ce79ce7', u'date': 'Sat, 11 Feb 2017 01:17:58 
GMT'}
2017-02-11 01:24:50.305 | Body: {"server": 
{"OS-EXT-STS:task_state": "scheduling", "addresses": {}, "links": [{"href": 
"http://158.69.83.100:8774/v2.1/servers/8a628dec-ebe5-401a-9ba9-72358eabca06;, 
"rel": "self"}, {"href": 
"http://158.69.83.100:8774/servers/8a628dec-ebe5-401a-9ba9-72358eabca06;, 
"rel": "bookmark"}], "image": "", "OS-EXT-STS:vm_state": "building", 
"OS-SRV-USG:launched_at": null, "flavor": {"id": "42", "links": [{"href": 
"http://158.69.83.100:8774/flavors/42;, "rel": "bookmark"}]}, "id": 
"8a628dec-ebe5-401a-9ba9-72358eabca06", "user_id": 
"f4e2fa64f9c54cadb1cac2642a2d757f", "OS-DCF:diskConfig": "MANUAL", 
"accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, 
"OS-EXT-AZ:availability_zone": "", "metadata": {}, "status": "BUILD", 
"updated": "2017-02-11T01:17:58Z", "hostId": "", "OS-SRV-USG:terminated_at": 
null, "key_name": "tempest-TestVolumeBootPatternV2-228135911", "name": 
"tempest-TestVolumeBootPatternV2-server-1766399508", "created"
 : "2017-02-11T01:17:58Z", "tenant_id": "c30e6127a8104ae3aad60827bbe07a78", 
"os-extended-volumes:volumes_attached": [], "config_drive": ""}}

The attempt to clean up the instance is at:
2017-02-11 01:24:50.316 | 2017-02-11 01:21:15,384 26423 INFO 

[Yahoo-eng-team] [Bug 1661326] Re: neutron-ovs-agent fails to start on Windows due to Linux-specific imports

2017-02-02 Thread Ihar Hrachyshka
[2] suggests that oslo.rootwrap is not Win32 friendly. Added the project
to the list of affected projects.

** Also affects: oslo.rootwrap
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661326

Title:
  neutron-ovs-agent fails to start on Windows due to Linux-specific
  imports

Status in neutron:
  In Progress
Status in oslo.rootwrap:
  New

Bug description:
  Currently, the neutron-ovs-agent service cannot start on Windows, due
  to a few Linux-specific imports. [1][2]

  [1] http://paste.openstack.org/show/597391/
  [2] http://paste.openstack.org/show/597392/
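
  The usual mitigation is to guard Linux-only imports behind a platform
  check so the module can at least be imported on Windows. A sketch; the
  guarded module is just an example, not the actual offending import:

    import os

    if os.name == 'posix':
        from neutron.agent.linux import ip_lib  # Linux-only helper
    else:
        ip_lib = None  # Windows code paths must avoid the Linux helpers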

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620587] Re: ml2_conf.ini contains oslo.log options

2017-02-01 Thread Ihar Hrachyshka
I don't believe this is a bug. Those options are duplicated so that
users may override settings per service.
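
For instance, neutron-server is typically started with both files
(--config-file neutron.conf --config-file ml2_conf.ini), and oslo.config
lets a value in the later file override the earlier one for that service
only. A hedged illustration with made-up values:

    # neutron.conf -- shared by all services
    [DEFAULT]
    debug = False

    # ml2_conf.ini -- passed only to neutron-server, so this wins there
    [DEFAULT]
    debug = True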

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620587

Title:
  ml2_conf.ini contains oslo.log options

Status in neutron:
  Won't Fix

Bug description:
  When running neutron-server or one of the agents, neutron.conf is
  usually included, which already contains the oslo.log options in the
  [DEFAULT] section. There's no need to add the options again to
  ml2_conf.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

