[Yahoo-eng-team] [Bug 1818233] [NEW] ping_ip_address used without assertion in Neutron Tempest Plugin

2019-03-01 Thread Assaf Muller
Public bug reported:

The Neutron Tempest Plugin has a ping_ip_address helper in:

neutron_tempest_plugin/scenario/base.py

It's used in several places:

git grep -n ping_ip_address | cut -d":" -f1-2
neutron_tempest_plugin/scenario/base.py:313
neutron_tempest_plugin/scenario/test_basic.py:39
neutron_tempest_plugin/scenario/test_security_groups.py:118
neutron_tempest_plugin/scenario/test_security_groups.py:146
neutron_tempest_plugin/scenario/test_security_groups.py:152
neutron_tempest_plugin/scenario/test_security_groups.py:178
neutron_tempest_plugin/scenario/test_security_groups.py:193
neutron_tempest_plugin/scenario/test_security_groups.py:208
neutron_tempest_plugin/scenario/test_security_groups.py:261
neutron_tempest_plugin/scenario/test_security_groups.py:292

In all of these places it's used without an assertion: if the ping
fails, the call simply times out (CONF.validation.ping_timeout) and the
test continues as if nothing happened, so the test will not necessarily fail.
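A minimal illustration of the failure mode and the obvious fix. This is a
standalone sketch: `REACHABLE_IPS` simulates the data plane, and
`check_ping` is a hypothetical wrapper, not the plugin's actual API.

```python
# Simulated data plane: in the real helper this is a ping that returns
# True/False after up to CONF.validation.ping_timeout seconds.
REACHABLE_IPS = {"10.0.0.5"}

def ping_ip_address(ip_address, should_succeed=True):
    """Stand-in for the Tempest helper: returns a bool, never raises."""
    reachable = ip_address in REACHABLE_IPS
    return reachable == should_succeed

def check_ping(ip_address, should_succeed=True):
    """The fix: assert on the helper's result instead of discarding it."""
    assert ping_ip_address(ip_address, should_succeed), (
        "Ping to %s did not %s as expected"
        % (ip_address, "succeed" if should_succeed else "fail"))
```

Calling ping_ip_address bare, as the listed call sites do, silently
swallows a False return; a wrapper like check_ping turns it into a test
failure.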

** Affects: neutron
 Importance: Low
 Status: New


** Tags: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818233

Title:
  ping_ip_address used without assertion in Neutron Tempest Plugin

Status in neutron:
  New

Bug description:
  The Neutron Tempest Plugin has a ping_ip_address helper in:

  neutron_tempest_plugin/scenario/base.py

  It's used in several places:

  git grep -n ping_ip_address | cut -d":" -f1-2
  neutron_tempest_plugin/scenario/base.py:313
  neutron_tempest_plugin/scenario/test_basic.py:39
  neutron_tempest_plugin/scenario/test_security_groups.py:118
  neutron_tempest_plugin/scenario/test_security_groups.py:146
  neutron_tempest_plugin/scenario/test_security_groups.py:152
  neutron_tempest_plugin/scenario/test_security_groups.py:178
  neutron_tempest_plugin/scenario/test_security_groups.py:193
  neutron_tempest_plugin/scenario/test_security_groups.py:208
  neutron_tempest_plugin/scenario/test_security_groups.py:261
  neutron_tempest_plugin/scenario/test_security_groups.py:292

  In all of these places it's used without an assertion: if the ping
  fails, the call simply times out (CONF.validation.ping_timeout) and the
  test continues as if nothing happened, so the test will not necessarily fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774710] [NEW] DHCP agent doesn't do anything with a network's dns_domain attribute

2018-06-01 Thread Assaf Muller
Public bug reported:

0) Set up Neutron with ML2/OVS or LB, or anything that uses the DHCP agent
1) Create a network with dns_domain
2) Boot a VM on it

Notice the VM doesn't have the DNS domain in its /etc/resolv.conf

In short, per-network DNS domains are not respected by the DHCP agent.
The dns_domain attribute is persisted in the Neutron DB and passed on to
the DHCP agent via RPC, but the agent doesn't do anything with it.

Versions:
Master and all previous versions.

WIP fix is in https://review.openstack.org/#/c/571546.
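For context, dnsmasq advertises a search domain to clients via its
--domain option (DHCP option 15). A rough sketch of the kind of change
needed; build_cmdline is a hypothetical helper standing in for the
agent's real dnsmasq driver, not the code from the WIP fix.

```python
def build_cmdline(network, base_cmd=("dnsmasq", "--no-hosts")):
    """Sketch: append --domain when the network carries a dns_domain.

    'network' is any object with an optional dns_domain attribute,
    mirroring what the DHCP agent already receives over RPC.
    """
    cmd = list(base_cmd)
    dns_domain = getattr(network, "dns_domain", None)
    if dns_domain:
        # dnsmasq pushes this to clients as DHCP option 15 (domain-name),
        # which ends up in the VM's /etc/resolv.conf search line.
        cmd.append("--domain=%s" % dns_domain)
    return cmd
```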

** Affects: neutron
 Importance: Medium
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Description changed:

  0) Set up Neutron with ML2/OVS or LB, or anything that uses the DHCP agent
  1) Create a network with dns_domain
  2) Boot a VM on it
  
  Notice the VM doesn't have the DNS domain in it's /etc/resolv.conf
  
  In short, per-network DNS domains are not respected by the DHCP agent.
  The dns_domain attribute is persisted in the Neutron DB and passed on to
  the DHCP agent via RPC, but the agent doesn't do anything with it.
  
  Versions:
  Master and all previous versions.
+ 
+ WIP fix is in https://review.openstack.org/#/c/571546.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1774710

Title:
  DHCP agent doesn't do anything with a network's dns_domain attribute

Status in neutron:
  In Progress

Bug description:
  0) Set up Neutron with ML2/OVS or LB, or anything that uses the DHCP agent
  1) Create a network with dns_domain
  2) Boot a VM on it

  Notice the VM doesn't have the DNS domain in its /etc/resolv.conf

  In short, per-network DNS domains are not respected by the DHCP agent.
  The dns_domain attribute is persisted in the Neutron DB and passed on
  to the DHCP agent via RPC, but the agent doesn't do anything with it.

  Versions:
  Master and all previous versions.

  WIP fix is in https://review.openstack.org/#/c/571546.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1774710/+subscriptions



[Yahoo-eng-team] [Bug 1738768] Re: Dataplane downtime when containers are stopped/restarted

2017-12-21 Thread Assaf Muller
** Changed in: neutron
   Status: Incomplete => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738768

Title:
  Dataplane downtime when containers are stopped/restarted

Status in neutron:
  Confirmed
Status in tripleo:
  Confirmed

Bug description:
  I have deployed a 3 controllers - 3 computes HA environment with
  ML2/OVS and observed dataplane downtime when restarting/stopping
  neutron-l3 container on controllers. This is what I did:

  1. Created a network, subnet, router, a VM and attached a FIP to the VM
  2. Left a ping running on the undercloud to the FIP
  3. Stopped l3 container in controller-0.
     Result: Observed some packet loss while the router was failed over to controller-1
  4. Stopped l3 container in controller-1
     Result: Observed some packet loss while the router was failed over to controller-2
  5. Stopped l3 container in controller-2
     Result: No traffic to/from the FIP at all.

  (overcloud) [stack@undercloud ~]$ ping 10.0.0.131
  PING 10.0.0.131 (10.0.0.131) 56(84) bytes of data.
  64 bytes from 10.0.0.131: icmp_seq=1 ttl=63 time=1.83 ms
  64 bytes from 10.0.0.131: icmp_seq=2 ttl=63 time=1.56 ms

  < Last l3 container was stopped here (step 5 above)>

  From 10.0.0.1 icmp_seq=10 Destination Host Unreachable
  From 10.0.0.1 icmp_seq=11 Destination Host Unreachable

  When containers are stopped, I guess that the qrouter namespace is not
  accessible by the kernel:

  [heat-admin@overcloud-controller-2 ~]$ sudo ip netns e qrouter-5244e91c-f533-4128-9289-f37c9656792c ip a
  RTNETLINK answers: Invalid argument
  RTNETLINK answers: Invalid argument
  setting the network namespace "qrouter-5244e91c-f533-4128-9289-f37c9656792c" failed: Invalid argument

  This means we're getting not only control-plane downtime but also
  data-plane downtime, which could be seen as a regression compared to
  non-containerized environments.
  The same would happen with DHCP: I expect instances will be unable to
  fetch IP addresses from dnsmasq when DHCP containers are stopped.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1738768/+subscriptions



[Yahoo-eng-team] [Bug 1707791] Re: When using an abbreviated CIDR such as 10/8 in a router creation, keepalived segfaults

2017-08-04 Thread Assaf Muller
*** This bug is a duplicate of bug 1490885 ***
https://bugs.launchpad.net/bugs/1490885

This was resolved by https://review.openstack.org/#/c/219187, which is
already present in stable/newton and later.

Since stable/mitaka doesn't exist anymore, there's nothing that can be
done about this bug.
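For reference, strict parsers reject abbreviated CIDRs outright. A
sketch of such validation using Python's stdlib ipaddress module
(illustrative only, not the code from the fix):

```python
import ipaddress

def validate_cidr(cidr):
    """Return the normalized CIDR, or None if it is not a valid network.

    ipaddress.ip_network() requires a complete address, so abbreviations
    like '10/8' raise ValueError instead of being silently expanded to
    10.0.0.0/8 the way some lenient parsers do.
    """
    try:
        return str(ipaddress.ip_network(cidr))
    except ValueError:
        return None
```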

** Changed in: neutron
   Status: Confirmed => Fix Released

** This bug has been marked a duplicate of bug 1490885
   Neutron should not accept invalid subnet CIDRs like '1/24'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707791

Title:
  When using an abbreviated CIDR such as 10/8 in a router creation,
  keepalived segfaults

Status in neutron:
  Fix Released

Bug description:
  When using an abbreviated CIDR such as 10/8 in a router creation,
  keepalived segfaults. This issue affects non-ha routers as well.

  neutron router-update UUID --routes type=dict list=true destination=10/8,nexthop=10.0.0.1 list=true destination=10.1.0.0/12,nexthop=10.10.10.1

  I'm hesitating between fixing it in the client and fixing it in
  Neutron itself. What would be the best place to fix this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707791/+subscriptions



[Yahoo-eng-team] [Bug 1666864] Re: function does not return correct create router timestamp

2017-02-27 Thread Assaf Muller
Victor, can you explain why you marked this bug as invalid when asking
for more information? Wouldn't you mark it as 'Incomplete' in that case?
Either way, it seems like what Eran is saying is that he created a
router, issued a show, and the created_at and updated_at fields aren't
matching, with no other concurrent operation. That seems like a complete
bug report.

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666864

Title:
  function does not return correct create router timestamp

Status in neutron:
  New

Bug description:
  Version: OSP-10 Newton
  Test failed : 
neutron.tests.tempest.api.test_timestamp.TestTimeStampWithL3.test_show_router_attribute_with_timestamp

  At creation the timestamp is correct: u'created_at':
  u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z'

  When the "show" function displays the timestamp, it shows it with a ~3 second difference:
  u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:26Z'

  The show function displays an incorrect timestamp.

  218 def test_show_router_attribute_with_timestamp(self):
  219 router = self.create_router(router_name='test')
  220 import ipdb;ipdb.set_trace()
  --> 221 body = self.client.show_router(router['id'])
  222 show_router = body['router']
  223 # verify the timestamp from creation and showed is same
  224 import ipdb;ipdb.set_trace()
  225 self.assertEqual(router['created_at'],
  226  show_router['created_at'])

  ipdb> router
  {u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', 
u'admin_state_up': False, u'tenant_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', 
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z', 
u'flavor_id': None, u'revision_number': 3, u'routes': [], u'project_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
  ipdb> n
  2017-02-22 10:47:55.084 6919 INFO tempest.lib.common.rest_client 
[req-eef7ded4-bb01-4401-96cd-325c01b2230b ] Request 
(TestTimeStampWithL3:test_show_router_attribute_with_timestamp): 200 GET 
http://10.0.0.104:9696/v2.0/routers/545e74b0-2f3d-43b8-8678-93bbb3f1f6f3 0.224s
  2017-02-22 10:47:55.086 6919 DEBUG tempest.lib.common.rest_client 
[req-eef7ded4-bb01-4401-96cd-325c01b2230b ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'status': '200', u'content-length': '462', 
'content-location': 
'http://10.0.0.104:9696/v2.0/routers/545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', 
u'date': 'Wed, 22 Feb 2017 10:47:56 GMT', u'content-type': 'application/json', 
u'connection': 'close', u'x-openstack-request-id': 
'req-eef7ded4-bb01-4401-96cd-325c01b2230b'}
  Body: {"router": {"status": "ACTIVE", "external_gateway_info": null, 
"availability_zone_hints": [], "availability_zones": ["nova"], "description": 
"", "admin_state_up": false, "tenant_id": "8b9cd1ebd13f4172a0c63789ee9c0de2", 
"created_at": "2017-02-22T10:47:24Z", "updated_at": "2017-02-22T10:47:26Z", 
"flavor_id": null, "revision_number": 8, "routes": [], "project_id": 
"8b9cd1ebd13f4172a0c63789ee9c0de2", "id": 
"545e74b0-2f3d-43b8-8678-93bbb3f1f6f3", "name": "test"}} _log_request_full 
tempest/lib/common/rest_client.py:431
  > 
/home/centos/tempest-upstream/neutron/neutron/tests/tempest/api/test_timestamp.py(222)test_show_router_attribute_with_timestamp()

  
  ipdb> router
  {u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', 
u'admin_state_up': False, u'tenant_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', 
u'created_at': u'2017-02-22T10:47:24Z', u'updated_at': u'2017-02-22T10:47:24Z', 
u'flavor_id': None, u'revision_number': 3, u'routes': [], u'project_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
  ipdb> show_router
  {u'status': u'ACTIVE', u'external_gateway_info': None, 
u'availability_zone_hints': [], u'availability_zones': [u'nova'], 
u'description': u'', u'admin_state_up': False, u'tenant_id': 
u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'created_at': u'2017-02-22T10:47:24Z', 
u'updated_at': u'2017-02-22T10:47:26Z', u'flavor_id': None, u'revision_number': 
8, u'routes': [], u'project_id': u'8b9cd1ebd13f4172a0c63789ee9c0de2', u'id': 
u'545e74b0-2f3d-43b8-8678-93bbb3f1f6f3', u'name': u'test'}
  ipdb>

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666864/+subscriptions


[Yahoo-eng-team] [Bug 1643581] Re: mtu size - auto setting for wifi not loads some webpages initially

2016-11-21 Thread Assaf Muller
You reported a bug on the OpenStack Neutron component; it's pretty clear
the bug report is not relevant to OpenStack.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643581

Title:
  mtu size - auto setting for wifi not loads some webpages initially

Status in neutron:
  Invalid

Bug description:
  Webpages like economictimes.indiatimes.com, inner links on ndtv.com,
  and at times dinamalar.com did not open and mostly resulted in request
  timeouts. They opened after a 10 or 15 minute gap when refreshed.
  Changing the DNS in the Netgear Wifi modem to Google DNS, Verisign
  DNS, OpenDNS, or auto-obtained DNS did not solve the problem.

  At last I found out that the default "Auto" MTU size setting for the
  Wifi network was causing these request timeouts.

  Now, after manually setting an MTU value for the Wifi network lower
  than the MTU value configured in the Wifi modem, all the above
  webpages open.

  Presently using Ubuntu 16.10.

  Kindly make the auto MTU size setting for the Wifi network first probe
  the MTU size of the Wifi modem and set a slightly lower value than
  that as the MTU.

  Reporting this to make Ubuntu user friendly even for newbies.

  I have been using Ubuntu for more than 6 years.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1643581/+subscriptions



[Yahoo-eng-team] [Bug 1632099] [NEW] Deleting an HA router kills keepalived-state-change with signal 9, leaving children behind

2016-10-10 Thread Assaf Muller
Public bug reported:

When deleting an HA router, the agent shuts down the
neutron-keepalived-state-change monitor with signal 9, leaving behind
the processes that the state-change process spawns, such as the
"ip -o monitor address" process.

How to reproduce:

$ ps aux | grep "monitor address"  # Verify you've got nothing
$ tox -e dsvm-functional test_ha_router_lifecycle  # The test creates and deletes an HA router
$ ps aux | grep "monitor address"  # Oops, leaked process!
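One way to avoid orphaning the children (a generic sketch assuming
Linux, not the agent's actual code) is to start the daemon in its own
session/process group and signal the whole group with SIGTERM rather
than SIGKILL:

```python
import os
import signal
import subprocess
import time

# Stand-in for neutron-keepalived-state-change and the "ip -o monitor
# address" child it spawns: a shell that forks a long-running child.
proc = subprocess.Popen(
    ["sh", "-c", "sleep 60 & wait"],
    preexec_fn=os.setsid)   # new session => its own process group
time.sleep(0.5)             # toy-example timing: let the group settle

# Kill the entire group, not just the leader, and prefer SIGTERM over
# SIGKILL so the daemon gets a chance to clean up.  Sending signal 9 to
# the leader alone is exactly what leaks the monitor's children.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()
```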

** Affects: neutron
 Importance: Low
 Status: New


** Tags: l3-ha liberty-backport-potential mitaka-backport-potential newton-backport-potential

** Tags added: l3-ha

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: juno-backport-potential liberty-backport-potential newton-backport-potential

** Tags added: mitaka-backport-potential

** Tags removed: juno-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632099

Title:
  Deleting an HA router kills keepalived-state-change with signal 9,
  leaving children behind

Status in neutron:
  New

Bug description:
  When deleting an HA router, the agent shuts down the
  neutron-keepalived-state-change monitor with signal 9, leaving behind
  the processes that the state-change process spawns, such as the
  "ip -o monitor address" process.

  How to reproduce:

  $ ps aux | grep "monitor address"  # Verify you've got nothing
  $ tox -e dsvm-functional test_ha_router_lifecycle  # The test creates and deletes an HA router
  $ ps aux | grep "monitor address"  # Oops, leaked process!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1632099/+subscriptions



[Yahoo-eng-team] [Bug 1628510] Re: Lenovo vibe x3 can't use hi-fi mode with neutron

2016-09-28 Thread Assaf Muller
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628510

Title:
  Lenovo vibe x3 can't use hi-fi mode with neutron

Status in neutron:
  Invalid

Bug description:
  Lenovo vibe x3 can't use hi-fi mode with neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628510/+subscriptions



[Yahoo-eng-team] [Bug 1622914] Re: agent traces about bridge-nf-call sysctl values missing

2016-09-15 Thread Assaf Muller
Added TripleO - the br_netfilter kernel module should be loaded by installers.
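For installers, that typically amounts to persisting something like the
following (file paths and the br_netfilter module name are assumptions
for modern kernels; on older kernels the bridge netfilter sysctls come
with the bridge module itself):

```ini
# /etc/modules-load.d/99-bridge-nf.conf  -- load the module at boot
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf  -- the values the agent expects
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
```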

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622914

Title:
  agent traces about bridge-nf-call sysctl values missing

Status in devstack:
  New
Status in neutron:
  In Progress
Status in tripleo:
  New

Bug description:
  spotted in gate:

  2016-09-13 07:37:33.437 13401 ERROR neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call last):
    File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 450, in daemon_loop
      sync = self.process_network_devices(device_info)
    File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
      return f(*args, **kwargs)
    File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 200, in process_network_devices
      device_info.get('updated'))
    File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 265, in setup_port_filters
      self.prepare_devices_filter(new_devices)
    File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 130, in decorated_function
      *args, **kwargs)
    File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 138, in prepare_devices_filter
      self._apply_port_filter(device_ids)
    File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 163, in _apply_port_filter
      self.firewall.prepare_port_filter(device)
    File "/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 170, in prepare_port_filter
      self._enable_netfilter_for_bridges()
    File "/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 114, in _enable_netfilter_for_bridges
      run_as_root=True)
    File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
      raise RuntimeError(msg)
  RuntimeError: Exit code: 255; Stdin: ; Stdout: ; Stderr: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1622914/+subscriptions



[Yahoo-eng-team] [Bug 1548142] Re: Created two dns-nameservers on one subnet by running two commands update "dns-nameserver" parameter in case of neutron server active-active

2016-09-13 Thread Assaf Muller
I believe this was fixed by https://review.openstack.org/#/c/303966/.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548142

Title:
  Created two dns-nameservers on one subnet by running two commands
  update "dns-nameserver" parameter in case of neutron server active-
  active

Status in neutron:
  Fix Released

Bug description:
  With three controllers, I found a bug: I can create two
  dns-nameservers on one subnet by running two dns-nameserver update
  commands AT THE SAME TIME.

  How to reproduce:

  Topology: http://codepad.org/ff0debPB

  Step1: Create one subnet with dns-nameserver is 8.8.8.8

  $ neutron subnet-create --name sub-int-net --dns-nameserver 8.8.8.8
  int-net 172.16.69.0/24

  Step 2: Update the dns-nameserver parameter of "sub-int-net".
  Please run the commands AT THE SAME TIME:

  - On Controller1
  $ neutron subnet-update --dns-nameserver 172.1.1.10 sub-int-net

  - On Controller2
  $ neutron subnet-update --dns-nameserver 172.1.1.20 sub-int-net

  The result:

  - Before update:
  $ neutron subnet-show sub-int-net
  This is the result: http://codepad.org/QLRnNebj

  - After update
  $ neutron subnet-show sub-int-net
  This is the result: http://codepad.org/y94cKrGV
  (Please note line 7&8)

  Check in database:
  This is the result: http://codepad.org/tOoWGVs8

  Originally, when we update the dns-nameserver parameter, the system
  removes the existing dns-nameserver and writes a new one with the
  value given in the update command.
  That means that after the update this subnet should have only one
  dns-nameserver.
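The race is a classic unguarded read-modify-write; the linked fix guards
subnet updates with a compare-and-swap on the revision number, roughly
like this in-memory sketch (the Subnet class here is illustrative, not
Neutron's actual model):

```python
class Subnet:
    def __init__(self, dns_nameservers):
        self.dns_nameservers = list(dns_nameservers)
        self.revision = 0

def update_dns(subnet, seen_revision, new_servers):
    """Apply the update only if nobody changed the subnet since we read it.

    A losing concurrent writer gets False back and must re-read and retry,
    so both updates can never land side by side.
    """
    if subnet.revision != seen_revision:
        return False
    subnet.dns_nameservers = list(new_servers)  # replace, don't append
    subnet.revision += 1
    return True
```

Of two "simultaneous" updates both made against revision 0, only one
succeeds, so the subnet ends up with a single dns-nameserver as intended.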

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548142/+subscriptions



[Yahoo-eng-team] [Bug 1618987] [NEW] test_connection_from_diff_address_scope intermittent "Cannot find device" errors

2016-08-31 Thread Assaf Muller
Public bug reported:

Example TRACE:
http://logs.openstack.org/58/360858/4/check/gate-neutron-dsvm-functional/3fb0ba3/console.html#_2016-08-30_23_25_18_854125

It looks like OVSDB adds an internal OVS port, then when we try to set
the MTU, the linux network stack cannot find the device.

I'm under the impression that https://review.openstack.org/#/c/344859/
was supposed to solve this class of issues.

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618987

Title:
  test_connection_from_diff_address_scope intermittent "Cannot find
  device" errors

Status in neutron:
  Confirmed

Bug description:
  Example TRACE:
  
http://logs.openstack.org/58/360858/4/check/gate-neutron-dsvm-functional/3fb0ba3/console.html#_2016-08-30_23_25_18_854125

  It looks like OVSDB adds an internal OVS port, then when we try to set
  the MTU, the linux network stack cannot find the device.

  I'm under the impression that https://review.openstack.org/#/c/344859/
  was supposed to solve this class of issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618987/+subscriptions



[Yahoo-eng-team] [Bug 1618111] [NEW] fullstack using deprecated oslo policy and oslo concurrency options

2016-08-29 Thread Assaf Muller
Public bug reported:

From recent fullstack logs:

Option "policy_file" from group "DEFAULT" is deprecated. Use option "policy_file" from group "oslo_policy".
Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".
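The fix is to move the options to their new groups in the fullstack
config fixtures; the values below are illustrative, not the actual
fullstack defaults:

```ini
# Deprecated form:
[DEFAULT]
policy_file = policy.json
lock_path = $state_path/lock

# Replacement:
[oslo_policy]
policy_file = policy.json

[oslo_concurrency]
lock_path = $state_path/lock
```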

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618111

Title:
  fullstack using deprecated oslo policy and oslo concurrency options

Status in neutron:
  In Progress

Bug description:
  From recent fullstack logs:

  Option "policy_file" from group "DEFAULT" is deprecated. Use option "policy_file" from group "oslo_policy".
  Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618111/+subscriptions



[Yahoo-eng-team] [Bug 1618103] [NEW] fullstack using deprecated oslo messaging options

2016-08-29 Thread Assaf Muller
Public bug reported:

In https://bugs.launchpad.net/neutron/+bug/1487322 we switched fullstack
to use oslo messaging options from the DEFAULT section to the
oslo_messaging_rabbit section. Now these options have been deprecated as
well and the new directive is to use the transport_url option:
http://docs.openstack.org/developer/oslo.messaging/opts.html#DEFAULT.transport_url.
It doesn't seem like the option is documented, but the format is:
rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:$PORT/$virtual_host
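In config terms, the migration looks like this (host and credentials
are illustrative):

```ini
# Deprecated form:
[oslo_messaging_rabbit]
rabbit_host = 127.0.0.1
rabbit_port = 5672
rabbit_userid = stackrabbit
rabbit_password = secretrabbit

# Replacement:
[DEFAULT]
transport_url = rabbit://stackrabbit:secretrabbit@127.0.0.1:5672/
```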

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618103

Title:
  fullstack using deprecated oslo messaging options

Status in neutron:
  In Progress

Bug description:
  In https://bugs.launchpad.net/neutron/+bug/1487322 we switched
  fullstack to use oslo messaging options from the DEFAULT section to
  the oslo_messaging_rabbit section. Now these options have been
  deprecated as well and the new directive is to use the transport_url
  option:
  
http://docs.openstack.org/developer/oslo.messaging/opts.html#DEFAULT.transport_url.
  It doesn't seem like the option is documented, but the format is:
  rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:$PORT/$virtual_host

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618103/+subscriptions



[Yahoo-eng-team] [Bug 1345341] Re: radvd needs functional tests

2016-08-17 Thread Assaf Muller
** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1345341

Title:
  radvd needs functional tests

Status in neutron:
  Confirmed

Bug description:
  See the review comments for https://review.openstack.org/102648

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1345341/+subscriptions



[Yahoo-eng-team] [Bug 1403455] Re: neutron-netns-cleanup doesn't clean up all L3 agent spawned processes

2016-08-17 Thread Assaf Muller
** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403455

Title:
  neutron-netns-cleanup doesn't clean up all L3 agent spawned processes

Status in neutron:
  Triaged

Bug description:
  neutron-netns-cleanup cleans all DHCP resources by invoking a command
  on the DHCP driver. However, in the L3 agent case, it merely tries to
  remove the router namespaces. Any child processes that we've added
  over the years (Specifically metadata proxy, radvd, keepalived at this
  point) are not cleaned up. I think that netns-cleanup should remove
  any routers on the system the same way we remove DHCP resources, by
  invoking a method on a L3 agent instance.
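
  A rough sketch of the selection step such a cleanup pass would need
  (the namespace prefixes are the well-known L3 agent ones; the helper
  name itself is hypothetical, not code from the Neutron tree):

```python
# Hypothetical sketch: pick out the L3-agent-owned namespaces that a
# netns-cleanup pass would hand to an L3 agent instance for teardown,
# analogous to how DHCP namespaces are matched today.
L3_NS_PREFIXES = ("qrouter-", "snat-", "fip-")

def l3_agent_namespaces(all_namespaces):
    """Filter 'ip netns list' names down to L3 agent namespaces."""
    return [ns for ns in all_namespaces if ns.startswith(L3_NS_PREFIXES)]
```

  Each returned namespace would then have its child processes
  (metadata proxy, radvd, keepalived) terminated before the namespace
  itself is removed.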

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403455/+subscriptions



[Yahoo-eng-team] [Bug 1604115] [NEW] test_cleanup_stale_devices functional test sporadic failures

2016-07-18 Thread Assaf Muller
Public bug reported:

19 hits in the last 7 days

build_status:"FAILURE" AND message:", in test_cleanup_stale_devices" AND
build_name:"gate-neutron-dsvm-functional"

Example TRACE failure:
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/console.html#_2016-07-18_16_42_45_219041

Example log from testrunner:
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604115

Title:
  test_cleanup_stale_devices functional test sporadic failures

Status in neutron:
  Confirmed

Bug description:
  19 hits in the last 7 days

  build_status:"FAILURE" AND message:", in test_cleanup_stale_devices"
  AND build_name:"gate-neutron-dsvm-functional"

  Example TRACE failure:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/console.html#_2016-07-18_16_42_45_219041

  Example log from testrunner:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604115/+subscriptions



[Yahoo-eng-team] [Bug 1592000] Re: [RFE] Admin customized default security-group

2016-07-14 Thread Assaf Muller
I'd like to see this RFE discussed with the drivers team before it is
marked as Won't Fix.

** Changed in: neutron
   Status: Won't Fix => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592000

Title:
  [RFE] Admin customized default security-group

Status in neutron:
  Confirmed

Bug description:
  Allow the admin to decide which rules should be added (by default) to
  the tenant default security-group once created.

  At the moment, each tenant default security-group is created with specific 
set of rules: allow all egress and allow ingress from default sg.
  However, this is not the desired behavior for all deployments, as some would 
want to practice a “zero trust” model where all traffic is blocked unless 
explicitly decided otherwise, or on the other hand, allow all inbound+outbound 
traffic.
  It's worth noting that in some use cases the default behavior can be 
expressed with very specific sets of rules, which only the admin has the 
knowledge to define (e.g. allowing connections to Active Directory endpoints). 
In such cases the impact on usability is even worse, as it requires the admin 
to create rules on every tenant's default security-group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592000/+subscriptions



[Yahoo-eng-team] [Bug 1594593] [NEW] API tests are broken with 'TypeError: create_tenant() takes exactly 1 argument (2 given)'

2016-06-20 Thread Assaf Muller
Public bug reported:

Tempest patch with Change-Id of:
I3fe7b6b7f81a0b20888b2c70a717065e4b43674f

Changed the v2 Keystone tenant API create_tenant to keyword arguments. This 
broke our API tests that used create_tenant with a tenant_id... It looks like 
the addCleanup that was supposed
to delete the newly created tenant actually created a second tenant. The 
existing create_tenant calls were unaffected by the Tempest change as it is 
backwards compatible.

** Affects: neutron
 Importance: Critical
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594593

Title:
  API tests are broken with 'TypeError: create_tenant() takes exactly 1
  argument (2 given)'

Status in neutron:
  In Progress

Bug description:
  Tempest patch with Change-Id of:
  I3fe7b6b7f81a0b20888b2c70a717065e4b43674f

  Changed the v2 Keystone tenant API create_tenant to keyword arguments. This 
broke our API tests that used create_tenant with a tenant_id... It looks like 
the addCleanup that was supposed
  to delete the newly created tenant actually created a second tenant. The 
existing create_tenant calls were unaffected by the Tempest change as it is 
backwards compatible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594593/+subscriptions



[Yahoo-eng-team] [Bug 1586584] Re: Get the virtual network topology

2016-06-09 Thread Assaf Muller
This can be done entirely in the client side. It would essentially be a
greppable, text based result similar to the current Horizon network
diagram. New features belong to the openstack client, not the neutron
client, so I fixed up the bug's target component.

** Project changed: neutron => python-openstackclient

** Changed in: python-openstackclient
   Status: Triaged => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586584

Title:
  Get the virtual network topology

Status in python-openstackclient:
  New

Bug description:
  When we create a virtual network and use it in OpenStack, we can only get 
some basic information about it with the "neutron net-show" command. However, 
getting more detail is useful and necessary: for example, how many virtual 
devices (VMs, routers, DHCP ports) are connected to the network. Further, we 
also want to know which tenants those devices belong to and the nodes on which 
they are located. With all of this information we can generate a topology of 
the network.
  With such a network topology, understanding, planning and managing the 
network become much easier, and fault diagnosis is more efficient. For 
example, we can easily find the compute node hosting a problematic port 
without making a lot of queries in Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1586584/+subscriptions



[Yahoo-eng-team] [Bug 1587719] Re: no option to view the interfaces attached to router from CLI

2016-06-01 Thread Assaf Muller
neutron router-port-list 

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: Sharat Sharma (sharat-sharma) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1587719

Title:
  no option to view the interfaces attached to router from CLI

Status in neutron:
  Invalid

Bug description:
  There is no way to view the interfaces attached to a router from the
  CLI. To know the interfaces attached to the router, we have to rely on
  dashboard. So an extra field has to be added to the router-show table
  to display the attached interfaces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1587719/+subscriptions



[Yahoo-eng-team] [Bug 1580440] Re: neutron purge - executing command on non existing tenant print wrong command

2016-05-25 Thread Assaf Muller
@John - Done. Thank you!

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580440

Title:
  neutron purge - executing command on non existing tenant print wrong
  command

Status in neutron:
  Invalid
Status in openstack-manuals:
  In Progress

Bug description:
  I executed the "neutron purge" command with a non-existing tenant ID
  and received the following:

  neutron purge 25a1c11e26354d7dbb5b204eb1310f33
  Purging resources: 100% complete.
  The following resources could not be deleted: 1 network

  
  We do not have that tenant ID, so the message should be:

  There is no tenant with the specified ID.


  python-neutron-8.0.0-1.el7ost.noarch
  openstack-neutron-8.0.0-1.el7ost.noarch
  python-neutron-lib-0.0.2-1.el7ost.noarch
  openstack-neutron-metering-agent-8.0.0-1.el7ost.noarch
  openstack-neutron-ml2-8.0.0-1.el7ost.noarch
  openstack-neutron-openvswitch-8.0.0-1.el7ost.noarch
  python-neutronclient-4.1.1-2.el7ost.noarch
  openstack-neutron-common-8.0.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580440/+subscriptions



[Yahoo-eng-team] [Bug 1584596] Re: The "purge" feature- 1 command for all components

2016-05-23 Thread Assaf Muller
** Project changed: neutron => python-openstackclient

** Description changed:

- Hi all, 
- After working with "neutron purge" and testing it, we have some thoughts 
regarding the feature. 
+ Hi all,
+ After working with "neutron purge" and testing it, we have some thoughts 
regarding the feature.
  
- Due to the fact that this features purpose is to clean all ( network)
+ Due to the fact that this feature's purpose is to clean all network
  objects after tenant was deleted, we think that the "purge" command
  should clear not only Neutron objects, but all tenant owned objects that
  were not deleted after tenant deletion. "One action cleans all".
  
- Additionally, 
- We had a thought to make this purge command run behind the scene after 
"openstack project delete" command execution, which will clean the deleted 
tenant objects without a user to feel anything ( The manual option should stay 
available off-course ) 
+ Additionally, we had a thought to make this purge command run behind the
+ scene after "openstack project delete" command execution, which will
+ clean the deleted tenant objects without a user to feel anything. The
+ manual option should stay available of course.
  
- 
- BR 
+ BR
  Alex

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584596

Title:
  The "purge" feature- 1 command for all components

Status in python-openstackclient:
  New

Bug description:
  Hi all,
  After working with "neutron purge" and testing it, we have some thoughts 
regarding the feature.

  Due to the fact that this feature's purpose is to clean all network
  objects after tenant was deleted, we think that the "purge" command
  should clear not only Neutron objects, but all tenant owned objects
  that were not deleted after tenant deletion. "One action cleans all".

  Additionally, we had a thought to make this purge command run behind
  the scenes after "openstack project delete" command execution, which
  would clean the deleted tenant's objects without the user noticing.
  The manual option should stay available of course.

  BR
  Alex

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1584596/+subscriptions



[Yahoo-eng-team] [Bug 1552960] Re: Tempest and Neutron duplicate tests

2016-05-13 Thread Assaf Muller
** No longer affects: neutron/kilo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552960

Title:
  Tempest and Neutron duplicate tests

Status in neutron:
  In Progress

Bug description:
  Problem statement:

  1) Tests overlap between Tempest and Neutron repos - 264 tests last I 
checked. The effect is:
  1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a 
test, where should it go? Someone just proposed a Tempest patch for a new 
Neutron API, what should I do with this patch?
  1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
  2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.

  Proposed solution:

  For problem 1, we eliminate the overlap. We do this by defining a set
  of tests that will live in Tempest, and another set of tests that will
  live in Neutron. More information may be found here:
  https://review.openstack.org/#/c/280427/. After deciding on the line
  in the sand, we will delete any tests in Neutron that should continue
  to live in Tempest. Some Neutron tests were modified after they were
  copied from Tempest, these modifications will have to be examined and
  then proposed to Tempest. Afterwards these tests may be removed from
  Neutron, eliminating the overlap from the Neutron side. Once this is
  done, overlapping tests may be deleted from Tempest.

  For problem 2, we will develop a Neutron Tempest plugin. This will be
  tracked in a separate bug. Note that there's already a patch for this
  up for review: https://review.openstack.org/#/c/274023/

  * The work is also being tracked here:
  https://etherpad.openstack.org/p/neutron-tempest-defork

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552960/+subscriptions



[Yahoo-eng-team] [Bug 1525901] Re: Agents report as started before neutron recognizes as active

2016-05-13 Thread Assaf Muller
** Changed in: neutron/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525901

Title:
  Agents report as started before neutron recognizes as active

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  In HA, there is a potential race condition between the openvswitch
  agent and other agents that "own", depend on or manipulate ports. As
  the neutron server resumes on a failover it will not immediately be
  aware of openvswitch agents that have also been activated on failover
  and act as though there are no active openvswitch agents (this is an
  example, it most likely affects other L2 agents). If an agent such as
  the L3 agent starts and begins resync before the neutron server is
  aware of the active openvswitch agent, ports for the routers on that
  agent will be marked as "binding_failed". Currently this is a
  "terminal" state for the port as neutron does not attempt to rebind
  failed bindings on the same host.

  Unfortunately, the neutron agents do not provide even a best-effort
  deterministic indication to the outside service manager (systemd,
  pacemaker, etc...) that it has fully initialized and the neutron
  server should be aware that it is active. Agents should follow the
  same pattern as wsgi based services and notify systemd after it can be
  reasonably assumed that the neutron server should be aware that it is
  alive. That way service startup order logic or constraints can
  properly start an agent that is dependent on other agents *after*
  neutron should be aware that the required agents are active.
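
  The sd_notify handshake referred to above is simple enough to sketch
  in pure Python (an illustrative re-implementation of the systemd
  notification protocol, not code from the Neutron tree):

```python
import os
import socket

def sd_notify(message=b"READY=1"):
    """Best-effort systemd readiness notification.

    Sends the message to the datagram socket named by NOTIFY_SOCKET;
    silently does nothing when not running under systemd.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):  # abstract-namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, addr)
    return True
```

  An agent would call this only after it can reasonably assume the
  server has processed its first state report, so that Type=notify
  ordering constraints actually mean "the server knows I'm alive".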

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525901/+subscriptions



[Yahoo-eng-team] [Bug 1579882] Re: Neutron l3 agent fails

2016-05-09 Thread Assaf Muller
The way I'm reading this, this is a deployment issue, not a Neutron
bug. I highly recommend ask.openstack.org, it's a very active Q&A
resource.

I'm setting this to not-a-bug for now. If later you discover a concrete
bug, reply here and I'll re-open the bug.

** Changed in: neutron
 Assignee: Miguel Lavalle (minsel) => (unassigned)

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579882

Title:
  Neutron l3 agent fails

Status in neutron:
  Invalid

Bug description:
  Currently I am running LXD on Ubuntu 15.10. I have created two
  containers, controller and compute. Ive gone ahead and installed and
  setup all services. The only issue I am having is with
  neutron-l3-agent and neutron-dhcp-agent. After completing
  configuration I am seeing the following errors:

  neutron-l3-agent.log:
  2016-05-09 19:31:40.558 5045 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'link', 'add', 'tap2d9c4091-9c', 'type', 'veth', 'peer', 'name', 
'qr-2d9c4091-9c', 'netns', 'qrouter-9b021858-d363-4e0b-9691-6cedc2af5bcb'] 
create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:84
  2016-05-09 19:31:41.390 5045 ERROR neutron.agent.linux.utils [-] Exit code: 
2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists

  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info [-] Exit 
code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 371, in call
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 960, 
in process
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 477, 
in _process_internal_ports
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
self.internal_network_added(p)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 400, 
in internal_network_added
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
mtu=port.get('mtu'))
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 374, 
in _internal_network_added
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
prefix=prefix, mtu=mtu)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 248, 
in plug
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info bridge, 
namespace, prefix, mtu)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 447, 
in plug_new
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
namespace2=namespace)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 174, in 
add_veth
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
self._as_root([], 'link', tuple(args))
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 95, in 
_as_root
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 104, in 
_execute
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info 
log_fail_as_error=log_fail_as_error)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 140, in 
execute
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info raise 
RuntimeError(msg)
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info RuntimeError: 
Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info
  2016-05-09 19:31:41.394 5045 ERROR neutron.agent.l3.router_info
  2016-05-09 

[Yahoo-eng-team] [Bug 1576840] [NEW] fullstack OVS agent in native openflow mode sometimes fails to bind socket

2016-04-29 Thread Assaf Muller
Public bug reported:

Plenty of hits in the last few days, currently the top issue affecting
fullstack stability.

Example paste:
http://paste.openstack.org/show/495797/

Example logs:
http://logs.openstack.org/18/276018/21/check/gate-neutron-dsvm-fullstack/c0761dc/logs/TestOvsConnectivitySameNetwork.test_connectivity_VLANs,openflow-native_ovsdb-native_/

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576840

Title:
  fullstack OVS agent in native openflow mode sometimes fails to bind
  socket

Status in neutron:
  Confirmed

Bug description:
  Plenty of hits in the last few days, currently the top issue affecting
  fullstack stability.

  Example paste:
  http://paste.openstack.org/show/495797/

  Example logs:
  
http://logs.openstack.org/18/276018/21/check/gate-neutron-dsvm-fullstack/c0761dc/logs/TestOvsConnectivitySameNetwork.test_connectivity_VLANs,openflow-native_ovsdb-native_/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576840/+subscriptions



[Yahoo-eng-team] [Bug 1569621] [NEW] test_preventing_firewall_blink functional test unstable

2016-04-12 Thread Assaf Muller
Public bug reported:

Logstash query:
message:"in test_preventing_firewall_blink" AND build_status:"FAILURE" AND 
build_name:"gate-neutron-dsvm-functional"

23 hits in the last 7 days.

Example TRACE:
http://paste.openstack.org/show/493886/

The test runs with the iptables driver with and without ipset, and with
the OVS firewall driver, and it looks like the failure is only with the
OVS driver.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569621

Title:
  test_preventing_firewall_blink functional test unstable

Status in neutron:
  Confirmed

Bug description:
  Logstash query:
  message:"in test_preventing_firewall_blink" AND build_status:"FAILURE" AND 
build_name:"gate-neutron-dsvm-functional"

  23 hits in the last 7 days.

  Example TRACE:
  http://paste.openstack.org/show/493886/

  The test runs with the iptables driver with and without ipset, and
  with the OVS firewall driver, and it looks like the failure is only
  with the OVS driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567668] [NEW] Functional job sometimes hits global 2 hour limit and fails

2016-04-07 Thread Assaf Muller
Public bug reported:

Here's an example:
http://logs.openstack.org/13/302913/1/check/gate-neutron-dsvm-functional/91dd537/console.html

Logstash query:
build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"Killed  timeout -s 9"

45 hits in the last 7 days.

Ihar and I checked the timing, and it started happening as we merged:
https://review.openstack.org/#/c/298056/

There's a few problems here:
1) It appears like a test is freezing up. We have a per-test timeout defined. 
The timeout is defined by OS_TEST_TIMEOUT in tox.ini, and is enforced via a 
fixtures.Timeout fixture set up in the oslotest base class. It looks like that 
timeout doesn't always work.
2) When the global 2 hours job timeout is hit, it doesn't perform post-tests 
tasks such as copying over log files, which makes these problems a lot harder 
to troubleshoot.
3) And of course, there is some sort of issue with likely 
https://review.openstack.org/#/c/298056/.

We can fix via a revert, which will increase the failure rate of
fullstack. Since I've been unable to reproduce this issue locally, I'd
like to hold off on a revert and try to get some more information by
tackling some combination of problems 1 and 2, and then adding more
logging to figure it out.
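
To illustrate problem 1: the per-test timeout boils down to a
SIGALRM-based guard along these lines (a simplified stand-in for
fixtures.Timeout, not its actual implementation). One plausible failure
mode is that CPython only runs signal handlers between bytecodes, so a
test blocked inside a long-running C call can outlive its deadline:

```python
import signal

class TestTimeout:
    """Raise TimeoutError if the guarded block runs too long."""

    def __init__(self, seconds):
        self.seconds = seconds

    def _on_alarm(self, signum, frame):
        raise TimeoutError("test exceeded %d seconds" % self.seconds)

    def __enter__(self):
        # Install our handler and arm the alarm.
        self._old = signal.signal(signal.SIGALRM, self._on_alarm)
        signal.alarm(self.seconds)
        return self

    def __exit__(self, *exc):
        # Disarm and restore the previous handler.
        signal.alarm(0)
        signal.signal(signal.SIGALRM, self._old)
        return False
```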

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567668

Title:
  Functional job sometimes hits global 2 hour limit and fails

Status in neutron:
  New

Bug description:
  Here's an example:
  
http://logs.openstack.org/13/302913/1/check/gate-neutron-dsvm-functional/91dd537/console.html

  Logstash query:
  build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"Killed  timeout -s 9"

  45 hits in the last 7 days.

  Ihar and I checked the timing, and it started happening as we merged:
  https://review.openstack.org/#/c/298056/

  There's a few problems here:
  1) It appears like a test is freezing up. We have a per-test timeout defined. 
The timeout is defined by OS_TEST_TIMEOUT in tox.ini, and is enforced via a 
fixtures.Timeout fixture set up in the oslotest base class. It looks like that 
timeout doesn't always work.
  2) When the global 2 hours job timeout is hit, it doesn't perform post-tests 
tasks such as copying over log files, which makes these problems a lot harder 
to troubleshoot.
  3) And of course, there is some sort of issue with likely 
https://review.openstack.org/#/c/298056/.

  We can fix via a revert, which will increase the failure rate of
  fullstack. Since I've been unable to reproduce this issue locally, I'd
  like to hold off on a revert and try to get some more information by
  tackling some combination of problems 1 and 2, and then adding more
  logging to figure it out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567668/+subscriptions



[Yahoo-eng-team] [Bug 1567613] [NEW] Functional tests logging configured incorrectly

2016-04-07 Thread Assaf Muller
Public bug reported:

Functional tests output per-test logs produced by the test runner
processes to /tmp/dsvm-functional-logs, and those files are then copied
so they're accessible when viewing logs produced by CI runs. However,
logging seems to be set incorrectly and most of the files are empty
whereas they didn't used to be.

This makes troubleshooting other functional tests CI failures more
difficult than it needs to be.

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567613

Title:
  Functional tests logging configured incorrectly

Status in neutron:
  New

Bug description:
  Functional tests output per-test logs produced by the test runner
  processes to /tmp/dsvm-functional-logs, and those files are then
  copied so they're accessible when viewing logs produced by CI runs.
  However, logging seems to be set incorrectly and most of the files are
  empty whereas they didn't used to be.

  This makes troubleshooting other functional tests CI failures more
  difficult than it needs to be.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567613/+subscriptions



[Yahoo-eng-team] [Bug 1567608] [NEW] neutron.tests.functional.agent.windows.test_ip_lib.IpLibTestCase.test_ipwrapper_get_device_by_ip_None unstable

2016-04-07 Thread Assaf Muller
Public bug reported:

Logstash query:
build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"ValueError: You must specify a valid interface name."

4 matches in the last 7 days.

TRACE example:
http://paste.openstack.org/show/493396/

** Affects: neutron
 Importance: Low
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567608

Title:
  
neutron.tests.functional.agent.windows.test_ip_lib.IpLibTestCase.test_ipwrapper_get_device_by_ip_None
  unstable

Status in neutron:
  New

Bug description:
  Logstash query:
  build_name:"gate-neutron-dsvm-functional" AND build_status:"FAILURE" AND 
message:"ValueError: You must specify a valid interface name."

  4 matches in the last 7 days.

  TRACE example:
  http://paste.openstack.org/show/493396/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567472] [NEW] net_helpers.get_free_namespace_port can return used ports

2016-04-07 Thread Assaf Muller
Public bug reported:

Here's a simplification of 'get_free_namespace_port':

output = ip_wrapper.netns.execute(['ss', param])
# Parses 'ss' output and gets all used ports; this is the problematic part
used_ports = _get_source_ports_from_ss_output(output)
return get_unused_port(used_ports)

Here's a demonstration:
output = ip_wrapper.netns.execute(['ss', param])
print output
State  Recv-Q Send-Q Local Address:Port      Peer Address:Port
LISTEN 0      10     127.0.0.1:6640          *:*
LISTEN 0      128    *:46675                 *:*
LISTEN 0      128    *:22                    *:*
LISTEN 0      128    *:5432                  *:*
LISTEN 0      128    *:3260                  *:*
LISTEN 0      50     *:3306                  *:*
ESTAB  0      36     10.0.0.202:22           10.0.0.44:45258
ESTAB  0      0      127.0.0.1:32965         127.0.0.1:4369
ESTAB  0      0      10.0.0.202:22           10.0.0.44:36104
LISTEN 0      128    :::80                   :::*
LISTEN 0      128    :::4369                 :::*
LISTEN 0      128    :::22                   :::*
LISTEN 0      128    :::5432                 :::*
LISTEN 0      128    :::3260                 :::*
LISTEN 0      128    :::5672                 :::*
ESTAB  0      0      :::127.0.0.1:4369       :::127.0.0.1:32965

used = net_helpers._get_source_ports_from_ss_output(output)
print used
 {'22', '3260', '32965', '4369', '5432', '5672', '80'}

You can see it returned '3260' but not '3306'.

This bug can impact how fullstack picks which free ports to use for
neutron-server and neutron-openvswitch-agent.
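
A minimal sketch of a parser that doesn't miss entries like '*:3306'
(the function and sample here are illustrative, not the actual
net_helpers code). Splitting on the last ':' of the local-address
column handles IPv4, IPv6 and wildcard forms uniformly:

```python
SS_OUTPUT = """\
State  Recv-Q Send-Q Local Address:Port      Peer Address:Port
LISTEN 0      10     127.0.0.1:6640          *:*
LISTEN 0      50     *:3306                  *:*
LISTEN 0      128    :::5432                 :::*
ESTAB  0      36     10.0.0.202:22           10.0.0.44:45258
"""

def get_source_ports(output):
    """Collect every local port from 'ss' output.

    Taking the 4th whitespace-separated field and splitting on its
    last ':' copes with '*:3306', ':::5432' and '10.0.0.202:22' alike.
    """
    ports = set()
    for line in output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 5:
            continue
        local = fields[3]                  # 'Local Address:Port' column
        port = local.rsplit(':', 1)[-1]
        if port.isdigit():
            ports.add(port)
    return ports

print(sorted(get_source_ports(SS_OUTPUT)))  # → ['22', '3306', '5432', '6640']
```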

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567472

Title:
  net_helpers.get_free_namespace_port can return used ports

Status in neutron:
  New

Bug description:
  Here's a simplification of 'get_free_namespace_port':

  output = ip_wrapper.netns.execute(['ss', param])
  # Parses 'ss' output and gets all used ports; this is the problematic part
  used_ports = _get_source_ports_from_ss_output(output)
  return get_unused_port(used_ports)

  Here's a demonstration:
  output = ip_wrapper.netns.execute(['ss', param])
  print output
  State  Recv-Q Send-Q Local Address:Port      Peer Address:Port
  LISTEN 0      10     127.0.0.1:6640          *:*
  LISTEN 0      128    *:46675                 *:*
  LISTEN 0      128    *:22                    *:*
  LISTEN 0      128    *:5432                  *:*
  LISTEN 0      128    *:3260                  *:*
  LISTEN 0      50     *:3306                  *:*
  ESTAB  0      36     10.0.0.202:22           10.0.0.44:45258
  ESTAB  0      0      127.0.0.1:32965         127.0.0.1:4369
  ESTAB  0      0      10.0.0.202:22           10.0.0.44:36104
  LISTEN 0      128    :::80                   :::*
  LISTEN 0      128    :::4369                 :::*
  LISTEN 0      128    :::22                   :::*
  LISTEN 0      128    :::5432                 :::*
  LISTEN 0      128    :::3260                 :::*
  LISTEN 0      128    :::5672                 :::*
  ESTAB  0      0      :::127.0.0.1:4369       :::127.0.0.1:32965

  used = net_helpers._get_source_ports_from_ss_output(output)
  print used
  {'22', '3260', '32965', '4369', '5432', '5672', '80'}

  You can see it returned '3260' but not '3306'.

  This bug can impact how fullstack picks which free ports to use for
  neutron-server and neutron-openvswitch-agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567023] [NEW] test_keepalived_respawns* functional tests raceful

2016-04-06 Thread Assaf Muller
Public bug reported:

Running just KeepalivedManagerTestCase locally I see the following three
tests fail frequently:

test_keepalived_spawn
test_keepalived_respawns
test_keepalived_respawn_with_unexpected_exit

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567023

Title:
  test_keepalived_respawns* functional tests raceful

Status in neutron:
  New

Bug description:
  Running just KeepalivedManagerTestCase locally I see the following
  three tests fail frequently:

  test_keepalived_spawn
  test_keepalived_respawns
  test_keepalived_respawn_with_unexpected_exit

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566569] [NEW] OVS functional tests no longer output OVS logs after the change to compile OVS from source

2016-04-05 Thread Assaf Muller
Public bug reported:

Since https://review.openstack.org/#/c/266423/ merged we compile OVS
from source for the functional job. This had a side effect of not
providing OVS logs. Here's the openvswitch logs dir for the patch in
question:

http://logs.openstack.org/23/266423/26/check/gate-neutron-dsvm-
functional/fde6d9e/logs/openvswitch/

You can see it only has the ovs-ctl log, which was created by the OVS-
from-package binary before it was shut down to make room for the OVS-
from-source binary.

Here is the OVS logs dir for the parent change:

http://logs.openstack.org/60/265460/4/check/gate-neutron-dsvm-
functional/6134795/logs/openvswitch/

Which contains a lot of nice, juicy logs useful for all sorts of amazing
things.

** Affects: neutron
 Importance: Low
 Status: New


** Tags: functional-tests mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566569

Title:
  OVS functional tests no longer output OVS logs after the change to
  compile OVS from source

Status in neutron:
  New

Bug description:
  Since https://review.openstack.org/#/c/266423/ merged we compile OVS
  from source for the functional job. This had a side effect of not
  providing OVS logs. Here's the openvswitch logs dir for the patch in
  question:

  http://logs.openstack.org/23/266423/26/check/gate-neutron-dsvm-
  functional/fde6d9e/logs/openvswitch/

  You can see it only has the ovs-ctl log, which was created by the OVS-
  from-package binary before it was shut down to make room for the OVS-
  from-source binary.

  Here is the OVS logs dir for the parent change:

  http://logs.openstack.org/60/265460/4/check/gate-neutron-dsvm-
  functional/6134795/logs/openvswitch/

  Which contains a lot of nice, juicy logs useful for all sorts of
  amazing things.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566569/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563603] Re: Unable to login to a newly instantiated VM

2016-03-29 Thread Assaf Muller
I'd advise you to ask on ask.openstack.org. It's a super active Q&A site,
you'll get much better answers there. Launchpad is typically used for
concrete bugs.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563603

Title:
  Unable to login to a newly instantiated VM

Status in neutron:
  Invalid

Bug description:
  When i try to ssh to my new VM it is showing me the error message "No
  route to host"

  ssh: connect to host 10.0.0.13 port 22: No route to host

  I have already set the security rules:
  nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
  ERROR (BadRequest): This rule already exists in group 
02dc84cc-808f-49ff-bbf2-e1b34d4b3c01 (HTTP 400) (Request-ID: 
req-024eb93f-b0b7-42e7-afce-b4fc034e33c4)

  below is the output for netns:
  deadman@ubuntu:~$ sudo ip netns exec $PRIVATE_NETNS_ID ip a
  [sudo] password for deadman: 
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default 
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: ns-1fa432f7-1c@if6:  mtu 1500 qdisc 
pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:a7:ff:f8 brd ff:ff:ff:ff:ff:ff
  inet 10.0.0.10/29 brd 10.0.0.15 scope global ns-1fa432f7-1c
 valid_lft forever preferred_lft forever
  inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-1fa432f7-1c
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fea7:fff8/64 scope link 
 valid_lft forever preferred_lft forever
  deadman@ubuntu:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551288] Re: Fullstack native tests sometimes fail with an OVS agent failing to start with 'Address already in use' error

2016-03-27 Thread Assaf Muller
Still seeing instances of this bug. I have a deterministic solution
coming up.

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551288

Title:
  Fullstack native tests sometimes fail with an OVS agent failing to
  start with 'Address already in use' error

Status in neutron:
  Confirmed

Bug description:
  Example failure:
  test_connectivity(VLANs,Native) fails with this error:

  http://paste.openstack.org/show/488585/

  wait_until_env_is_up is timing out, which typically means that the
  expected number of agents failed to start. Indeed in this particular
  example I saw this line being output repeatedly in neutron-server.log:

  [29/Feb/2016 04:16:31] "GET /v2.0/agents.json HTTP/1.1" 200 1870
  0.005458

  Fullstack calls GET on agents to determine if the expected amount of
  agents were started and are successfully reporting back to neutron-
  server.

  We then see that one of the three OVS agents crashed with this TRACE:
  http://paste.openstack.org/show/488586/

  This happens only with the native tests using the Ryu library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561248] [NEW] Fullstack linux bridge tests sometimes fail because an agent wouldn't come up as it cannot connect to RabbitMQ

2016-03-23 Thread Assaf Muller
Public bug reported:

Here's a couple of examples:

http://logs.openstack.org/78/292178/10/check/gate-neutron-dsvm-
fullstack/b54b61b/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
/neutron-linuxbridge-agent--2016-03-23--13-54-07-458571.log.txt.gz

http://logs.openstack.org/07/296507/2/check/gate-neutron-dsvm-
fullstack/1a24251/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
/neutron-linuxbridge-agent--2016-03-23--21-05-41-093902.log.txt.gz

Note that in both cases the other two agents in the same test were able
to connect successfully. The commonality between agents that cannot
connect to rabbit is that they use a local_ip that is the *broadcast
address* of the subnet they belong to.
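
One way to avoid ever handing out a subnet's broadcast (or network)
address as a local_ip is to iterate only over usable hosts; this sketch
is illustrative and not the fullstack allocation code itself:

```python
import ipaddress

def usable_hosts(cidr):
    """Return addresses that are safe to hand out as local_ip values.

    IPv4Network.hosts() excludes the network and broadcast addresses,
    which is exactly the pitfall described above.
    """
    return list(ipaddress.ip_network(cidr).hosts())

net = ipaddress.ip_network('240.0.0.0/29')
print(net.broadcast_address)         # 240.0.0.7 -- never a valid local_ip
print(usable_hosts('240.0.0.0/29'))  # six hosts: .1 through .6
```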

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561248

Title:
  Fullstack linux bridge tests sometimes fail because an agent wouldn't
  come up as it cannot connect to RabbitMQ

Status in neutron:
  New

Bug description:
  Here's a couple of examples:

  http://logs.openstack.org/78/292178/10/check/gate-neutron-dsvm-
  
fullstack/b54b61b/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
  /neutron-linuxbridge-agent--2016-03-23--13-54-07-458571.log.txt.gz

  http://logs.openstack.org/07/296507/2/check/gate-neutron-dsvm-
  
fullstack/1a24251/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VXLAN_
  /neutron-linuxbridge-agent--2016-03-23--21-05-41-093902.log.txt.gz

  Note that in both cases the other two agents in the same test were
  able to connect successfully. The commonality between agents that
  cannot connect to rabbit is that they use a local_ip that is the
  *broadcast address* of the subnet they belong to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560277] [NEW] Fullstack neutron-server fails to start with: 'RuntimeError: Could not bind to 0.0.0.0:X after trying for 30 seconds'

2016-03-21 Thread Assaf Muller
Public bug reported:

Paste of TRACE:
http://paste.openstack.org/show/491377/

Example of failure:
http://logs.openstack.org/06/286106/3/check/gate-neutron-dsvm-fullstack/df82460/logs/TestOvsConnectivitySameNetwork.test_connectivity_VLANs,Ofctl_/neutron-server--2016-03-22--00-18-18-027088.log.txt.gz
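
A common source of this kind of failure is picking a port number first
and binding to it later, which races with everything else on the host.
As an illustrative sketch (not the neutron-server code), asking the
kernel for a port by binding to 0 avoids the guess entirely:

```python
import socket

def reserve_free_port():
    """Ask the kernel for a free TCP port instead of guessing.

    Binding to port 0 makes the kernel pick an unused port. Note the
    socket should be held open until the port is handed off, otherwise
    a pick-then-bind race is reintroduced.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('0.0.0.0', 0))
    port = sock.getsockname()[1]
    return sock, port

sock, port = reserve_free_port()
print(port)   # some ephemeral port, e.g. 41234
sock.close()
```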

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560277

Title:
  Fullstack neutron-server fails to start with: 'RuntimeError: Could not
  bind to 0.0.0.0:X after trying for 30 seconds'

Status in neutron:
  Confirmed

Bug description:
  Paste of TRACE:
  http://paste.openstack.org/show/491377/

  Example of failure:
  
http://logs.openstack.org/06/286106/3/check/gate-neutron-dsvm-fullstack/df82460/logs/TestOvsConnectivitySameNetwork.test_connectivity_VLANs,Ofctl_/neutron-server--2016-03-22--00-18-18-027088.log.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558819] [NEW] Fullstack linux bridge agent sometimes refuses to die during test clean up, failing the test

2016-03-19 Thread Assaf Muller
Public bug reported:

Paste of failure:
http://paste.openstack.org/show/491014/

When looking at the LB agent logs, you start seeing RPC errors as
neutron-server is unable to access the DB. What's happening is that
fullstack times out trying to kill the LB agent and moves on to other
clean ups. It deletes the DB for the test, but the agents and neutron-
server live on, resulting in errors trying to access the DB. The DB
errors are essentially unrelated - The root cause is that the agent
refuses to die for an unknown reason.

The code that tries to stop the agent is AsyncProcess.stop(block=True, 
signal=9).
Another detail that might be relevant is that the agent lives in a namespace.
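
As a minimal illustration of the stop-and-wait semantics involved (this
is a stand-in, not the AsyncProcess implementation), blocking on the
child after SIGKILL looks roughly like:

```python
import signal
import subprocess
import time

def stop_process(proc, timeout=5):
    """Send SIGKILL and block until the process is reaped.

    Returns True if the process exited within 'timeout' seconds.
    If the target spawned children (e.g. inside a namespace via
    'ip netns exec'), killing only the parent may leave them
    running -- killing the whole process group is one workaround.
    """
    proc.send_signal(signal.SIGKILL)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if proc.poll() is not None:       # child has been reaped
            return True
        time.sleep(0.1)
    return False

proc = subprocess.Popen(['sleep', '60'])
stopped = stop_process(proc)
print(stopped)
```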

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558819

Title:
  Fullstack linux bridge agent sometimes refuses to die during test
  clean up, failing the test

Status in neutron:
  Confirmed

Bug description:
  Paste of failure:
  http://paste.openstack.org/show/491014/

  When looking at the LB agent logs, you start seeing RPC errors as
  neutron-server is unable to access the DB. What's happening is that
  fullstack times out trying to kill the LB agent and moves on to other
  clean ups. It deletes the DB for the test, but the agents and neutron-
  server live on, resulting in errors trying to access the DB. The DB
  errors are essentially unrelated - The root cause is that the agent
  refuses to die for an unknown reason.

  The code that tries to stop the agent is AsyncProcess.stop(block=True, 
signal=9).
  Another detail that might be relevant is that the agent lives in a namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557168] [NEW] Fullstack tests do not configure root_helper, use the default of 'sudo'

2016-03-14 Thread Assaf Muller
Public bug reported:

Fullstack tests should be similar to non-test environments, especially
when it's easy. We should use rootwrap and rootwrap-daemon and not plain
sudo.

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557168

Title:
  Fullstack tests do not configure root_helper, use the default of
  'sudo'

Status in neutron:
  In Progress

Bug description:
  Fullstack tests should be similar to non-test environments, especially
  when it's easy. We should use rootwrap and rootwrap-daemon and not
  plain sudo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555356] [NEW] Neutron too coupled with Tempest

2016-03-09 Thread Assaf Muller
Public bug reported:

I recently dropped Tempest 'plumbing' code from Neutron and imported it
instead from Tempest: https://review.openstack.org/#/c/269771/

Since then, we've had two instances of changes to Tempest breaking the Neutron 
API tests:
https://bugs.launchpad.net/neutron/+bug/1554362, and:
https://review.openstack.org/#/c/284911/

Neutron imports the following files from Tempest (That aren't a stable
interface e.g. in tempest.lib) that Manila and Ironic (As examples of
projects that have Tempest plugins) do not:

from tempest.common import cred_provider
from tempest.common import custom_matchers
from tempest.common import tempest_fixtures
from tempest import manager
from tempest.services.identity.v2.json.tenants_client import TenantsClient

Instead of waiting for the next breakage, we should go through each such import 
and figure out a strategy to stop using it:
1) Perhaps there is already equivalent code in tempest.lib?
2) If not, can we help move it to tempest.lib?
3) Failing that, we can always copy the needed functionality in to Neutron

** Affects: neutron
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555356

Title:
  Neutron too coupled with Tempest

Status in neutron:
  New

Bug description:
  I recently dropped Tempest 'plumbing' code from Neutron and imported
  it instead from Tempest: https://review.openstack.org/#/c/269771/

  Since then, we've had two instances of changes to Tempest breaking the 
Neutron API tests:
  https://bugs.launchpad.net/neutron/+bug/1554362, and:
  https://review.openstack.org/#/c/284911/

  Neutron imports the following files from Tempest (That aren't a stable
  interface e.g. in tempest.lib) that Manila and Ironic (As examples of
  projects that have Tempest plugins) do not:

  from tempest.common import cred_provider
  from tempest.common import custom_matchers
  from tempest.common import tempest_fixtures
  from tempest import manager
  from tempest.services.identity.v2.json.tenants_client import TenantsClient

  Instead of waiting for the next breakage, we should go through each such 
import and figure out a strategy to stop using it:
  1) Perhaps there is already equivalent code in tempest.lib?
  2) If not, can we help move it to tempest.lib?
  3) Failing that, we can always copy the needed functionality in to Neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548547] Re: Functional tests failing with FAIL: process-returncode

2016-03-07 Thread Assaf Muller
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548547

Title:
  Functional tests failing with FAIL: process-returncode

Status in neutron:
  Fix Released

Bug description:
  After https://review.openstack.org/#/c/277813/, we started seeing
  failures in the functional job. The root cause is that the patch is
  using self.addOnException, and it looks like if the method that is
  invoked on exception itself raises an exception, testr freaks out and
  fails the test. I think that in this particular case, the method
  (collect_debug_info) may be executed out of order, after test clean
  ups already occur so namespaces fixtures are already cleaned up.

  Example trace:
  http://paste.openstack.org/show/487818/

  Failure rate seems to be hovering around 80% in the last couple of
  days.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552960] [NEW] Tempest and Neutron duplicate tests

2016-03-03 Thread Assaf Muller
Public bug reported:

Problem statement:

1) Tests overlap between Tempest and Neutron repos - 264 tests last I checked. 
The effect is:
1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a test, 
where should it go? Someone just proposed a Tempest patch for a new Neutron 
API, what should I do with this patch?
1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.

Proposed solution:

For problem 1, we eliminate the overlap. We do this by defining a set of
tests that will live in Tempest, and another set of tests that will live
in Neutron. More information may be found here:
https://review.openstack.org/#/c/280427/. After deciding on the line in
the sand, we will delete any tests in Neutron that should continue to
live in Tempest. Some Neutron tests were modified after they were copied
from Tempest, these modifications will have to be examined and then
proposed to Tempest. Afterwards these tests may be removed from Neutron,
eliminating the overlap from the Neutron side. Once this is done,
overlapping tests may be deleted from Tempest.

For problem 2, we will develop a Neutron Tempest plugin. This will be
tracked in a separate bug. Note that there's already a patch for this up
for review: https://review.openstack.org/#/c/274023/

* The work is also being tracked here: https://etherpad.openstack.org/p
/neutron-tempest-defork

** Affects: neutron
 Importance: High
 Assignee: Assaf Muller (amuller)
 Status: In Progress

** Description changed:

  Problem statement:
  
  1) Tests overlap between Tempest and Neutron repos - 264 tests last I 
checked. The effect is:
  1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a 
test, where should it go? Someone just proposed a Tempest patch for a new 
Neutron API, what should I do with this patch?
  1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
  2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.
  
  Proposed solution:
  
  For problem 1, we eliminate the overlap. We do this by defining a set of
  tests that will live in Tempest, and another set of tests that will live
  in Neutron. More information may be found here:
  https://review.openstack.org/#/c/280427/. After deciding on the line in
  the sand, we will delete any tests in Neutron that should continue to
  live in Tempest. Some Neutron tests were modified after they were copied
  from Tempest, these modifications will have to be examined and then
  proposed to Tempest. Afterwards these tests may be removed from Neutron,
  eliminating the overlap from the Neutron side. Once this is done,
  overlapping tests may be deleted from Tempest.
  
  For problem 2, we will develop a Neutron Tempest plugin. This will be
  tracked in a separate bug. Note that there's already a patch for this up
  for review: https://review.openstack.org/#/c/274023/
+ 
+ * The work is also being tracked here: https://etherpad.openstack.org/p
+ /neutron-tempest-defork

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552960

Title:
  Tempest and Neutron duplicate tests

Status in neutron:
  In Progress

Bug description:
  Problem statement:

  1) Tests overlap between Tempest and Neutron repos - 264 tests last I 
checked. The effect is:
  1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a 
test, where should it go? Someone just proposed a Tempest patch for a new 
Neutron API, what should I do with this patch?
  1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
  2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.

  Proposed solution:

  For problem 1, we eliminate the overlap. We do this by defining a set
  of tests that will live in Tempest, and another set of tests that will
  live in Neutron. More information may be found here:
  https://review.openstack.org/#/c/280427/. After deciding on the line
  in the sand, we will delete any tests in Neutron that should continue
  to live in Tempest. Some Neutron tests were modified after they were
  copied from Tempest, these modifications will have to be examined and
  then proposed to Tempest. Afterwards these tests may be removed from
  Neutron, eliminating the overlap from the Neutron side. Once this is
  done, overlapping tests may be deleted from Tempest.

  For problem 2, we will develop a Neutron Tempest plugin. This will be
  tracked in a separate bug. Note that there's already a patch for this
  up for review: https://review.openstack.org/#/c/274023/

  * The work is also being tracked here: https://etherpad.openstack.org/p
  /neutron-tempest-defork

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1551288] [NEW] Fullstack native tests sometimes fail with an OVS agent failing to start with 'Address already in use' error

2016-02-29 Thread Assaf Muller
Public bug reported:

Example failure:
test_connectivity(VLANs,Native) fails with this error:

http://paste.openstack.org/show/488585/

wait_until_env_is_up is timing out, which typically means that the
expected number of agents failed to start. Indeed in this particular
example I saw this line being output repeatedly in neutron-server.log:

[29/Feb/2016 04:16:31] "GET /v2.0/agents.json HTTP/1.1" 200 1870
0.005458

Fullstack calls GET on agents to determine if the expected amount of
agents were started and are successfully reporting back to neutron-
server.

We then see that one of the three OVS agents crashed with this TRACE:
http://paste.openstack.org/show/488586/

This happens only with the native tests using the Ryu library.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack

** Tags added: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551288

Title:
  Fullstack native tests sometimes fail with an OVS agent failing to
  start with 'Address already in use' error

Status in neutron:
  New

Bug description:
  Example failure:
  test_connectivity(VLANs,Native) fails with this error:

  http://paste.openstack.org/show/488585/

  wait_until_env_is_up is timing out, which typically means that the
  expected number of agents failed to start. Indeed in this particular
  example I saw this line being output repeatedly in neutron-server.log:

  [29/Feb/2016 04:16:31] "GET /v2.0/agents.json HTTP/1.1" 200 1870
  0.005458

  Fullstack calls GET on agents to determine if the expected amount of
  agents were started and are successfully reporting back to neutron-
  server.

  We then see that one of the three OVS agents crashed with this TRACE:
  http://paste.openstack.org/show/488586/

  This happens only with the native tests using the Ryu library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548547] [NEW] Functional tests failing with FAIL: process-returncode

2016-02-22 Thread Assaf Muller
Public bug reported:

After https://review.openstack.org/#/c/277813/, we started seeing
failures in the functional job. The root cause is that the patch is
using self.addOnException, and it looks like if the method that is
invoked on exception itself raises an exception, testr freaks out and
fails the test. I think that in this particular case, the method
(collect_debug_info) may be executed out of order, after test clean ups
already occur so namespaces fixtures are already cleaned up.

Example trace:
http://paste.openstack.org/show/487818/

Failure rate seems to be hovering around 80% in the last couple of days.
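
One defensive pattern (illustrative only -- 'collect_debug_info' is the
name from the report, the wrapper is not actual Neutron code) is to make
the on-exception handler swallow its own failures so they can't cascade
into a process-returncode failure:

```python
def safe_on_exception(collector):
    """Wrap an addOnException handler so that its own failures
    cannot fail an otherwise-passing test run."""
    def wrapper(exc_info):
        try:
            collector(exc_info)
        except Exception as err:
            # Log and swallow: losing debug info is better than
            # turning a real failure into a confusing testr crash.
            print('debug collection failed: %s' % err)
    return wrapper

@safe_on_exception
def collect_debug_info(exc_info):
    # Simulates the handler blowing up because the namespace
    # fixture was already cleaned up.
    raise RuntimeError('namespace already cleaned up')

collect_debug_info(None)  # exception is swallowed
```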

** Affects: neutron
 Importance: Critical
 Assignee: Assaf Muller (amuller)
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548547

Title:
  Functional tests failing with FAIL: process-returncode

Status in neutron:
  Confirmed

Bug description:
  After https://review.openstack.org/#/c/277813/, we started seeing
  failures in the functional job. The root cause is that the patch is
  using self.addOnException, and it looks like if the method that is
  invoked on exception itself raises an exception, testr freaks out and
  fails the test. I think that in this particular case, the method
  (collect_debug_info) may be executed out of order, after test cleanups
  have already occurred, so the namespace fixtures are already gone.

  Example trace:
  http://paste.openstack.org/show/487818/

  Failure rate seems to be hovering around 80% in the last couple of
  days.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548547/+subscriptions



[Yahoo-eng-team] [Bug 1546668] [NEW] Auto allocate topology masks error when executing without a default subnetpool

2016-02-17 Thread Assaf Muller
Public bug reported:

How to reproduce:

Create an external network with is_default = True
Run 'neutron auto-allocated-topology-show', actual output:
Deployment error: Unable to provide tenant private network.

From neutron-server.log:
No default pools available
Unable to auto allocate topology for tenant 760c003239354f45a41caccd0af7ab42 
due to missing requirements, e.g. default or shared subnetpools

Looking at the code, when auto allocating a tenant network it catches a
bunch of different errors when creating a subnet, logs the error above,
then raises another exception, losing the reason for the failure.

The expected output would be something like:
'Cannot auto allocate a topology without a default subnetpool configured'
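One generic way to avoid losing the reason (a sketch, not the actual neutron
code; the names here are hypothetical) is to fold the root cause into the
re-raised exception:

```python
class AutoAllocationFailure(Exception):
    """Raised when a tenant topology cannot be auto-allocated."""


def allocate_tenant_network(create_subnet):
    """Attempt subnet creation, preserving the root cause on failure."""
    try:
        return create_subnet()
    except Exception as exc:
        # Attach the underlying error instead of masking it behind a
        # generic "Unable to provide tenant private network" message.
        raise AutoAllocationFailure(
            "Unable to provide tenant private network: %s" % exc)
```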

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: auto-allocated-topology

** Tags added: auto-allocated-topology

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546668

Title:
  Auto allocate topology masks error when executing without a default
  subnetpool

Status in neutron:
  New

Bug description:
  How to reproduce:

  Create an external network with is_default = True
  Run 'neutron auto-allocated-topology-show', actual output:
  Deployment error: Unable to provide tenant private network.

  From neutron-server.log:
  No default pools available
  Unable to auto allocate topology for tenant 760c003239354f45a41caccd0af7ab42 
due to missing requirements, e.g. default or shared subnetpools

  Looking at the code, when auto allocating a tenant network it catches
  a bunch of different errors when creating a subnet, logs the error
  above, then raises another exception, losing the reason for the
  failure.

  The expected output would be something like:
  'Cannot auto allocate a topology without a default subnetpool configured'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546668/+subscriptions



[Yahoo-eng-team] [Bug 1544999] Re: Encountered database ACL.", "error":"referential integrity violation" running OVN stack

2016-02-12 Thread Assaf Muller
** Project changed: neutron => networking-ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544999

Title:
  Encountered database  ACL.","error":"referential integrity violation"
  running OVN stack

Status in networking-ovn:
  Confirmed

Bug description:
  Scenario tested: a tenant has 2 networks, each with a subnet, connected 
through a router. Each tenant gets two VMs, one in each network. When these two 
VMs boot up, they send iperf traffic across the router.
  Scale tenants to 500 and measure scaling performance.

  Results: Scaled up to 330 vms successfully and failed to boot VMs
  beyond 330VMs.

  The neutron servers, including the plugin, were pegged at 100%, and we see 
this database integrity violation:
  2016-01-28 04:55:04.723 2757 WARNING requests.packages.urllib3.connectionpool 
[req-7964fc18-a95c-4464-a568-038a453d006e 7a0b7c6414734a93b5dffbc666534690 
e53d6cd40c5140c58ec014eb56070917 - - -] Connection pool is full, discarding 
connection: identity.open.softlayer.com
  2016-01-28 04:55:04.772 2757 WARNING requests.packages.urllib3.connectionpool 
[-] Connection pool is full, discarding connection: identity.open.softlayer.com
  2016-01-28 04:55:05.776 2757 WARNING requests.packages.urllib3.connectionpool 
[req-6b80241f-0928-4d93-a80e-988a7b3e9690 7a0b7c6414734a93b5dffbc666534690 
e53d6cd40c5140c58ec014eb56070917 - - -] Connection pool is full, discarding 
connection: identity.open.softlayer.com
  2016-01-28 04:55:55.380 2757 ERROR neutron.agent.ovsdb.impl_idl [-] OVSDB 
Error:
  {"details":"Table Logical_Switch column acls row 
36d940b2-26cc-426a-bda6-dd2491f18397 references nonexistent row 
0644cffd-71ca-467f-8e1d-6652968870ef in table ACL.","error":"referential 
integrity violation"}

  2016-01-28 04:55:55.443 2757 ERROR neutron.agent.ovsdb.impl_idl 
[req-0bd44851-ecd9-4957-978f-350b52ada25b cc27c50b17fc4954905db5f3f3eed730 
e53d6cd40c5140c58ec014eb56070917 - - -] Traceback (most recent call last):
  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/agent/ovsdb/native/connection.py",
 line 99, in run
  txn.results.put(txn.do_commit())
  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py", 
line 106, in do_commit
  raise RuntimeError(msg)
  RuntimeError: OVSDB Error:
  {"details":"Table Logical_Switch column acls row 
36d940b2-26cc-426a-bda6-dd2491f18397 references nonexistent row 
0644cffd-71ca-467f-8e1d-6652968870ef in table ACL.","error":"referential 
integrity violation"} 

  This problem occurs consistently scaling to 500 VMs.

  Also have seen this problem occurring in a 5 node devstack
  installation.  In this case we increased the Neutron api threads to 18
  to recreate this problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1544999/+subscriptions



[Yahoo-eng-team] [Bug 1538572] Re: Neutron api test fails with raise TypeError("Invalid credentials")

2016-01-27 Thread Assaf Muller
Fixed with https://review.openstack.org/#/c/263400/.

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1538572

Title:
  Neutron api test fails with raise TypeError("Invalid credentials")

Status in neutron:
  Fix Released

Bug description:
  Neutron api test case fails with below error

  2016-01-27 14:16:18.979 8049 WARNING oslo_config.cfg [-] Option 
"allow_tenant_isolation" from group "compute" is deprecated. Use option 
"allow_tenant_isolation" from group "auth".
  v2
  http://10.11.6.32:5000/v2.0/
  {'username': None, 'tenant_name': None, 'password': None, 'user_id': None, 
'tenant_id': None}
  ==
  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
  self.setUp()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
  self.setupContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in 
setupContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 471, in try_run
  return func()
File "/opt/stack/neutron/neutron/tests/tempest/test.py", line 260, in 
setUpClass
  cls.resource_setup()
File "/opt/stack/neutron/neutron/tests/api/test_networks.py", line 65, in 
resource_setup
  super(NetworksTestJSON, cls).resource_setup()
File "/opt/stack/neutron/neutron/tests/api/base.py", line 65, in 
resource_setup
  os = cls.get_client_manager()
File "/opt/stack/neutron/neutron/tests/tempest/test.py", line 391, in 
get_client_manager
  force_tenant_isolation=force_tenant_isolation,
File "/opt/stack/neutron/neutron/tests/tempest/common/credentials.py", line 
37, in get_isolated_credentials
  network_resources=network_resources)
File "/opt/stack/neutron/neutron/tests/tempest/common/isolated_creds.py", 
line 39, in __init__
  self._get_admin_clients())
File "/opt/stack/neutron/neutron/tests/tempest/common/isolated_creds.py", 
line 48, in _get_admin_clients
  os = clients.AdminManager()
File "/opt/stack/neutron/neutron/tests/api/clients.py", line 115, in 
__init__
  'identity_admin'),
File "/opt/stack/neutron/neutron/tests/tempest/common/cred_provider.py", 
line 64, in get_configured_credentials
  credentials = get_credentials(fill_in=fill_in, **params)
File "/opt/stack/neutron/neutron/tests/tempest/common/cred_provider.py", 
line 93, in get_credentials
  **params)
File "/opt/stack/neutron/neutron/tests/tempest/auth.py", line 489, in 
get_credentials
  ca_certs=ca_certs, trace_requests=trace_requests)
File "/opt/stack/neutron/neutron/tests/tempest/auth.py", line 188, in 
__init__
  super(KeystoneAuthProvider, self).__init__(credentials)
File "/opt/stack/neutron/neutron/tests/tempest/auth.py", line 42, in 
__init__
  raise TypeError("Invalid credentials")
  TypeError: Invalid credentials

  --
  Ran 0 tests in 0.016s

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1538572/+subscriptions



[Yahoo-eng-team] [Bug 1533013] [NEW] L2pop raises exception when deleting an unbound port

2016-01-11 Thread Assaf Muller
Public bug reported:

Some brilliant individual introduced a regression during a refactor
(https://review.openstack.org/#/c/263471/) that causes an exception to
be raised when an unbound port is deleted. For example:

neutron port-create --name=port some_network
neutron port-delete port
Deleted port: port

In the neutron-server log we can see:
http://paste.openstack.org/show/483517/

Apart from the scary TRACE there are no real implications. What should
have happened is an early return: the l2pop mech driver shouldn't be
doing anything in this case, and instead it's spamming the log with
bogus information.
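The intended early return can be sketched as follows (hypothetical function
and field names, shaped after ML2 mech-driver conventions; not the actual
patch):

```python
def delete_port_postcommit_sketch(port, fdb_entries_for):
    """Skip l2pop processing for ports that were never bound.

    An unbound port has no agent/host association, so there are no
    forwarding entries to withdraw; returning early avoids the spurious
    TRACE in neutron-server.log when such a port is deleted.
    """
    if port.get("binding:host_id") in (None, ""):
        return None  # unbound: nothing for l2pop to do
    return fdb_entries_for(port)
```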

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: l2-pop

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533013

Title:
  L2pop raises exception when deleting an unbound port

Status in neutron:
  In Progress

Bug description:
  Some brilliant individual introduced a regression during a refactor
  (https://review.openstack.org/#/c/263471/) that causes an exception to
  be raised when an unbound port is deleted. For example:

  neutron port-create --name=port some_network
  neutron port-delete port
  Deleted port: port

  In the neutron-server log we can see:
  http://paste.openstack.org/show/483517/

  Apart from the scary TRACE there are no real implications. What should
  have happened is an early return: the l2pop mech driver shouldn't be
  doing anything in this case, and instead it's spamming the log with
  bogus information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533013/+subscriptions



[Yahoo-eng-team] [Bug 1532921] Re: networking-zvm has broken setup.cfg

2016-01-11 Thread Assaf Muller
Added networking-zvm as an affected component.

** Also affects: networking-zvm
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1532921

Title:
  networking-zvm has broken setup.cfg

Status in networking-zvm:
  New
Status in neutron:
  Invalid

Bug description:
  networking-zvm has in setup.cfg:

  [files]
  packages =
  neutron/plugins/zvm
  neutron/plugins/ml2/drivers/zvm

  
  It should have *one* package entry only and it is not allowed to use a "/".
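  A corrected stanza would look roughly like this (assuming the usual pbr
  convention of listing the top-level package only; the exact entry depends
  on how networking-zvm lays out its tree):

```ini
[files]
packages =
    neutron
```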

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-zvm/+bug/1532921/+subscriptions



[Yahoo-eng-team] [Bug 1530038] Re: when external_network_bridge is set to br-ex ovs does not create flow to EXT access

2016-01-11 Thread Assaf Muller
This is a documentation issue. Since the Icehouse days I believe,
Neutron allows you to configure external connectivity in one of two
ways:

1) Configure the external bridge name in l3_agent.ini. This means that
'qg' external router devices are plugged directly in to the bridge, the
OVS agent doesn't manage the bridge nor does it set up flows in case the
segmentation type for the external network is 'vlan'. This means that
you must set up a Linux VLAN device over the bridge with the external
network's VLAN ID.

2) Set the external bridge name in l3_agent.ini to ''. Configure bridge
mappings in the OVS agent so that it maps the external network's
physical_network to a bridge (Typically still called br-ex). Then the
OVS agent manages the bridge and sets up flows for VLAN translation.
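In configuration terms, option 2 looks roughly like this (illustrative
values; the physical_network name and file paths vary by deployment):

```ini
# /etc/neutron/l3_agent.ini -- leave the external bridge unset
[DEFAULT]
external_network_bridge =

# OVS agent configuration -- map the external network's physical_network
# to the bridge; the OVS agent then manages br-ex and installs the VLAN
# translation flows itself
[ovs]
bridge_mappings = physnet:br-ex
```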

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: ugvddm (271025598-9) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1530038

Title:
  when external_network_bridge is set to  br-ex ovs does not create flow
  to EXT access

Status in neutron:
  Invalid

Bug description:
  Under /etc/neutron/l3_agent.ini, when configuring external_network_bridge =
br-ex, OVS does not create a flow for this VLAN, so we get no external access.

  version : # rpm -qa |grep neutron 
  openstack-neutron-7.0.1-2.el7ost.noarch
  python-neutronclient-3.1.0-1.el7ost.noarch
  python-neutron-7.0.1-2.el7ost.noarch
  openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
  openstack-neutron-common-7.0.1-2.el7ost.noarch
  openstack-neutron-ml2-7.0.1-2.el7ost.noarch
  [root@puma06 ~(keystone_admin)]# rpm -qa |grep openvswitch 
  python-openvswitch-2.4.0-1.el7.noarch
  openstack-neutron-openvswitch-7.0.1-2.el7ost.noarch
  openvswitch-2.4.0-1.el7.x86_64

  Installed by packstack OSP-8

  Steps to reproduce: 
  1. Deploy with packstack OSP-8, set the EXT bridge parameter -->
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
  2. setup ENV : 
  # neutron net-create external_network --provider:network_type=vlan  
--provider:segmentation_id=181 --provider:physical_network physnet 
--router:external
  # neutron subnet-create external_network 10.35.166.0/24 --disable-dhcp 
--gateway 10.35.166.254  --allocation-pool start=10.35.166.1,end=10.35.166.100 
  # neutron net-create int_net 
  # neutron subnet-create int_net 192.168.1.0/24 --dns_nameservers list=true 
10.35.28.28 --name int_sub
  # neutron router-create Router_eNet
  # neutron router-interface-add Router_eNet subnet=int_sub
  # neutron router-gateway-set Router_eNet external_network

  3. ovs-ofctl dump-flows br-ex
  NXST_FLOW reply (xid=0x4):
   cookie=0x0, duration=57707.344s, table=0, n_packets=21, n_bytes=1638, 
idle_age=60, priority=2,in_port=2 actions=drop
   cookie=0x0, duration=57707.394s, table=0, n_packets=1233857, 
n_bytes=85172234, idle_age=0, priority=0 actions=NORMAL

  try to ping EXT network (8.8.8.8)  -- no connectivity
  4. When we change external_network_bridge = provider and create the ENV 
again (new router & new router ports), I get EXT access and the flow is 
created: 
  # ovs-ofctl dump-flows br-ex
  NXST_FLOW reply (xid=0x4):
   cookie=0x0, duration=105.955s, table=0, n_packets=5, n_bytes=402, 
idle_age=97, priority=4,in_port=2,dl_vlan=2 actions=mod_vlan_vid:181,NORMAL
   cookie=0x0, duration=58222.490s, table=0, n_packets=39, n_bytes=3174, 
idle_age=105, priority=2,in_port=2 actions=drop
   cookie=0x0, duration=58222.540s, table=0, n_packets=1244856, 
n_bytes=85931346, idle_age=0, priority=0 actions=NORMAL

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1530038/+subscriptions



[Yahoo-eng-team] [Bug 1532921] Re: networking-zvm has broken setup.cfg

2016-01-11 Thread Assaf Muller
This bug doesn't require a patch from the Neutron repo so the affected
component should not be Neutron, it should be networking-zvm, but I
couldn't find a launchpad component for it.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1532921

Title:
  networking-zvm has broken setup.cfg

Status in neutron:
  Invalid

Bug description:
  networking-zvm has in setup.cfg:

  [files]
  packages =
  neutron/plugins/zvm
  neutron/plugins/ml2/drivers/zvm

  
  It should have *one* package entry only and it is not allowed to use a "/".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1532921/+subscriptions



[Yahoo-eng-team] [Bug 1528977] Re: Neutron router not working with latest iproute2 package included in CentOS-7.2-1511

2015-12-28 Thread Assaf Muller
*** This bug is a duplicate of bug 1497309 ***
https://bugs.launchpad.net/bugs/1497309

Closing as a duplicate of bug 1497309.

** This bug has been marked a duplicate of bug 1497309
   l3-agent unable to parse output from ip netns list (iproute2 >= 4.0)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528977

Title:
  Neutron router not working with latest iproute2 package included in
  CentOS-7.2-1511

Status in neutron:
  Confirmed

Bug description:
  Seems that something has been changed in the new iproute version and
  now attempts to add more than one interface to router cause errors
  posted at the bottom. This affects neutron-l3-agent on CentOS-7.2-1511
  and possible on RedHat (cannot check this).

  Quick solution is to simple downgrade the package:
  # wget 
http://mirror.centos.org/centos/7.1.1503/os/x86_64/Packages/iproute-3.10.0-21.el7.x86_64.rpm
  # yum -y downgrade ./iproute-3.10.0-21.el7.x86_64.rpm

  Part of the /var/log/neutron/l3-agent.log:

  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.linux.utils [-]
  Command: ['ip', 'netns', 'add', 
u'qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: Cannot create namespace file 
"/var/run/netns/qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a": File exists

  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info [-]
  Command: ['ip', 'netns', 'add', 
u'qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: Cannot create namespace file 
"/var/run/netns/qrouter-ec62eace-0415-49b5-9c26-dca1677ba85a": File exists
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 356, in call
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 692, 
in process
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 396, 
in _process_internal_ports
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self.internal_network_added(p)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 328, 
in internal_network_added
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
INTERNAL_DEV_PREFIX)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 303, 
in _internal_network_added
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
prefix=prefix)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 252, 
in plug
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info bridge, 
namespace, prefix)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 483, 
in plug_new
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
namespace2=namespace)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 144, in 
add_veth
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self.ensure_namespace(namespace2)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 163, in 
ensure_namespace
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info ip = 
self.netns.add(name)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 793, in 
add
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
self._as_root([], ('add', name), use_root_namespace=True)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 280, in 
_as_root
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 80, in 
_as_root
  2015-12-24 01:35:05.794 6343 ERROR neutron.agent.l3.router_info 

[Yahoo-eng-team] [Bug 1497444] Re: Functional tests possibly leaving a network namespace around

2015-12-19 Thread Assaf Muller
test_ha_router_failover does use mock to manipulate NS names, but I ran
the L3 functional tests and used 'watch ip netns', and all namespace
names were constructed correctly and didn't have any weird 'MagicMock'
substrings in 'em. Something must have changed since this bug was
reported.
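What a leaked MagicMock in a namespace name would look like can be shown in
isolation (purely illustrative; not the actual test code):

```python
from unittest import mock

# Interpolating a raw MagicMock into a namespace name produces garbage
# rather than a valid name; a mock configured with a concrete string
# return value keeps the name well-formed.
fake_suffix = mock.MagicMock()
bad_name = "qrouter-%s" % fake_suffix
good_name = "qrouter-%s" % "agent1"
```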

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497444

Title:
  Functional tests possibly leaving a network namespace around

Status in neutron:
  Invalid

Bug description:
  After running the functional tests (dvsm-functional) I noticed I was
  unable to run 'ip netns', it was giving me an error something like:

  @agent1

  I was confused until I noticed a failure in a review:

  | ==
  | Failed 1 tests - output below:
  | ==
  | 
neutron.tests.functional.agent.test_l3_agent.L3HATestFramework.test_ha_router_failover
  | 
--
  | 
  | Captured traceback:
  | ~~~
  | Traceback (most recent call last):
  |   File "neutron/tests/functional/agent/test_l3_agent.py", line 795, in 
test_ha_router_failover
  | router1 = self.manage_router(self.agent, router_info)
  |   File "neutron/tests/functional/agent/test_l3_agent.py", line 133, in 
manage_router
  | agent._process_added_router(router)
  |   File "neutron/agent/l3/agent.py", line 446, in _process_added_router
  | self._router_added(router['id'], router)
  |   File "neutron/agent/l3/agent.py", line 335, in _router_added
  | ri.initialize(self.process_monitor)
  |   File "neutron/agent/l3/ha_router.py", line 87, in initialize
  | self.ha_network_added()
  |   File "neutron/agent/l3/ha_router.py", line 147, in ha_network_added
  | prefix=HA_DEV_PREFIX)
  |   File "neutron/agent/linux/interface.py", line 252, in plug
  | bridge, namespace, prefix)
  |   File "neutron/agent/linux/interface.py", line 346, in plug_new
  | namespace_obj = ip.ensure_namespace(namespace)
  |   File "neutron/agent/linux/ip_lib.py", line 164, in ensure_namespace
  | ip = self.netns.add(name)
  |   File "neutron/agent/linux/ip_lib.py", line 794, in add
  | self._as_root([], ('add', name), use_root_namespace=True)
  |   File "neutron/agent/linux/ip_lib.py", line 281, in _as_root
  | use_root_namespace=use_root_namespace)
  |   File "neutron/agent/linux/ip_lib.py", line 81, in _as_root
  | log_fail_as_error=self.log_fail_as_error)
  |   File "neutron/agent/linux/ip_lib.py", line 90, in _execute
  | log_fail_as_error=log_fail_as_error)
  |   File "neutron/agent/linux/utils.py", line 160, in execute
  | raise RuntimeError(m)
  | RuntimeError: 
  | Command: ['ip', 'netns', 'add', "@agent1"]
  | Exit code: 1
  | Stdin: 
  | Stdout: 
  | Stderr: Cannot not create namespace file "/var/run/netns/@agent1": File exists

  That is from http://logs.openstack.org/06/225206/2/check/gate-neutron-
  dsvm-functional/9ca87f0/console.html

  So it looks like a functional test is either creating a network
  namespace and not cleaning it up, or doing something else horribly
  wrong.

  There is a test at cmd/test_netns_cleanup.py that uses mock and
  namespaces, but it wasn't obvious to me that it was the culprit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497444/+subscriptions



[Yahoo-eng-team] [Bug 1460494] Re: neutron-ovs-cleanup failing to start with ovs bonding

2015-12-16 Thread Assaf Muller
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460494

Title:
  neutron-ovs-cleanup failing to start with ovs bonding

Status in neutron:
  Fix Released

Bug description:
  System is running Openstack Kilo:

  [root@network01 ~]# cat /etc/redhat-release
  CentOS Linux release 7.1.1503 (Core)

  [root@network01 ~]# uname -r
  3.10.0-229.4.2.el7.x86_64

  [root@network01 ~]# rpm -qa|grep neutron
  openstack-neutron-ml2-2015.1.0-1.el7.noarch
  openstack-neutron-common-2015.1.0-1.el7.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7.noarch
  python-neutronclient-2.3.11-1.el7.noarch
  python-neutron-2015.1.0-1.el7.noarch
  openstack-neutron-2015.1.0-1.el7.noarch

  Issue is with python-neutron-2015.1.0-1.el7.noarch line 316 of:

  /usr/lib/python2.7/site-packages/neutron/agent/common/ovs_lib.py

  The get_vif_ports function expects that each OVS port has an Interface
  row with a matching name, i.e. OVS port eth0 must have an OVS Interface
  named eth0. This does not work for bonded interfaces created within OVS:
  for example, port bond2 may have Interface eth0 and Interface eth1.
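  A defensive approach is to enumerate interface names rather than assume a
  same-named Interface row exists (a hypothetical helper using the real
  `ovs-vsctl list-ifaces` subcommand; not the actual neutron fix):

```python
import subprocess


def bridge_interface_names(bridge, run=subprocess.check_output):
    """List the Interface rows on a bridge via `ovs-vsctl list-ifaces`.

    Enumerating interfaces per bridge, instead of looking up an Interface
    row by each port's own name, is what keeps an OVS bond (port bond2 ->
    interfaces eth0, eth1) from triggering 'no row "bond2" in table
    Interface'.  The `run` parameter is injectable for testing.
    """
    out = run(["ovs-vsctl", "--timeout=10", "list-ifaces", bridge])
    return out.decode().split()
```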

  2015-05-29 16:53:06.034 2833 INFO neutron.common.config [-] Logging enabled!
  2015-05-29 16:53:06.039 2833 INFO neutron.common.config [-] 
/usr/bin/neutron-ovs-cleanup version 2015.1.0
  2015-05-29 16:53:06.502 2833 ERROR neutron.agent.ovsdb.impl_vsctl [-] Unable 
to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=external_ids', 'list', 'Interface', 'bond2'].
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl Traceback 
(most recent call last):
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl   File 
"/usr/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_vsctl.py", line 63, 
in run_vsctl
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl 
log_fail_as_error=False).rstrip()
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 137, in 
execute
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl raise 
RuntimeError(m)
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl 
RuntimeError:
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', '--oneline', '--format=json', '--', '--columns=external_ids', 
'list', 'Interface', 'bond2']
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl Exit code: 1
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl Stdin:
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl Stdout:
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl Stderr: 
ovs-vsctl: no row "bond2" in table Interface
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl
  2015-05-29 16:53:06.502 2833 TRACE neutron.agent.ovsdb.impl_vsctl
  2015-05-29 16:53:06.503 2833 CRITICAL neutron [-] RuntimeError:
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=external_ids', 'list', 'Interface', 'bond2']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: ovs-vsctl: no row "bond2" in table Interface

  2015-05-29 16:53:06.503 2833 TRACE neutron Traceback (most recent call last):
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 
"/usr/bin/neutron-ovs-cleanup", line 10, in 
  2015-05-29 16:53:06.503 2833 TRACE neutron sys.exit(main())
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/cmd/ovs_cleanup.py", line 100, in main
  2015-05-29 16:53:06.503 2833 TRACE neutron ports = 
collect_neutron_ports(available_configuration_bridges)
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/cmd/ovs_cleanup.py", line 60, in 
collect_neutron_ports
  2015-05-29 16:53:06.503 2833 TRACE neutron ports += [port.port_name for 
port in ovs.get_vif_ports()]
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/common/ovs_lib.py", line 316, 
in get_vif_ports
  2015-05-29 16:53:06.503 2833 TRACE neutron check_error=True)
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/common/ovs_lib.py", line 139, 
in db_get_val
  2015-05-29 16:53:06.503 2833 TRACE neutron check_error=check_error)
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 
"/usr/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_vsctl.py", line 83, 
in execute
  2015-05-29 16:53:06.503 2833 TRACE neutron txn.add(self)
  2015-05-29 16:53:06.503 2833 TRACE neutron   File 

[Yahoo-eng-team] [Bug 1397087] Re: Bulk port create fails with conflict with some addresses fixed

2015-12-16 Thread Assaf Muller
Confirmed bug.

I don't see how this is an 'opinion', Neutron is clearly screwing up
here.

** Changed in: neutron
   Status: Opinion => Confirmed

** Tags added: l3-ipam-dhcp

** Changed in: neutron
   Importance: Medium => Undecided

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397087

Title:
  Bulk port create fails with conflict with some addresses fixed

Status in neutron:
  Confirmed

Bug description:
  In the bulk version of the port create request, multiple port
  creations may be requested.

  If a port is requested without a fixed_ip address, one is assigned to
  it automatically. If a later port in the same request asks for that
  same address, a conflict is detected and raised. Whether the overall
  call succeeds or fails depends on which addresses from the pool are
  set to be assigned next, and on the order of the requested ports.

  Steps to reproduce:

  # neutron net-create test_fixed_ports
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e |
  | name                      | test_fixed_ports                     |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 4                                    |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tenant_id                 | d42d65485d674e0a9d007a06182e46f7     |
  +---------------------------+--------------------------------------+

  # neutron subnet-create test_fixed_ports 10.0.0.0/24
  Created a new subnet:
  +------------------+---------------------------------------------+
  | Field            | Value                                       |
  +------------------+---------------------------------------------+
  | allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"}  |
  | cidr             | 10.0.0.0/24                                 |
  | dns_nameservers  |                                             |
  | enable_dhcp      | True                                        |
  | gateway_ip       | 10.0.0.1                                    |
  | host_routes      |                                             |
  | id               | 5369fb82-8ff6-4ec5-acf7-1d86d0ec9d2a        |
  | ip_version       | 4                                           |
  | name             |                                             |
  | network_id       | af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e        |
  | tenant_id        | d42d65485d674e0a9d007a06182e46f7            |
  +------------------+---------------------------------------------+

  # cat ports.data
  {"ports": [
  {
  "name": "A",
  "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
  }, {
  "fixed_ips": [{"ip_address": "10.0.0.2"}],
  "name": "B",
  "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
  }
  ]}

  # TOKEN='a valid keystone token'

  # curl -H 'Content-Type: application/json' -H 'X-Auth-Token: '$TOKEN -X POST 
"http://127.0.1.1:9696/v2.0/ports" -d @ports.data
  {"NeutronError": {"message": "Unable to complete operation for network 
af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e. The IP address 10.0.0.2 is in use.", 
"type": "IpAddressInUse", "detail": ""}}

  Positive case:

  # cat ports.data.rev
  {"ports": [
  {
  "name": "A",
  "fixed_ips": [{"ip_address": "10.0.0.2"}],
  "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
  }, {
  "name": "B",
  "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e"
  }
  ]}

  # curl -H 'Content-Type: application/json' -H 'X-Auth-Token: '$TOKEN -X POST 
"http://127.0.1.1:9696/v2.0/ports" -d @ports.data.rev
  {"ports": [{"status": "DOWN", "binding:host_id": "", "name": "A", 
"admin_state_up": true, "network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e", 
"tenant_id": "7b3e2f49d1fc4154ac5af10a4b9862c5", "binding:vif_details": {}, 
"binding:vnic_type": "normal", "binding:vif_type": "unbound", "device_owner": 
"", "mac_address": "fa:16:3e:16:1e:50", "binding:profile": {}, "fixed_ips": 
[{"subnet_id": "5369fb82-8ff6-4ec5-acf7-1d86d0ec9d2a", "ip_address": 
"10.0.0.2"}], "id": "75f5cdb7-5884-4583-9db1-73b946f94a04", "device_id": ""}, 
{"status": "DOWN", "binding:host_id": "", "name": "B", "admin_state_up": true, 
"network_id": "af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e", "tenant_id": 
"7b3e2f49d1fc4154ac5af10a4b9862c5", 

[Yahoo-eng-team] [Bug 1526559] [NEW] L3 agent parallel configuration of routers might slow things down

2015-12-15 Thread Assaf Muller
Public bug reported:

In the L3 agent's _process_routers_loop method, it spawns a GreenPool
with 8 eventlet threads. Those threads then take updates off the agent's
queue and process router updates. Router updates are serialized by
router_id so that two threads don't process the same router at any given
time.

In an environment running on a powerful baremetal server, on agent
restart it was trying to sync roughly 600 routers. Around half were HA
routers, and half were legacy routers. With the default GreenPool size
of 8, the result was that the server ground to a halt as CPU usage
skyrocketed to over 600%. The main offenders were ip, bash, keepalived
and Python. This was on an environment without rootwrap daemon based off
stable/juno. It took around 60 seconds to configure a single router.
Changing the GreenPool size from 8 to 1 caused the agent to:

1) Configure a router in 30 seconds, a 50% improvement.
2) Reduce CPU load from 600% to 70%, freeing the machine to do other things.

I'm filing this bug so that:

1) Someone can confirm my personal experience in a more controlled way - For 
example, graph router configuration time and CPU load as a result of GreenPool 
size.
2) If my findings are confirmed on master with rootwrap daemon, start 
considering alternatives like multiprocessing instead of eventlet 
multithreading, or at the very least optimize the GreenPool size.
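
The worker-pool structure can be sketched with stdlib threads standing
in for eventlet greenthreads (pool_size is the knob in question;
everything else here is illustrative, not the agent's actual code):

```python
import queue
import threading

def process_updates(updates, pool_size):
    """Drain router updates with pool_size concurrent workers.

    In the real agent each worker would configure a router (namespaces,
    iptables, keepalived for HA), serialized per router_id; here the
    point is only that pool_size controls the degree of parallelism
    that, per the report, oversubscribed the host at its default of 8.
    """
    q = queue.Queue()
    for update in updates:
        q.put(update)
    done = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                update = q.get_nowait()
            except queue.Empty:
                return
            with lock:
                done.append(update)  # stand-in for configuring a router

    threads = [threading.Thread(target=worker) for _ in range(pool_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

print(len(process_updates(range(600), pool_size=1)))  # 600
```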

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp loadimpact

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526559

Title:
  L3 agent parallel configuration of routers might slow things down

Status in neutron:
  New

Bug description:
  In the L3 agent's _process_routers_loop method, it spawns a GreenPool
  with 8 eventlet threads. Those threads then take updates off the
  agent's queue and process router updates. Router updates are
  serialized by router_id so that two threads don't process the same
  router at any given time.

  In an environment running on a powerful baremetal server, on agent
  restart it was trying to sync roughly 600 routers. Around half were HA
  routers, and half were legacy routers. With the default GreenPool size
  of 8, the result was that the server ground to a halt as CPU usage
  skyrocketed to over 600%. The main offenders were ip, bash, keepalived
  and Python. This was on an environment without rootwrap daemon based
  off stable/juno. It took around 60 seconds to configure a single
  router. Changing the GreenPool size from 8 to 1 caused the agent to:

  1) Configure a router in 30 seconds, a 50% improvement.
  2) Reduce CPU load from 600% to 70%, freeing the machine to do other things.

  I'm filing this bug so that:

  1) Someone can confirm my personal experience in a more controlled way - For 
example, graph router configuration time and CPU load as a result of GreenPool 
size.
  2) If my findings are confirmed on master with rootwrap daemon, start 
considering alternatives like multiprocessing instead of eventlet 
multithreading, or at the very least optimize the GreenPool size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523999] [NEW] Any error in L3 agent after external gateway is configured but before the local cache is updated results in errors in subsequent router updates

2015-12-08 Thread Assaf Muller
Public bug reported:

Reproduction:
* Create a new router
* Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).

Any follow up update to the router (Add/remove interface, add/remove
FIP) will fail non-idempotent operations on the external device. This is
because any update will try to add the gateway again (Because
self.ex_gw_port = None). Even without a specific failure, reconfiguring
the external device is wasteful.

HA routers in particular will fail by throwing
VIPDuplicateAddressException for the external device's VIP. This
behavior was actually changed in a recent Mitaka patch
(https://review.openstack.org/#/c/196893/50/neutron/agent/l3/ha_router.py),
so this affects Juno to Liberty but not master and future releases.

The impact on legacy or distributed routers is less severe as their
process_external and routes_updated seem to be idempotent - Verified
against master via a makeshift functional test, I could not vouch for
previous releases.

Severity: It's severe for HA routers from Juno to Liberty, but not as
much for other router types or HA routers on master.
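
A minimal model of the failure mode (attribute and method names loosely
follow the agent code; everything else is a hypothetical sketch):

```python
class RouterProcessor:
    """Toy model of the non-idempotent gateway-add bug.

    external_gateway_added() is not idempotent; the agent decides
    whether to call it by checking its local cache (ex_gw_port). If a
    later step fails after the gateway is plugged but before the cache
    is updated, every subsequent router update re-plugs the gateway.
    """

    def __init__(self):
        self.ex_gw_port = None
        self.gateway_adds = 0

    def external_gateway_added(self, port):
        # Duplicate calls are what cause e.g. the HA router's
        # VIPDuplicateAddressException.
        self.gateway_adds += 1

    def process(self, gw_port, fail_before_cache_update=False):
        if self.ex_gw_port is None:
            self.external_gateway_added(gw_port)
        if fail_before_cache_update:
            raise RuntimeError('RPC error updating FIP statuses')
        self.ex_gw_port = gw_port  # cache updated only on full success

r = RouterProcessor()
try:
    r.process('gw', fail_before_cache_update=True)
except RuntimeError:
    pass
r.process('gw')        # the retry re-plugs the gateway
print(r.gateway_adds)  # 2, not 1
```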

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: l3-dvr-backlog l3-ha l3-ipam-dhcp

** Description changed:

  Reproduction:
- Create a new router
- Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).
+ * Create a new router
+ * Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).
  
  Any follow up update to the router (Add/remove interface, add/remove
  FIP) will fail non-idempotent operations on the external device. This is
  because any update will try to add the gateway again (Because
  self.ex_gw_port = None). Even without a specific failure, reconfiguring
  the external device is wasteful.
  
  HA routers in particular will fail by throwing
  VIPDuplicateAddressException for the external device's VIP. This
  behavior was actually changed in a recent Mitaka patch
  (https://review.openstack.org/#/c/196893/50/neutron/agent/l3/ha_router.py),
  so this affects Juno to Liberty.
  
  The impact on legacy or distributed routers is less severe as their
  process_external and routes_updated seem to be idempotent - Verified
  against master via a makeshift functional test, I could not vouch for
  previous releases.

** Description changed:

  Reproduction:
  * Create a new router
  * Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In such a 
case extra routes would not be configured either, and post-router creation 
events would not be sent, which means that for example the metadata proxy 
wouldn't be started).
  
  Any follow up update to the router (Add/remove interface, add/remove
  FIP) will fail non-idempotent operations on the external device. This is
  because any update will try to add the gateway again (Because
  self.ex_gw_port = None). Even without a specific failure, reconfiguring
  the external device is wasteful.
  
  HA routers in particular will fail by throwing
  VIPDuplicateAddressException for the external device's VIP. This
  behavior was actually changed in a recent Mitaka patch
  (https://review.openstack.org/#/c/196893/50/neutron/agent/l3/ha_router.py),
- so this affects Juno to Liberty.
+ so this affects Juno to Liberty but not master and future releases.
  
  The impact on legacy or distributed routers is less severe as their
  process_external and routes_updated seem to be idempotent - Verified
  against master via a makeshift functional test, I could not vouch for
  previous releases.

** Description changed:

  Reproduction:
  * Create a new router
  * Attach external interface - Execute external_gateway_added successfully but 
fail some time before self.ex_gw_port = self.get_ex_gw_port() (An example of a 
failure would be an RPC error when trying to update FIP statuses. In 

[Yahoo-eng-team] [Bug 1522980] [NEW] L3 HA integration with l2pop assumes control plane is operational for fail over

2015-12-04 Thread Assaf Muller
Public bug reported:

L3 HA did not work with l2pop at all, and that was fixed here:
https://bugs.launchpad.net/neutron/+bug/1365476 via 
https://review.openstack.org/#/c/141114/.

However, the solution is suboptimal because it assumes the control plane is 
operational for fail over to work correctly.
Without l2pop, L3 HA can fail over successfully if the database, messaging 
server, neutron-server and destination L3 agent are dead. With l2pop, all four 
are needed. This is because for fail over to work, the destination L3 agent 
notices that a router has transitioned to master, and notifies neutron-server 
via RPC. At which point neutron-server updates all of the internal router 
port's 'binding:host' value to point to the target node, and l2pop code is 
executed in order to update the L2 agents.

Instead, I'd like fail over to rely solely on the data plane regardless
if l2pop is on or off. One such solution would be something similar to
patch set 9 of the patch:
https://review.openstack.org/#/c/141114/9//COMMIT_MSG. The idea is to
tell l2pop to treat HA router ports as replicated ports (Which they
are), so that tunnel endpoints would be created against all nodes that
host replicas of the router, and the destination MAC address of the port
would not be learned via l2pop, but via the fallback regular MAC
learning mechanism. This means that we lose some of the advantage of
l2pop, but I think it is essential to correct operation of L3 HA.

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: l2-pop l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522980

Title:
  L3 HA integration with l2pop assumes control plane is operational for
  fail over

Status in neutron:
  New

Bug description:
  L3 HA did not work with l2pop at all, and that was fixed here:
  https://bugs.launchpad.net/neutron/+bug/1365476 via 
https://review.openstack.org/#/c/141114/.

  However, the solution is suboptimal because it assumes the control plane is 
operational for fail over to work correctly.
  Without l2pop, L3 HA can fail over successfully if the database, messaging 
server, neutron-server and destination L3 agent are dead. With l2pop, all four 
are needed. This is because for fail over to work, the destination L3 agent 
notices that a router has transitioned to master, and notifies neutron-server 
via RPC. At which point neutron-server updates all of the internal router 
port's 'binding:host' value to point to the target node, and l2pop code is 
executed in order to update the L2 agents.

  Instead, I'd like fail over to rely solely on the data plane
  regardless if l2pop is on or off. One such solution would be something
  similar to patch set 9 of the patch:
  https://review.openstack.org/#/c/141114/9//COMMIT_MSG. The idea is to
  tell l2pop to treat HA router ports as replicated ports (Which they
  are), so that tunnel endpoints would be created against all nodes that
  host replicas of the router, and the destination MAC address of the
  port would not be learned via l2pop, but via the fallback regular MAC
  learning mechanism. This means that we lose some of the advantage of
  l2pop, but I think it is essential to correct operation of L3 HA.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522966] [NEW] Linux bridge agent crashes without logging if l2pop is off and vxlan_group is unset

2015-12-04 Thread Assaf Muller
Public bug reported:

In the context of https://bugs.launchpad.net/openstack-
ansible/+bug/1521793, patch https://review.openstack.org/#/c/253606/ was
sent to test openstack-ansible deployment with LB agent and l2pop
disabled. VMs aren't spawning, you can see binding failures in the
neutron-server log. Finally, looking at the LB agent log it looks oddly
short:

http://logs.openstack.org/06/253606/1/check/gate-openstack-ansible-dsvm-
commit/187ccc0/logs/aio1_neutron_agents_container-8702df06/neutron-
linuxbridge-agent.log

Output from that log:

2015-12-04 18:34:01.455 3062 DEBUG oslo_service.service [-] 

 log_opt_values 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/oslo_config/cfg.py:2270
2015-12-04 18:34:01.455 3062 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'eth12'] create_process 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
2015-12-04 18:34:01.486 3062 DEBUG neutron.agent.linux.utils [-] 
Command: ['ip', '-o', 'link', 'show', 'eth12']
Exit code: 0
 execute 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:154
2015-12-04 18:34:01.488 3062 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'eth11'] create_process 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
2015-12-04 18:34:01.523 3062 DEBUG neutron.agent.linux.utils [-] 
Command: ['ip', '-o', 'link', 'show', 'eth11']
Exit code: 0
 execute 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:154
2015-12-04 18:34:01.524 3062 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', 'addr', 'show', 'to', '172.29.242.170'] create_process 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
2015-12-04 18:34:01.557 3062 DEBUG neutron.agent.linux.utils [-] 
Command: ['ip', 'addr', 'show', 'to', '172.29.242.170']
Exit code: 0
 execute 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:154
2015-12-04 18:34:01.558 3062 WARNING 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] 
VXLAN muticast group(s) must be provided in vxlan_group option to enable VXLAN 
MCAST mode

And that's the end of the file. Looking at the code it's clear what's
happening. Both l2pop and VXLAN.vxlan_group are off, so when the LB
agent initializes and calls check_vxlan_support, it throws
VxlanNetworkUnsupported, but that exception is uncaught and the agent
crashes without logging anything.
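
A hedged sketch of the missing error handling: catch the exception at
agent startup and log before exiting (all names below are stand-ins,
not the actual agent entry point or exception class):

```python
import logging
import sys

logging.basicConfig(level=logging.ERROR)
LOG = logging.getLogger('linuxbridge-agent')

class VxlanNetworkUnsupported(Exception):
    """Stand-in for the exception check_vxlan_support() raises."""

def check_vxlan_support(l2pop_enabled, vxlan_group):
    # Neither l2pop (unicast mode) nor a multicast group configured:
    # VXLAN cannot work in either mode.
    if not l2pop_enabled and not vxlan_group:
        raise VxlanNetworkUnsupported()

def main(l2pop_enabled=False, vxlan_group=None):
    try:
        check_vxlan_support(l2pop_enabled, vxlan_group)
    except VxlanNetworkUnsupported:
        # Log the fatal condition instead of dying silently.
        LOG.error('VXLAN is unsupported: enable l2population or set '
                  'the vxlan_group option; terminating agent.')
        sys.exit(1)
```

With the defaults, main() would log the error and exit with status 1
instead of leaving an empty log behind.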

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522966

Title:
  Linux bridge agent crashes without logging if l2pop is off and
  vxlan_group is unset

Status in neutron:
  New

Bug description:
  In the context of https://bugs.launchpad.net/openstack-
  ansible/+bug/1521793, patch https://review.openstack.org/#/c/253606/
  was sent to test openstack-ansible deployment with LB agent and l2pop
  disabled. VMs aren't spawning, you can see binding failures in the
  neutron-server log. Finally, looking at the LB agent log it looks
  oddly short:

  http://logs.openstack.org/06/253606/1/check/gate-openstack-ansible-
  dsvm-commit/187ccc0/logs/aio1_neutron_agents_container-8702df06
  /neutron-linuxbridge-agent.log

  Output from that log:

  2015-12-04 18:34:01.455 3062 DEBUG oslo_service.service [-] 

 log_opt_values 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/oslo_config/cfg.py:2270
  2015-12-04 18:34:01.455 3062 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'eth12'] create_process 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2015-12-04 18:34:01.486 3062 DEBUG neutron.agent.linux.utils [-] 
  Command: ['ip', '-o', 'link', 'show', 'eth12']
  Exit code: 0
   execute 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:154
  2015-12-04 18:34:01.488 3062 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', '-o', 'link', 'show', 'eth11'] create_process 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2015-12-04 18:34:01.523 3062 DEBUG neutron.agent.linux.utils [-] 
  Command: ['ip', '-o', 'link', 'show', 'eth11']
  Exit code: 0
   execute 
/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py:154
  2015-12-04 18:34:01.524 3062 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', 'addr', 'show', 'to', '172.29.242.170'] create_process 

[Yahoo-eng-team] [Bug 1522186] [NEW] IptablesFirewallTestCase failing with certain kernels: "sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory"

2015-12-02 Thread Assaf Muller
Public bug reported:

cat /etc/redhat-release 
Fedora release 22 (Twenty Two)

uname -r
4.1.7-200.fc22.x86_64

tox -e dsvm-functional 
neutron.tests.functional.agent.linux.test_iptables_firewall.IptablesFirewallTestCase
All tests in the test class fail with:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file 
or directory

Full trace here:
http://paste.openstack.org/show/480705/

This thread shows that you need to 'modprobe br_netfilter' to be able to
set that sysctl (Which is mandatory for the iptables firewall driver)
since kernel v3.17-rc4-777-g34666d4.

http://askubuntu.com/questions/645638/directory-proc-sys-net-bridge-
missing

This bug affects both production systems as well as the functional
tests.

1) Neutron's functional tests should be portable - They should 'just work' on 
supported platforms by bringing in their own dependencies (Python requirements 
as well as platform requirements via tools/configure_for_func_testing.sh).
2) For production code, it would seem Neutron currently assumes the deployment 
tool makes sure the br_netfilter kernel module is in place. We should examine 
the validity of this assumption, at a minimum document it.
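
A deployment tool or the functional-test bootstrap could fail fast with
an actionable message instead of the cryptic sysctl error. A minimal
sketch (the helper name is hypothetical):

```python
import os

BRIDGE_SYSCTL = '/proc/sys/net/bridge/bridge-nf-call-arptables'

def check_bridge_netfilter(path=BRIDGE_SYSCTL):
    """Fail fast with a hint if bridge netfilter sysctls are absent.

    On kernels since v3.17-rc4-777-g34666d4 the /proc/sys/net/bridge/*
    entries only exist once the br_netfilter module is loaded.
    """
    if not os.path.exists(path):
        raise RuntimeError(
            '%s is missing; load the br_netfilter kernel module '
            '(modprobe br_netfilter) before using the iptables '
            'firewall driver.' % path)
```

A deployment tool would then run `modprobe br_netfilter` (and persist
it, e.g. via /etc/modules-load.d) before starting the agent.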

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522186

Title:
  IptablesFirewallTestCase failing with certain kernels: "sysctl: cannot
  stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file or
  directory"

Status in neutron:
  New

Bug description:
  cat /etc/redhat-release 
  Fedora release 22 (Twenty Two)

  uname -r
  4.1.7-200.fc22.x86_64

  tox -e dsvm-functional 
neutron.tests.functional.agent.linux.test_iptables_firewall.IptablesFirewallTestCase
  All tests in the test class fail with:
  sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such 
file or directory

  Full trace here:
  http://paste.openstack.org/show/480705/

  This thread shows that you need to 'modprobe br_netfilter' to be able
  to set that sysctl (Which is mandatory for the iptables firewall
  driver) since kernel v3.17-rc4-777-g34666d4.

  http://askubuntu.com/questions/645638/directory-proc-sys-net-bridge-
  missing

  This bug affects both production systems as well as the functional
  tests.

  1) Neutron's functional tests should be portable - They should 'just work' on 
supported platforms by bringing in their own dependencies (Python requirements 
as well as platform requirements via tools/configure_for_func_testing.sh).
  2) For production code, it would seem Neutron currently assumes the 
deployment tool makes sure the br_netfilter kernel module is in place. We 
should examine the validity of this assumption, at a minimum document it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522199] [NEW] Functional test_ipwrapper_get_device_by_ip fails with new kernels

2015-12-02 Thread Assaf Muller
Public bug reported:

Trace:
http://paste.openstack.org/show/480710/

The test creates a device via:
sudo ip tuntap add test223ef12 mode tap

Which on my system (Fedora 22) results in a device called:
test223ef12@NONE

And the test fails to compare 'test223ef12' to 'test223ef12@NONE'.
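
The comparison could be made robust by stripping the '@peer' suffix
that newer `ip link` output appends. A sketch (the function name is
hypothetical; the real fix would live in the agent's device-name
handling):

```python
def normalize_devname(name):
    """Strip the '@NONE'/'@peer' suffix that newer kernels show in
    `ip link` output, so 'test223ef12@NONE' compares equal to the
    name the device was created with."""
    return name.partition('@')[0]

print(normalize_devname('test223ef12@NONE'))  # test223ef12
print(normalize_devname('test223ef12'))       # test223ef12
```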

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522199

Title:
  Functional test_ipwrapper_get_device_by_ip fails with new kernels

Status in neutron:
  New

Bug description:
  Trace:
  http://paste.openstack.org/show/480710/

  The test creates a device via:
  sudo ip tuntap add test223ef12 mode tap

  Which on my system (Fedora 22) results in a device called:
  test223ef12@NONE

  And the test fails to compare 'test223ef12' to 'test223ef12@NONE'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521820] [NEW] Some DVR functional tests leak the FIP namespace

2015-12-01 Thread Assaf Muller
Public bug reported:

The FIP namespace is deleted when the agent receives an RPC message
'fipnamespace_delete_on_ext_net'
(https://review.openstack.org/#/c/230079/). This is simulated in some
DVR tests (Thus cleaning up the namespace), but not all. All of the DVR
'lifecycle' tests execute _dvr_router_lifecycle, which in turn executes
_assert_fip_namespace_deleted, that calls
agent.fipnamespace_delete_on_ext_net and asserts it succeeded. Any test
that creates a distributed router that is not a 'lifecycle' test does
not clean up the FIP namespace, this includes:

* test_dvr_router_fips_for_multiple_ext_networks
* test_dvr_router_rem_fips_on_restarted_agent
* test_dvr_router_add_fips_on_restarted_agent
* test_dvr_router_add_internal_network_set_arp_cache
* test_dvr_router_fip_agent_mismatch
* test_dvr_router_fip_late_binding
* test_dvr_router_snat_namespace_with_interface_remove
* test_dvr_ha_router_failover

** Affects: neutron
 Importance: Low
 Status: New


** Tags: functional-tests l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521820

Title:
  Some DVR functional tests leak the FIP namespace

Status in neutron:
  New

Bug description:
  The FIP namespace is deleted when the agent receives an RPC message
  'fipnamespace_delete_on_ext_net'
  (https://review.openstack.org/#/c/230079/). This is simulated in some
  DVR tests (Thus cleaning up the namespace), but not all. All of the
  DVR 'lifecycle' tests execute _dvr_router_lifecycle, which in turn
  executes _assert_fip_namespace_deleted, that calls
  agent.fipnamespace_delete_on_ext_net and asserts it succeeded. Any
  test that creates a distributed router that is not a 'lifecycle' test
  does not clean up the FIP namespace, this includes:

  * test_dvr_router_fips_for_multiple_ext_networks
  * test_dvr_router_rem_fips_on_restarted_agent
  * test_dvr_router_add_fips_on_restarted_agent
  * test_dvr_router_add_internal_network_set_arp_cache
  * test_dvr_router_fip_agent_mismatch
  * test_dvr_router_fip_late_binding
  * test_dvr_router_snat_namespace_with_interface_remove
  * test_dvr_ha_router_failover

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521815] [NEW] DVR functional tests failing intermittently

2015-12-01 Thread Assaf Muller
Public bug reported:

Some console logs:

http://logs.openstack.org/18/248418/3/check/gate-neutron-dsvm-functional/8a6dfcf/console.html
http://logs.openstack.org/00/189500/23/check/gate-neutron-dsvm-functional/d949ce0/console.html
http://logs.openstack.org/72/252072/1/check/gate-neutron-dsvm-functional/aafcd9a/console.html
http://logs.openstack.org/32/192032/26/check/gate-neutron-dsvm-functional/b267f83/console.html
http://logs.openstack.org/02/251502/3/check/gate-neutron-dsvm-functional/b074a96/console.html

Tests seen failing so far (May not be comprehensive):
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_add_fips_on_restarted_agent
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_with_snat_with_fips
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_ha_with_snat_with_fips
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_without_snat_with_fips

The commonality is:
1) DVR tests (With and without HA, with and without SNAT)
2) The tests are taking thousands of seconds to fail, causing the job to time 
out, this is even though we're supposed to have a per-test timeout of 180 
seconds defined in tox.ini. This means (I suspect) that we're not getting the 
functional tests logs.

** Affects: neutron
 Importance: High
 Status: New


** Tags: functional-tests gate-failure l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521815

Title:
  DVR functional tests failing intermittently

Status in neutron:
  New

Bug description:
  Some console logs:

  
http://logs.openstack.org/18/248418/3/check/gate-neutron-dsvm-functional/8a6dfcf/console.html
  
http://logs.openstack.org/00/189500/23/check/gate-neutron-dsvm-functional/d949ce0/console.html
  
http://logs.openstack.org/72/252072/1/check/gate-neutron-dsvm-functional/aafcd9a/console.html
  
http://logs.openstack.org/32/192032/26/check/gate-neutron-dsvm-functional/b267f83/console.html
  
http://logs.openstack.org/02/251502/3/check/gate-neutron-dsvm-functional/b074a96/console.html

  Tests seen failing so far (May not be comprehensive):
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_add_fips_on_restarted_agent
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_with_snat_with_fips
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_ha_with_snat_with_fips
  
neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_router_lifecycle_without_ha_without_snat_with_fips

  The commonality is:
  1) DVR tests (With and without HA, with and without SNAT)
  2) The tests are taking thousands of seconds to fail, causing the job to time 
out, even though we're supposed to have a per-test timeout of 180 seconds 
defined in tox.ini. This means (I suspect) that we're not getting the 
functional test logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521846] [NEW] Metering not configured for all-in-one DVR job, failing tests

2015-12-01 Thread Assaf Muller
Public bug reported:

Here's an example:
http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-dvr/62fdece/console.html#_2015-12-02_02_15_59_182

It looks like the metering setUpClass is trying to register metering
resources before checking if the extension is actually loaded (Which is
a separate bug in the tests). Another test fails showing that metering
is not configured:

http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-
dvr/62fdece/console.html#_2015-12-02_02_15_59_167

Metering seems to be configured fine both in the non-DVR job, as well as
the multinode DVR job.

** Affects: neutron
 Importance: High
 Status: New


** Tags: gate-failure l3-dvr-backlog metering

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521846

Title:
  Metering not configured for all-in-one DVR job, failing tests

Status in neutron:
  New

Bug description:
  Here's an example:
  
http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-dvr/62fdece/console.html#_2015-12-02_02_15_59_182

  It looks like the metering setUpClass is trying to register metering
  resources before checking if the extension is actually loaded (Which
  is a separate bug in the tests). Another test fails showing that
  metering is not configured:

  http://logs.openstack.org/74/247874/6/check/gate-tempest-dsvm-neutron-
  dvr/62fdece/console.html#_2015-12-02_02_15_59_167

  Metering seems to be configured fine both in the non-DVR job, as well
  as the multinode DVR job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518632] [NEW] API tests skipped at the gate

2015-11-21 Thread Assaf Muller
Public bug reported:

I'm seeing them running successfully on the 11th here, at 3:42AM EST:
https://review.openstack.org/#/c/230090/

But skipping here, at 10:48PM EST:
https://review.openstack.org/#/c/243915/

The tests are skipped with 'SKIPPED: Neutron support is required'.

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518632

Title:
  API tests skipped at the gate

Status in neutron:
  New

Bug description:
  I'm seeing them running successfully on the 11th here, at 3:42AM EST:
  https://review.openstack.org/#/c/230090/

  But skipping here, at 10:48PM EST:
  https://review.openstack.org/#/c/243915/

  The tests are skipped with 'SKIPPED: Neutron support is required'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518466] [NEW] Fullstack connectivity tests fail intermittently

2015-11-20 Thread Assaf Muller
Public bug reported:

Fullstack test_connectivity_* fail at the gate from time to time. This
happens locally as well.

The test sets up a couple of fake VMs and issues a ping from one to
another. This ping can fail from time to time. I issued a break point
after such a failure and issued a ping myself and it worked.

I think that the ping should be changed to a block/wait_until_ping.

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518466

Title:
  Fullstack connectivity tests fail intermittently

Status in neutron:
  In Progress

Bug description:
  Fullstack test_connectivity_* fail at the gate from time to time. This
  happens locally as well.

  The test sets up a couple of fake VMs and issues a ping from one to
  another. This ping can fail from time to time. I issued a break point
  after such a failure and issued a ping myself and it worked.

  I think that the ping should be changed to a block/wait_until_ping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516260] [NEW] L3 agent sync_routers timeouts may cause cluster to fall down

2015-11-14 Thread Assaf Muller
Public bug reported:

L3 agent 'sync_routers' RPC call is sent when the agent starts or when
an exception occurs. It uses a default timeout of 60 seconds (An Oslo
messaging config option). At scale the server can take a long time to
answer, causing a timeout and the message is sent again, causing a
cascading failure and the situation does not resolve itself. The
sync_routers server RPC response was optimized to mitigate this, it
could also be helpful to simply increase the timeout.
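The back-off idea can be sketched with a toy class (this is not Neutron's
actual RPC client, which is oslo.messaging-based; the class and its defaults
are purely illustrative): after a timeout, retry with a doubled deadline
instead of resending with the same 60 seconds and amplifying the pile-up.

```python
class BackoffTimeout(object):
    """Illustrative only: double the RPC deadline after each timeout.

    Resending a large sync_routers request with the same 60 s deadline
    is what turns one slow response into a cascading failure; backing
    off gives the server a chance to drain the queue.
    """

    def __init__(self, start=60, cap=600):
        self.timeout = start
        self.cap = cap  # never wait longer than this

    def on_timeout(self):
        self.timeout = min(self.timeout * 2, self.cap)
        return self.timeout
```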

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516260

Title:
  L3 agent sync_routers timeouts may cause cluster to fall down

Status in neutron:
  New

Bug description:
  L3 agent 'sync_routers' RPC call is sent when the agent starts or when
  an exception occurs. It uses a default timeout of 60 seconds (An Oslo
  messaging config option). At scale the server can take a long time to
  answer, causing a timeout and the message is sent again, causing a
  cascading failure and the situation does not resolve itself. The
  sync_routers server RPC response was optimized to mitigate this, it
  could also be helpful to simply increase the timeout.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1516260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508205] [NEW] L2pop connectivity broken to nodes with L3/metadata/dhcp/non-L2 agents

2015-10-20 Thread Assaf Muller
Public bug reported:

Since https://review.openstack.org/#/c/236970/ merged, l2pop is broken
to nodes running non-L2 agents, as get_agent_by_host on a host with
non-L2 agents will return a random agent (Which might not have the
tunneling_ip key in its configurations dict). CI passed because our
coverage for l2pop is not amazing.
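A sketch of the needed filtering, using a simplified agent dict (real agent
records carry more fields, and the helper name here is illustrative):

```python
# Illustrative data: agents reported for one host. Only the OVS agent
# carries a tunneling_ip in its configurations dict.
agents_on_host = [
    {'agent_type': 'L3 agent', 'configurations': {}},
    {'agent_type': 'Open vSwitch agent',
     'configurations': {'tunneling_ip': '192.0.2.10'}},
]


def get_tunneling_agent(agents):
    """Pick an agent that actually has a tunnel endpoint, instead of
    returning whichever agent happens to come back first."""
    for agent in agents:
        if 'tunneling_ip' in agent.get('configurations', {}):
            return agent
    return None
```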

** Affects: neutron
 Importance: High
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: l2-pop

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

** Changed in: neutron
   Status: New => In Progress

** Tags added: l2-pop

** Description changed:

  Since https://review.openstack.org/#/c/236970/ merged, l2pop is broken
  to nodes with non-L2 nodes, as get_agent_by_host on a host with non-L2
  agents will return a random agent (Which might not have the tunneling_ip
- key in its configurations dict).
+ key in its configurations dict). CI passed because our coverage for
+ l2pop is not amazing.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508205

Title:
  L2pop connectivity broken to nodes with L3/metadata/dhcp/non-L2 agents

Status in neutron:
  In Progress

Bug description:
  Since https://review.openstack.org/#/c/236970/ merged, l2pop is broken
  to nodes running non-L2 agents, as get_agent_by_host on a host with
  non-L2 agents will return a random agent (Which might not have the
  tunneling_ip key in its configurations dict). CI passed because our
  coverage for l2pop is not amazing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507552] Re: Changing in cisco exeption.py is requred

2015-10-19 Thread Assaf Muller
** Project changed: neutron => networking-cisco

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507552

Title:
  Changing in cisco exeption.py is requred

Status in networking-cisco:
  New

Bug description:
  Change set https://review.openstack.org/#/c/233766/ leads exeption in
  networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py

  there is an issue like the following:
  --
  error: testr failed (3)
  Failed to import test module: 
networking_cisco.tests.unit.ml2.drivers.cisco.n1kv.test_cisco_n1kv_mech
  Traceback (most recent call last):
File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/ubuntu/workspace/python27/networking-cisco/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"networking_cisco/tests/unit/ml2/drivers/cisco/n1kv/test_cisco_n1kv_mech.py", 
line 23, in 
  from networking_cisco.plugins.ml2.drivers.cisco.n1kv import (
File "networking_cisco/plugins/ml2/drivers/cisco/n1kv/exceptions.py", line 
65, in 
  class ProfileDeletionNotSupported(exceptions.NotSupported):
  AttributeError: 'module' object has no attribute 'NotSupported'

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1507552/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507761] Re: qos wrong units in max-burst-kbps option

2015-10-19 Thread Assaf Muller
The OVS documentation refers to documentation of its own implementation.
The Neutron QoS API offers an abstraction that happens to have only one
implementation at this time, but has your own LB patch
(https://review.openstack.org/236210) as a second one. As long as
something translates from the API's unit of measurement to the different
implementations' unit of measurement, we're fine. We're not bound to any
one implementation's documentation.

Assigning to Miguel for further triaging.

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507761

Title:
  qos wrong units in max-burst-kbps option

Status in neutron:
  Opinion

Bug description:
  In neutron in qos bw limit rule table in database and in API extension
  parameter "max-burst-kbps" has got wrong units suggested. Burst should
  be given in kb instead of kbps because according to for example ovs
  documentation: http://openvswitch.org/support/config-cookbooks/qos-
  rate-limiting/ it is "a parameter to the policing algorithm to
  indicate the maximum amount of data (in Kb) that this interface can
  send beyond the policing rate."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505700] [NEW] Floating IPs disassociation does not remove conntrack state with HA routers

2015-10-13 Thread Assaf Muller
Public bug reported:

Reproduction:
1) Create HA router, connect to internal/external networks
2) Create VM, assign floating IP
3) Ping floating IP
4) Disassociate floating IP

Actual result:
Ping continues

Expected result:
Ping halts

Root cause:
Legacy routers floating IP disassociation delete conntrackd state, HA routers 
don't because they're sentient beings with a sense of self that choose to not 
follow common convention or reason.
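For reference, the legacy-router cleanup boils down to flushing conntrack for
the disassociated address; the sketch below only builds the command lists (the
flags follow the conntrack CLI, but treat the exact invocation as an
assumption rather than Neutron's verbatim code; the commands would run inside
the router's namespace):

```python
def fip_conntrack_cleanup_cmds(floating_ip):
    """Commands to flush conntrack state for a disassociated floating
    IP: entries destined to the FIP in the original direction, and
    entries addressed to it in the reply direction."""
    return [
        ['conntrack', '-D', '-d', floating_ip],  # original dst
        ['conntrack', '-D', '-q', floating_ip],  # reply dst
    ]
```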

** Affects: neutron
 Importance: Medium
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505700

Title:
  Floating IPs disassociation does not remove conntrack state with HA
  routers

Status in neutron:
  In Progress

Bug description:
  Reproduction:
  1) Create HA router, connect to internal/external networks
  2) Create VM, assign floating IP
  3) Ping floating IP
  4) Disassociate floating IP

  Actual result:
  Ping continues

  Expected result:
  Ping halts

  Root cause:
  Legacy routers floating IP disassociation delete conntrackd state, HA routers 
don't because they're sentient beings with a sense of self that choose to not 
follow common convention or reason.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505259] [NEW] Functional gate broken due to configure_for_func_testing.sh inability to configure DB backends

2015-10-12 Thread Assaf Muller
Public bug reported:

After https://review.openstack.org/#/c/233106/ was merged to Devstack, the 
configure_for_func_testing.sh script is unable to configure both DB backends, 
and test setup eventually fails as such:
http://logs.openstack.org/68/233068/5/check/gate-neutron-dsvm-functional/6c27c46/console.html#_2015-10-12_13_29_18_446

This happens because the Postgres disable is successful, but the Postgres 
enable is ignored by Devstack. The issue is here:
https://github.com/openstack/neutron/blob/master/tools/configure_for_func_testing.sh#L127

** Affects: neutron
 Importance: Critical
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505259

Title:
  Functional gate broken due to configure_for_func_testing.sh inability
  to configure DB backends

Status in neutron:
  In Progress

Bug description:
  After https://review.openstack.org/#/c/233106/ was merged to Devstack, the 
configure_for_func_testing.sh script is unable to configure both DB backends, 
and test setup eventually fails as such:
  
http://logs.openstack.org/68/233068/5/check/gate-neutron-dsvm-functional/6c27c46/console.html#_2015-10-12_13_29_18_446

  This happens because the Postgres disable is successful, but the Postgres 
enable is ignored by Devstack. The issue is here:
  
https://github.com/openstack/neutron/blob/master/tools/configure_for_func_testing.sh#L127

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505346] [NEW] Downgrade of DVR to legacy router returns wrong HTTP error code

2015-10-12 Thread Assaf Muller
Public bug reported:

Downgrading a DVR router to a legacy router raises a 'NotSupported'
exception, which inherits from NeutronException, and isn't given an
explicit error code in the API layer, therefore returning error 500:

http://paste.openstack.org/show/476056/

It should return error 501 (NotImplemented).
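The shape of the fix can be sketched with plain classes and integers in place
of Neutron's real API-layer fault map (all names below are illustrative):

```python
class NeutronException(Exception):
    pass


class NotSupported(NeutronException):
    pass


# Sketch of an API-layer fault map: exceptions without an entry fall
# through to 500. Adding NotSupported here yields 501 instead.
FAULT_MAP = {
    NotSupported: 501,  # HTTP Not Implemented
}


def http_code_for(exc):
    """Map a raised exception to an HTTP status code."""
    for exc_type, code in FAULT_MAP.items():
        if isinstance(exc, exc_type):
            return code
    return 500  # unmapped exceptions surface as Internal Server Error
```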

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505346

Title:
  Downgrade of DVR to legacy router returns wrong HTTP error code

Status in neutron:
  In Progress

Bug description:
  Downgrading a DVR router to a legacy router raises a 'NotSupported'
  exception, which inherits from NeutronException, and isn't given an
  explicit error code in the API layer, therefore returning error 500:

  http://paste.openstack.org/show/476056/

  It should return error 501 (NotImplemented).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505375] [NEW] Upgrading an existing HA router to DVR returns wrong error

2015-10-12 Thread Assaf Muller
Public bug reported:

1) Create an HA router
2) Update it to a DVR router

Expected result:
Error explaining that a router cannot be both HA and distributed

Actual result:
neutron.common.exceptions.BadRequest: Bad router request: Cannot upgrade active 
router to distributed. Please set router admin_state_up to False prior to 
upgrade.

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: l3-dvr-backlog l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505375

Title:
  Upgrading an existing HA router to DVR returns wrong error

Status in neutron:
  New

Bug description:
  1) Create an HA router
  2) Update it to a DVR router

  Expected result:
  Error explaining that a router cannot be both HA and distributed

  Actual result:
  neutron.common.exceptions.BadRequest: Bad router request: Cannot upgrade 
active router to distributed. Please set router admin_state_up to False prior 
to upgrade.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505382] [NEW] Wrong error code returned when an HA+DVR router is created or updated

2015-10-12 Thread Assaf Muller
Public bug reported:

This is the error returned when an HA+DVR router is created:
neutron router-create --ha=True --distributed=True dvr_ha
501-{u'NotImplementedError': {u'message': u'', u'type': 
u'DistributedHARouterNotSupported', u'detail': u''}}

It should be a properly formatted error, and the error code should not
be 501, it should be 400.

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: l3-dvr-backlog l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505382

Title:
  Wrong error code returned when an HA+DVR router is created or updated

Status in neutron:
  In Progress

Bug description:
  This is the error returned when an HA+DVR router is created:
  neutron router-create --ha=True --distributed=True dvr_ha
  501-{u'NotImplementedError': {u'message': u'', u'type': 
u'DistributedHARouterNotSupported', u'detail': u''}}

  It should be a properly formatted error, and the error code should not
  be 501, it should be 400.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502987] [NEW] fullstack runs leave neutron-server forked processes running

2015-10-05 Thread Assaf Muller
Public bug reported:

Since by default api_workers = Number of CPUs on the machine, the
neutron-server forks itself when it starts. Those child processes are
left behind on both successful and failed fullstack runs. This is
possibly related to bug https://bugs.launchpad.net/neutron/+bug/1487548.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1502987

Title:
  fullstack runs leave neutron-server forked processes running

Status in neutron:
  New

Bug description:
  Since by default api_workers = Number of CPUs on the machine, the
  neutron-server forks itself when it starts. Those child processes are
  left behind on both successful and failed fullstack runs. This is
  possibly related to bug
  https://bugs.launchpad.net/neutron/+bug/1487548.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1502987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496406] Re: Let RPC workers serve L3 queues too

2015-10-05 Thread Assaf Muller
*** This bug is a duplicate of bug 1498844 ***
https://bugs.launchpad.net/bugs/1498844

** This bug is no longer a duplicate of bug 1419970
   L3RpcCallback methods are not handled in rpc_worker processes
** This bug has been marked a duplicate of bug 1498844
   Service plugin queues should be consumed by all RPC workers

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496406

Title:
  Let RPC workers serve L3 queues too

Status in neutron:
  New

Bug description:
  This is important for DVR clusters with lots of L3 agents.

  Right now L3 queue is only consumed by parent process of neutron-
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501150] [NEW] Reorganize and improve L3 agent functional tests

2015-09-29 Thread Assaf Muller
Public bug reported:

This bug is to track the following work:
1) neutron/tests/functional/agent/test_l3_agent is enormous. When I created 
that file it was 300 lines of code. It's now nearly 1,500 lines of code. It's 
very difficult to find what you're looking for. I propose splitting it up so 
that the common helper functions and base class is in 
neutron/tests/functional/agent/l3/framework. The tests themselves are then to 
be split up to the following four modules: legacy, HA, metadata_proxy and DVR. 
It would also be an opportunity to make further cosmetic clean ups, finding 
common code and extracting it out to the framework class.

2) The tests focus on the creation of a router with complete data: A
router with internal interfaces, an external interface, floating IPs,
extra routes and so on. The existing 'lifecycle' style test is useful:
Create a router, make assertions, delete it, make some more assertions.
However, I'd like to see improved coverage for update operations, for
all three router types (Legacy, HA, DVR): Create a router without
interfaces or floating IPs, add an internal interface, make assertions.
Add an external gateway, make assertions, and so on. The existing
coverage essentially covers the case of an existing router, and
restarting an agent so that a complete router is built. The latter (And
missing) coverage is for the case of a new router being created, and API
calls coming in to gradually attach the router to existing networks and
floating IPs. Both are important cases to cover and execute at times
different code paths.

3) Are there L3 agent or router unit tests that are superseded entirely
and could be deleted?
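The incremental-update coverage proposed in (2) would take roughly this
shape; FakeRouter and its methods are purely illustrative stand-ins, not the
actual test framework:

```python
class FakeRouter(object):
    """Toy stand-in for a router processed by the L3 agent, used only
    to show the step-by-step update test shape."""

    def __init__(self):
        self.internal_ports = []
        self.gateway = None

    def add_interface(self, subnet):
        self.internal_ports.append(subnet)

    def set_gateway(self, network):
        self.gateway = network


# Build the router up gradually and assert after each update, instead
# of asserting only once on a complete router.
router = FakeRouter()
assert router.internal_ports == []

router.add_interface('10.0.0.0/24')
assert '10.0.0.0/24' in router.internal_ports

router.set_gateway('ext-net')
assert router.gateway == 'ext-net'
```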

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: functional-tests l3-dvr-backlog l3-ha l3-ipam-dhcp

** Tags added: l3-dvr-backlog l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501150

Title:
  Reorganize and improve L3 agent functional tests

Status in neutron:
  New

Bug description:
  This bug is to track the following work:
  1) neutron/tests/functional/agent/test_l3_agent is enormous. When I created 
that file it was 300 lines of code. It's now nearly 1,500 lines of code. It's 
very difficult to find what you're looking for. I propose splitting it up so 
that the common helper functions and base class is in 
neutron/tests/functional/agent/l3/framework. The tests themselves are then to 
be split up to the following four modules: legacy, HA, metadata_proxy and DVR. 
It would also be an opportunity to make further cosmetic clean ups, finding 
common code and extracting it out to the framework class.

  2) The tests focus on the creation of a router with complete data: A
  router with internal interfaces, an external interface, floating IPs,
  extra routes and so on. The existing 'lifecycle' style test is useful:
  Create a router, make assertions, delete it, make some more
  assertions. However, I'd like to see improved coverage for update
  operations, for all three router types (Legacy, HA, DVR): Create a
  router without interfaces or floating IPs, add an internal interface,
  make assertions. Add an external gateway, make assertions, and so on.
  The existing coverage essentially covers the case of an existing
  router, and restarting an agent so that a complete router is built.
  The latter (And missing) coverage is for the case of a new router
  being created, and API calls coming in to gradually attach the router
  to existing networks and floating IPs. Both are important cases to
  cover and execute at times different code paths.

  3) Are there L3 agent or router unit tests that are superseded
  entirely and could be deleted?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499821] [NEW] ovs_lib.OVSBridge.get_ports_attributes returns all ports in case there's no ports on the OVSBridge in question

2015-09-25 Thread Assaf Muller
Public bug reported:

If OVSBridge.get_ports_attributes is executed on an empty bridge,
get_port_name_list will return an empty string. In which case,
ovsdb.db_list is executed with port_names = '', meaning that it will
return all ports, instead of no ports.

The implication is that if, for example, br-ex (An ancillary bridge in
the OVS agent) is currently empty, then scan_ancillary_ports will pick
up all ports. All ports on the system will be considered
ancillary_ports, which is unexpected and can result in ports going DOWN
when they shouldn't.
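The failure mode can be demonstrated with a simplified stand-in for
ovsdb.db_list (the real implementation differs; this only models the "empty
filter means all rows" behavior and the guard that avoids it):

```python
def db_list(table, records=None):
    """Stand-in for ovsdb.db_list: an empty/falsy filter means
    'return every row', which is the surprising part."""
    all_rows = [{'name': 'tap1'}, {'name': 'tap2'}, {'name': 'ext-port'}]
    if not records:
        return all_rows
    return [row for row in all_rows if row['name'] in records]


def get_ports_attributes(port_names):
    """Guard against the empty bridge: with no ports, return no rows
    instead of passing an empty filter down to db_list."""
    if not port_names:
        return []
    return db_list('Interface', records=port_names)
```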

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499821

Title:
  ovs_lib.OVSBridge.get_ports_attributes returns all ports in case
  there's no ports on the OVSBridge in question

Status in neutron:
  New

Bug description:
  If OVSBridge.get_ports_attributes is executed on an empty bridge,
  get_port_name_list will return an empty string. In which case,
  ovsdb.db_list is executed with port_names = '', meaning that it will
  return all ports, instead of no ports.

  The implication is that if, for example, br-ex (An ancillary bridge in
  the OVS agent) is currently empty, then scan_ancillary_ports will pick
  up all ports. All ports on the system will be considered
  ancillary_ports, which is unexpected and can result in ports going
  DOWN when they shouldn't.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499864] [NEW] Fullstack infrastructure as a developer multi-node deployment tool

2015-09-25 Thread Assaf Muller
Public bug reported:

The fullstack testing infrastructure today is used purely in a testing
context. This RFE suggests that it could be useful to have fullstack
support another use case, which is a quick deployment tool for
developers that want to manually test something they're working on, or
if they want to learn about Neutron or a specific feature.

Neutron would expose a script that would accept a deployment topology document 
that will describe what is currently here:
https://github.com/openstack/neutron/blob/master/neutron/tests/fullstack/test_l3_agent.py#L61

For example, a .yaml file with:
How many OVS, L3, DHCP agents
Global configuration such as the segmentation type, l2pop={True,False}, OVS arp 
responder etc.
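A hypothetical topology file might look like this (every key below is
invented for illustration; no such schema exists yet):

```yaml
# Hypothetical input to the proposed neutron-fullstack tool.
environment:
  segmentation_type: vxlan
  l2pop: true
  arp_responder: false
hosts:
  - name: host-1
    agents: [ovs, l3, dhcp]
  - name: host-2
    agents: [ovs, l3]
```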

The script would then deploy the requested topology and spit out a
credentials file to interact with the API server, information about the
agents it deployed (Their host names, state paths, path to configuration
files etc), and perhaps also a plotnetcfg [1] output of the resulting
deployment (An image that shows how the OVS bridges are connected,
namespaces, what devices are connected to what bridges).

After deployment is finished, the script would also allow the creation
of fake VMs (Identical to the fake VMs we already create during
fullstack testing). Reminder: These VMs are backed by a Neutron port, a
namespace, and a device with the appropriate IP address connected to the
correct bridge. They can sufficiently simulate VMs, and resources on
external networks (To enable testing floating IPs and SNAT).

So, for example:
neutron-fullstack deploy dvr_ha_dhcp.yaml  # Spits out information about the 
topology

source 
neutron net-create 1
neutron net-create 2
neutron-fullstack create_vm --net_id=1 --binding:host_id=xyz
neutron-fullstack create_vm --net_id=2 --binding:host_id=abc
neutron router-create, attach it to both networks
Test ping from VM 1 to VM 2

neutron-fullstack destroy # Possibly accepting a topology ID if we were
to support deploying more than a single topology at any given time; I
need to think about this further


[1] https://github.com/jbenc/plotnetcfg

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack rfe

** Summary changed:

- Fullstack infrastructure as a dev deployment tool
+ Fullstack infrastructure as a developer multi-node deployment tool

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499864

Title:
  Fullstack infrastructure as a developer multi-node deployment tool

Status in neutron:
  New

Bug description:
  The fullstack testing infrastructure today is used purely in a testing
  context. This RFE suggests that it could be useful to have fullstack
  support another use case, which is a quick deployment tool for
  developers that want to manually test something they're working on, or
  if they want to learn about Neutron or a specific feature.

  Neutron would expose a script that accepts a deployment topology
  document describing what is currently encoded here:
  
https://github.com/openstack/neutron/blob/master/neutron/tests/fullstack/test_l3_agent.py#L61

  For example, a .yaml file with:
  - How many OVS, L3 and DHCP agents
  - Global configuration such as the segmentation type, l2pop={True,False},
    the OVS ARP responder, etc.

  The script would then deploy the requested topology and spit out a
  credentials file to interact with the API server, information about
  the agents it deployed (Their host names, state paths, path to
  configuration files etc), and perhaps also a plotnetcfg [1] output of
  the resulting deployment (An image that shows how the OVS bridges are
  connected, namespaces, what devices are connected to what bridges).

  After deployment is finished, the script would also allow the creation
  of fake VMs (Identical to the fake VMs we already create during
  fullstack testing). Reminder: These VMs are backed by a Neutron port,
  a namespace, and a device with the appropriate IP address connected to
  the correct bridge. They can sufficiently simulate VMs, and resources
  on external networks (To enable testing floating IPs and SNAT).

  So, for example:
  neutron-fullstack deploy dvr_ha_dhcp.yaml  # Spits out information about the 
topology

  source 
  neutron net-create 1
  neutron net-create 2
  neutron-fullstack create_vm --net_id=1 --binding:host_id=xyz
  neutron-fullstack create_vm --net_id=2 --binding:host_id=abc
  neutron router-create, attach it to both networks
  Test ping from VM 1 to VM 2

  neutron-fullstack destroy # Possibly accepting a topology ID if we
  were to support deploying more than a single topology at any given
  time; I need to think about this further

  
  [1] https://github.com/jbenc/plotnetcfg

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499864/+subscriptions


[Yahoo-eng-team] [Bug 1498534] Re: The vms cant ping each other in different tenants but the same openstack environment

2015-09-22 Thread Assaf Muller
This isn't a bug, this is a support request. Please use
ask.openstack.org. It has an extremely active user base and you'll have
much better luck asking there.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498534

Title:
  The vms cant ping each other in different tenants but the same
  openstack environment

Status in neutron:
  Invalid

Bug description:

  setup:

  OS: ubuntu 14.04 based Juno

  1 controller + 1 network node + 2 nova compute nodes + 1 docker node

  vm1 (10.4/24) --- Router1 gw 10.1/24 (tenant1) --- 42.4/26 <VPN tunnel> 42.5/26
  --- router2 gw 20.1/24 (tenant2) --+-- vm2 (20.4/24)
                                     +-- vm3 (20.5/24)
 
  A VPN tunnel is brought up between two tenants in the same Juno-based
  OpenStack environment.
  vm1 (10.1/24) can ping the router2 private-network gateway (20.1/24),
  but can't ping vm2 (20.4/24). These two VMs are located on different
  compute nodes.

  I tried to capture packets and found that the ICMP requests reach
  20.1/24, but when I capture packets on vm2, nothing arrives. No
  packets are coming into vm2.

  I also created another instance, vm3, in tenant2 on the same subnet
  as vm2, and vm2 could ping vm3.

  So the issue is that vm2 can receive packets coming from vm3, but
  can't receive packets from vm1 after the VPN tunnel is brought up.

  Finally, I tried booting a small CirrOS image, but the result is the
  same.

  debug:
  root@network2:/var/log/neutron# ip netns
  qdhcp-30c3e9f5-afde-4723-b396-7aa6f754be52
  qdhcp-afcf5acb-2e26-4353-9cbe-0ab81a2354be
  qrouter-7ec6eb64-3ff8-4242-a2dd-a2076a1cdcf9
  qrouter-0f9e22b4-30f4-4f7d-8cd1-595f116a0e2e


  
  root@network2:/var/log/neutron# ip netns exec 
qrouter-7ec6eb64-3ff8-4242-a2dd-a2076a1cdcf9 ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  qg-b33c0f49-01 Link encap:Ethernet  HWaddr fa:16:3e:0a:c1:4d  
inet addr:10.130.42.5  Bcast:10.130.42.63  Mask:255.255.255.192
inet6 addr: fe80::f816:3eff:fe0a:c14d/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:1798 errors:0 dropped:0 overruns:0 frame:0
TX packets:487 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:155100 (155.1 KB)  TX bytes:53918 (53.9 KB)

  qr-01274858-78 Link encap:Ethernet  HWaddr fa:16:3e:72:7b:38  
inet addr:20.20.1.1  Bcast:20.20.1.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe72:7b38/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:120 errors:0 dropped:0 overruns:0 frame:0
TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:11004 (11.0 KB)  TX bytes:20152 (20.1 KB)

  
  root@network2:/var/log/neutron# 

  root@network2:/var/log/neutron# ip netns exec 
qrouter-7ec6eb64-3ff8-4242-a2dd-a2076a1cdcf9 tcpdump -i qr-01274858-78
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on qr-01274858-78, link-type EN10MB (Ethernet), capture size 65535 
bytes
  ^C09:13:34.748825 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1665, length 64
  09:13:35.748875 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1666, length 64
  09:13:36.748796 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1667, length 64
  09:13:37.748839 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1668, length 64
  09:13:38.748762 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1669, length 64
  09:13:39.748789 IP 10.10.1.4 > 20.20.1.4: ICMP echo request, id 19723, seq 
1670, length 64 >> the traffic could go to the private network gw in router.
  
  
  root@network2:/var/log/neutron# ip netns exec 
qdhcp-afcf5acb-2e26-4353-9cbe-0ab81a2354be ifconfig
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  

[Yahoo-eng-team] [Bug 1472309] Re: py34 tox env does not support regex

2015-09-21 Thread Assaf Muller
Fixed here:
https://review.openstack.org/#/c/216516/

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472309

Title:
  py34 tox env does not support regex

Status in neutron:
  Fix Released

Bug description:
  If I try to execute a single test by calling:

  tox -e py34 , it still executes the whole test suite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472309/+subscriptions



[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-09-21 Thread Assaf Muller
Removing Neutron from affected projects, this is not worth the effort.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in Glance:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  Triaged
Status in Manila:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in Sahara:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
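  For illustration, a short snippet showing why the order matters
  (hypothetical values; these test suites follow the expected-first
  convention, so swapped arguments yield a misleading failure message):

```python
import unittest

class _Demo(unittest.TestCase):
    def runTest(self):  # minimal concrete TestCase so assert methods work
        pass

tc = _Demo()
msg = ""
try:
    # Swapped order: 5 is really the *observed* value here, but because
    # it is passed first, the failure message presents it as expected.
    tc.assertEqual(5, 4)
except AssertionError as exc:
    msg = str(exc)
print(msg)  # -> 5 != 4 -- reads as though 5 were the expected value
```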

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1259292/+subscriptions



[Yahoo-eng-team] [Bug 1494866] Re: L3 HA router ports 'host' field do not point to the active router replica

2015-09-15 Thread Assaf Muller
Since
https://review.openstack.org/#/q/I8475548947526d8ea736ed7aa754fd0ca475cae2,n,z
we actually do update the port bindings 'host' field when HA router
states change. That patch was backported to Kilo, I am assuming the
reporter observed this behavior on an older Kilo version.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494866

Title:
  L3 HA router ports 'host' field do not point to the active router
  replica

Status in neutron:
  Fix Released

Bug description:
  We are using kilo. In our setup, we have 3 neutron controllers, l3
  agents are running on all the 3 neutron controllers. We make l3_ha =
  true in all the 3 neutron.conf.

  We notice that when we attach a network to a router, the gateway
  namespace is allocated to a controller node which doesn't match the
  record in neutron db. Following is one example.

  Create a router, a network, attach the network to the router.

  1. neutron tells that the gateway ip 1.1.1.1 is at controller-1
  [stack@c5220-01 ~]$ neutron port-show 3306c360-5a3d-4a08-aa92-017498758963
  
+---++
  | Field | Value   
   |
  
+---++
  | admin_state_up| True
   |
  | allowed_address_pairs | 
   |
  | binding:host_id   | overcloud-controller-1.localdomain  
   |
  | binding:profile   | {}  
   |
  | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}  
   |
  | binding:vif_type  | ovs 
   |
  | binding:vnic_type | normal  
   |
  | device_id | 934f0b90-2d98-4d54-b9ca-5222aac2199d
   |
  | device_owner  | network:router_interface
   |
  | extra_dhcp_opts   | 
   |
  | fixed_ips | {"subnet_id": 
"463c2f0c-5d56-4abb-8b30-8450d8306f46", "ip_address": "1.1.1.1"} |
  | id| 3306c360-5a3d-4a08-aa92-017498758963
   |
  | mac_address   | fa:16:3e:72:34:4c   
   |
  | name  | 
   |
  | network_id| 98f125b6-6d4d-4417-a0b3-e8d9ff530d6f
   |
  | security_groups   | 
   |
  | status| ACTIVE  
   |
  | tenant_id | 4ef11838925940eb9d177ae9345711ee
   |
  
+---++

  
  2. However, the gateway ip is at controller-2
  [heat-admin@overcloud-controller-2 ~]$ sudo ip netns exec 
qrouter-934f0b90-2d98-4d54-b9ca-5222aac2199d ifconfig
  ha-6d47f13a-b7: flags=4163  mtu 1500
  inet 169.254.192.6  netmask 255.255.192.0  broadcast 169.254.255.255
  inet6 fe80::f816:3eff:fe43:9b80  prefixlen 64  scopeid 0x20
  ether fa:16:3e:43:9b:80  txqueuelen 1000  (Ethernet)
  RX packets 20  bytes 1638 (1.5 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 309  bytes 16926 (16.5 KiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  lo: flags=73  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  inet6 ::1  prefixlen 128  scopeid 0x10
  loop  txqueuelen 0  (Local Loopback)
  RX packets 0  bytes 0 (0.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 0  bytes 0 (0.0 B)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  qg-22431202-eb: flags=4163  mtu 1500
  inet 10.8.87.25  netmask 255.255.255.0  broadcast 0.0.0.0
  inet6 fe80::f816:3eff:febd:56ad  prefixlen 64  scopeid 0x20
  ether fa:16:3e:bd:56:ad  txqueuelen 1000  (Ethernet)
  RX packets 36  

[Yahoo-eng-team] [Bug 1493396] [NEW] Enable rootwrap daemon logging during functional tests

2015-09-08 Thread Assaf Muller
Public bug reported:

When triaging bugs found during functional tests (Either legit bugs with
Neutron, or issues related to the testing infrastructure), it is useful
to view the Oslo rootwrap daemon logs. It has an option to log to
syslog, but it is turned off by default. It should be turned on during
functional tests to provide additional useful information.
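Concretely, this amounts to flipping the syslog switches in the daemon's
rootwrap.conf; a sketch of the relevant fragment (the use_syslog,
syslog_log_facility and syslog_log_level options exist in oslo.rootwrap,
but the values chosen here are illustrative):

```ini
# rootwrap.conf fragment -- enable syslog logging for the rootwrap daemon.
[DEFAULT]
# Log rootwrap activity to syslog so functional-test failures can be
# correlated with the commands rootwrap actually ran.
use_syslog=True
syslog_log_facility=syslog
# DEBUG captures every filtered command, not just errors.
syslog_log_level=DEBUG
```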

** Affects: neutron
 Importance: High
 Assignee: Cedric Brandily (cbrandily)
 Status: New


** Tags: functional-tests

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493396

Title:
  Enable rootwrap daemon logging during functional tests

Status in neutron:
  New

Bug description:
  When triaging bugs found during functional tests (Either legit bugs
  with Neutron, or issues related to the testing infrastructure), it is
  useful to view the Oslo rootwrap daemon logs. It has an option to log
  to syslog, but it is turned off by default. It should be turned on
  during functional tests to provide additional useful information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493396/+subscriptions



[Yahoo-eng-team] [Bug 1491965] [NEW] test_list_show_tenant_networks triggering error "itemNotFound": {"message": "Network not found", "code": 404}

2015-09-03 Thread Assaf Muller
*** This bug is a duplicate of bug 1468523 ***
https://bugs.launchpad.net/bugs/1468523

Public bug reported:

Logstash query:
message:"Details: {u'message': u'Network not found', u'code': 404}" AND 
tags:"console" AND build_name:"gate-tempest-dsvm-neutron-full"

9 unique hits starting from today.

** Affects: neutron
 Importance: Critical
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New


** Tags: gate-failure

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491965

Title:
  test_list_show_tenant_networks triggering error "itemNotFound":
  {"message": "Network not found", "code": 404}

Status in neutron:
  New

Bug description:
  Logstash query:
  message:"Details: {u'message': u'Network not found', u'code': 404}" AND 
tags:"console" AND build_name:"gate-tempest-dsvm-neutron-full"

  9 unique hits starting from today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491965/+subscriptions



[Yahoo-eng-team] [Bug 1474204] Re: vlan id are invalid

2015-09-02 Thread Assaf Muller
Please try ask.openstack.org, thank you!

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474204

Title:
  vlan id are invalid

Status in neutron:
  Invalid

Bug description:
  I configured CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:2:4096, but the
  error "Error: vlan id are invalid. on node controller_251" is
  reported, and I don't know what vlan_min and vlan_max are.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474204/+subscriptions



[Yahoo-eng-team] [Bug 1491581] [NEW] Some functional tests use 'sudo' alone without rootwrap

2015-09-02 Thread Assaf Muller
Public bug reported:

While looking at functional tests console outputs Ihar noticed that some
tests are executing 'sudo' without going through rootwrap.

Example:
http://logs.openstack.org/03/216603/4/check/gate-neutron-dsvm-functional/9ed19a4/console.html
 (CTRL-F for "['sudo'").
(Command: ['sudo', 'kill', '-15', '22920']).

Patch https://review.openstack.org/#/c/114717/ added a gate hook for the
Neutron functional job, and it's possible that since then we've been
allowing 'naked sudo' at the functional gate, because stripping the
'stack' user from 'sudo' was previously being done by the gate_hook in
the devstack_gate project.

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491581

Title:
  Some functional tests use 'sudo' alone without rootwrap

Status in neutron:
  New

Bug description:
  While looking at functional tests console outputs Ihar noticed that
  some tests are executing 'sudo' without going through rootwrap.

  Example:
  
http://logs.openstack.org/03/216603/4/check/gate-neutron-dsvm-functional/9ed19a4/console.html
 (CTRL-F for "['sudo'").
  (Command: ['sudo', 'kill', '-15', '22920']).

  Patch https://review.openstack.org/#/c/114717/ added a gate hook for
  the Neutron functional job, and it's possible that since then we've
  been allowing 'naked sudo' at the functional gate, because stripping
  the 'stack' user from 'sudo' was previously being done by the
  gate_hook in the devstack_gate project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491581/+subscriptions



[Yahoo-eng-team] [Bug 1490043] [NEW] test_keepalived_respawns fails when trying to kill -15

2015-08-28 Thread Assaf Muller
Public bug reported:

test_keepalived_respawns spawns keepalived, asserts that it's up, kills
it,  then waits for the process monitor to respawn it. Sometimes, the
test seems to fail when sending signal 15 to the process.
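Such a respawn test typically polls for the new process rather than
asserting immediately; a minimal polling helper in that spirit (a sketch
only, not neutron's actual common_utils.wait_until_true, whose signature
may differ):

```python
import time

def wait_until_true(predicate, timeout=10, sleep=0.25):
    """Poll predicate until it returns True or the deadline passes.

    Sketch of the pattern used by process-monitor tests: after sending
    kill -15, keep checking whether the monitor has respawned the
    process instead of asserting on the first look.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(sleep)
    return False

# Toy usage: the "process" reappears on the third poll.
polls = {"n": 0}
def respawned():
    polls["n"] += 1
    return polls["n"] >= 3

print(wait_until_true(respawned, timeout=5))  # -> True
```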

Logstash:
message:"pm.disable(sig='15')" AND tags:"console"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicG0uZGlzYWJsZShzaWc9JzE1JyknXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA4MDA0MTU5OTZ9

3 hits in the last 7 days.

Example console log:
http://logs.openstack.org/91/215491/2/gate/gate-neutron-dsvm-functional/a5ea84a/console.html

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490043

Title:
  test_keepalived_respawns fails when trying to kill -15

Status in neutron:
  Confirmed

Bug description:
  test_keepalived_respawns spawns keepalived, asserts that it's up,
  kills it,  then waits for the process monitor to respawn it.
  Sometimes, the test seems to fail when sending signal 15 to the
  process.

  Logstash:
  message:"pm.disable(sig='15')" AND tags:"console"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicG0uZGlzYWJsZShzaWc9JzE1JyknXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA4MDA0MTU5OTZ9

  3 hits in the last 7 days.

  Example console log:
  
http://logs.openstack.org/91/215491/2/gate/gate-neutron-dsvm-functional/a5ea84a/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490043/+subscriptions



[Yahoo-eng-team] [Bug 1490037] [NEW] test_port_vlan_tags failing wait_until_ports_state

2015-08-28 Thread Assaf Muller
Public bug reported:

Logstash query:

message:", in test_port_vlan_tags" AND tags:"console" AND
(build_name:"check-neutron-dsvm-functional" OR
build_name:"gate-neutron-dsvm-functional")

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiLCBpbiB0ZXN0X3BvcnRfdmxhbl90YWdzXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIChidWlsZF9uYW1lOlwiY2hlY2stbmV1dHJvbi1kc3ZtLWZ1bmN0aW9uYWxcIiBPUiBidWlsZF9uYW1lOlwiZ2F0ZS1uZXV0cm9uLWRzdm0tZnVuY3Rpb25hbFwiKSAiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA3OTgyODcwMTZ9

6 failures in the last 7 days.

Sample of console:
http://logs.openstack.org/35/218035/1/check/gate-neutron-dsvm-functional/1988f1d/console.html

Traceback:
http://paste.openstack.org/show/431391/

The console log is also filled with 'RowNotFound: Cannot find Bridge
with name=test-br90906445' errors.

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Tags added: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490037

Title:
  test_port_vlan_tags failing wait_until_ports_state

Status in neutron:
  Confirmed

Bug description:
  Logstash query:

  message:", in test_port_vlan_tags" AND tags:"console" AND
  (build_name:"check-neutron-dsvm-functional" OR
  build_name:"gate-neutron-dsvm-functional")

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiLCBpbiB0ZXN0X3BvcnRfdmxhbl90YWdzXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIChidWlsZF9uYW1lOlwiY2hlY2stbmV1dHJvbi1kc3ZtLWZ1bmN0aW9uYWxcIiBPUiBidWlsZF9uYW1lOlwiZ2F0ZS1uZXV0cm9uLWRzdm0tZnVuY3Rpb25hbFwiKSAiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA3OTgyODcwMTZ9

  6 failures in the last 7 days.

  Sample of console:
  
http://logs.openstack.org/35/218035/1/check/gate-neutron-dsvm-functional/1988f1d/console.html

  Traceback:
  http://paste.openstack.org/show/431391/

  The console log is also filled with 'RowNotFound: Cannot find Bridge
  with name=test-br90906445' errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490037/+subscriptions



[Yahoo-eng-team] [Bug 1489650] [NEW] Prefix delegation testing issues

2015-08-27 Thread Assaf Muller
Public bug reported:

The pd, dibbler and agent side changes lack functional tests. There is
no test that validates that the entire feature works (Full stack or
Tempest).

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Prefix delegation agent-side functional testing non-existent
+ Prefix delegation testing issues

** Description changed:

- The pd, dibbler and agent side changes lack functional tests.
+ The pd, dibbler and agent side changes lack functional tests. There is
+ no test that validates that the entire feature works (Full stack or
+ Tempest).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489650

Title:
  Prefix delegation testing issues

Status in neutron:
  New

Bug description:
  The pd, dibbler and agent side changes lack functional tests. There is
  no test that validates that the entire feature works (Full stack or
  Tempest).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489650/+subscriptions



[Yahoo-eng-team] [Bug 1488619] Re: Neutron API reports both routers in active state for L3 HA

2015-08-25 Thread Assaf Muller
After an IRC conversation, this turned out to be an SELinux issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488619

Title:
  Neutron API reports both routers in active state for L3 HA

Status in neutron:
  Invalid

Bug description:
  I am running Kilo with L3 HA. Here is what I see:

  # neutron --insecure --os-project-domain-name default --os-user-domain-name 
default l3-agent-list-hosting-router test-router
  
+--+-++---+--+
  | id   | host| admin_state_up | 
alive | ha_state |
  
+--+-++---+--+
  | 7dc44513-256a-4d51-b77d-8da6125928ca | one | True   | :-)   | 
active   |
  | c91b437a-e300-4b08-8118-b226ae68cc04 | two | True   | :-)   | 
active   |
  
+--+-++---+--+

  My relevant neutron config on both nodes is
  l3_ha = True
  max_l3_agents_per_router = 2
  min_l3_agents_per_router = 2

  We checked the following:
  1. IP monitor is running on both nodes
  2. Keepalived can talk between the nodes, we see packets on the HA interface

  What are we missing?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488619/+subscriptions



[Yahoo-eng-team] [Bug 1477190] Re: dhcp._release_lease shouldn't stacktrace when a device isn't found since it could be legitimately gone

2015-08-23 Thread Assaf Muller
No action required from Tempest folks, this is purely a Neutron bug.

** Changed in: tempest
   Status: New => Invalid

** Changed in: tempest
 Assignee: Sridhar Gaddam (sridhargaddam) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477190

Title:
  dhcp._release_lease shouldn't stacktrace when a device isn't found
  since it could be legitimately gone

Status in neutron:
  In Progress
Status in tempest:
  Invalid

Bug description:
  http://logs.openstack.org/80/200380/3/gate/gate-tempest-dsvm-neutron-
  full/2b49b0d/logs/screen-q-dhcp.txt.gz?level=TRACE#_2015-07-22_09_30_43_303

  This kind of stuff is really annoying in debugging failures in the
  gate since I think it's a known failure during cleanup of ports when
  deleting an instance, so we shouldn't stacktrace this and log as an
  error, just handle it in the code and move on.

  2015-07-22 09:30:43.303 ERROR neutron.agent.dhcp.agent 
[req-c7faddea-956c-4544-8ec0-1f6e5029cae1 tempest-NetworksTestDHCPv6-1951433064 
tempest-NetworksTestDHCPv6-996832218] Unable to reload_allocations dhcp for 
301ef35d-2482-4a13-a088-f456958e878a.
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 432, in 
reload_allocations
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
self._release_unused_leases()
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 671, in 
_release_unused_leases
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
self._release_lease(mac, ip, client_id)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 415, in 
_release_lease
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
ip_wrapper.netns.execute(cmd, run_as_root=True)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 701, in execute
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
extra_ok_codes=extra_ok_codes, **kwargs)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent raise 
RuntimeError(m)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent RuntimeError: 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Command: ['ip', 
'netns', 'exec', u'qdhcp-301ef35d-2482-4a13-a088-f456958e878a', 'dhcp_release', 
'tapcb2fa9dc-f6', '2003::e6ef:eb57:47b4:91cf', 'fa:16:3e:18:e6:78']
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Exit code: 1
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Stdin: 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Stdout: 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Stderr: cannot 
setup interface: No such device
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RkZXJyOiBjYW5ub3Qgc2V0dXAgaW50ZXJmYWNlOiBObyBzdWNoIGRldmljZVwiIEFORCB0YWdzOlwic2NyZWVuLXEtZGhjcC50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzU3NjE3ODQzNn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487548] [NEW] fullstack infrastructure tears down processes via kill -9

2015-08-21 Thread Assaf Muller
Public bug reported:

I can't imagine this has good implications. Distros typically kill
neutron processes via kill -15, so this should definitely be doable here
as well.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack

** Tags added: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487548

Title:
  fullstack infrastructure tears down processes via kill -9

Status in neutron:
  New

Bug description:
  I can't imagine this has good implications. Distros typically kill
  neutron processes via kill -15, so this should definitely be doable
  here as well.
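The SIGTERM-first teardown described above can be sketched as follows. This is a hypothetical illustration, not the fullstack code; the helper name `stop_process` and the grace period are made up for the example:

```python
import signal
import subprocess
import time


def stop_process(proc, grace_period=5.0):
    """Ask a child process to exit with SIGTERM (kill -15); fall back to
    SIGKILL (kill -9) only if it has not exited within grace_period."""
    proc.send_signal(signal.SIGTERM)
    deadline = time.time() + grace_period
    while time.time() < deadline:
        if proc.poll() is not None:  # child has exited on its own
            return proc.returncode
        time.sleep(0.1)
    proc.kill()  # last resort: SIGKILL, no chance to clean up
    proc.wait()
    return proc.returncode


# Example: terminate a long-running child gracefully.
proc = subprocess.Popen(["sleep", "60"])
rc = stop_process(proc)
```

SIGTERM gives the agent a chance to run its shutdown hooks (closing sockets, releasing namespaces), which SIGKILL never does.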

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486199] [NEW] Fullstack tests sometimes crash OVS, causing subsequent tests to fail

2015-08-18 Thread Assaf Muller
Public bug reported:

I've observed, both in the gate and locally, on Ubuntu 14.04 with
OVS 2.0 and on Fedora 22 with OVS 2.3.1, that sometimes a fullstack test
can crash OVS. Subsequent tests in the same run will obviously fail.

To get around this issue locally I restart the OVS service.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486199

Title:
  Fullstack tests sometimes crash OVS, causing subsequent tests to fail

Status in neutron:
  New

Bug description:
  I've observed, both in the gate and locally, on Ubuntu 14.04 with
  OVS 2.0 and on Fedora 22 with OVS 2.3.1, that sometimes a fullstack
  test can crash OVS. Subsequent tests in the same run will obviously fail.

  To get around this issue locally I restart the OVS service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485606] Re: Unable to enable dhcp for networkid

2015-08-17 Thread Assaf Muller
This looks like more of a RabbitMQ issue than a Neutron issue. It is most
likely deployment-specific: bad configuration, etc.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485606

Title:
  Unable to enable dhcp for networkid

Status in neutron:
  Invalid

Bug description:
   Installed the Mirantis OpenStack 6.0 release on Ubuntu. A Neutron VLAN
  network is created, but the VMs created on the private network are not
  getting a DHCP IP address.

  Topology: 1 Fuel node, 1 controller and 2 compute nodes.

  All eth0s are connected to L2 switch for PXE boot, 
  eth1s are connected to uplink switch for connectivity, 
  eth2 are connected to L2 switch for Management/Storage network and 
  eth3s are connected to L2 switch for Private network

  Verified the connections between Fuel, controller and compute nodes,
  everything is proper. Seeing "Unable to enable dhcp for network id"
  in /var/log/neutron/dhcp-agent.log.

  2015-07-20 10:05:23.255 10139 ERROR neutron.agent.dhcp_agent 
[req-fe1305ed-17da-4863-9b04-ca418495256d None] Unable to enable dhcp for 
0aff4aa3-e393-499d-b3bc-5c90dd3655b1.
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 129, in 
call_driver
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 204, in 
enable
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
interface_name = self.device_manager.setup(self.network)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 929, in 
setup
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent port = 
self.setup_dhcp_port(network)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 910, in 
setup_dhcp_port
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent dhcp_port = 
self.plugin.create_dhcp_port({'port': port_dict})
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 439, in 
create_dhcp_port
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
host=self.host))
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
method(*args, **kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 161, in call
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent context, 
msg, rpc_method='call', **kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 187, in 
__call_rpc_method
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
func(context, msg['method'], **msg['args'])
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 389, in 
call
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
self.prepare().call(ctxt, method, **kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in 
call
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
retry=self.retry)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
timeout=timeout, retry=retry)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
434, in send


  
  This is the server log output, showing an AMQP connection issue:

  2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit 
(class_id, method_id), ConnectionError)
  2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit 
ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection 
closure with reason 'shutdown'
  2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit
  2015-07-20 10:01:30.215 9808 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 5.0 seconds ...
  2015-07-20 10:01:30.216 9808 ERROR 

[Yahoo-eng-team] [Bug 1484148] [NEW] stable/kilo and neutronclient gates broken following VPNaaS infra changes

2015-08-12 Thread Assaf Muller
Public bug reported:

https://etherpad.openstack.org/p/vpn-test-changes

** Affects: neutron
 Importance: Critical
 Assignee: Paul Michali (pcm)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484148

Title:
  stable/kilo and neutronclient gates broken following VPNaaS infra
  changes

Status in neutron:
  Confirmed

Bug description:
  https://etherpad.openstack.org/p/vpn-test-changes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1484148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483091] Re: Same name SecurityGroup could not work

2015-08-12 Thread Assaf Muller
Moving to Horizon. Neutron SGs can be retrieved with ID or name like any
other resource, Horizon needs to boot VMs with SG ID, not name.

** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1483091

Title:
  Same name SecurityGroup could not work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In icehouse, if two tenants create a security group with the same name
  respectively, then they could not create a vm in the dashboard using
  this security group, with the error says Multiple security_group
  matches found for name 'test', use an ID to be more specific. (HTTP
  409) (Request-ID: req-ece4dd00-d1a0-4c38-9587-394fa29610da).
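  The name-to-ID resolution Horizon could do can be sketched like this. It is a hedged, hypothetical illustration over plain dicts (the helper name `resolve_sg_id` is invented; it is not actual Horizon or neutronclient code):

```python
def resolve_sg_id(security_groups, name):
    """Resolve a security-group name to its ID, failing loudly when the
    name is ambiguous instead of booting a VM with a bare name.

    security_groups: list of dicts with at least 'id' and 'name' keys,
    as a Neutron security-group listing would return.
    """
    matches = [sg["id"] for sg in security_groups if sg["name"] == name]
    if not matches:
        raise LookupError("no security group named %r" % name)
    if len(matches) > 1:
        raise LookupError(
            "multiple security groups named %r; use an ID to be more "
            "specific" % name)
    return matches[0]
```

  Booting with the resolved ID sidesteps the 409 entirely, because IDs are unique while names are only unique per tenant at best.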

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1483091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484102] Re: add default value

2015-08-12 Thread Assaf Muller
And what would the default be? There is no one plugin that is more
important than another.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484102

Title:
  add default value

Status in neutron:
  Opinion

Bug description:
  
  neutron/common/config.py

  cfg.StrOpt('core_plugin',
 help=_("The core plugin Neutron will use")),

  Why not add a default value, to make configuring the project easier?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1484102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479105] [NEW] Possible local variable 'port_info' referenced before assignment in OVS agent

2015-07-28 Thread Assaf Muller
Public bug reported:

I got an UnboundLocalError: local variable 'port_info' referenced before
assignment exception while running local, yet unmerged fullstack tests.
Looking at the code you can get that exception in one of two scenarios:

1) _agent_has_updates is False during the first iteration, I'm not sure if this 
is possible.
2) Or, in my case while executing the first '*port_info* = 
self.scan_ports(reg_ports, updated_ports_copy)', self.scan_ports raised an 
exception which is caught in the main RPC loop and then the very end of the RPC 
loop is reached, at which point 'port_stats = self.get_port_stats(*port_info*, 
ancillary_port_info)' raises an UnboundLocalError exception because port_info 
was never defined.

This is a regression that results from
https://review.openstack.org/#/c/199164/.

** Affects: neutron
 Importance: Low
 Status: New

** Description changed:

  I got a UnboundLocalError: local variable 'port_info' referenced before
  assignment exception while running local, yet unmerged fullstack tests.
  Looking at the code you can get that exception in one of two scenarios:
  
  1) _agent_has_updates is False during the first iteration, I'm not sure if 
this is possible.
  2) Or, in my case while executing the first '*port_info* = 
self.scan_ports(reg_ports, updated_ports_copy)', self.scan_ports raised an 
exception which is caught in the main RPC loop and then the very end of the RPC 
loop is reached, at which point 'port_stats = self.get_port_stats(*port_info*, 
ancillary_port_info)' raises an UnboundLocalError exception because port_info 
was never defined.
+ 
+ This is a regression that results from
+ https://review.openstack.org/#/c/199164/.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479105

Title:
  Possible local variable 'port_info' referenced before assignment in
  OVS agent

Status in neutron:
  New

Bug description:
  I got an UnboundLocalError: local variable 'port_info' referenced
  before assignment exception while running local, yet unmerged
  fullstack tests. Looking at the code you can get that exception in one
  of two scenarios:

  1) _agent_has_updates is False during the first iteration, I'm not sure if 
this is possible.
  2) Or, in my case while executing the first '*port_info* = 
self.scan_ports(reg_ports, updated_ports_copy)', self.scan_ports raised an 
exception which is caught in the main RPC loop and then the very end of the RPC 
loop is reached, at which point 'port_stats = self.get_port_stats(*port_info*, 
ancillary_port_info)' raises an UnboundLocalError exception because port_info 
was never defined.

  This is a regression that results from
  https://review.openstack.org/#/c/199164/.
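  The failure mode in scenario 2 can be distilled to a few lines. This is a hypothetical reduction of the pattern; the function names only mirror the report and are not the actual agent code:

```python
def rpc_loop_iteration(scan_ports, get_port_stats):
    """Buggy shape: if scan_ports raises, port_info is never bound,
    so the final call raises UnboundLocalError."""
    try:
        port_info = scan_ports()
    except Exception:
        pass  # swallowed, as in the agent's main RPC loop
    return get_port_stats(port_info)


def rpc_loop_iteration_fixed(scan_ports, get_port_stats):
    """One possible fix: bind a safe default before the risky call."""
    port_info = {"current": set(), "added": set(), "removed": set()}
    try:
        port_info = scan_ports()
    except Exception:
        pass
    return get_port_stats(port_info)
```

  Binding a default before the try block (or skipping the stats call when the scan failed) keeps the loop's error handling from masking one exception with another.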

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

