[Yahoo-eng-team] [Bug 1612697] [NEW] ovs agent not ready ports should not be listed as current

2016-08-12 Thread Rossella Sblendido
Public bug reported:

Ports that are not ready yet shouldn't be listed as current ports; see
the log snippet below:

2016-08-12 15:00:44.593 12779 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Port 
ca63b83a-763c-402c-84f2-7f28ccbf87e3 not ready yet on the bridge _process_port 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1303
2016-08-12 15:00:44.596 12779 DEBUG neutron.agent.linux.async_process [-] 
Output received from [ovsdb-client monitor Interface name,ofport,external_ids 
--format=json]: 
{"data":[["9438e335-d7ce-47de-9888-2a96085b3bdf","old",null,["set",[]],null],["","new","trunk",2,["map",[["attached-mac","fa:16:3e:86:71:81"],["iface-id","ca63b83a-763c-402c-84f2-7f28ccbf87e3"],["iface-status","active"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout neutron/agent/linux/async_process.py:237
2016-08-12 15:00:44.629 12779 WARNING neutron.agent.linux.interface 
[req-60d7c1c6-a0b3-4581-ac34-a52d7f36b53c - - - - -] No MTU configured for port 
ca63b83a-763c-402c-84f2-7f28ccbf87e3
2016-08-12 15:00:45.619 12779 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Starting to 
process devices in:{'removed': set([]), 'added': 
set([u'ca63b83a-763c-402c-84f2-7f28ccbf87e3']), 'current': 
set([u'ca63b83a-763c-402c-84f2-7f28ccbf87e3'])} rpc_loop 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2048
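
The log above shows the port ending up in both 'added' and 'current'
even though it was reported as not ready. A minimal sketch of the
expected bookkeeping (hypothetical names; the real logic lives in the
agent's port-processing code):

    # Sketch only: 'detected_ports' and 'ready_ports' are illustrative.
    def partition_port_info(detected_ports, ready_ports):
        """Keep not-ready ports out of 'current' until they are set up."""
        current = detected_ports & ready_ports
        return {'current': current,
                'added': set(current),
                'not_ready': detected_ports - ready_ports}

    info = partition_port_info(
        {'ca63b83a-763c-402c-84f2-7f28ccbf87e3'},  # seen on the bridge
        set())                                     # nothing ready yet
    # info['current'] is empty; the port is retried on the next rpc_loop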

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612697

Title:
  ovs agent not ready ports should not be listed as current

Status in neutron:
  New

Bug description:
  Ports that are not ready yet shouldn't be listed as current ports;
  see the log snippet below:

  2016-08-12 15:00:44.593 12779 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Port 
ca63b83a-763c-402c-84f2-7f28ccbf87e3 not ready yet on the bridge _process_port 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1303
  2016-08-12 15:00:44.596 12779 DEBUG neutron.agent.linux.async_process [-] 
Output received from [ovsdb-client monitor Interface name,ofport,external_ids 
--format=json]: 
{"data":[["9438e335-d7ce-47de-9888-2a96085b3bdf","old",null,["set",[]],null],["","new","trunk",2,["map",[["attached-mac","fa:16:3e:86:71:81"],["iface-id","ca63b83a-763c-402c-84f2-7f28ccbf87e3"],["iface-status","active"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout neutron/agent/linux/async_process.py:237
  2016-08-12 15:00:44.629 12779 WARNING neutron.agent.linux.interface 
[req-60d7c1c6-a0b3-4581-ac34-a52d7f36b53c - - - - -] No MTU configured for port 
ca63b83a-763c-402c-84f2-7f28ccbf87e3
  2016-08-12 15:00:45.619 12779 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Starting to 
process devices in:{'removed': set([]), 'added': 
set([u'ca63b83a-763c-402c-84f2-7f28ccbf87e3']), 'current': 
set([u'ca63b83a-763c-402c-84f2-7f28ccbf87e3'])} rpc_loop 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:2048

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600774] [NEW] lookup DNS and use it before creating a new one

2016-07-11 Thread Rossella Sblendido
Public bug reported:

In some deployments DNS records are already pre-populated for every IP.
To make Neutron use these DNS names, the dns_name attribute needs to be
filled in. It would be more straightforward for Neutron to provide a way
to fill in this dns_name automatically, by doing a reverse DNS lookup.
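
As a hedged sketch (not existing Neutron code), the reverse lookup
itself is straightforward with the standard library; the open question
is where in the port-creation path to call it:

    import socket

    def reverse_lookup(ip_address):
        """Return the PTR name for ip_address, or None if there is none."""
        try:
            hostname, _aliases, _addrs = socket.gethostbyaddr(ip_address)
            return hostname
        except socket.herror:
            return None

    # e.g. dns_name = reverse_lookup('192.0.2.10')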

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600774

Title:
  lookup DNS and use it before creating a new one

Status in neutron:
  Confirmed

Bug description:
  In some deployments DNS records are already pre-populated for every
  IP. To make Neutron use these DNS names, the dns_name attribute needs
  to be filled in. It would be more straightforward for Neutron to
  provide a way to fill in this dns_name automatically, by doing a
  reverse DNS lookup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586931] Re: TestServerBasicOps: Test fails when deleting server and floating ip almost at the same time

2016-07-02 Thread Rossella Sblendido
I hit this. I can reproduce it almost every time on my env using
linuxbridge+vxlan. The nova trace is:

2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
[req-bf41dac1-8fc0-4fd6-9a35-d754cea79057 9a0be6e4b8bf4cadb4a43401696fec19 
48935f9a5ed84703973c70dd70859b7f - - -] Unexpected exception in API method
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/floating_ips.py", 
line 173, in delete
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions context, 
instance, floating_ip)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1527, in 
disassociate_and_release_floating_ip
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
raise_if_associated=False)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1536, in 
_release_floating_ip
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
client.delete_floatingip(fip['id'])
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in 
with_params
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions ret = 
self.function(instance, *args, **kwargs)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 751, in 
delete_floatingip
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions return 
self.delete(self.floatingip_path % (floatingip))
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 289, in 
delete
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
headers=headers, params=params)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in 
retry_request
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
headers=headers, params=params)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 211, in 
do_request
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
self._handle_fault_response(status_code, replybody)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 185, in 
_handle_fault_response
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
exception_handler_v20(status_code, des_error_body)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 70, in 
exception_handler_v20
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
status_code=status_code)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions 
PortNotFoundClient: Port f4f11381-dc3b-41b2-94ca-4a9f494c0372 could not be 
found.
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions

Two operations occur: the VM deletion (which triggers the Neutron port
deletion) and the floating IP deletion. Nova sends a request to Neutron
to delete the floating IP. When a floating IP is deleted, Neutron gets
the port associated with the floating IP in order to send a network
change event notification to Nova. That port lookup fails with
PortNotFound because in the meanwhile the Neutron port that the VM was
using has been deleted. The floating IP request then fails because
Neutron sends the PortNotFound error back to Nova.
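
A hedged sketch of one possible fix on the Neutron side: treat the
missing port as non-fatal when building the Nova notification during
floating IP deletion (illustrative, not the actual plugin code):

    # PortNotFound lived in neutron.common.exceptions in this era of
    # Neutron; 'plugin' and 'notifier' are illustrative stand-ins.
    from neutron.common.exceptions import PortNotFound

    def notify_nova_on_fip_delete(plugin, context, port_id, notifier):
        try:
            port = plugin.get_port(context, port_id)
        except PortNotFound:
            # The VM and its port were deleted concurrently; there is
            # nothing to notify about, and the floating IP deletion
            # should still succeed instead of returning a 404 to Nova.
            return
        notifier.send_network_change(port)   # hypothetical notifier call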

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586931

Title:
  TestServerBasicOps: Test fails when deleting server and floating ip
  almost at the same time

Status in neutron:
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in tempest:
  In Progress

Bug description:
  In
  tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops,
  after the last step,
  self.servers_client.delete_server(self.instance['id']), the test
  doesn't wait for the server to be deleted and then deletes the
  floating IP immediately in the cleanup; this will cause a failure:

  Here is the 

[Yahoo-eng-team] [Bug 1596813] Re: stable/mitaka branch creation request for networking-midonet

2016-06-28 Thread Rossella Sblendido
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596813

Title:
  stable/mitaka branch creation request for networking-midonet

Status in networking-midonet:
  New

Bug description:
  Please cut stable/mitaka branch for networking-midonet
  on commit 797a2a892e08f3e8162f5f168a7c5e3420f961bc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1596813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595762] Re: HTTPS connection failing for Docker >= 1.10

2016-06-28 Thread Rossella Sblendido
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595762

Title:
  HTTPS connection failing for Docker >= 1.10

Status in neutron:
  Invalid

Bug description:
  We experience problems with outgoing HTTPS connections from Docker
  containers when running in OpenStack VMs.

  We assume this could be a bug in OpenStack because:
  - Ubuntu 14, 16 and CoreOS show the same problems
  - While there are no problems with Docker 1.6.2 and 1.9.1, it fails with 
Docker 1.10 and 1.11
  - The same containers work outside OpenStack
  - We found similar problem descriptions on the web that occurred on
other OpenStack providers

  The issue can easily be reproduced with:

  1.) Installing a docker version >= 1.10
  2.) docker run -it ubuntu apt-get update

  Expected output: Ubuntu updates its package list
  Actual output: Download does not start and runs into a timeout

  The same problem seems to occur with wget and curl and our Java
  application.

  Please note that plain HTTP works as expected, as does issuing the
  HTTPS requests from the Docker host machine.

  Disabling network virtualization with Docker flag --net="host" fixes
  the problems with wget, curl and apt-get, unfortunately not with the
  Java app we're trying to deploy in OpenStack.

  For our current project this is actually a blocker since CoreOS comes
  bundled with a recent Docker version which is not trivial to
  downgrade.

  I can't see any version information in the Horizon interface of our
  provider, however I think I heard they are using Mitaka release.

  Links:
  - Related issue at Docker: https://github.com/docker/docker/issues/20178
  - ServerFault question by me: 
http://serverfault.com/questions/785768/https-request-fails-in-docker-1-10-with-virtualized-network
  - StackOverflow question by someone else: 
http://stackoverflow.com/questions/35300497/docker-container-not-connecting-to-https-endpoints

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595832] Re: lbaasv2:loadbalancer-create with tenant-id is not effective

2016-06-28 Thread Rossella Sblendido
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595832

Title:
  lbaasv2:loadbalancer-create with tenant-id is not effective

Status in neutron:
  Invalid

Bug description:
  In the admin tenant, create a loadbalancer with tenant demo:

  +----------------------------------+----------+---------+
  |                id                |   name   | enabled |
  +----------------------------------+----------+---------+
  | 6403670bcb0f45cba4cb732a9a936da4 |  admin   |   True  |
  | f0e4abc0e4564b9db5a8e6f23c7244b9 |   demo   |   True  |
  | 1066af750dc54bcc985f4d6217aad3d4 | services |   True  |
  +----------------------------------+----------+---------+
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-loadbalancer-create 
RF_INNER_NET_1subnet_net --tenant-id f0e4abc0e4564b9db5a8e6f23c7244b9 --name 
test
  Created a new loadbalancer:
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | admin_state_up      | True                                 |
  | description         |                                      |
  | id                  | 228237fb-86e1-434c-99aa-c587f07f9c59 |
  | listeners           |                                      |
  | name                | test                                 |
  | operating_status    | OFFLINE                              |
  | provider            | zxveglb                              |
  | provisioning_status | PENDING_CREATE                       |
  | tenant_id           | 6403670bcb0f45cba4cb732a9a936da4     |
  | vip_address         | 192.0.1.2                            |
  | vip_port_id         | 9105a86e-fa7f-4fbd-9591-6eddda9d1938 |
  | vip_subnet_id       | f432f80a-c265-47c8-ab36-a522122892bf |
  +---------------------+--------------------------------------+
  The loadbalancer's tenant_id is 6403670bcb0f45cba4cb732a9a936da4, not
  the input f0e4abc0e4564b9db5a8e6f23c7244b9.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585165] [NEW] floating ip not reachable after vm migration

2016-05-24 Thread Rossella Sblendido
Public bug reported:

On a cloud running Liberty, a VM is assigned a floating IP. The VM is
live migrated and the floating IP is no longer reachable from outside
the cloud. Steps to reproduce:

1) spawn a VM
2) assign a floating IP
3) live migrate the VM
4) ping the floating IP from outside the cloud

the problem seems to be that both the node that was hosting the VM
before the migration and the node that hosts it now answer the ARP
request:

admin:~ # arping -I eth0 10.127.128.12 
ARPING 10.127.128.12 from 10.127.0.1 eth0
Unicast reply from 10.127.128.12 [FA:16:3E:C8:E6:13]  305.145ms
Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  694.062ms
Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  0.964ms

On the compute node that was hosting the VM:

root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
default via 10.127.0.1 dev fg-c100b010-af 
10.127.0.0/16 dev fg-c100b010-af  proto kernel  scope link  src 10.127.128.3 
10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

On the node that is now hosting the VM:

root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
default via 10.127.0.1 dev fg-e532a13f-35 
10.127.0.0/16 dev fg-e532a13f-35  proto kernel  scope link  src 10.127.128.89 
10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

The entry for "10.127.128.12" is present on both nodes. That happens
because when the VM is migrated, no cleanup is triggered on the source
host. Restarting the l3 agent fixes the problem because the stale entry
is removed.
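
A hedged sketch of the missing cleanup: on the source host, delete the
per-floating-IP route left behind in the fip- namespace (plain
subprocess calls for illustration; the l3 agent uses its own ip_lib
wrappers, and this needs root privileges):

    import subprocess

    def remove_stale_fip_route(fip_ns, floating_ip):
        """Drop the stale /32 route for the migrated VM's floating IP."""
        subprocess.check_call([
            'ip', 'netns', 'exec', fip_ns,
            'ip', 'route', 'del', floating_ip])

    remove_stale_fip_route('fip-c622fafe-c663-456a-8549-ebd3dbed4792',
                           '10.127.128.12')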

** Affects: neutron
 Importance: High
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585165

Title:
  floating ip not reachable after vm migration

Status in neutron:
  New

Bug description:
  On a cloud running Liberty, a VM is assigned a floating IP. The VM is
  live migrated and the floating IP is no longer reachable from outside
  the cloud. Steps to reproduce:

  1) spawn a VM
  2) assign a floating IP
  3) live migrate the VM
  4) ping the floating IP from outside the cloud

  the problem seems to be that both the node that was hosting the VM
  before the migration and the node that hosts it now answer the ARP
  request:

  admin:~ # arping -I eth0 10.127.128.12 
  ARPING 10.127.128.12 from 10.127.0.1 eth0
  Unicast reply from 10.127.128.12 [FA:16:3E:C8:E6:13]  305.145ms
  Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  694.062ms
  Unicast reply from 10.127.128.12 [FA:16:3E:45:BF:9E]  0.964ms

  On the compute node that was hosting the VM:

  root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
  default via 10.127.0.1 dev fg-c100b010-af 
  10.127.0.0/16 dev fg-c100b010-af  proto kernel  scope link  src 10.127.128.3 
  10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

  On the node that is now hosting the VM:

  root:~ # sudo ip netns exec fip-c622fafe-c663-456a-8549-ebd3dbed4792 ip route
  default via 10.127.0.1 dev fg-e532a13f-35 
  10.127.0.0/16 dev fg-e532a13f-35  proto kernel  scope link  src 10.127.128.89 
  10.127.128.12 via 169.254.31.28 dev fpr-7d1a001a-9 

  The entry for "10.127.128.12" is present on both nodes. That happens
  because when the VM is migrated, no cleanup is triggered on the source
  host. Restarting the l3 agent fixes the problem because the stale
  entry is removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569693] Re: Ports not recreated during neutron agent restart

2016-04-15 Thread Rossella Sblendido
If l2pop is not enabled, tunnel ports are created in tunnel_sync, which
is executed at agent startup. In this case tunnels are opened to all the
hosts. When l2pop is enabled, tunnels are opened only to hosts that have
VMs running on the same network.
I'm not sure what's happening in your environment, but it looks more
like a misconfiguration than a real bug.


** Changed in: neutron
   Status: New => Invalid

** Tags removed: needs-attention

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569693

Title:
  Ports not recreated during neutron agent restart

Status in neutron:
  Invalid

Bug description:
  Ports are not recreated during a neutron agent restart with l2_pop=False.
  When the agent is restarted after deleting br-int and br-tun, the
  bridges are recreated but the tunnel ports and some of the integration
  bridge ports are not.
  When l2_pop is set to True and the same procedure is retried, the
  bridges and tunnel ports to all compute nodes running instances are
  recreated properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557946] [NEW] functional test test_icmp_from_specific_address fails

2016-03-16 Thread Rossella Sblendido
Public bug reported:

This is the trace:

Traceback (most recent call last):
  File "neutron/tests/functional/agent/test_firewall.py", line 532, in 
test_icmp_from_specific_address
direction=self.tester.INGRESS)
  File "neutron/tests/common/conn_testers.py", line 36, in wrap
return f(self, direction, *args, **kwargs)
  File "neutron/tests/common/conn_testers.py", line 162, in assert_connection
testing_method(direction, protocol, src_port, dst_port)
  File "neutron/tests/common/conn_testers.py", line 147, in 
_test_icmp_connectivity
src_namespace, ip_address))
neutron.tests.common.conn_testers.ConnectionTesterException: ICMP packets can't 
get from test-d5baf3c4-aca8-4fab-84aa-ae3bfe9dbc14 namespace to 
2001:db8:::2 address


Example of a failure [1]; the logstash query [2] shows 16 hits in the
last 2 days.

[1] 
http://logs.openstack.org/47/286347/12/check/gate-neutron-dsvm-functional/6dd0cdb/testr_results.html.gz
[2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ICMP%20packets%20can't%20get%5C%22

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests gate-failure

** Tags added: functional-tests

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557946

Title:
  functional test test_icmp_from_specific_address fails

Status in neutron:
  New

Bug description:
  This is the trace:

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_firewall.py", line 532, in 
test_icmp_from_specific_address
  direction=self.tester.INGRESS)
File "neutron/tests/common/conn_testers.py", line 36, in wrap
  return f(self, direction, *args, **kwargs)
File "neutron/tests/common/conn_testers.py", line 162, in assert_connection
  testing_method(direction, protocol, src_port, dst_port)
File "neutron/tests/common/conn_testers.py", line 147, in 
_test_icmp_connectivity
  src_namespace, ip_address))
  neutron.tests.common.conn_testers.ConnectionTesterException: ICMP packets 
can't get from test-d5baf3c4-aca8-4fab-84aa-ae3bfe9dbc14 namespace to 
2001:db8:::2 address

  
  Example of a failure [1]; the logstash query [2] shows 16 hits in the
  last 2 days.

  [1] 
http://logs.openstack.org/47/286347/12/check/gate-neutron-dsvm-functional/6dd0cdb/testr_results.html.gz
  [2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ICMP%20packets%20can't%20get%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531210] Re: Ovs agent loses OpenFlow rules if OVS gets restarted while Neutron is disconnected from SQL

2016-03-09 Thread Rossella Sblendido
*** This bug is a duplicate of bug 1439472 ***
https://bugs.launchpad.net/bugs/1439472

** This bug has been marked a duplicate of bug 1439472
   OVS doesn't restart properly when Exception occurred

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531210

Title:
  Ovs agent loses OpenFlow rules if OVS gets restarted while Neutron is
  disconnected from SQL

Status in neutron:
  Confirmed

Bug description:
  Flow to reproduce in Juno:

  1. Node X has neutron-ovs-agent running
  2. The neutron-server running the ML2 plugin loses its connection to the 
SQL server. At this point neutron-ovs-agent is not aware of this, since it 
doesn't query device properties.
  3. OVS is restarted in the background of the neutron-ovs-agent.
  4. The neutron-ovs-agent realizes that OVS was restarted since the CANARY 
VALUE it placed in OpenFlow table 23 is missing.
  5. The agent raises a local flag ovs_restarted and replaces the CANARY value 
to signal it took care of the OVS restart in this iteration.
  6. It runs through the OVS restart flow, which erases the OpenFlow rules 
(again, this is Juno). 
  7. When accessing the Neutron server in process_network_ports(), it hits 
the following SQL error, which breaks this iteration:

  

  2015-12-28 08:49:07,075.075 35862 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-bea668e9-3c52-4535-a4f9-71a63dc538c4 None] process_network_ports - 
iteration:41940 - failure while retrieving port details from server
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1230, in process_network_ports
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent devices_added_updated, 
ovs_restarted)
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1103, in treat_devices_added_or_updated
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise 
DeviceListRetrievalError(devices=devices, error=e)
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent DeviceListRetrievalError: 
Unable to retrieve port details for devices: 
set([u'918890c7-cbfd-4a3f-bb2c-030e0f5ded5b', 
u'9c8c6b21-4baa-4c7e-b2ac-9772a7653da9', 
u'dded408d-65e7-4adf-8490-3ba78e1496b0', 
u'9aec4ec1-5921-40f5-8db7-fec3635511ce', 
u'545ba077-e2ab-434b-a696-bf0bc8874dcb', 
u'9a03a23c-2ae9-422c-a8da-2578134001bb', 
u'b62aa4db-819c-4941-a457-8c19a9897e66', 
u'a47ff11b-0c57-435e-ac5e-4348dccd6f0f', 
u'55defa8f-016f-46b1-b240-5825bc282571']) because of error: Remote error: 
OperationalError (_mysql_exceptions.OperationalError) (1047, 'WSREP has not yet 
prepared node for application use')
  2015-12-28 08:49:07,075.075 35862 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [u'Traceback (most recent 
call last):\n', u'  File 
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', u'  File 
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', u' 
 File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', u'  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 115, in 
get_devices_details_list\nfor device in kwargs.pop(\'devices\', [])\n', u'  
File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py", line 92, in 
get_device_details\nhost)\n', u'  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1127, in 
update_port_status\n
 updated = True\n', u'  File "/usr/lib64/python2.7/contextlib.py", line 24, in 
__exit__\nself.gen.next()\n', u'  File 
"/usr/lib64/python2.7/contextlib.py", line 121, in nested\nif 
exit(*exc):\n', u'  File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 502, in 
__exit__\nself.rollback()\n', u'  File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 63, 
in __exit__\ncompat.reraise(type_, value, traceback)\n', u'  File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 502, in 
__exit__\nself.rollback()\n', u'  File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 423, in 
rollback\n

[Yahoo-eng-team] [Bug 1407601] Re: openvswitch agent logs ERROR when a port to be tagged is gone

2016-03-03 Thread Rossella Sblendido
This doesn't happen any more; marking it as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1407601

Title:
  openvswitch agent logs ERROR when a port to be tagged is gone

Status in neutron:
  Invalid

Bug description:
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_building_state
  can produce in openvswitch agent log, errors like:
  2015-01-05 05:46:34.712 30809 DEBUG neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--format=json', 
'--', '--columns=external_ids,name,ofport', 'find', 'Interface', 
'external_ids:iface-id="34e73ef0-34a9-4ba1-95a8-5529c4d5b9ee"']
  Exit code: 0
  Stdout: 
'{"data":[[["map",[["attached-mac","fa:16:3e:37:be:0d"],["iface-id","34e73ef0-34a9-4ba1-95a8-5529c4d5b9ee"],["iface-status","active"],["vm-uuid","de2dbf79-9b4b-4804-a338-bbde2a827834"]]],"qvo34e73ef0-34",12]],"headings":["external_ids","name","ofport"]}\n'
  Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:81
  2015-01-05 05:46:34.713 30809 DEBUG neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] Running command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', 'iface-to-br', 'qvo34e73ef0-34'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:48
  2015-01-05 05:46:35.104 30809 DEBUG neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'iface-to-br', 
'qvo34e73ef0-34']
  Exit code: 0
  Stdout: 'br-int\n'
  Stderr: '' execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:81
  2015-01-05 05:46:35.105 30809 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] Port 
34e73ef0-34a9-4ba1-95a8-5529c4d5b9ee updated. Details: {u'profile': {}, 
u'admin_state_up': True, u'network_id': 
u'd130d030-c723-49eb-8958-9372d87beea0', u'segmentation_id': 1003, 
u'device_owner': u'compute:None', u'physical_network': None, u'mac_address': 
u'fa:16:3e:37:be:0d', u'device': u'34e73ef0-34a9-4ba1-95a8-5529c4d5b9ee', 
u'port_id': u'34e73ef0-34a9-4ba1-95a8-5529c4d5b9ee', u'fixed_ips': 
[{u'subnet_id': u'68285500-2be6-49da-bf40-62c78238400f', u'ip_address': 
u'10.100.0.2'}], u'network_type': u'vxlan'}
  2015-01-05 05:46:35.105 30809 DEBUG neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] Running command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', 'get', 'Port', 'qvo34e73ef0-34', 'tag'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:48
  2015-01-05 05:46:35.390 30809 DEBUG neutron.agent.linux.ovsdb_monitor [-] 
Output received from ovsdb monitor: 
{"data":[["ee671012-f4fc-4201-9140-e895f54cfd2f","delete","qvo34e73ef0-34",12]],"headings":["row","action","name","ofport"]}
   _read_stdout /opt/stack/new/neutron/neutron/agent/linux/ovsdb_monitor.py:45
  2015-01-05 05:46:35.506 30809 ERROR neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'get', 'Port', 
'qvo34e73ef0-34', 'tag']
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no row "qvo34e73ef0-34" in table Port\n'
  2015-01-05 05:46:35.507 30809 ERROR neutron.agent.linux.ovs_lib 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] Unable to execute ['ovs-vsctl', 
'--timeout=10', 'get', 'Port', u'qvo34e73ef0-34', 'tag']. Exception: 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'get', 'Port', 
'qvo34e73ef0-34', 'tag']
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no row "qvo34e73ef0-34" in table Port\n'
  2015-01-05 05:46:35.507 30809 DEBUG neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] Running command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', 'set', 'Port', 'qvo34e73ef0-34', 'tag=4'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:48
  2015-01-05 05:46:35.983 30809 ERROR neutron.agent.linux.utils 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] 
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set', 'Port', 
'qvo34e73ef0-34', 'tag=4']
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no row "qvo34e73ef0-34" in table Port\n'
  2015-01-05 05:46:35.983 30809 ERROR neutron.agent.linux.ovs_lib 
[req-95a486ae-8fc5-4e08-b90a-1080663f770b None] 

[Yahoo-eng-team] [Bug 1551593] Re: functional test failures caused by failure to setup OVS bridge

2016-03-03 Thread Rossella Sblendido
*** This bug is a duplicate of bug 1470234 ***
https://bugs.launchpad.net/bugs/1470234

** This bug is no longer a duplicate of bug 1547486
   ARPSpoofOFCtlTestCase functional gate tests failing intermittently
** This bug has been marked a duplicate of bug 1470234
   test_arp_spoof_allowed_address_pairs_0cidr sporadically failing functional 
job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551593

Title:
  functional test failures caused by failure to setup OVS bridge

Status in neutron:
  Fix Released

Bug description:
  Right now we get random functional test failures and it seems to be
  related to the fact that we only log and error and continue when a
  flow fails to be inserted into a bridge:
  http://logs.openstack.org/90/283790/10/check/gate-neutron-dsvm-
  
functional/ab54e0a/logs/neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_doesnt_block_ipv6_native_.log.txt.gz#_2016-03-01_01_47_17_903

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286209] Re: unhandled trace if no namespaces in metering agent

2016-03-03 Thread Rossella Sblendido
Marking it as invalid, since we are not able to reproduce it.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286209

Title:
  unhandled trace if no namespaces in metering agent

Status in neutron:
  Invalid
Status in neutron package in Ubuntu:
  Triaged

Bug description:
  If the network node has no active routers on its l3-agent, the
  metering-agent traces:

  
  2014-02-28 17:04:51.286 1121 DEBUG 
neutron.services.metering.agents.metering_agent [-] Get router traffic counters 
_get_traffic_counters 
/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py:214
  2014-02-28 17:04:51.286 1121 DEBUG neutron.openstack.common.lockutils [-] Got 
semaphore "metering-agent" for method "_invoke_driver"... inner 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py:191
  2014-02-28 17:04:51.286 1121 DEBUG neutron.common.log [-] 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
 method get_traffic_counters called with arguments 
(, [{u'status': u'ACTIVE', 
u'name': u'r', u'gw_port_id': u'86be6088-d967-45a8-bf69-8af76d956a3e', 
u'admin_state_up': True, u'tenant_id': u'1483a06525a5485e8a7dd64abaa66619', 
u'_metering_labels': [{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', 
u'direction': u'ingress', u'metering_label_id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'3991421b-50ce-46ea-b264-74bb47d09e65', u'excluded': False}, 
{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'706e55db-e2f7-4eb9-940a-67144a075a2c', u'excluded': False}], u'id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540'}], u'id': 
u'5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8'}]) {} wrapper 
/usr/lib/python2.7/dist-packages/neutron/common/lo
 g.py:33
  2014-02-28 17:04:51.286 1121 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z'] execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:43
  2014-02-28 17:04:51.291 1121 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z']
  Exit code: 1
  Stdout: ''
  Stderr: 'Cannot open network namespace: No such file or directory\n' execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:60
  2014-02-28 17:04:51.291 1121 ERROR neutron.openstack.common.loopingcall [-] 
in fixed duration looping call
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
Traceback (most recent call last):
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py", 
line 78, in _inner
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 163, in _metering_loop
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self._add_metering_infos()
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 155, in _add_metering_infos
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
accs = self._get_traffic_counters(self.context, self.routers.values())
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 215, in _get_traffic_counters
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
return self._invoke_driver(context, routers, 'get_traffic_counters')
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", 
line 247, in inner
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
retval = f(*args, **kwargs)
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 180, in _invoke_driver
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
{'driver': cfg.CONF.metering_driver,
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1648, in 
__getattr__
  

[Yahoo-eng-team] [Bug 1551179] [NEW] poor network speed due to tso enabled

2016-02-29 Thread Rossella Sblendido
Public bug reported:

In some deployments we are experiencing low network speed. Disabling
TSO on all virtual interfaces fixes the problem; see also [1]. I need to
dig more into it; in any case, I wonder if we should disable TSO
automatically every time Neutron creates a vif...


[1] 
http://askubuntu.com/questions/503863/poor-upload-speed-in-kvm-guest-with-virtio-eth-driver-in-openstack-on-3-14
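
If Neutron were to do this automatically, the core of it would be a
call like the following (hedged sketch; the tap device name is just an
example):

    import subprocess

    def disable_tso(dev):
        """Turn off TCP segmentation offload on a virtual interface."""
        subprocess.check_call(['ethtool', '-K', dev, 'tso', 'off'])

    disable_tso('tapca63b83a-76')  # example tap device name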

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551179

Title:
  poor network speed due to tso enabled

Status in neutron:
  New

Bug description:
  In some deployments we are experiencing low network speed. Disabling
  TSO on all virtual interfaces fixes the problem; see also [1]. I need
  to dig more into it; in any case, I wonder if we should disable TSO
  automatically every time Neutron creates a vif...

  
  [1] 
http://askubuntu.com/questions/503863/poor-upload-speed-in-kvm-guest-with-virtio-eth-driver-in-openstack-on-3-14

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546742] Re: Unable to create an instance

2016-02-18 Thread Rossella Sblendido
This doesn't look like a bug in Neutron; it seems that something is
misconfigured or not working properly in your cloud.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546742

Title:
  Unable to create an instance

Status in neutron:
  Invalid

Bug description:
  I am trying to create an instance from the command line.

  System requirements:
  OS: CentOS 7
  Openstack Liberty

  To reproduce the bug:
  openstack server create --debug --flavor m1.tiny --image 
97836f02-2059-40a8-99ba-1730e97aa101 --nic 
net-id=256eea0b-06fe-49a0-880d-6ecc8afeff5a --security-group default --key-name 
vladf public-instance

  Program output:
  Instantiating network client: 
  Instantiating network api: 
  REQ: curl -g -i -X GET 
http://controller:9696/v2.0/networks.json?fields=id=256eea0b-06fe-49a0-880d-6ecc8afeff5a
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}c153a4df19fb8e5fb095ea656a8dabbe26d88e13"
  RESP: [503] date: Wed, 17 Feb 2016 22:04:50 GMT connection: keep-alive 
content-type: text/plain; charset=UTF-8 content-length: 100 
x-openstack-request-id: req-72933065-8db0-4dd4-af82-4a529cd08e90
  RESP BODY: 503 Service Unavailable

  The server is currently unavailable. Please try again at a later time.


  Error message: 503 Service Unavailable

  The server is currently unavailable. Please try again at a later time.

  
  503 Service Unavailable

  The server is currently unavailable. Please try again at a later time.

  
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 374, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/site-packages/cliff/display.py", line 92, in run
  column_names, data = self.take_action(parsed_args)
File "/usr/lib/python2.7/site-packages/openstackclient/common/utils.py", 
line 45, in wrapper
  return func(self, *args, **kwargs)
File 
"/usr/lib/python2.7/site-packages/openstackclient/compute/v2/server.py", line 
452, in take_action
  nic_info["net-id"])
File "/usr/lib/python2.7/site-packages/openstackclient/network/common.py", 
line 32, in find
  data = list_method(**kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
574, in list_networks
  **_params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
307, in list
  for r in self._pagination(collection, path, **params):
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
320, in _pagination
  res = self.get(path, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
293, in get
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
  self._handle_fault_response(status_code, replybody)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
185, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
83, in exception_handler_v20
  message=message)
  NeutronClientException: 503 Service Unavailable

  The server is currently unavailable. Please try again at a later time.

  
  clean_up CreateServer: 503 Service Unavailable

  The server is currently unavailable. Please try again at a later time.

  
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 112, 
in run
  ret_val = super(OpenStackShell, self).run(argv)
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 255, in run
  result = self.run_subcommand(remainder)
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 374, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/site-packages/cliff/display.py", line 92, in run
  column_names, data = self.take_action(parsed_args)
File "/usr/lib/python2.7/site-packages/openstackclient/common/utils.py", 
line 45, in wrapper
  return func(self, *args, **kwargs)
File 
"/usr/lib/python2.7/site-packages/openstackclient/compute/v2/server.py", line 
452, in take_action
  nic_info["net-id"])
File "/usr/lib/python2.7/site-packages/openstackclient/network/common.py", 
line 32, in find
  data = list_method(**kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  ret = self.function(instance, *args, 

[Yahoo-eng-team] [Bug 1545058] [NEW] ovs agent Runtime error when checking int_if_name type for the first time

2016-02-12 Thread Rossella Sblendido
Public bug reported:

When the ovs agent starts for the first time, I get:

2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-38d5137d-054e-4a60-ac14-3b4d62cfa0da - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-public'].
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl Traceback 
(most recent call last):
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/usr/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_vsctl.py", line 63, 
in run_vsctl
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl 
log_fail_as_error=False).rstrip()
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 159, in 
execute
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl raise 
RuntimeError(m)
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl RuntimeError:
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', '--oneline', '--format=json', '--', '--columns=type', 'list', 
'Interface', 'int-br-public']
2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl Exit code: 1

since int-br-public doesn't exist yet.
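
A hedged sketch of how the agent could probe the interface type
without logging an error when the row doesn't exist yet ('ovsdb'
stands in for the agent's ovsdb API object; the exact execute()
keywords are an assumption on my side):

    def get_interface_type(ovsdb, if_name):
        try:
            return ovsdb.db_get('Interface', if_name, 'type').execute(
                check_error=True, log_errors=False)
        except RuntimeError:
            return None   # interface not created yet; caller handles None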

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Medium

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1545058

Title:
  ovs agent Runtime error when checking int_if_name type for the first
  time

Status in neutron:
  New

Bug description:
  When the ovs agent starts for the first time, I get:

  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-38d5137d-054e-4a60-ac14-3b4d62cfa0da - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-public'].
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl Traceback 
(most recent call last):
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/usr/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_vsctl.py", line 63, 
in run_vsctl
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl 
log_fail_as_error=False).rstrip()
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 159, in 
execute
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl raise 
RuntimeError(m)
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl 
RuntimeError:
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', 
'--timeout=10', '--oneline', '--format=json', '--', '--columns=type', 'list', 
'Interface', 'int-br-public']
  2016-02-11 18:49:15.778 12566 ERROR neutron.agent.ovsdb.impl_vsctl Exit code: 
1

  since int-br-public doesn't exist yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1545058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543510] [NEW] Improve logging for port binding

2016-02-09 Thread Rossella Sblendido
Public bug reported:

When port binding fails, it is very hard to understand what's going on.
The logs need to be improved and some duplicate log messages need to be
removed.
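
A hedged sketch of the kind of single, information-dense message that
would help here (illustrative only, not the actual ML2 code):

    import logging

    LOG = logging.getLogger(__name__)

    def log_binding_failure(port_id, host, vnic_type, tried_drivers):
        LOG.error("Failed to bind port %s on host %s (vnic_type=%s); "
                  "mechanism drivers tried: %s",
                  port_id, host, vnic_type, ', '.join(tried_drivers))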

** Affects: neutron
 Importance: Low
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543510

Title:
  Improve logging for port binding

Status in neutron:
  In Progress

Bug description:
  When port binding fails, it is very hard to understand what's going
  on. The logs need to be improved and some duplicate log messages need
  to be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543598] [NEW] test_network_basic_ops fail with libvirt+xen

2016-02-09 Thread Rossella Sblendido
Public bug reported:


test_network_basic_ops fails with
testtools.matchers._impl.MismatchError: 0 != 1: Found multiple IPv4 addresses: 
[]. Unable to determine which port to target.

This is because nova doesn't wait for Neutron to plug the vif.
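
The usual fix pattern in Nova is to block on the 'network-vif-plugged'
external event before proceeding; a hedged sketch (the exact
wait_for_instance_event signature is an assumption on my side):

    def plug_and_wait(virtapi, instance, network_info, plug_vifs):
        """Plug the vifs and wait for Neutron to confirm each one."""
        events = [('network-vif-plugged', vif['id'])
                  for vif in network_info]
        with virtapi.wait_for_instance_event(instance, events,
                                             deadline=300):
            plug_vifs(instance, network_info)   # hypothetical plug step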

** Affects: nova
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress


** Tags: libvirt xen

** Tags added: libvirt xen

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543598

Title:
  test_network_basic_ops fail with libvirt+xen

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  
  test_network_basic_ops fails with
  testtools.matchers._impl.MismatchError: 0 != 1: Found multiple IPv4 
addresses: []. Unable to determine which port to target.

  This is because nova doesn't wait for Neutron to plug the vif.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542385] [NEW] dnsmasq: cannot open log

2016-02-05 Thread Rossella Sblendido
Public bug reported:

It seems dnsmasq can't open its log file...

2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
[req-a29e6030-7524-4d33-8a1e-b7bb31809441 a19d99e7a3ac400db02142044a3190eb 
728a8a9d8
1824100bc01735e12273607 - - -] Unable to enable dhcp for 
b3da5a48-39f7-43a0-b6a5-a26e426fec08.
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 115, in 
call_driver
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 206, in 
enable
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
self.spawn_process()
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 414, in 
spawn_process
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 428, in 
_spawn_or_reload_process
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
pm.enable(reload_cfg=reload_with_HUP)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 92, in enable
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
run_as_root=self.run_as_root)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 861, in 
execute
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error, **kwargs)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 159, in 
execute
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent raise 
RuntimeError(m)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent RuntimeError:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qdhcp-b3da5a48-39f7-43a0-b6a5-a26e426fec08', 'dnsmasq', '--no-hosts', 
'--no-resolv', '--strict-order', '--except-interface=lo', 
'--pid-file=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/pid', 
'--dhcp-hostsfile=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/host',
 
'--addn-hosts=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/addn_hosts',
 
'--dhcp-optsfile=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/opts',
 
'--dhcp-leasefile=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/leases',
 '--dhcp-match=set:ipxe,175', '--bind-interfaces', 
'--interface=tap58da37d6-b4', 
'--dhcp-range=set:tag0,192.168.123.0,static,86400s', 
'--dhcp-option-force=option:mtu,1358', '--dhcp-lease-max=256', '--conf-file=', 
'--server=192.168.219.1', '--domain=openstack.local', '--log-queries', 
'--log-dhcp', '--l
 
og-facility=/var/log/neutron/b3da5a48-39f7-43a0-b6a5-a26e426fec08/dhcp_dns_log']
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Exit code: 3
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Stdin:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Stdout:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Stderr:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent dnsmasq: cannot 
open log /var/log/neutron/b3da5a48-39f7-43a0-b6a5-a26e426fec08/dhcp_dns_log: No 
such file or directory
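The stderr points at a missing directory rather than a permissions problem: dnsmasq exits with code 3 when the directory of its --log-facility target does not exist. A minimal sketch of the fix direction, assuming the agent should create the per-network log directory before spawning dnsmasq (illustrative, not the merged patch; the helper name is hypothetical):

  import os

  LOG_DIR = '/var/log/neutron/b3da5a48-39f7-43a0-b6a5-a26e426fec08'

  def ensure_log_dir(path):
      # dnsmasq refuses to start (exit code 3) when the directory of
      # its --log-facility file is missing; create it up front
      if not os.path.isdir(path):
          os.makedirs(path, 0o755)

  ensure_log_dir(LOG_DIR)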

** Affects: neutron
 Importance: Medium
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542385

Title:
  dnsmasq: cannot open log

Status in neutron:
  New

Bug description:
  It seems dnsmasq can't create the log file...

  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
[req-a29e6030-7524-4d33-8a1e-b7bb31809441 a19d99e7a3ac400db02142044a3190eb 
728a8a9d8
  1824100bc01735e12273607 - - -] Unable to enable dhcp for 
b3da5a48-39f7-43a0-b6a5-a26e426fec08.
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 115, in 
call_driver
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 

[Yahoo-eng-team] [Bug 1533704] [NEW] networking doesn't work for VMs on xen

2016-01-13 Thread Rossella Sblendido
Public bug reported:

I didn't experience it myself; I got an email from Tom Carroll
explaining this problem in lots of detail. I thought I'd file a bug so
that other people can benefit.

This is the report:

"I've been attempting to use liberty neutron on XenServer and I've
noticed some changes that make it difficult to do so.  These changes
begin with commit 3543d8858691c1a709127e25fc0838e054bd34ef, the
delegating of is_active() to AsyncProcess.

The root cause of the problem is that the root helper, in this case,
neutron-rootwrap-xen-dom0, runs in a domU but executes commands in dom0.

In this scenario, AsyncProcess.pid returns None. This is due to trying
to walk from the root helper down to its leaf children; again, the
children are running in a different domain. As a consequence,
AsyncProcess.is_active() returns false, causing the ovsdb client to be
eventually respawned.

Another complication is that neutron-rootwrap-xen-dom0 communicates
with dom0 using an XML-RPC style protocol. It reads the entire stdin,
launches the command in dom0 providing the buffer to stdin, reads the
entire stdout, and responds back. If the command never ends, a response
will never be returned.

The end result is that new interfaces are never annotated with the
proper 802.1Q tag, which means that the network is inoperable for the VM.

A complete restart of the neutron agent fixes up the networking."
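A hypothetical illustration of the liveness check described above (not the actual Neutron code): when the pid lookup walks only the local process tree, a child that lives in dom0 is invisible, so the check can never pass.

  def is_active(self):
      # self.pid is discovered by walking from the root helper down to
      # its leaf children on the local host; with neutron-rootwrap-xen-dom0
      # the real child runs in dom0, so the walk finds nothing
      pid = self.pid
      return pid is not None  # always False in the dom0 scenario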

** Affects: neutron
 Importance: Undecided
 Assignee: Tom Carroll (h-thomas-carroll)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Tom Carroll (h-thomas-carroll)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533704

Title:
  networking doesn't work for VMs on xen

Status in neutron:
  New

Bug description:
  I didn't experience it myself; I got an email from Tom Carroll
  explaining this problem in lots of detail. I thought I'd file a bug
  so that other people can benefit.

  This is the report:

  "I've been attempting to use liberty neutron on XenServer and I've
  noticed some changes that make it difficult to do so.  These changes
  begin with commit 3543d8858691c1a709127e25fc0838e054bd34ef, the
  delegating of is_active() to AsyncProcess.

  The root cause of the problem is that the root helper, in this case,
  neutron-rootwrap-xen-dom0, runs in a domU but executes commands in
  dom0.

  In this scenario, AsyncProcess.pid returns None. This is due to trying
  to walk from the root helper down to its leaf children; again, the
  children are running in a different domain. As a consequence,
  AsyncProcess.is_active() returns false, causing the ovsdb client to be
  eventually respawned.

  Another complication is that neutron-rootwrap-xen-dom0 communicates
  with dom0 using an XML-RPC style protocol. It reads the entire stdin,
  launches the command in dom0 providing the buffer to stdin, reads the
  entire stdout, and responds back. If the command never ends, a
  response will never be returned.

  The end result is that new interfaces are never annotated with the
  proper 802.1Q tag, which means that the network is inoperable for the
  VM.

  A complete restart of the neutron agent fixes up the networking."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528641] [NEW] rootwrap filters for conntrack and sysctl are missing for the openvswitch agent

2015-12-22 Thread Rossella Sblendido
Public bug reported:

I see these kinds of traces when running the OVS agent:

2015-12-22 16:33:56.650 2593 ERROR neutron.agent.linux.ip_conntrack
Stderr: /usr/bin/neutron-rootwrap: Unauthorized command: conntrack -D -f
ipv4 -d 44.0.2.78 -w 125 -s 44.0.3.89 (no filter matched)

rootwrap filters are missing
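For reference, oslo.rootwrap filters of roughly this shape would authorize the commands (illustrative; the exact entries and filter file in the merged fix may differ):

  [Filters]
  conntrack: CommandFilter, conntrack, root
  sysctl: CommandFilter, sysctl, root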

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528641

Title:
  rootwrap filters for conntrack and sysctl are missing for the
  openvswitch agent

Status in neutron:
  New

Bug description:
  I see these kinds of traces when running the OVS agent:

  2015-12-22 16:33:56.650 2593 ERROR neutron.agent.linux.ip_conntrack
  Stderr: /usr/bin/neutron-rootwrap: Unauthorized command: conntrack -D
  -f ipv4 -d 44.0.2.78 -w 125 -s 44.0.3.89 (no filter matched)

  rootwrap filters are missing

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525856] [NEW] func test test_reprocess_port_when_ovs_restarts fails from time to time

2015-12-14 Thread Rossella Sblendido
Public bug reported:

The functional test test_reprocess_port_when_ovs_restarts fails
sporadically with

Traceback (most recent call last):
  File "neutron/agent/linux/polling.py", line 56, in stop
self._monitor.stop()
  File "neutron/agent/linux/async_process.py", line 131, in stop
raise AsyncProcessException(_('Process is not running.'))
neutron.agent.linux.async_process.AsyncProcessException: Process is not running.

see for example http://logs.openstack.org/98/202098/30/check/gate-
neutron-dsvm-functional/8b49362/testr_results.html.gz
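A sketch of one way to harden the teardown, assuming the intent is simply to tolerate a monitor that already exited (for example because OVS restarted underneath it); illustrative, not the merged fix:

  from neutron.agent.linux import async_process

  def stop_monitor_quietly(monitor):
      # stopping an already-dead ovsdb monitor raises
      # AsyncProcessException('Process is not running.'); swallow it,
      # since a stopped process is exactly the state we want
      try:
          monitor.stop()
      except async_process.AsyncProcessException:
          pass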

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests

** Tags added: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525856

Title:
  func test test_reprocess_port_when_ovs_restarts fails from time to
  time

Status in neutron:
  New

Bug description:
  The functional test test_reprocess_port_when_ovs_restarts fails
  sporadically with

  Traceback (most recent call last):
File "neutron/agent/linux/polling.py", line 56, in stop
  self._monitor.stop()
File "neutron/agent/linux/async_process.py", line 131, in stop
  raise AsyncProcessException(_('Process is not running.'))
  neutron.agent.linux.async_process.AsyncProcessException: Process is not 
running.

  see for example http://logs.openstack.org/98/202098/30/check/gate-
  neutron-dsvm-functional/8b49362/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521314] Re: Changing physical interface mapping may result in multiple physical interfaces in bridge

2015-12-01 Thread Rossella Sblendido
The agent doesn't delete unused bridges. If you want to clean up,
there's a tool for that: neutron/cmd/linuxbridge_cleanup.py :)

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521314

Title:
  Changing physical interface mapping may result in multiple physical
  interfaces in bridge

Status in neutron:
  Invalid

Bug description:
  Version: 2015.2 (Liberty)
  Plugin: ML2 w/ LinuxBridge

  While testing various NICs, I found that changing the physical
  interface mapping in the ML2 configuration file and restarting the
  agent resulted in the old physical interface remaining in the bridge.
  This can be observed with the following steps:

  Original configuration:

  [linux_bridge]
  physical_interface_mappings = physnet1:eth2

  racker@compute01:~$ brctl show
  bridge name       bridge id            STP enabled   interfaces
  brqad516357-47    8000.e41d2d5b6213    no            eth2
                                                       tap72e7d2be-24

  Modify the bridge mapping:

  [linux_bridge]
  #physical_interface_mappings = physnet1:eth2
  physical_interface_mappings = physnet1:eth1

  Restart the agent:

  racker@compute01:~$ sudo service neutron-plugin-linuxbridge-agent restart
  neutron-plugin-linuxbridge-agent stop/waiting
  neutron-plugin-linuxbridge-agent start/running, process 12803

  Check the bridge:

  racker@compute01:~$ brctl show
  bridge name       bridge id            STP enabled   interfaces
  brqad516357-47    8000.6805ca37dc39    no            eth1
                                                       eth2
                                                       tap72e7d2be-24

  This was observed with flat or vlan networks, and can result in some
  wonky behavior. Removing the original interface from the
  bridge(s) by hand or restarting the node is a workaround, but I
  suspect LinuxBridge users aren't used to modifying the bridges
  manually as the agent usually handles that.
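  The by-hand workaround mentioned above boils down to removing the
  stale interface from the bridge shown earlier, e.g.:

  racker@compute01:~$ sudo brctl delif brqad516357-47 eth2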

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520336] Re: ovs-vsctl command in neutron agent fails with version 1.4.2

2015-12-01 Thread Rossella Sblendido
Paul, sorry, but I think we can't help here. You should contact the
Wheezy maintainers to update the package. Marking the bug as invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1520336

Title:
  ovs-vsctl command in neutron agent fails with version 1.4.2

Status in neutron:
  Invalid

Bug description:
  Wheezy (which I believe is still supported?) comes with version 1.4.2
  of ovs-vsctl.

  It fails to start the neutron agent (which also uses --may-exist) with

  2015-11-26 15:01:07.155 92027 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ovs-vsctl', '--timeout=10', '--oneline', 
'--format=json', '--', 'set', 'Bridge', 'br-int', 'protocols=[OpenFlow10]'] 
execute_rootwrap_daemon 
/home/pcarlton/openstack/neutron/neutron/agent/linux/utils.py:100
  2015-11-26 15:01:07.162 92027 ERROR neutron.agent.ovsdb.impl_vsctl [-] Unable 
to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'set', 'Bridge', 'br-int', 'protocols=[OpenFlow10]']. Exception: Exit code: 1; 
Stdin: ; Stdout: ; Stderr: ovs-vsctl: Bridge does not contain a column whose 
name matches "protocols"

  2015-11-26 15:01:07.163 92027 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Exit code: 
1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: Bridge does not contain a column whose 
name matches "protocols"
   Agent terminated!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1520336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519980] Re: Add availability_zone support for network

2015-11-27 Thread Rossella Sblendido
Hello Hirofumi, there was a change in the DocImpact flag workflow, see
[1]. Bugs generated by the DocImpact flag are now assigned to the
Neutron team instead of the doc team, as was the case before. Since you
are the author of the patch with the DocImpact flag, would you mind
taking care of adding documentation for this feature? Thanks!

[1] http://lists.openstack.org/pipermail/openstack-
dev/2015-November/080294.html

** Changed in: neutron
   Status: Invalid => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Hirofumi Ichihara (ichihara-hirofumi)

** Tags added: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519980

Title:
  Add availability_zone support for network

Status in neutron:
  Confirmed

Bug description:
  https://review.openstack.org/204436
  commit 6e500278191296f75e6bd900b94f7e36cc69edf2
  Author: Hirofumi Ichihara 
  Date:   Thu Nov 19 15:05:27 2015 +0900

  Add availability_zone support for network
  
  This patch adds the availability_zone support for network.
  
  APIImpact
  DocImpact
  
  Change-Id: I9259d9679c74d3b3658771290e920a7896631e62
  Co-Authored-By: IWAMOTO Toshihiro 
  Partially-implements: blueprint add-availability-zone

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1519980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517018] [NEW] typo in iptables_firewall

2015-11-17 Thread Rossella Sblendido
Public bug reported:

devices_with_udpated_sg_members should be
devices_with_updated_sg_members

** Affects: neutron
 Importance: Low
 Status: New


** Tags: low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517018

Title:
  typo in iptables_firewall

Status in neutron:
  New

Bug description:
  devices_with_udpated_sg_members should be
  devices_with_updated_sg_members

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1517018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479264] Re: test_resync_devices_set_up_after_exception fails with "RowNotFound: Cannot find Bridge with name=test-br69135803"

2015-11-04 Thread Rossella Sblendido
If the bridge doesn't exist the exception is still not caught, which is
the right behaviour in my opinion, because the agent tries to get the
ancillary ports only if it detects ancillary bridges. We were probably
seeing a race in the test cleanup, which has since disappeared (I get 0
hits now). I will mark this as invalid; feel free to reopen it if the
problem persists.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479264

Title:
  test_resync_devices_set_up_after_exception fails with "RowNotFound:
  Cannot find Bridge with name=test-br69135803"

Status in neutron:
  Invalid

Bug description:
  Example: 
http://logs.openstack.org/88/206188/1/check/gate-neutron-dsvm-functional/a797b68/testr_results.html.gz
  Logstash: 

  ft1.205: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_resync_devices_set_up_after_exception(native)_StringException:
 Empty attachments:
pythonlogging:'neutron.api.extensions'
stdout

  pythonlogging:'': {{{
  2015-07-28 21:38:06,203 INFO [neutron.agent.l2.agent_extensions_manager] 
Configured agent extensions names: ('qos',)
  2015-07-28 21:38:06,204 INFO [neutron.agent.l2.agent_extensions_manager] 
Loaded agent extensions names: ['qos']
  2015-07-28 21:38:06,204 INFO [neutron.agent.l2.agent_extensions_manager] 
Initializing agent extension 'qos'
  2015-07-28 21:38:06,280 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Mapping 
physical network physnet to bridge br-int359443631
  2015-07-28 21:38:06,349  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int359443631 exceeds the 15 character limitation. It was 
shortened to int-br-in3cbf05 to fit.
  2015-07-28 21:38:06,349  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int359443631 exceeds the 15 character limitation. It was 
shortened to phy-br-in3cbf05 to fit.
  2015-07-28 21:38:06,970 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Adding 
test-br69135803 to list of bridges.
  2015-07-28 21:38:06,974  WARNING [neutron.agent.securitygroups_rpc] Driver 
configuration doesn't match with enable_security_group
  2015-07-28 21:38:07,061 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Agent out of 
sync with plugin!
  2015-07-28 21:38:07,062 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Agent tunnel 
out of sync with plugin!
  2015-07-28 21:38:07,204ERROR [neutron.agent.ovsdb.impl_idl] Traceback 
(most recent call last):
File "neutron/agent/ovsdb/native/connection.py", line 84, in run
  txn.results.put(txn.do_commit())
File "neutron/agent/ovsdb/impl_idl.py", line 92, in do_commit
  ctx.reraise = False
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 119, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "neutron/agent/ovsdb/impl_idl.py", line 87, in do_commit
  command.run_idl(txn)
File "neutron/agent/ovsdb/native/commands.py", line 355, in run_idl
  br = idlutils.row_by_value(self.api.idl, 'Bridge', 'name', self.bridge)
File "neutron/agent/ovsdb/native/idlutils.py", line 59, in row_by_value
  raise RowNotFound(table=table, col=column, match=match)
  RowNotFound: Cannot find Bridge with name=test-br69135803

  2015-07-28 21:38:07,204ERROR [neutron.agent.ovsdb.native.commands] Error 
executing command
  Traceback (most recent call last):
File "neutron/agent/ovsdb/native/commands.py", line 35, in execute
  txn.add(self)
File "neutron/agent/ovsdb/api.py", line 70, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 70, in commit
  raise result.ex
  RowNotFound: Cannot find Bridge with name=test-br69135803
  2015-07-28 21:38:07,205ERROR 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Error while 
processing VIF ports
  Traceback (most recent call last):
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 1569, in rpc_loop
  ancillary_ports)
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 1104, in scan_ancillary_ports
  cur_ports |= bridge.get_vif_port_set()
File "neutron/agent/common/ovs_lib.py", line 376, in get_vif_port_set
  port_names = self.get_port_name_list()
File "neutron/agent/common/ovs_lib.py", line 313, in get_port_name_list
  return self.ovsdb.list_ports(self.br_name).execute(check_error=True)
File "neutron/agent/ovsdb/native/commands.py", line 42, in execute
  ctx.reraise = False
File 

[Yahoo-eng-team] [Bug 1332923] Re: Deadlock updating port with fixed ips

2015-10-22 Thread Rossella Sblendido
I think this doesn't occur any more, so I'm marking it as invalid; if
it appears again we can resurrect it...
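For the record, deadlocks of this kind are nowadays typically mitigated with a retry decorator rather than fixed case by case. A minimal sketch using the modern oslo.db helper (illustrative; the wrapped function name and body are hypothetical):

  from oslo_db import api as oslo_db_api

  @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
  def update_port_with_fixed_ips(context, port_id, port):
      # the decorator re-runs the whole transaction when the backend
      # reports a deadlock instead of bubbling DBAPIError up to the API
      pass  # plugin update logic would go here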

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332923

Title:
  Deadlock updating port with fixed ips

Status in neutron:
  Invalid

Bug description:
  Traceback:

   TRACE neutron.api.v2.resource Traceback (most recent call last):
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
   TRACE neutron.api.v2.resource result = method(request=request, **args)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 531, in update
   TRACE neutron.api.v2.resource obj = obj_updater(request.context, id, 
**kwargs)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 682, in update_port
   TRACE neutron.api.v2.resource port)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 1497, in 
update_port
   TRACE neutron.api.v2.resource p['fixed_ips'])
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 650, in 
_update_ips_for_port
   TRACE neutron.api.v2.resource ips = self._allocate_fixed_ips(context, 
network, to_add)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 612, in 
_allocate_fixed_ips
   TRACE neutron.api.v2.resource result = self._generate_ip(context, 
subnets)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 364, in 
_generate_ip
   TRACE neutron.api.v2.resource return 
NeutronDbPluginV2._try_generate_ip(context, subnets)
   TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 381, in 
_try_generate_ip
   TRACE neutron.api.v2.resource range = 
range_qry.filter_by(subnet_id=subnet['id']).first()
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2333, in 
first
   TRACE neutron.api.v2.resource ret = list(self[0:1])
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2200, in 
__getitem__
   TRACE neutron.api.v2.resource return list(res)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2404, in 
__iter__
   TRACE neutron.api.v2.resource return self._execute_and_instances(context)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2419, in 
_execute_and_instances
   TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 720, 
in execute
   TRACE neutron.api.v2.resource return meth(self, multiparams, params)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 317, 
in _execute_on_connection
   TRACE neutron.api.v2.resource return 
connection._execute_clauseelement(self, multiparams, params)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 817, 
in _execute_clauseelement
   TRACE neutron.api.v2.resource compiled_sql, distilled_params
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 947, 
in _execute_context
   TRACE neutron.api.v2.resource context)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1108, 
in _handle_dbapi_exception
   TRACE neutron.api.v2.resource exc_info
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 185, 
in raise_from_cause
   TRACE neutron.api.v2.resource reraise(type(exception), exception, 
tb=exc_tb)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 940, 
in _execute_context
   TRACE neutron.api.v2.resource context)
   TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
435, in do_execute
   TRACE neutron.api.v2.resource cursor.execute(statement, parameters)
   TRACE neutron.api.v2.resource DBAPIError: (TransactionRollbackError) 
deadlock detected
   TRACE neutron.api.v2.resource DETAIL:  Process 21690 waits for ShareLock on 
transaction 10397; blocked by process 21692.
   TRACE neutron.api.v2.resource Process 21692 waits for ShareLock on 
transaction 10396; blocked by process 21690.
   TRACE 

[Yahoo-eng-team] [Bug 1500567] Re: port binding host_id does not update when removing openvswitch agent

2015-10-05 Thread Rossella Sblendido
Out of curiosity, why did you have to delete the OVS agent on that
host? I can't imagine any use case...

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500567

Title:
  port binding host_id does not update when removing openvswitch agent

Status in neutron:
  Opinion

Bug description:
  SPECS:
  Openstack Juno
  neutron version 2.3.9
  nova version 2.20.0
  OS: CentOS 6.5 Final
  kernel: 2.6.32-504.1.3.el6.mos61.x86_64 (Mirantis Fuel 6.1 installed 
compute+ceph node)

  
  SCENARIO:
  I had a compute node that was also running a neutron-openvswitch-agent.
  This was 'node-12'.
  Before node-12's primary disk died, there was an instance being hosted on the 
node, which was in the state 'SHUTDOWN'.
  I have created a node-15, which also runs the neutron-openvswitch-agent
  with nova-compute. I did not migrate the instance before performing a
  neutron agent-delete on node-12, so now there is metadata that looks
  like this:

  [root@node-14 ~]# neutron port-show c209538b-ecc1-4414-9f97-e0f6a5d08ecc
  
+---+--+
  | Field | Value   
 |
  
+---+--+
  | admin_state_up| True
 |
  | allowed_address_pairs | 
 |
  | binding:host_id   | node-12

  
  ACTION:
  Node-12 neutron agent is deleted, using the command, `neutron agent-delete 
6bcadbe2-7631-41f5-9124-6fe75016217a`

  EXPECTED:
  All neutron ports bound with that agent should be updated and modified
  to use an alternative binding host_id, preferably the host currently
  running the VM. In my scenario, this would be node-15, NOT node-12.

  ACTUAL:
  The neutron ports maintained the same binding:host_id, which was node-12.


  
  ADDITIONAL INFORMATION:

  I was able to update the value using the following request:

  curl -X PUT -d '{"port":{"binding:host_id": "node-15.domain.com"}}' -H
  "X-Auth_token:f3f1c03239b246a8a7ffa9ca0eb323bf" -H "Content-type:
  application/json"
  http://10.10.30.2:9696/v2.0/ports/f98fe798-d522-4b6c-b084-45094fdc5052.json

  However, I'm not sure if there are modifications to the openvswitch
  agent on node-15 that also need to be performed.

  Also, since my node-12 died before I could migrate the instances, and
  I attempted to power them on before i realized they needed migration,
  I was forced to update the instances table in the database, and
  specify node-15 as the new host.

  > update instances set task_state = NULL where task_state = 'powering-on';
  > update instances set host = 'node-15.domain.com' where host = 
'node-12.domain.com';
  > update instances set node = 'node-15.domain.com' where node = 
'node-12.domain.com';
  > update instances set launched_on = 'node-15.domain.com' where launched_on = 
'node-12.domain.com';

  In my case, the work around is to kick off a 'migrate', in which case
  the binding:host_id is updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489802] [NEW] gate-neutron-dsvm-api FWaaSExtensionTestJSON tests fail from time to time

2015-08-28 Thread Rossella Sblendido
Public bug reported:

gate-neutron-dsvm-api FWaaSExtensionTestJSON tests fail from time to
time with

2015-08-27 17:14:37.316 | 2015-08-27 17:14:37.309 | Captured traceback:
2015-08-27 17:14:37.318 | 2015-08-27 17:14:37.311 | ~~~
2015-08-27 17:14:37.319 | 2015-08-27 17:14:37.312 | Traceback (most recent 
call last):
2015-08-27 17:14:37.320 | 2015-08-27 17:14:37.314 |   File 
neutron/tests/tempest/test.py, line 288, in tearDownClass
2015-08-27 17:14:37.322 | 2015-08-27 17:14:37.316 | teardown()
2015-08-27 17:14:37.324 | 2015-08-27 17:14:37.317 |   File 
neutron/tests/api/base.py, line 107, in resource_cleanup
2015-08-27 17:14:37.325 | 2015-08-27 17:14:37.319 | fw_policy['id'])
2015-08-27 17:14:37.327 | 2015-08-27 17:14:37.320 |   File 
neutron/tests/api/base.py, line 218, in _try_delete_resource
2015-08-27 17:14:37.328 | 2015-08-27 17:14:37.322 | 
delete_callable(*args, **kwargs)
2015-08-27 17:14:37.330 | 2015-08-27 17:14:37.323 |   File 
neutron/tests/tempest/services/network/json/network_client.py, line 121, in 
_delete
2015-08-27 17:14:37.331 | 2015-08-27 17:14:37.324 | resp, body = 
self.delete(uri)
2015-08-27 17:14:37.332 | 2015-08-27 17:14:37.326 |   File 
/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 287, in delete
2015-08-27 17:14:37.334 | 2015-08-27 17:14:37.327 | return 
self.request('DELETE', url, extra_headers, headers, body)
2015-08-27 17:14:37.335 | 2015-08-27 17:14:37.328 |   File 
/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 643, in request
2015-08-27 17:14:37.336 | 2015-08-27 17:14:37.330 | resp, resp_body)
2015-08-27 17:14:37.339 | 2015-08-27 17:14:37.332 |   File 
/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 705, in _error_checker
2015-08-27 17:14:37.349 | 2015-08-27 17:14:37.339 | raise 
exceptions.Conflict(resp_body)
2015-08-27 17:14:37.349 | 2015-08-27 17:14:37.342 | 
tempest_lib.exceptions.Conflict: An object with that identifier already exists
2015-08-27 17:14:37.351 | 2015-08-27 17:14:37.344 | Details: {u'message': 
u'Firewall Policy c3912df0-ea23-4843-af00-49ce25da6c08 is being used.', 
u'type': u'FirewallPolicyInUse', u'detail': u''}

logstash query

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdF9saWIuZXhjZXB0aW9ucy5Db25mbGljdDogQW4gb2JqZWN0IHdpdGggdGhhdCBpZGVudGlmaWVyIGFscmVhZHkgZXhpc3RzXCIgQU5EIGJ1aWxkX25hbWU6XCJnYXRlLW5ldXRyb24tZHN2bS1hcGlcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQ0MDc1Mzc1NTE3MywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489802

Title:
  gate-neutron-dsvm-api FWaaSExtensionTestJSON tests fail from time to
  time

Status in neutron:
  New

Bug description:
  gate-neutron-dsvm-api FWaaSExtensionTestJSON tests fail from time to
  time with

  2015-08-27 17:14:37.316 | 2015-08-27 17:14:37.309 | Captured traceback:
  2015-08-27 17:14:37.318 | 2015-08-27 17:14:37.311 | ~~~
  2015-08-27 17:14:37.319 | 2015-08-27 17:14:37.312 | Traceback (most 
recent call last):
  2015-08-27 17:14:37.320 | 2015-08-27 17:14:37.314 |   File 
neutron/tests/tempest/test.py, line 288, in tearDownClass
  2015-08-27 17:14:37.322 | 2015-08-27 17:14:37.316 | teardown()
  2015-08-27 17:14:37.324 | 2015-08-27 17:14:37.317 |   File 
neutron/tests/api/base.py, line 107, in resource_cleanup
  2015-08-27 17:14:37.325 | 2015-08-27 17:14:37.319 | fw_policy['id'])
  2015-08-27 17:14:37.327 | 2015-08-27 17:14:37.320 |   File 
neutron/tests/api/base.py, line 218, in _try_delete_resource
  2015-08-27 17:14:37.328 | 2015-08-27 17:14:37.322 | 
delete_callable(*args, **kwargs)
  2015-08-27 17:14:37.330 | 2015-08-27 17:14:37.323 |   File 
neutron/tests/tempest/services/network/json/network_client.py, line 121, in 
_delete
  2015-08-27 17:14:37.331 | 2015-08-27 17:14:37.324 | resp, body = 
self.delete(uri)
  2015-08-27 17:14:37.332 | 2015-08-27 17:14:37.326 |   File 
/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 287, in delete
  2015-08-27 17:14:37.334 | 2015-08-27 17:14:37.327 | return 
self.request('DELETE', url, extra_headers, headers, body)
  2015-08-27 17:14:37.335 | 2015-08-27 17:14:37.328 |   File 
/opt/stack/new/neutron/.tox/api/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 643, in request
  2015-08-27 17:14:37.336 | 2015-08-27 17:14:37.330 | resp, resp_body)
  2015-08-27 17:14:37.339 | 

[Yahoo-eng-team] [Bug 1489019] [NEW] ovs agent _bind_devices should query only existing ports

2015-08-26 Thread Rossella Sblendido
Public bug reported:

If a port is deleted right before calling _bind_devices,
get_ports_attributes will throw an exception because the row is not
found.
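A sketch of the fix direction, assuming the attribute query can be restricted to ports that are still present on the bridge (get_port_name_list is the existing ovs_lib call; the surrounding names are illustrative):

  def existing_port_names(bridge, candidate_names):
      # drop ports that disappeared between scanning and binding so the
      # bulk attribute query cannot hit a missing row
      existing = set(bridge.get_port_name_list())
      return [name for name in candidate_names if name in existing]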

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489019

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  New

Bug description:
  If a port is deleted right before calling _bind_devices,
  get_ports_attributes will throw an exception because the row is not
  found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489015] [NEW] ovs agent _bind_devices should query only existing ports

2015-08-26 Thread Rossella Sblendido
Public bug reported:

If a port is deleted right before calling _bind_devices,
get_ports_attributes will throw an exception because the row is not
found.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489015

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  New

Bug description:
  If a port is deleted right before calling _bind_devices,
  get_ports_attributes will throw an exception because the row is not
  found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489014] [NEW] ovs agent _bind_devices should query only existing ports

2015-08-26 Thread Rossella Sblendido
Public bug reported:

If a port is deleted right before calling _bind_devices,
get_ports_attributes will throw an exception because the row is not
found.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489014

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  New

Bug description:
  If a port is deleted right before calling _bind_devices,
  get_ports_attributes will throw an exception because the row is not
  found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487435] [NEW] don't hardcode tunnel bridge name in setup_tunnel_br

2015-08-21 Thread Rossella Sblendido
Public bug reported:

Since https://review.openstack.org/#/c/182920/ merged, the OVS agent
functional tests have been failing on my machine; the reason is that
the name of the tunnel bridge is hard-coded.

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487435

Title:
  don't hardcode tunnel bridge name in setup_tunnel_br

Status in neutron:
  In Progress

Bug description:
  Since https://review.openstack.org/#/c/182920/ merged, the OVS agent
  functional tests have been failing on my machine; the reason is that
  the name of the tunnel bridge is hard-coded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485206] [NEW] ovsdb monitor handle ofport correctly

2015-08-15 Thread Rossella Sblendido
Public bug reported:

Since ofport can be either an integer or, when the OVS port is not
ready, an empty set, the ovsdb monitor needs to handle the second case
too.
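A minimal parsing sketch of the case described above; OVSDB's JSON encoding represents an unset ofport as ["set", []] rather than an integer (the helper name is illustrative):

  def parse_ofport(raw):
      # a port that is not ready yet comes through as ["set", []];
      # report it as unassigned instead of crashing on int(["set", []])
      if isinstance(raw, list):
          return None
      return int(raw)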

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485206

Title:
  ovsdb monitor handle ofport correctly

Status in neutron:
  In Progress

Bug description:
  Since ofport can be either an integer or, when the OVS port is not
  ready, an empty set, the ovsdb monitor needs to handle the second
  case too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473038] [NEW] func test ovs agent KeyError: 'ofport'

2015-07-09 Thread Rossella Sblendido
Public bug reported:

From time to time I see this error:

2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.600 | Traceback (most recent 
call last):
2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.601 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1550, in rpc_loop
2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.603 | port_info = 
self.scan_ports(reg_ports, updated_ports_copy)
2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.604 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1075, in scan_ports
2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.605 | cur_ports = 
self.int_br.get_vif_port_set()
2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.607 |   File 
neutron/agent/common/ovs_lib.py, line 372, in get_vif_port_set
2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.608 | if result['ofport'] 
== UNASSIGNED_OFPORT:
2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.609 | KeyError: 'ofport'
2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.610 | 2015-07-07 16:30:31,332 
INFO [neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] 
Agent out of sync with plugin!


http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiS2V5RXJyb3I6ICdvZnBvcnQnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzY0NDQwNTczMDksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=
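A defensive sketch of the guard in question, assuming the row returned by the monitor may simply lack the 'ofport' key while the port is being created (UNASSIGNED_OFPORT mirrors the ovs_lib sentinel; the value shown here is illustrative):

  UNASSIGNED_OFPORT = []

  def vif_port_ready(row):
      # treat a missing 'ofport' key the same as an unassigned ofport
      # instead of letting row['ofport'] raise KeyError
      return row.get('ofport', UNASSIGNED_OFPORT) != UNASSIGNED_OFPORT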

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473038

Title:
  func test ovs agent KeyError: 'ofport'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  From time to time I see this error:

  2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.600 | Traceback (most 
recent call last):
  2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.601 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1550, in rpc_loop
  2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.603 | port_info = 
self.scan_ports(reg_ports, updated_ports_copy)
  2015-07-07 16:32:03.667 | 2015-07-07 16:32:03.604 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1075, in scan_ports
  2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.605 | cur_ports = 
self.int_br.get_vif_port_set()
  2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.607 |   File 
neutron/agent/common/ovs_lib.py, line 372, in get_vif_port_set
  2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.608 | if 
result['ofport'] == UNASSIGNED_OFPORT:
  2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.609 | KeyError: 'ofport'
  2015-07-07 16:32:03.668 | 2015-07-07 16:32:03.610 | 2015-07-07 
16:30:31,332 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Agent out of 
sync with plugin!

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiS2V5RXJyb3I6ICdvZnBvcnQnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzY0NDQwNTczMDksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473199] [NEW] OVSBridge.db_list should query only the ports belonging to the bridge

2015-07-09 Thread Rossella Sblendido
Public bug reported:

This change introduced bulk operations for vif ports:
https://review.openstack.org/#/c/186734/
However, when calling

self.int_br.db_list

we query all the existing ports, not only the ports on the int_br.
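A hypothetical sketch of the intended scoping (argument names illustrative, not the actual db_list signature): feed the bridge's own port list into the query instead of listing every Interface row.

  def bridge_scoped_db_list(bridge):
      # only the ports attached to this bridge, not every port on the host
      port_names = bridge.get_port_name_list()
      return bridge.db_list('Interface', records=port_names,
                            columns=['name', 'ofport', 'external_ids'])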

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473199

Title:
  OVSBridge.db_list should query only the ports belonging to the bridge

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This change introduced bulk operations for vif ports:
  https://review.openstack.org/#/c/186734/
  However, when calling

  self.int_br.db_list

  we query all the existing ports, not only the ports on the int_br.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470894] [NEW] test_port_creation_and_deletion KeyError: UUID

2015-07-02 Thread Rossella Sblendido
Public bug reported:

Functional test test_port_creation_and_deletion sometimes fails with
this stacktrace:

2015-07-02 03:24:13.028 | 2015-07-02 03:24:13.005 | Traceback (most recent 
call last):
2015-07-02 03:24:13.029 | 2015-07-02 03:24:13.007 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1526, in rpc_loop
2015-07-02 03:24:13.031 | 2015-07-02 03:24:13.008 | port_info = 
self.scan_ports(reg_ports, updated_ports_copy)
2015-07-02 03:24:13.032 | 2015-07-02 03:24:13.010 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1065, in scan_ports
2015-07-02 03:24:13.034 | 2015-07-02 03:24:13.011 | cur_ports = 
self.int_br.get_vif_port_set()
2015-07-02 03:24:13.036 | 2015-07-02 03:24:13.013 |   File 
neutron/agent/common/ovs_lib.py, line 365, in get_vif_port_set
2015-07-02 03:24:13.037 | 2015-07-02 03:24:13.014 | results = 
cmd.execute(check_error=True)
2015-07-02 03:24:13.038 | 2015-07-02 03:24:13.016 |   File 
neutron/agent/ovsdb/native/commands.py, line 42, in execute
2015-07-02 03:24:13.040 | 2015-07-02 03:24:13.017 | ctx.reraise = False
2015-07-02 03:24:13.041 | 2015-07-02 03:24:13.019 |   File 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
2015-07-02 03:24:13.045 | 2015-07-02 03:24:13.022 | 
six.reraise(self.type_, self.value, self.tb)
2015-07-02 03:24:13.046 | 2015-07-02 03:24:13.023 |   File 
neutron/agent/ovsdb/native/commands.py, line 35, in execute
2015-07-02 03:24:13.048 | 2015-07-02 03:24:13.025 | txn.add(self)
2015-07-02 03:24:13.955 | 2015-07-02 03:24:13.932 |   File 
neutron/agent/ovsdb/api.py, line 70, in __exit__
2015-07-02 03:24:13.957 | 2015-07-02 03:24:13.934 | self.result = 
self.commit()
2015-07-02 03:24:13.958 | 2015-07-02 03:24:13.935 |   File 
neutron/agent/ovsdb/impl_idl.py, line 70, in commit
2015-07-02 03:24:13.959 | 2015-07-02 03:24:13.937 | raise result.ex
2015-07-02 03:24:13.961 | 2015-07-02 03:24:13.938 | KeyError: 
UUID('7a90521e-6b79-444f-a63f-0973b71b8018')

see query

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiS2V5RXJyb3I6IFVVSURcIiBBTkQgYnVpbGRfbmFtZTpcImNoZWNrLW5ldXRyb24tZHN2bS1mdW5jdGlvbmFsXCIgIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU4NDk3OTU1MjIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

example of console log
http://logs.openstack.org/27/197727/4/check/check-neutron-dsvm-functional/7fd5c5c/console.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470894

Title:
  test_port_creation_and_deletion KeyError: UUID

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Functional test test_port_creation_and_deletion sometimes fails with
  this stacktrace:

  2015-07-02 03:24:13.028 | 2015-07-02 03:24:13.005 | Traceback (most 
recent call last):
  2015-07-02 03:24:13.029 | 2015-07-02 03:24:13.007 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1526, in rpc_loop
  2015-07-02 03:24:13.031 | 2015-07-02 03:24:13.008 | port_info = 
self.scan_ports(reg_ports, updated_ports_copy)
  2015-07-02 03:24:13.032 | 2015-07-02 03:24:13.010 |   File 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, line 
1065, in scan_ports
  2015-07-02 03:24:13.034 | 2015-07-02 03:24:13.011 | cur_ports = 
self.int_br.get_vif_port_set()
  2015-07-02 03:24:13.036 | 2015-07-02 03:24:13.013 |   File 
neutron/agent/common/ovs_lib.py, line 365, in get_vif_port_set
  2015-07-02 03:24:13.037 | 2015-07-02 03:24:13.014 | results = 
cmd.execute(check_error=True)
  2015-07-02 03:24:13.038 | 2015-07-02 03:24:13.016 |   File 
neutron/agent/ovsdb/native/commands.py, line 42, in execute
  2015-07-02 03:24:13.040 | 2015-07-02 03:24:13.017 | ctx.reraise = 
False
  2015-07-02 03:24:13.041 | 2015-07-02 03:24:13.019 |   File 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
  2015-07-02 03:24:13.045 | 2015-07-02 03:24:13.022 | 
six.reraise(self.type_, self.value, self.tb)
  2015-07-02 03:24:13.046 | 2015-07-02 03:24:13.023 |   File 
neutron/agent/ovsdb/native/commands.py, line 35, in execute
  2015-07-02 03:24:13.048 | 2015-07-02 03:24:13.025 | txn.add(self)
  2015-07-02 03:24:13.955 | 2015-07-02 03:24:13.932 |   File 
neutron/agent/ovsdb/api.py, line 70, in __exit__
  2015-07-02 03:24:13.957 | 2015-07-02 03:24:13.934 | self.result = 
self.commit()
  2015-07-02 03:24:13.958 | 2015-07-02 03:24:13.935 |   File 
neutron/agent/ovsdb/impl_idl.py, line 70, 

[Yahoo-eng-team] [Bug 1454640] [NEW] ml2.test_rpc cannot be run with testtools

2015-05-13 Thread Rossella Sblendido
Public bug reported:

When running ml2.test_rpc with testtools, the following error occurs:

./run_tests.sh -d  
neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
Tests running...
==
ERROR: 
neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
--
Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'

Traceback (most recent call last):
  File neutron/tests/unit/plugins/ml2/test_rpc.py, line 41, in setUp
self.type_manager = managers.TypeManager()
  File neutron/plugins/ml2/managers.py, line 46, in __init__
cfg.CONF.ml2.type_drivers)
  File 
/opt/stack/neutron/.venv/local/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1867, in __getattr__
raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option: ml2
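The error means nothing has registered the ml2 option group on this code path before TypeManager() reads it. A minimal sketch of the missing registration, using plain oslo.config (option names and defaults illustrative):

  from oslo_config import cfg

  ml2_opts = [
      cfg.ListOpt('type_drivers',
                  default=['local', 'flat', 'vlan', 'gre', 'vxlan'],
                  help='Ordered list of network type driver entrypoints'),
  ]
  cfg.CONF.register_opts(ml2_opts, group='ml2')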

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454640

Title:
  ml2.test_rpc cannot be run with testtools

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When running ml2.test_rpc with testtools, the following error occurs:

  ./run_tests.sh -d  
neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
  Tests running...
  ==
  ERROR: 
neutron.tests.unit.plugins.ml2.test_rpc.RpcCallbacksTestCase.test_update_device_list_no_failure
  --
  Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'

  Traceback (most recent call last):
File neutron/tests/unit/plugins/ml2/test_rpc.py, line 41, in setUp
  self.type_manager = managers.TypeManager()
File neutron/plugins/ml2/managers.py, line 46, in __init__
  cfg.CONF.ml2.type_drivers)
File 
/opt/stack/neutron/.venv/local/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1867, in __getattr__
  raise NoSuchOptError(name)
  oslo_config.cfg.NoSuchOptError: no such option: ml2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210274] Re: InvalidQuotaValue should be 400 or 403 rather than 409

2015-03-24 Thread Rossella Sblendido
Marking this as invalid; looking at the comments on the proposed code
review, it seems that 409 is OK after all.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210274

Title:
  InvalidQuotaValue should be 400 or 403 rather than 409

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The neutron.common.exceptions.InvalidQuotaValue exception extends
  Conflict, which maps to a 409 status code, but that doesn't really
  make sense for this exception. I'd say it should be a BadRequest
  (400), but the
  API docs list possible response codes for quota extension operations
  as 401 or 403:

  http://docs.openstack.org/api/openstack-
  network/2.0/content/Update_Quotas.html

  In this case I'd say it's more of a 403 than a 409.  Regardless, 409
  isn't even in the API doc for the quota extension.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1210274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425988] [NEW] in some tests in L3SchedulerTestBaseMixin plugin is mocked without any reason

2015-02-26 Thread Rossella Sblendido
Public bug reported:

Plugin shouldn't be mocked since it's defined by the classes that
inherit from L3SchedulerTestBaseMixin.

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1425988

Title:
  in some tests in L3SchedulerTestBaseMixin plugin is mocked without any
  reason

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Plugin shouldn't be mocked since it's defined by the classes that
  inherit from L3SchedulerTestBaseMixin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1425988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424587] [NEW] In ML2 plugin for accessing private attributes of PortContext members use accessors

2015-02-23 Thread Rossella Sblendido
Public bug reported:

In the ML2 plugin, instead of accessing the private members of
PortContext directly, use accessors. For example:

orig_context._network_context._network

should be:

orig_context.network.current


port = mech_context._port

should be:

port = mech_context.current

and so on...
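A simplified sketch of the accessor pattern being requested (not the real PortContext, which carries more state):

  class PortContext(object):
      def __init__(self, port, network_context):
          self._port = port
          self._network_context = network_context

      @property
      def current(self):
          # public, stable view of the private _port member
          return self._port

      @property
      def network(self):
          return self._network_context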

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit ml2

** Tags added: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424587

Title:
  In ML2 plugin for accessing private attributes of PortContext members
  use accessors

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the ML2 plugin, instead of accessing the private members of
  PortContext directly, use accessors. For example:

  orig_context._network_context._network

  should be:

  orig_context.network.current

  
  port = mech_context._port

  should be:

  port = mech_context.current

  and so on...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421626] [NEW] _sync_vlan_allocations throwing DBDuplicateEntry with postgres HA

2015-02-13 Thread Rossella Sblendido
/session.py, line 559, in _raise_if_duplicate_entry_error
2014-12-15 14:20:39.644 14746 TRACE neutron raise 
exception.DBDuplicateEntry(columns, integrity_error)
2014-12-15 14:20:39.644 14746 TRACE neutron DBDuplicateEntry: (IntegrityError) 
duplicate key value violates unique constraint ml2_vlan_allocations_pkey
2014-12-15 14:20:39.644 14746 TRACE neutron DETAIL:  Key (physical_network, 
vlan_id)=(physnet1, 3156) already exists.
2014-12-15 14:20:39.644 14746 TRACE neutron  'INSERT INTO ml2_vlan_allocations 
(physical_network, vlan_id, allocated) VALUES (%(physical_network)s, %(vlan_id)s
, %(allocated)s)' ({'allocated': False, 'physical_network': 'physnet1', 
'vlan_id': 3156}, {'allocated': False, 'physical_network': 'physnet1', 
'vlan_id': 3158}
, {'allocated': False, 'physical_network': 'physnet1', 'vlan_id': 3160}, 
{'allocated': False, 'physical_network': 'physnet1', 'vlan_id': 3162}, 
{'allocated': F
alse, 'physical_network': 'physnet1', 'vlan_id': 3164}, {'allocated': False, 
'physical_network': 'physnet1', 'vlan_id': 3165}, {'allocated': False, 
'physical_n
etwork': 'physnet1', 'vlan_id': 3167}, {'allocated': False, 'physical_network': 
'physnet1', 'vlan_id': 3169}  ... displaying 10 of 486 total bound parameter se
ts ...  {'allocated': False, 'physical_network': 'physnet1', 'vlan_id': 3648}, 
{'allocated': False, 'physical_network': 'physnet1', 'vlan_id': 3649})

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421626

Title:
  _sync_vlan_allocations throwing DBDuplicateEntry with postgres HA

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  _sync_vlan_allocations is throwing DBDuplicateEntry when two neutron
  servers are rebooted at the same time. Postgres is HA, the FOR UPDATE
  lock is not working, and both servers try to write the data to the DB
  at the same time.
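
  A common mitigation pattern (a sketch only, not necessarily the fix that
  landed) is to treat the duplicate-key error as benign when another server
  has already inserted the same rows:

  # Illustrative names; the DBDuplicateEntry import path varies by release.
  from oslo.db import exception as db_exc

  def sync_vlan_allocations(session, allocations):
      try:
          with session.begin(subtransactions=True):
              for alloc in allocations:
                  session.add(alloc)
      except db_exc.DBDuplicateEntry:
          # Another neutron-server won the race and wrote the same rows;
          # the table already holds the desired state, so ignore it.
          pass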

  2014-12-15 14:20:39.644 14746 TRACE neutron Traceback (most recent call last):
  2014-12-15 14:20:39.644 14746 TRACE neutron   File /usr/bin/neutron-server, 
line 10, in module
  2014-12-15 14:20:39.644 14746 TRACE neutron sys.exit(main())
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/server/__init__.py, line 48, in 
main
  2014-12-15 14:20:39.644 14746 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/service.py, line 112, in serve_wsgi
  2014-12-15 14:20:39.644 14746 TRACE neutron 
LOG.exception(_('Unrecoverable error: please check log '
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/openstack/common/excutils.py, line 
82, in __exit__
  2014-12-15 14:20:39.644 14746 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/service.py, line 105, in serve_wsgi
  2014-12-15 14:20:39.644 14746 TRACE neutron service.start()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/service.py, line 74, in start
  2014-12-15 14:20:39.644 14746 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/service.py, line 173, in _run_wsgi
  2014-12-15 14:20:39.644 14746 TRACE neutron app = 
config.load_paste_app(app_name)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/neutron/common/config.py, line 170, in 
load_paste_app
  2014-12-15 14:20:39.644 14746 TRACE neutron app = 
deploy.loadapp(config:%s % config_path, name=app_name)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
  2014-12-15 14:20:39.644 14746 TRACE neutron return loadobj(APP, uri, 
name=name, **kw)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
  2014-12-15 14:20:39.644 14746 TRACE neutron return context.create()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py, line 710, in 
create
  2014-12-15 14:20:39.644 14746 TRACE neutron return 
self.object_type.invoke(self)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py, line 144, in 
invoke
  2014-12-15 14:20:39.644 14746 TRACE neutron **context.local_conf)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
/usr/lib64/python2.6/site-packages/paste/deploy

[Yahoo-eng-team] [Bug 1421132] [NEW] test_rebuild_availability_range is failing from time to time

2015-02-12 Thread Rossella Sblendido
Public bug reported:

Functional test test_rebuild_availability_range is failing quite often
with the following stacktrace:

2015-02-11 02:48:31.256 | 2015-02-11 02:48:29.890 | Traceback (most recent 
call last):
2015-02-11 02:48:31.256 | 2015-02-11 02:48:29.892 |   File 
neutron/tests/functional/db/test_ipam.py, line 198, in 
test_rebuild_availability_range
2015-02-11 02:48:31.257 | 2015-02-11 02:48:29.893 | 
self._create_port(self.port_id)
2015-02-11 02:48:31.257 | 2015-02-11 02:48:29.894 |   File 
neutron/tests/functional/db/test_ipam.py, line 128, in _create_port
2015-02-11 02:48:31.258 | 2015-02-11 02:48:29.896 | 
self.plugin.create_port(self.cxt, {'port': port})
2015-02-11 02:48:31.258 | 2015-02-11 02:48:29.897 |   File 
neutron/db/db_base_plugin_v2.py, line 1356, in create_port
2015-02-11 02:48:31.258 | 2015-02-11 02:48:29.898 | context, 
ip_address, network_id, subnet_id, port_id)
2015-02-11 02:48:31.259 | 2015-02-11 02:48:29.900 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 470, 
in __exit__
2015-02-11 02:48:31.259 | 2015-02-11 02:48:29.901 | self.rollback()
2015-02-11 02:48:31.260 | 2015-02-11 02:48:29.902 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
2015-02-11 02:48:31.260 | 2015-02-11 02:48:29.904 | 
compat.reraise(exc_type, exc_value, exc_tb)
2015-02-11 02:48:35.598 | 2015-02-11 02:48:29.905 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 467, 
in __exit__
2015-02-11 02:48:35.599 | 2015-02-11 02:48:29.906 | self.commit()
2015-02-11 02:48:35.600 | 2015-02-11 02:48:29.907 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 377, 
in commit
2015-02-11 02:48:35.601 | 2015-02-11 02:48:29.909 | self._prepare_impl()
2015-02-11 02:48:35.601 | 2015-02-11 02:48:29.910 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 357, 
in _prepare_impl
2015-02-11 02:48:35.602 | 2015-02-11 02:48:29.911 | self.session.flush()
2015-02-11 02:48:35.603 | 2015-02-11 02:48:29.913 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1919, 
in flush
2015-02-11 02:48:35.603 | 2015-02-11 02:48:29.914 | self._flush(objects)
2015-02-11 02:48:35.604 | 2015-02-11 02:48:29.915 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 2037, 
in _flush
2015-02-11 02:48:35.605 | 2015-02-11 02:48:29.917 | 
transaction.rollback(_capture_exception=True)
2015-02-11 02:48:35.605 | 2015-02-11 02:48:29.918 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
2015-02-11 02:48:35.606 | 2015-02-11 02:48:29.919 | 
compat.reraise(exc_type, exc_value, exc_tb)
2015-02-11 02:48:35.607 | 2015-02-11 02:48:29.921 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 2001, 
in _flush
2015-02-11 02:48:35.608 | 2015-02-11 02:48:29.922 | 
flush_context.execute()
2015-02-11 02:48:35.608 | 2015-02-11 02:48:29.923 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py, line 
372, in execute
2015-02-11 02:48:35.609 | 2015-02-11 02:48:29.925 | rec.execute(self)
2015-02-11 02:48:35.610 | 2015-02-11 02:48:29.926 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py, line 
526, in execute
2015-02-11 02:48:35.610 | 2015-02-11 02:48:29.927 | uow
2015-02-11 02:48:35.611 | 2015-02-11 02:48:29.929 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 
46, in save_obj
2015-02-11 02:48:35.612 | 2015-02-11 02:48:29.930 | uowtransaction)
2015-02-11 02:48:35.612 | 2015-02-11 02:48:29.931 |   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 
171, in _organize_states_for_save
2015-02-11 02:48:35.613 | 2015-02-11 02:48:29.933 | 
state_str(existing)))
2015-02-11 02:48:35.614 | 2015-02-11 02:48:29.934 | FlushError: New 
instance IPAllocation at 0x7fdadf237d10 with identity key (class 
'neutron.db.models_v2.IPAllocation', (u'10.10.10.2', u'test_sub_id', 
'test_net_id')) conflicts with persistent instance IPAllocation at 
0x7fdad2488cd0


See for example 
http://logs.openstack.org/35/149735/4/gate/gate-neutron-dsvm-functional/fc960fe/console.html

Logstash query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmx1c2hFcnJvclwiICBBTkQgdGFnczpcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIzNzMzNTU0NDc5LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1409774] [NEW] sqlalchemy session needs to be rolled back after catching a db exception

2015-01-12 Thread Rossella Sblendido
Public bug reported:

To avoid errors like this:

sqlalchemy.exc.InvalidRequestError: This Session's transaction has been
rolled back by a nested rollback() call.  To begin a new transaction,
issue Session.rollback() first

the sqlalchemy session needs to be rolled back after catching a db
exception in a transaction, see sqlalchemy faq
http://docs.sqlalchemy.org/en/rel_0_8/faq.html#this-session-s
-transaction-has-been-rolled-back-due-to-a-previous-exception-during-
flush-or-similar . There are places in Neutron code where a db exception
is caught and the session is not properly rolled back. As explained in
the sqlalchemy faq, this is the right way:

try:
    # ... use the session ...
    session.commit()
except:
    session.rollback()
    raise
finally:
    session.close()  # optional, depends on use case

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409774

Title:
  sqlalchemy session needs to be rolled back after catching a db
  exception

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  To avoid errors like this:

  sqlalchemy.exc.InvalidRequestError: This Session's transaction has
  been rolled back by a nested rollback() call.  To begin a new
  transaction, issue Session.rollback() first

  the sqlalchemy session needs to be rolled back after catching a db
  exception in a transaction, see sqlalchemy faq
  http://docs.sqlalchemy.org/en/rel_0_8/faq.html#this-session-s
  -transaction-has-been-rolled-back-due-to-a-previous-exception-during-
  flush-or-similar . There are places in Neutron code where a db
  exception is caught and the session is not properly rolled back. As
  explained in the sqlalchemy faq, this is the right way:

  try:
      # ... use the session ...
      session.commit()
  except:
      session.rollback()
      raise
  finally:
      session.close()  # optional, depends on use case

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1409774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401973] [NEW] HTTPBadRequest: Invalid input for ethertype

2014-12-12 Thread Rossella Sblendido
Public bug reported:

http://logs.openstack.org/57/141057/4/check/check-tempest-dsvm-neutron-
full-2/beb8bb1/logs/screen-q-svc.txt.gz

2014-12-12 07:17:15.382 30282 ERROR neutron.api.v2.resource 
[req-c8bb4096-ddce-4dd8-b1ea-8f213aa4e307 None] create failed
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 382, in create
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 645, in 
prepare_request_body
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPBadRequest(msg)
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource HTTPBadRequest: 
Invalid input for ethertype. Reason: 'None' is not in ['IPv4', 'IPv6'].
2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource

1058 hits in the last 7 days

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW52YWxpZCBpbnB1dCBmb3IgZXRoZXJ0eXBlXCIgIEFORCB0YWdzOlwic2NyZWVuLXEtc3ZjLnR4dFwiIEFORCBidWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIiLCJmaWVsZHMiOlsidGFncyIsImZpbGVuYW1lIiwiYnVpbGRfbWFzdGVyIiwiYnVpbGRfY2hhbmdlIiwiYnVpbGRfYnJhbmNoIl0sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQxODQwMTY4Mjc4MCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401973

Title:
  HTTPBadRequest: Invalid input for ethertype

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/57/141057/4/check/check-tempest-dsvm-
  neutron-full-2/beb8bb1/logs/screen-q-svc.txt.gz

  2014-12-12 07:17:15.382 30282 ERROR neutron.api.v2.resource 
[req-c8bb4096-ddce-4dd8-b1ea-8f213aa4e307 None] create failed
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 382, in create
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 645, in 
prepare_request_body
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPBadRequest(msg)
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource HTTPBadRequest: 
Invalid input for ethertype. Reason: 'None' is not in ['IPv4', 'IPv6'].
  2014-12-12 07:17:15.382 30282 TRACE neutron.api.v2.resource

  1058 hits in the last 7 days

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSW52YWxpZCBpbnB1dCBmb3IgZXRoZXJ0eXBlXCIgIEFORCB0YWdzOlwic2NyZWVuLXEtc3ZjLnR4dFwiIEFORCBidWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIiLCJmaWVsZHMiOlsidGFncyIsImZpbGVuYW1lIiwiYnVpbGRfbWFzdGVyIiwiYnVpbGRfY2hhbmdlIiwiYnVpbGRfYnJhbmNoIl0sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQxODQwMTY4Mjc4MCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285165] Re: test_add_remove_router_interface_with_port_id fails

2014-11-21 Thread Rossella Sblendido
If it's still valid, please add a comment and explain how to reproduce
it. For now, marking it as invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285165

Title:
  test_add_remove_router_interface_with_port_id fails

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  Full log:
  
http://logs.openstack.org/54/72854/6/check/check-tempest-dsvm-neutron-pg/b4e8e1f/logs/testr_results.html.gz

  
tempest.api.network.test_routers.RoutersTest.test_add_remove_router_interface_with_port_id[gate,smoke]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2014-02-26 13:07:19,174 Request: POST http://127.0.0.1:5000/v2.0/tokens
  2014-02-26 13:07:19,174 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json'}
  2014-02-26 13:07:19,174 Request Body: {auth: {tenantName: 
RoutersTest-1350141744, passwordCredentials: {username: 
RoutersTest-1283447519, password: pass}}}
  2014-02-26 13:07:19,322 Response Status: 200
  2014-02-26 13:07:19,322 Response Headers: {'content-length': '11138', 'date': 
'Wed, 26 Feb 2014 13:07:19 GMT', 'content-type': 'application/json', 'vary': 
'X-Auth-Token', 'connection': 'close'}
  2014-02-26 13:07:19,323 Response Body: {access: {token: {issued_at: 
2014-02-26T13:07:19.291621, expires: 2014-02-26T14:07:19Z, id: 
MIITZgYJKoZIhvcNAQcCoIITVzCCE1MCAQExCTAHBgUrDgMCGjCCEbwGCSqGSIb3DQEHAaCCEa0EghGpeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wMi0yNlQxMzowNzoxOS4yOTE2MjEiLCAiZXhwaXJlcyI6ICIyMDE0LTAyLTI2VDE0OjA3OjE5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIlJvdXRlcnNUZXN0LTEzNTAxNDE3NDQtZGVzYyIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImEyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4IiwgIm5hbWUiOiAiUm91dGVyc1Rlc3QtMTM1MDE0MTc0NCJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YyL2EyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YyL2EyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4IiwgImlkIjogIjZjYmFjNTE4MDllZDQxOTVhN2YwZjVmZmRmOWU1OGJkIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzQvdjIvYTJkMjBkMjI4ODkwNDY1Mz
 
hiMTlhNWVjMTQzMDM0ZTgifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjk2OTYvIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5Njk2LyIsICJpZCI6ICIxNTFhOTFiNjRkNDE0ZjZlODc3MjFiYzNlMjc2NmNjMSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5Njk2LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi9hMmQyMGQyMjg4OTA0NjUzOGIxOWE1ZWMxNDMwMzRlOCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi9hMmQyMGQyMjg4OTA0NjUzOGIxOWE1ZWMxNDMwMzRlOCIsICJpZCI6ICJkYWNjMjQ1NDM1NTc0ZWJkYjdlY2JkMjc2YzI5MzQ3YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc2L3YyL2EyZDIwZDIyODg5MDQ2NTM4YjE5YTVlYzE0MzAzNGU4In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVydjIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8
 
vMTI3LjAuMC4xOjg3NzQvdjMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3L
  2014-02-26 13:07:19,323 Large body (11138) md5 summary: 
e2591b9572309e9279426e6c3f734264
  2014-02-26 13:07:19,324 Request: POST http://127.0.0.1:9696/v2.0/networks
  2014-02-26 13:07:19,324 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': 'Token omitted'}
  2014-02-26 13:07:19,324 Request Body: {network: {name: 
test-network--1963481106}}
  2014-02-26 13:07:19,507 Response Status: 201
  2014-02-26 13:07:19,507 OpenStack request id 
req-b0e7bcdf-9f1d-4171-973f-b00c2cbec9b9
  2014-02-26 13:07:19,508 Response Headers: {'content-length': '220', 'date': 
'Wed, 26 Feb 2014 13:07:19 GMT', 'content-type': 'application/json; 
charset=UTF-8', 'connection': 'close'}
  2014-02-26 13:07:19,508 Response Body: {network: {status: ACTIVE, 
subnets: [], name: test-network--1963481106, admin_state_up: true, 
tenant_id: a2d20d22889046538b19a5ec143034e8, shared: false, id: 
d70222a5-7488-41ea-89c4-43cb82607668}}
  2014-02-26 13:07:19,509 Request: POST http://127.0.0.1:9696/v2.0/subnets
  2014-02-26 13:07:19,510 Request Headers: {'Content-Type': 'application/json', 
'Accept': 'application/json', 'X-Auth-Token': 'Token omitted'}
  2014-02-26 13:07:19,510 Request Body: {subnet: {network_id: 
d70222a5-7488-41ea-89c4-43cb82607668, ip_version: 4, cidr: 
10.100.0.0/28}}
  2014-02-26 13:07:19,610 Response Status: 201
  2014-02-26 13:07:19,610 OpenStack request id 
req-430cbc36-ad09-4984-8cf7-2be673a8540b
  2014-02-26 13:07:19,610 Response Headers: 

[Yahoo-eng-team] [Bug 1306759] Re: 14.04 some updates will cover file /etc/sudoers

2014-11-21 Thread Rossella Sblendido
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1306759

Title:
  14.04 some updates will cover file /etc/sudoers

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  After I installed some updates from Software Updater for a while, I found my 
account cannot use the sudo command anymore. Ubuntu told me that I'm not a 
sudoer. After rebooting my laptop and entering recovery mode, I found 2 lines 
of my /etc/sudoers file were changed from 
root ALL=(ALL) ALL
my_account ALL=(ALL) ALL
  to
 root ALL=(ALL:ALL) ALL

  I'm the only user of my laptop, and in recent days I didn't change
  sudoers. I think it's a bug in one of the update files.

  x86_64
  Ubuntu 14.04 LTS

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1306759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282206] Re: Uncaught GreenletExit in ProcessLauncher if wait called after greenlet kill

2014-11-21 Thread Rossella Sblendido
Fix for Neutron was released https://review.openstack.org/#/c/98259/

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282206

Title:
  Uncaught GreenletExit in ProcessLauncher if wait called after greenlet
  kill

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  I'm running wait in ProcessLauncher in a green thread.  I attempted to
  kill the green thread and then call wait so that the process launcher
  object can reap its child processes cleanly.  This resulted in a
  traceback caused by a GreenletExit exception being thrown.

  The eventlet documentation states multiple times that GreenletExit is
  thrown after .kill() has been called to kill a thread.  I think
  ProcessLauncher should expect this and deal with it.
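
  A sketch of the defensive handling the report asks for (an illustrative
  stand-in, not the real oslo ProcessLauncher; GreenletExit comes from the
  greenlet package that eventlet builds on):

  import eventlet
  import greenlet

  class Launcher(object):

      def _wait_loop(self):
          while True:
              eventlet.sleep(0.01)  # stand-in for reaping child processes

      def wait(self):
          try:
              self._wait_loop()
          except greenlet.GreenletExit:
              # Raised when the green thread running wait() is killed;
              # swallow it so the launcher can still clean up children.
              pass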

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366056] [NEW] delete_subnet in ml2 plugin makes 2 calls to get the subnet

2014-09-05 Thread Rossella Sblendido
Public bug reported:

delete_subnet makes 2 calls to get the same subnet:

one here: 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L711
the second here: 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L734

Only one call is needed

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366056

Title:
  delete_subnet in ml2 plugin makes 2 calls to get the  subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  delete_subnet makes 2 calls to get the same subnet:

  one here: 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L711
  the second here: 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L734

  Only one call is needed

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364085] [NEW] DBDuplicateEntry: (IntegrityError) duplicate key value violates unique constraint ipavailabilityranges_pkey

2014-09-01 Thread Rossella Sblendido
Public bug reported:

It affects only postgresql
To reproduce this bug:

1) Create a network with few IPs (eg /29)
2) Create 2 VMs that use the /29 network.
3) Destroy the 2 VMs
4) From horizon create a number of VMs > IPs available (eg. 8)

When there's no IP available, _rebuild_availability_ranges will be
called to recycle the IPs of the VMs that no longer exist (the ones
that we created at step 2). From the logs I see that when there's a
bulk creation of ports and the IP range is exhausted, 2 or more
_rebuild_availability_ranges are triggered at the same time by different
port creation operations. This leads to the DBDuplicateEntry, since the
operation that is performed last will try to insert stale data into the
DB.

See log:

 362399 2014-08-26 10:01:01.926 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
All IPs from subnet a77b383d-e881-49c1-8143-910ec46fe42a (10.238.192.0
/18) allocated _try_generate_ip 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:384
 362400 2014-08-26 10:01:01.927 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Rebuilding availability ranges for subnet {'allocation_pools': [{'star
t': u'10.238.200.129', 'end': u'10.238.200.254'}], 'host_routes': [], 'cidr': 
u'10.238.192.0/18', 'id': u'a77b383d-e881-49c1-8143-910ec46fe42a', 
'name': u'floating', 'enable_dhcp': False, 'network_id': 
u'a4f3c5ac-de4a-44c5-94b8-bd07a14c4d1a', 'tenant_id': u'774289027a8441babaed
f5774e49e971', 'dns_nameservers': [], 'gateway_ip': u'10.238.192.3', 
'ip_version': 4, 'shared': False} _rebuild_availability_ranges /usr/li
b64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:414
 362401 2014-08-26 10:01:01.928 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Generated mac for network a4f3c5ac-de4a-44c5-94b8-bd07a14c4d1a is fa:1
6:3e:6f:f3:02 _generate_mac 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:305
 362402 2014-08-26 10:01:01.952 27821 INFO neutron.wsgi [-] 10.1.100.1 - - 
[26/Aug/2014 10:01:01] GET /v2.0/subnets.json?id=fa32a700-57ce-4bfc-a2b
f-7fb19e20f81b HTTP/1.1 200 504 0.091873
 362403 
 362404 2014-08-26 10:01:01.966 27821 INFO neutron.wsgi [-] (27821) accepted 
('10.1.100.1', 21561)
 362405 
 362406 2014-08-26 10:01:01.977 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
All IPs from subnet a77b383d-e881-49c1-8143-910ec46fe42a (10.238.192.0
/18) allocated _try_generate_ip 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:384
 362407 2014-08-26 10:01:01.977 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Rebuilding availability ranges for subnet {'allocation_pools': [{'star
t': u'10.238.200.129', 'end': u'10.238.200.254'}], 'host_routes': [], 'cidr': 
u'10.238.192.0/18', 'id': u'a77b383d-e881-49c1-8143-910ec46fe42a', 
'name': u'floating', 'enable_dhcp': False, 'network_id': 
u'a4f3c5ac-de4a-44c5-94b8-bd07a14c4d1a', 'tenant_id': u'774289027a8441babaed
f5774e49e971', 'dns_nameservers': [], 'gateway_ip': u'10.238.192.3', 
'ip_version': 4, 'shared': False} _rebuild_availability_ranges 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:414

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New


** Tags: postgresql

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

** Tags added: postgresql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364085

Title:
  DBDuplicateEntry: (IntegrityError) duplicate key value violates unique
  constraint ipavailabilityranges_pkey

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It affects only postgresql
  To reproduce this bug:

  1) Create a network with few IPs (eg /29)
  2) Create 2 VMs that use the /29 network.
  3) Destroy the 2 VMs
  4) From horizon create a number of VMs > IPs available (eg. 8)

  When there's no IP available, _rebuild_availability_ranges will be
  called to recycle the IPs of the VMs that no longer exist (the ones
  that we created at step 2). From the logs I see that when there's a
  bulk creation of ports and the IP range is exhausted, 2 or more
  _rebuild_availability_ranges are triggered at the same time by
  different port creation operations. This leads to the DBDuplicateEntry,
  since the operation that is performed last will try to insert stale
  data into the DB.

  See log:

   362399 2014-08-26 10:01:01.926 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
All IPs from subnet a77b383d-e881-49c1-8143-910ec46fe42a (10.238.192.0
/18) allocated _try_generate_ip 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:384
   362400 2014-08-26 10:01:01.927 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Rebuilding availability ranges for subnet {'allocation_pools': [{'star
t': u'10.238.200.129', 'end': u'10.238.200.254

[Yahoo-eng-team] [Bug 1361573] [NEW] TAP_PREFIX_LEN constant is defined several times

2014-08-26 Thread Rossella Sblendido
Public bug reported:

TAP_PREFIX_LEN constant is defined several times; it should be defined
only once, preferably in neutron/common/constants.py.

From a coarse grep:

grep -r TAP . | grep 3

./neutron/plugins/brocade/NeutronPlugin.py:TAP_PREFIX_LEN = 3
./neutron/plugins/linuxbridge/lb_neutron_plugin.py:TAP_PREFIX_LEN = 3
./neutron/plugins/ml2/rpc.py:TAP_DEVICE_PREFIX_LENGTH = 3
./neutron/plugins/mlnx/rpc_callbacks.py:TAP_PREFIX_LEN = 3
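
A sketch of the consolidation; the TAP_DEVICE_PREFIX helper shown here is an
assumption, only TAP_PREFIX_LEN and the target module come from this report:

# neutron/common/constants.py
TAP_DEVICE_PREFIX = 'tap'
TAP_PREFIX_LEN = len(TAP_DEVICE_PREFIX)  # == 3

# in any plugin that needs it:
from neutron.common import constants

device = 'tap0a1b2c3d4e5'
port_id_prefix = device[constants.TAP_PREFIX_LEN:]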

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361573

Title:
  TAP_PREFIX_LEN constant is defined several times

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  TAP_PREFIX_LEN constant is defined several times; it should be defined
  only once, preferably in neutron/common/constants.py.

  From a coarse grep:

  grep -r TAP . | grep 3

  ./neutron/plugins/brocade/NeutronPlugin.py:TAP_PREFIX_LEN = 3
  ./neutron/plugins/linuxbridge/lb_neutron_plugin.py:TAP_PREFIX_LEN = 3
  ./neutron/plugins/ml2/rpc.py:TAP_DEVICE_PREFIX_LENGTH = 3
  ./neutron/plugins/mlnx/rpc_callbacks.py:TAP_PREFIX_LEN = 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348703] [NEW] LinuxInterfaceDriver plug and unplug methods in derived class already check for device existence

2014-07-25 Thread Rossella Sblendido
Public bug reported:

LinuxInterfaceDriver plug and unplug in derived classes already check if
the device exists. There's no need to duplicate the check in the code
that is calling those methods. See l3_agent.py in internal_network_added
for example:

if not ip_lib.device_exists(interface_name,
                            root_helper=self.root_helper,
                            namespace=ri.ns_name):
    self.driver.plug(network_id, port_id, interface_name, mac_address,
                     namespace=ri.ns_name,
                     prefix=INTERNAL_DEV_PREFIX)

the "if not ip_lib.device_exists" check is a duplicate, and it's
expensive.
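
Since the driver's plug() already guards against an existing device, the call
site can shrink to just the plug call (a sketch of the simplified code):

self.driver.plug(network_id, port_id, interface_name, mac_address,
                 namespace=ri.ns_name,
                 prefix=INTERNAL_DEV_PREFIX)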

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: low-hanging-fruit

** Changed in: neutron
   Status: New => Confirmed

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348703

Title:
  LinuxInterfaceDriver plug and unplug methods in derived class already
  check for device existence

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  LinuxInterfaceDriver plug and unplug in derived classes already check
  if the device exists. There's no need to duplicate the check in the
  code that is calling those methods. See l3_agent.py in
  internal_network_added for example:

  if not ip_lib.device_exists(interface_name,
                              root_helper=self.root_helper,
                              namespace=ri.ns_name):
      self.driver.plug(network_id, port_id, interface_name, mac_address,
                       namespace=ri.ns_name,
                       prefix=INTERNAL_DEV_PREFIX)

  the "if not ip_lib.device_exists" check is a duplicate, and it's
  expensive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338606] [NEW] if a deadlock exception is got a retry logic should be in place to retry the operation

2014-07-07 Thread Rossella Sblendido
Public bug reported:

In Neutron there's no retry logic in case a DB deadlock occurs.
If a deadlock occurs, the operation should be retried.
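
A sketch of the kind of retry wrapper this implies; the decorator name and
the retry policy are illustrative, not the change that was merged:

import time

from oslo.db import exception as db_exc  # import path varies by release

def retry_on_deadlock(max_retries=3, delay=0.5):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except db_exc.DBDeadlock:
                    # Let the competing transaction finish, then replay
                    # the whole operation.
                    if attempt == max_retries - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator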

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338606

Title:
  if a deadlock exception is got a retry logic should be in place to
  retry the operation

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In Neutron there's no retry logic in case a DB deadlock occurs.
  If a deadlock occurs, the operation should be retried.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337275] Re: fail to launch more than 12 instances after updating quota with 'PortLimitExceeded: Maximum number of ports exceeded'

2014-07-03 Thread Rossella Sblendido
You should modify the port quota and use something > 100 if you plan to launch 
100 instances. Some ports are created automatically by Neutron and are included 
in the quota (like dhcp ports, for example). 
Modify the port quota in horizon, or use the command line, specifying in the 
credentials OS_USERNAME=admin_user, 
OS_TENANT_NAME=tenant_that_will_create_the_VMs:

neutron quota-update --port 120

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337275

Title:
  fail to launch more than 12 instances after updating quota with
  'PortLimitExceeded: Maximum number of ports exceeded'

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I installed openstack with packstack as AIO + 3 computes. 
  Trying to run 100 instances, we fail to launch more than 12 with 
'PortLimitExceeded: Maximum number of ports exceeded' ERROR. 

  to reproduce - launch 100 instances at once after changing admin
  tenant project default quota.

  attaching the answer file + logs but here is the ERROR from nova-
  compute.log

  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] Traceback (most recent call last):
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1311, in 
_build_instance
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] set_access_ip=set_access_ip)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 399, in 
decorated_function
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] return function(self, context, *args, 
**kwargs)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1723, in _spawn
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] six.reraise(self.type_, self.value, 
self.tb)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1720, in _spawn
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] block_device_info)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2253, in 
spawn
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] admin_pass=admin_password)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2704, in 
_create_image
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] instance, network_info, admin_pass, 
files, suffix)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2522, in 
_inject_data
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] net = 
netutils.get_injected_network_template(network_info)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/virt/netutils.py, line 71, in 
get_injected_network_template
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] if not (network_info and template):
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 420, in __len__
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] return self._sync_wrapper(fn, *args, 
**kwargs)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1334323] Re: Check ips availability before adding network to DHCP agent

2014-06-26 Thread Rossella Sblendido
Hello mouadino,
thanks for filing this bug. 
This is the expected behavior: every dhcp agent will have a port on a network. 
You should take this into account when you create the allocation pool.
If you want to change this behavior, I suggest you file a blueprint instead of 
a bug.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334323

Title:
  Check ips availability before adding network to DHCP agent

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Hi there,

  How to reproduce ?
  ===

  First of all it's better to use HA DHCP agents, i.e. running more than
  one DHCP agent, and setting dhcp_agents_per_network to the number
  of DHCP agents that you're running.

  Now for the sake of this example let's say that
  dhcp_agents_per_network=3.

  Now create a network with a subnet /30 for example, or a big subnet
  e.g. /24 but with a smaller allocation pool, e.g. one that contains
  only 1 or 2 ips.

  
  What happen ?
  

  A lot of exception start showing up in the logs in the form:

 IpAddressGenerationFailure: No more IP addresses available on
  network

  
  What happen really ?
  

  Our small network was basically scheduled to all DHCP agents that are
  up and active, and each one of them will try to create a port for
  itself, but because our small network has fewer IPs than
  dhcp_agents_per_network, some of these ports will fail to be
  created, and this will happen on each iteration of the DHCP agent main
  loop.

  Another case where if you have more than one subnet in a network, and
  one of them is pretty small e.g.

  net1 - subnet1 10.0.0.0/24
subnet2 10.10.0.0/30

  Than errors also start to happen in every iteration of the dhcp agent.

  What is expected ?
  ===

  IMHO only agents that can handle the network should hold it, and a
  direct call to add a network to a DHCP agent should also fail if
  there are no IPs left to satisfy the new DHCP port creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332481] Re: Creating and deleting the vm port multiple times the allocated ips are not used automatically

2014-06-20 Thread Rossella Sblendido
This is the expected behavior. This change https://review.openstack.org/58017 
modified the way IPs are recycled.
Instead of being recycled immediately after the release of a port, the complex 
operation of rebuilding the availability table is performed only when the 
table is exhausted.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332481

Title:
  Creating and deleting the vm port multiple times the allocated ips are
  not used automatically

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  DESCRIPTION: When creating and deleting the vm port multiple times, the
  allocated ips are not reused while spawning the new vm port.

  Steps to Reproduce: 
  1. Create a network. 
  2. Create a subnet 10.10.1.0/24
  3. Spawned a vm make sure vm got the ip.  (say 10.10.1.2)
  4. Delete the vm 
  5. Spawned another vm make sure its gets the ip .
  6. Make sure in the port list the previous allocated ip is not listed.

  Actual Results: The VM gets the next IP (10.10.1.3), not the IP that
  got released.

  Expected Results: The VM should get the released IPs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331564] [NEW] remove select for update lock from db_base_plugin

2014-06-18 Thread Rossella Sblendido
Public bug reported:

MySQL Galera does not support SELECT ... FOR UPDATE[1], since it has no
concept of cross-node locking of records and results are non-
deterministic.

Remove the use of SELECT FOR UPDATE in the db_base_plugin

[1]http://lists.openstack.org/pipermail/openstack-
dev/2014-May/035264.html
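
One Galera-friendly alternative (a sketch, assuming the IPAvailabilityRange
model from neutron/db/models_v2.py is what the lock protects today) is an
atomic compare-and-swap UPDATE whose row count reveals who won the race:

from neutron.db import models_v2

def try_shrink_range(session, ip_range, new_first_ip):
    query = session.query(models_v2.IPAvailabilityRange).filter_by(
        allocation_pool_id=ip_range.allocation_pool_id,
        first_ip=ip_range.first_ip)
    # UPDATE ... WHERE first_ip = <expected>; 0 rows means another
    # server already consumed this range and the caller must retry.
    return query.update({'first_ip': new_first_ip}) == 1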

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1331564

Title:
  remove select for update lock from db_base_plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  MySQL Galera does not support SELECT ... FOR UPDATE[1], since it has
  no concept of cross-node locking of records and results are non-
  deterministic.

  Remove the use of SELECT FOR UPDATE in the db_base_plugin

  [1]http://lists.openstack.org/pipermail/openstack-
  dev/2014-May/035264.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1331564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330925] [NEW] Create bulk rpc method for update_device_up and update_device_down

2014-06-17 Thread Rossella Sblendido
Public bug reported:

Something similar to what was done for get_device_details should be done
for update_device_up and update_device_down, that is, being able to set
multiple devices up or down using one rpc call.
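
A sketch of what the plugin-side handler could look like, modeled on the
existing per-device method (the bulk method name and signature are
hypothetical, and self stands for the ML2 RPC callbacks class):

def update_device_list_up(self, rpc_context, devices, agent_id, host):
    # One RPC round trip replaces N calls from the agent; each device
    # still goes through the existing single-device path.
    return [self.update_device_up(rpc_context, device=device,
                                  agent_id=agent_id, host=host)
            for device in devices]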

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330925

Title:
  Create bulk rpc method for update_device_up and update_device_down

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Something similar to what was done for get_device_details should be
  done for update_device_up and update_device_down, that is, being able
  to set multiple devices up or down using one rpc call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330924] [NEW] create a method to get all devices details in one shot

2014-06-17 Thread Rossella Sblendido
Public bug reported:

There's an rpc call that allows getting the details of multiple devices
in one shot. See https://review.openstack.org/#/c/66899 . There should
be a method on the plugin side to minimize access to the db: instead of
issuing a select for every port, select all the ports at once.
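
On the plugin side that reduces to a single query, roughly as follows (a
sketch; the session access and model import follow the usual neutron
patterns rather than a committed change):

from neutron.db import models_v2

def get_ports_by_ids(context, port_ids):
    # One SELECT for all requested ports instead of one per port.
    return (context.session.query(models_v2.Port)
            .filter(models_v2.Port.id.in_(port_ids))
            .all())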

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330924

Title:
  create a method to get all devices details in one shot

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There's an rpc call that allows getting the details of multiple
  devices in one shot. See https://review.openstack.org/#/c/66899 .
  There should be a method on the plugin side to minimize access to the
  db: instead of issuing a select for every port, select all the ports
  at once.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305547] [NEW] grenade job TestSecurityGroupsBasicOps fails with IpAddressGenerationFailureClient

2014-04-10 Thread Rossella Sblendido
Public bug reported:

4-04-09 17:21:12.389 | Traceback (most recent call last):
2014-04-09 17:21:12.389 |   File tempest/test.py, line 122, in wrapper
2014-04-09 17:21:12.389 | return f(self, *func_args, **func_kwargs)
2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 440, in 
test_cross_tenant_traffic
2014-04-09 17:21:12.389 | self._deploy_tenant(self.alt_tenant)
2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 308, in 
_deploy_tenant
2014-04-09 17:21:12.389 | self._set_access_point(tenant)
2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 271, in 
_set_access_point
2014-04-09 17:21:12.389 | self._assign_floating_ips(server)
2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 275, in 
_assign_floating_ips
2014-04-09 17:21:12.389 | floating_ip = self._create_floating_ip(server, 
public_network_id)
2014-04-09 17:21:12.390 |   File tempest/scenario/manager.py, line 670, in 
_create_floating_ip
2014-04-09 17:21:12.390 | result = 
self.network_client.create_floatingip(body=body)
2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 109, 
in with_params
2014-04-09 17:21:12.390 | ret = self.function(instance, *args, **kwargs)
2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 452, 
in create_floatingip
2014-04-09 17:21:12.390 | return self.post(self.floatingips_path, body=body)
2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 1294, 
in post
2014-04-09 17:21:12.390 | headers=headers, params=params)
2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 1217, 
in do_request
2014-04-09 17:21:12.390 | self._handle_fault_response(status_code, 
replybody)
2014-04-09 17:21:12.391 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 1187, 
in _handle_fault_response
2014-04-09 17:21:12.391 | exception_handler_v20(status_code, des_error_body)
2014-04-09 17:21:12.391 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 76, in 
exception_handler_v20
2014-04-09 17:21:12.391 | status_code=status_code)
2014-04-09 17:21:12.391 | IpAddressGenerationFailureClient: No more IP 
addresses available on network 9965a7b8-6f2f-40fd-bd44-66a23f8bec6c.

Example here http://logs.openstack.org/44/86344/1/check/check-grenade-
dsvm-neutron/b7e18eb/console.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1305547

Title:
  grenade job TestSecurityGroupsBasicOps fails with
  IpAddressGenerationFailureClient

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  4-04-09 17:21:12.389 | Traceback (most recent call last):
  2014-04-09 17:21:12.389 |   File tempest/test.py, line 122, in wrapper
  2014-04-09 17:21:12.389 | return f(self, *func_args, **func_kwargs)
  2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 440, in 
test_cross_tenant_traffic
  2014-04-09 17:21:12.389 | self._deploy_tenant(self.alt_tenant)
  2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 308, in 
_deploy_tenant
  2014-04-09 17:21:12.389 | self._set_access_point(tenant)
  2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 271, in 
_set_access_point
  2014-04-09 17:21:12.389 | self._assign_floating_ips(server)
  2014-04-09 17:21:12.389 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 275, in 
_assign_floating_ips
  2014-04-09 17:21:12.389 | floating_ip = self._create_floating_ip(server, 
public_network_id)
  2014-04-09 17:21:12.390 |   File tempest/scenario/manager.py, line 670, in 
_create_floating_ip
  2014-04-09 17:21:12.390 | result = 
self.network_client.create_floatingip(body=body)
  2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 109, 
in with_params
  2014-04-09 17:21:12.390 | ret = self.function(instance, *args, **kwargs)
  2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 452, 
in create_floatingip
  2014-04-09 17:21:12.390 | return self.post(self.floatingips_path, 
body=body)
  2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 1294, 
in post
  2014-04-09 17:21:12.390 | headers=headers, params=params)
  2014-04-09 17:21:12.390 |   File 
/opt/stack/new/python-neutronclient/neutronclient/v2_0/client.py, line 1217, 
in do_request
  2014-04-09 17:21:12.390 |  

[Yahoo-eng-team] [Bug 1305656] [NEW] tests that inherit from BaseTestCase don't need to clean up the mock.patches

2014-04-10 Thread Rossella Sblendido
Public bug reported:

In BaseTestCase a cleanup is added to stop all patches:

self.addCleanup(mock.patch.stopall)

The tests that inherit from BaseTestCase don't need to stop their
patches.
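
A sketch of the pattern; BaseTestCase here is a stand-in for the real
neutron test base class, and the patch target is illustrative:

import mock
import testtools

class BaseTestCase(testtools.TestCase):
    def setUp(self):
        super(BaseTestCase, self).setUp()
        self.addCleanup(mock.patch.stopall)  # stops every started patcher

class MyTestCase(BaseTestCase):
    def setUp(self):
        super(MyTestCase, self).setUp()
        patcher = mock.patch('neutron.agent.linux.ip_lib.device_exists')
        self.mock_device_exists = patcher.start()
        # No self.addCleanup(patcher.stop) needed: the base class
        # cleanup already stops it.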

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1305656

Title:
  tests that inherit from BaseTestCase don't need to clean up the
  mock.patches

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In BaseTestCase a cleanup is added to stop all patches:

  self.addCleanup(mock.patch.stopall)

  The tests that inherit from BaseTestCase don't need to stop their
  patches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1305656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305800] [NEW] linuxbridge remove the use of root helper when it is not needed

2014-04-10 Thread Rossella Sblendido
Public bug reported:

The linux bridge agent uses the root helper even where there's no need
for it.

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1305800

Title:
  linuxbridge remove the use of root helper when it is not needed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The linux bridge agent uses the root helper even where there's no need
  for it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1305800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305083] [NEW] reuse_existing is never used by the dhcp agent

2014-04-09 Thread Rossella Sblendido
Public bug reported:

The dhcp agent never uses the reuse_existing flag, which is set to true
by default. It can be removed.
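
A hedged sketch of the dead-flag pattern being removed (the class,
method, and signature are illustrative only, not the actual agent code):

    class DeviceManager(object):
        # reuse_existing defaults to True and no caller ever passes
        # False, so the parameter and the branch on it are dead code.
        def setup(self, network, reuse_existing=True):
            if not reuse_existing:
                # never taken: every call site relies on the default
                pass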

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1305083

Title:
  reuse_existing is never used by the dhcp agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The dhcp agent never uses the reuse_existing flag, which is set to
  true by default. It can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1305083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302463] [NEW] LinuxBridgeManager is not using device_exists from ip_lib

2014-04-04 Thread Rossella Sblendido
Public bug reported:

LinuxBridgeManager adds a new method for checking the existence of a
device instead of using device_exists from ip_lib.
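
For context, a hedged sketch of the preferred call (module path as in
the Neutron tree of the time; the exact signature may vary by release):

    from neutron.agent.linux import ip_lib

    # Reuse the shared helper instead of reimplementing the check:
    if ip_lib.device_exists('brq12345678-ab'):
        # the bridge is already there, nothing to create
        pass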

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302463

Title:
  LinuxBridgeManager is not using device_exists from ip_lib

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  LinuxBridgeManager adds a new method for checking the existence of a
  device instead of using device_exists from ip_lib.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300322] [NEW] TestSecurityGroupsBasicOps IndexError

2014-03-31 Thread Rossella Sblendido
Public bug reported:

Seen in the neutron full job:

Console log:
2014-03-29 19:09:49.985 | Traceback (most recent call last):
2014-03-29 19:09:49.985 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 167, in setUp
2014-03-29 19:09:49.985 | self._verify_mac_addr(self.primary_tenant)
2014-03-29 19:09:49.985 |   File 
tempest/scenario/test_security_groups_basic_ops.py, line 429, in 
_verify_mac_addr
2014-03-29 19:09:49.985 | port['mac_address'].lower()) for port in port_list
2014-03-29 19:09:49.985 | IndexError: list index out of range

The Neutron log reports two errors. First, a problem in the communication with nova:

2014-03-28 01:38:20.922 31361 ERROR neutron.notifiers.nova [-] Failed to notify 
nova on events: [{'status': 'completed', 'tag': 
u'd15f3fdd-31fb-48da-9563-17796df7b6aa', 'name': 'network-vif-plugged', 
'server_uuid': u'devstack-precise-check-rax-iad-3242364.slave.openstack.org'}]
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova Traceback (most 
recent call last):
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/neutron/neutron/notifiers/nova.py, line 186, in send_events
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova batched_events)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/python-novaclient/novaclient/v1_1/contrib/server_external_events.py,
 line 39, in create
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova return_raw=True)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/python-novaclient/novaclient/base.py, line 152, in _create
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova _resp, body = 
self.api.client.post(url, body=body)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/python-novaclient/novaclient/client.py, line 287, in post
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova return 
self._cs_request(url, 'POST', **kwargs)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/python-novaclient/novaclient/client.py, line 261, in 
_cs_request
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova **kwargs)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/python-novaclient/novaclient/client.py, line 243, in 
_time_request
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova resp, body = 
self.request(url, method, **kwargs)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova   File 
/opt/stack/new/python-novaclient/novaclient/client.py, line 237, in request
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova raise 
exceptions.from_response(resp, body, url, method)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova NotFound: No 
instances found for any event (HTTP 404) (Request-ID: 
req-904163b3-3b79-4b6f-8dc7-e53ed7520212)
2014-03-28 01:38:20.922 31361 TRACE neutron.notifiers.nova 

Second, a problem querying the DB:
2014-03-28 01:59:10.584 31361 ERROR neutron.openstack.common.rpc.amqp 
[req-6f6479c4-4893-4c8e-8a06-da6c70c572e8 None] Exception during message 
handling
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 462, in 
_process_data
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp 
**args)
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 45, in dispatch
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/common/agent_driver_base.py,
 line 97, in get_logical_device
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp pool 
= qry.one()
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2361, in 
one
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp raise 
orm_exc.NoResultFound(No row was found for one())
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp 
NoResultFound: No row was found for one()
2014-03-28 01:59:10.584 31361 TRACE neutron.openstack.common.rpc.amqp

Example here: http://logs.openstack.org/05/77905/4/check/check-tempest-
dsvm-neutron-full/a511ba9/

** Affects: neutron
 Importance: Undecided

[Yahoo-eng-team] [Bug 1298310] [NEW] neutron takes a long time to reconnect to the db

2014-03-27 Thread Rossella Sblendido
Public bug reported:

In an HA setup, DB node 1 goes down and the DB fails over to DB
node 2 using the same IP. Neutron reports

OperationalError: (OperationalError) socket not open

This is because the TCP connection is broken.
However, Neutron takes minutes to reconnect to the new DB node.
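
For reference, a hedged sketch of the standard SQLAlchemy pessimistic
disconnect-handling recipe, which detects a dead socket at connection
checkout instead of waiting on TCP timeouts (the URL and settings are
placeholders; Neutron's real engine setup lives in the shared
openstack.common db code):

    import sqlalchemy as sa
    from sqlalchemy import event, exc

    engine = sa.create_engine('mysql://user:secret@db-vip/neutron',
                              pool_recycle=3600)

    @event.listens_for(engine, 'checkout')
    def ping_connection(dbapi_conn, conn_record, conn_proxy):
        # Cheap ping at checkout; if the failover broke the socket,
        # tell the pool to discard this connection and open a fresh
        # one rather than handing out a dead connection for minutes.
        try:
            cursor = dbapi_conn.cursor()
            cursor.execute('SELECT 1')
            cursor.close()
        except Exception:
            raise exc.DisconnectionError()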

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298310

Title:
  neutron takes a long time to reconnect to the db

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In an HA setup, DB node 1 goes down and the DB fails over to DB
  node 2 using the same IP. Neutron reports

  OperationalError: (OperationalError) socket not open

  This is because the TCP connection is broken.
  However, Neutron takes minutes to reconnect to the new DB node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291918] [NEW] ClientException The server has either erred or is incapable of performing the requested operation

2014-03-13 Thread Rossella Sblendido
Public bug reported:

This error is often seen in the Neutron full job.
Example here:
http://logs.openstack.org/27/49227/34/check/check-tempest-dsvm-neutron-full/4841d69/console.html
Logstash query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBtZXNzYWdlOlwiQ2xpZW50RXhjZXB0aW9uOiBUaGUgc2VydmVyIGhhcyBlaXRoZXIgZXJyZWQgb3IgaXMgaW5jYXBhYmxlIG9mIHBlcmZvcm1pbmcgdGhlIHJlcXVlc3RlZCBvcGVyYXRpb25cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk0NzA3OTMxNTExfQ==

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291918

Title:
  ClientException The server has either erred or is incapable of
  performing the requested operation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This error is often seen in the Neutron full job.
  Example here:
  
http://logs.openstack.org/27/49227/34/check/check-tempest-dsvm-neutron-full/4841d69/console.html
  Logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBtZXNzYWdlOlwiQ2xpZW50RXhjZXB0aW9uOiBUaGUgc2VydmVyIGhhcyBlaXRoZXIgZXJyZWQgb3IgaXMgaW5jYXBhYmxlIG9mIHBlcmZvcm1pbmcgdGhlIHJlcXVlc3RlZCBvcGVyYXRpb25cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk0NzA3OTMxNTExfQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291920] [NEW] test_server_actions TimeoutException: Request timed out

2014-03-13 Thread Rossella Sblendido
Public bug reported:

Observed on the Neutron full job

2014-03-13 03:15:57.038 | Traceback (most recent call last):
2014-03-13 03:15:57.039 |   File 
tempest/api/compute/servers/test_server_actions.py, line 296, in 
test_create_backup
2014-03-13 03:15:57.039 | 
self.os.image_client.wait_for_resource_deletion(image1_id)
2014-03-13 03:15:57.039 |   File tempest/common/rest_client.py, line 494, in 
wait_for_resource_deletion
2014-03-13 03:15:57.039 | raise exceptions.TimeoutException
2014-03-13 03:15:57.039 | TimeoutException: Request timed out

Example here:
http://logs.openstack.org/02/73802/13/check/check-tempest-dsvm-neutron-full/c53ebe7/console.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291920

Title:
  test_server_actions TimeoutException: Request timed out

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Observed on the Neutron full job

  2014-03-13 03:15:57.038 | Traceback (most recent call last):
  2014-03-13 03:15:57.039 |   File 
tempest/api/compute/servers/test_server_actions.py, line 296, in 
test_create_backup
  2014-03-13 03:15:57.039 | 
self.os.image_client.wait_for_resource_deletion(image1_id)
  2014-03-13 03:15:57.039 |   File tempest/common/rest_client.py, line 494, 
in wait_for_resource_deletion
  2014-03-13 03:15:57.039 | raise exceptions.TimeoutException
  2014-03-13 03:15:57.039 | TimeoutException: Request timed out

  Example here:
  
http://logs.openstack.org/02/73802/13/check/check-tempest-dsvm-neutron-full/c53ebe7/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291922] [NEW] MismatchError test_l3_agent_scheduler.py

2014-03-13 Thread Rossella Sblendido
Public bug reported:

Observed in the neutron full job.

2014-03-13 05:00:40.322 | Traceback (most recent call last):
2014-03-13 05:00:40.322 |   File 
tempest/api/network/admin/test_l3_agent_scheduler.py, line 83, in 
test_add_list_remove_router_on_l3_agent
2014-03-13 05:00:40.323 | self.assertNotIn(self.agent['id'], l3_agent_ids)
2014-03-13 05:00:40.323 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 365, in 
assertNotIn
2014-03-13 05:00:40.323 | self.assertThat(haystack, matcher)
2014-03-13 05:00:40.323 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
2014-03-13 05:00:40.323 | raise mismatch_error
2014-03-13 05:00:40.323 | MismatchError: 
[u'c2f76b31-57be-4687-9c08-ede7d677cd0a'] matches 
Contains(u'c2f76b31-57be-4687-9c08-ede7d677cd0a')

Taken from: http://logs.openstack.org/71/62771/3/check/check-tempest-
dsvm-neutron-full/73b1d0d/console.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291922

Title:
  MismatchError test_l3_agent_scheduler.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Observed in the neutron full job.

  2014-03-13 05:00:40.322 | Traceback (most recent call last):
  2014-03-13 05:00:40.322 |   File 
tempest/api/network/admin/test_l3_agent_scheduler.py, line 83, in 
test_add_list_remove_router_on_l3_agent
  2014-03-13 05:00:40.323 | self.assertNotIn(self.agent['id'], l3_agent_ids)
  2014-03-13 05:00:40.323 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 365, in 
assertNotIn
  2014-03-13 05:00:40.323 | self.assertThat(haystack, matcher)
  2014-03-13 05:00:40.323 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
  2014-03-13 05:00:40.323 | raise mismatch_error
  2014-03-13 05:00:40.323 | MismatchError: 
[u'c2f76b31-57be-4687-9c08-ede7d677cd0a'] matches 
Contains(u'c2f76b31-57be-4687-9c08-ede7d677cd0a')

  Taken from: http://logs.openstack.org/71/62771/3/check/check-tempest-
  dsvm-neutron-full/73b1d0d/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291926] [NEW] MismatchError test_update_all_quota_resources_for_tenant

2014-03-13 Thread Rossella Sblendido
Public bug reported:

Observed in the Neutron full job

2014-03-13 02:08:54.390 | Traceback (most recent call last):
2014-03-13 02:08:54.390 |   File 
tempest/api/volume/admin/test_volume_quotas.py, line 67, in 
test_update_all_quota_resources_for_tenant
2014-03-13 02:08:54.390 | self.assertEqual(new_quota_set, quota_set)
2014-03-13 02:08:54.391 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 321, in 
assertEqual
2014-03-13 02:08:54.391 | self.assertThat(observed, matcher, message)
2014-03-13 02:08:54.391 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
2014-03-13 02:08:54.392 | raise mismatch_error
2014-03-13 02:08:54.392 | MismatchError: !=:
2014-03-13 02:08:54.392 | reference = {'gigabytes': 1009, 'snapshots': 11, 
'volumes': 11}
2014-03-13 02:08:54.393 | actual= {'gigabytes': 1009,
2014-03-13 02:08:54.393 |  'gigabytes_volume-type--928001277': -1,
2014-03-13 02:08:54.393 |  'snapshots': 11,
2014-03-13 02:08:54.394 |  'snapshots_volume-type--928001277': -1,
2014-03-13 02:08:54.394 |  'volumes': 11,
2014-03-13 02:08:54.394 |  'volumes_volume-type--928001277': -1}

Example here http://logs.openstack.org/82/67382/3/check/check-tempest-
dsvm-neutron-full/8ffd266/console.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291926

Title:
  MismatchError test_update_all_quota_resources_for_tenant

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Observed in the Neutron full job

  2014-03-13 02:08:54.390 | Traceback (most recent call last):
  2014-03-13 02:08:54.390 |   File 
tempest/api/volume/admin/test_volume_quotas.py, line 67, in 
test_update_all_quota_resources_for_tenant
  2014-03-13 02:08:54.390 | self.assertEqual(new_quota_set, quota_set)
  2014-03-13 02:08:54.391 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 321, in 
assertEqual
  2014-03-13 02:08:54.391 | self.assertThat(observed, matcher, message)
  2014-03-13 02:08:54.391 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 406, in 
assertThat
  2014-03-13 02:08:54.392 | raise mismatch_error
  2014-03-13 02:08:54.392 | MismatchError: !=:
  2014-03-13 02:08:54.392 | reference = {'gigabytes': 1009, 'snapshots': 11, 
'volumes': 11}
  2014-03-13 02:08:54.393 | actual= {'gigabytes': 1009,
  2014-03-13 02:08:54.393 |  'gigabytes_volume-type--928001277': -1,
  2014-03-13 02:08:54.393 |  'snapshots': 11,
  2014-03-13 02:08:54.394 |  'snapshots_volume-type--928001277': -1,
  2014-03-13 02:08:54.394 |  'volumes': 11,
  2014-03-13 02:08:54.394 |  'volumes_volume-type--928001277': -1}

  Example here http://logs.openstack.org/82/67382/3/check/check-tempest-
  dsvm-neutron-full/8ffd266/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291947] [NEW] Image metadata Object not found

2014-03-13 Thread Rossella Sblendido
Public bug reported:

Observed in the Neutron full job

2014-03-11 14:12:45.836 | _StringException: Traceback (most recent call last):
2014-03-11 14:12:45.836 |   File 
tempest/api/compute/images/test_image_metadata.py, line 44, in setUpClass
2014-03-11 14:12:45.836 | cls.client.wait_for_image_status(cls.image_id, 
'ACTIVE')
2014-03-11 14:12:45.836 |   File 
tempest/services/compute/json/images_client.py, line 85, in 
wait_for_image_status
2014-03-11 14:12:45.836 | waiters.wait_for_image_status(self, image_id, 
status)
2014-03-11 14:12:45.836 |   File tempest/common/waiters.py, line 105, in 
wait_for_image_status
2014-03-11 14:12:45.836 | resp, image = client.get_image(image_id)
2014-03-11 14:12:45.836 |   File 
tempest/services/compute/json/images_client.py, line 74, in get_image
2014-03-11 14:12:45.837 | resp, body = self.get(images/%s % str(image_id))
2014-03-11 14:12:45.837 |   File tempest/common/rest_client.py, line 202, in 
get
2014-03-11 14:12:45.837 | return self.request('GET', url, headers)
2014-03-11 14:12:45.837 |   File tempest/common/rest_client.py, line 374, in 
request
2014-03-11 14:12:45.837 | resp, resp_body)
2014-03-11 14:12:45.837 |   File tempest/common/rest_client.py, line 418, in 
_error_checker
2014-03-11 14:12:45.837 | raise exceptions.NotFound(resp_body)
2014-03-11 14:12:45.837 | NotFound: Object not found
2014-03-11 14:12:45.837 | Details: {itemNotFound: {message: Image not 
found., code: 404}}

Example here: http://logs.openstack.org/47/75447/2/check/check-tempest-
dsvm-neutron-full/6a47f29/console.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-full-job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291947

Title:
  Image metadata Object not found

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Observed in the Neutron full job

  2014-03-11 14:12:45.836 | _StringException: Traceback (most recent call last):
  2014-03-11 14:12:45.836 |   File 
tempest/api/compute/images/test_image_metadata.py, line 44, in setUpClass
  2014-03-11 14:12:45.836 | cls.client.wait_for_image_status(cls.image_id, 
'ACTIVE')
  2014-03-11 14:12:45.836 |   File 
tempest/services/compute/json/images_client.py, line 85, in 
wait_for_image_status
  2014-03-11 14:12:45.836 | waiters.wait_for_image_status(self, image_id, 
status)
  2014-03-11 14:12:45.836 |   File tempest/common/waiters.py, line 105, in 
wait_for_image_status
  2014-03-11 14:12:45.836 | resp, image = client.get_image(image_id)
  2014-03-11 14:12:45.836 |   File 
tempest/services/compute/json/images_client.py, line 74, in get_image
  2014-03-11 14:12:45.837 | resp, body = self.get(images/%s % 
str(image_id))
  2014-03-11 14:12:45.837 |   File tempest/common/rest_client.py, line 202, 
in get
  2014-03-11 14:12:45.837 | return self.request('GET', url, headers)
  2014-03-11 14:12:45.837 |   File tempest/common/rest_client.py, line 374, 
in request
  2014-03-11 14:12:45.837 | resp, resp_body)
  2014-03-11 14:12:45.837 |   File tempest/common/rest_client.py, line 418, 
in _error_checker
  2014-03-11 14:12:45.837 | raise exceptions.NotFound(resp_body)
  2014-03-11 14:12:45.837 | NotFound: Object not found
  2014-03-11 14:12:45.837 | Details: {itemNotFound: {message: Image not 
found., code: 404}}

  Example here: http://logs.openstack.org/47/75447/2/check/check-
  tempest-dsvm-neutron-full/6a47f29/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282741] [NEW] lb agent should not only check if the device exists but also if it's up

2014-02-20 Thread Rossella Sblendido
Public bug reported:

Before creating a bridge, the lb agent checks whether it already
exists. If it does, the agent doesn't create a new one. However, if the
device exists but is not up, the agent doesn't bring it up.
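
A hedged sketch of the combined check the report asks for (the ip_lib
usage is approximate for the release and the function name is
illustrative):

    from neutron.agent.linux import ip_lib

    def ensure_bridge_up(bridge_name):
        # Existence alone is not enough: a bridge can exist but be
        # DOWN, in which case no traffic flows until it is set up.
        if not ip_lib.device_exists(bridge_name):
            return False  # caller should create the bridge
        # Idempotent: setting an already-UP device up is a no-op.
        ip_lib.IPDevice(bridge_name).link.set_up()
        return True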

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282741

Title:
  lb agent should not only check if the device exists but also if it's
  up

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Before creating a bridge, the lb agent checks whether it already
  exists. If it does, the agent doesn't create a new one. However, if
  the device exists but is not up, the agent doesn't bring it up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp