[Yahoo-eng-team] [Bug 1786226] [NEW] Use sqlalchemy baked query

2018-08-09 Thread venkata anil

In https://review.openstack.org/#/c/430973/2 we are using a baked query only
for get_by_id, i.e.
https://review.openstack.org/#/c/430973/2/neutron/db/_model_query.py@225

While creating a port, the plugin calls get_network(), which internally
calls get_by_id.

The cumulative time taken for 4800 ncalls of get_network is 3249 seconds
(cProfile columns: ncalls, tottime, percall, cumtime, percall):

4800    0.325    0.000 3249.537    0.677 /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:928(Ml2Plugin.get_network)

but when get_network uses the baked query, this comes down to 1075
seconds:

4800    0.321    0.000 1075.695    0.224 /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:928(Ml2Plugin.get_network)

If we enhance other neutron DB methods to use baked queries, we can
improve neutron performance further.
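
For reference, here is a minimal sketch of the baked-query pattern the patch
applies, using SQLAlchemy 1.x's sqlalchemy.ext.baked API. The Network model
below is an illustrative stand-in, not Neutron's actual model:

    from sqlalchemy import Column, String, bindparam
    from sqlalchemy.ext import baked
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Network(Base):
        # Illustrative stand-in for Neutron's network model.
        __tablename__ = 'networks'
        id = Column(String(36), primary_key=True)

    bakery = baked.bakery()  # caches built+compiled queries process-wide

    def get_network_by_id(session, network_id):
        # The lambdas act as cache keys: the Query is constructed and its
        # SQL string compiled only on the first call; later calls skip
        # Query._compile_context() and just bind the new parameter.
        q = bakery(lambda s: s.query(Network))
        q += lambda q: q.filter(Network.id == bindparam('network_id'))
        return q(session).params(network_id=network_id).first()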

** Affects: neutron
 Importance: Wishlist
 Assignee: venkata anil (anil-venkata)
 Status: Confirmed

** Attachment added: "rally_result_with_baked_query.pdf"
   
https://bugs.launchpad.net/bugs/1786226/+attachment/5173292/+files/rally_result_with_baked_query.pdf

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786226

Title:
  Use sqlalchemy baked query

Status in neutron:
  Confirmed

Bug description:
  I am running the rally scenario test create_and_list_ports on a 3
  controller setup (each controller has 8 CPUs, i.e. 4 cores * 2 HTs), with
  function call tracing enabled on the neutron server processes, at a
  concurrency of 8 for 400 iterations.

  The average time taken to create a port is 7.207 seconds (when 400 ports
  are created), the function call trace for this run is at
  http://paste.openstack.org/show/727718/ and the rally results are:
  
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  |                                                 Response Times (sec)                                                        |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | Action                 | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | neutron.create_network | 2.085     | 2.491        | 3.01         | 3.29         | 7.558     | 2.611     | 100.0%  | 400   |
  | neutron.create_port    | 5.69      | 6.878        | 7.755        | 9.394        | 17.0      | 7.207     | 100.0%  | 400   |
  | neutron.list_ports     | 0.72      | 5.552        | 9.123        | 9.599        | 11.165    | 5.559     | 100.0%  | 400   |
  | total                  | 10.085    | 15.263       | 18.789       | 19.734       | 28.712    | 15.377    | 100.0%  | 400   |
  |  -> duration           | 10.085    | 15.263       | 18.789       | 19.734       | 28.712    | 15.377    | 100.0%  | 400   |
  |  -> idle_duration      | 0.0       | 0.0          | 0.0          | 0.0          | 0.0       | 0.0       | 100.0%  | 400   |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+


  Michael Bayer (zzzeek) has analysed this call graph and had some
  suggestions. One suggestion is to use baked queries, i.e.
  https://review.openstack.org/#/c/430973/2

  This is his analysis - "But looking at the profile I see here, it is
  clear that the vast majority of time is spent doing lots and lots of
  small queries, and all of the mechanics involved with turning them
  into SQL strings and invoking them.   SQLAlchemy has a very effective
  optimization for this but it must be coded into Neutron.

  Here is the total time spent for Query to convert its state into SQL:

  148029/356073   15.232    0.000 4583.820    0.013 /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py:3372(Query._compile_context)

  that's 4583 seconds spent in Query compilation, which if Neutron were
  modified  to use baked queries, would be vastly reduced.  I
  demonstrated the beginning of this work in 2017 here:
  https://review.openstack.org/#/c/430973/1  , which illustrates how to
  first start to create a base query method in neutron that other
  functions can begin to make use of.  As more queries start using the
  baked form, this 4500 seconds number will begin to drop."

  
  I have restored his patch https://review.openstack.org/#/c/430973/2 ; with
  this, the average time taken to create a port is 5.196 seconds (when 400
  ports are created), and the function call trace for this run is at
  http://paste.openstack.org/show/727719/ ; also the total time spent on
  Query c

[Yahoo-eng-team] [Bug 1765691] [NEW] OVN vlan networks use geneve tunneling for SNAT traffic

2018-04-20 Thread venkata anil
Public bug reported:

In OVN driver, traffic from vlan (or any other) tenant network to
external network uses a geneve tunnel between the compute node and the
gateway node. So MTU for the VLAN networks needs to account for geneve
tunnel overhead.

This doc [1] explains OVN vlan networks, the current issue, and future
enhancements.
There is an ovs-discuss mailing list thread [2] discussing the surprising
geneve tunnel usage.

[1] 
https://docs.google.com/document/d/1JecGIXPH0RAqfGvD0nmtBdEU1zflHACp8WSRnKCFSgg/edit#heading=h.st3xgs77kfx4
[2] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html
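
As a back-of-the-envelope illustration of the MTU accounting this implies:
the 58-byte figure below (outer Ethernet 14 + IPv4 20 + UDP 8 + Geneve
header 8 + an 8-byte OVN option) is the overhead commonly cited in OVN
guidance, an assumption here rather than a value taken from this report:

    # Sketch of the geneve overhead a VLAN tenant network must absorb,
    # assuming the commonly cited 58-byte Geneve/IPv4 figure from OVN docs.
    GENEVE_IPV4_OVERHEAD = 14 + 20 + 8 + 8 + 8  # eth + ip + udp + geneve + option

    def vlan_network_mtu(physical_mtu):
        """MTU a VLAN tenant network needs so its SNAT traffic still fits
        inside the geneve tunnel to the gateway node."""
        return physical_mtu - GENEVE_IPV4_OVERHEAD

    print(vlan_network_mtu(1500))  # -> 1442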

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1765691

Title:
  OVN vlan networks use geneve tunneling for SNAT traffic

Status in neutron:
  New

Bug description:
  In OVN driver, traffic from vlan (or any other) tenant network to
  external network uses a geneve tunnel between the compute node and the
  gateway node. So MTU for the VLAN networks needs to account for geneve
  tunnel overhead.

  This doc [1] explains OVN vlan networks, the current issue, and future
  enhancements.
  There is an ovs-discuss mailing list thread [2] discussing the surprising
  geneve tunnel usage.

  [1] 
https://docs.google.com/document/d/1JecGIXPH0RAqfGvD0nmtBdEU1zflHACp8WSRnKCFSgg/edit#heading=h.st3xgs77kfx4
  [2] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1765691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1760584] [NEW] IpAddressGenerationFailure warning while running tempest test test_create_subnet_from_pool_with_subnet_cidr

2018-04-02 Thread venkata anil
Public bug reported:

When I run tempest run

neutron.tests.tempest.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr

I see the below warning in neutron-server:
Apr 02 08:46:28 test.rdocloud neutron-server[12690]: WARNING 
neutron.api.rpc.handlers.dhcp_rpc [None 
req-4bfb3f1d-659f-49b8-8572-c74e7d955731 None None] Action create_port for 
network 8e90dae6-018e-4979-bfad-2cc96e281ea8 could not complete successfully: 
No more IP addresses available on network 
8e90dae6-018e-4979-bfad-2cc96e281ea8.: IpAddressGenerationFailure: No more IP 
addresses available on network 8e90dae6-018e-4979-bfad-2cc96e281ea8.

This test tries to create a subnet with cidr 10.11.12.0/31, i.e. only one
address to allocate (which will be taken for gateway_ip). This subnet
creation will notify the dhcp agent, which will try to create a dhcp port
but will fail as there are no addresses available. Still, the subnet create
API will be successful, as port creation is triggered later from the dhcp
agent.

These tests may fail with vendors' drivers if their implementations try
to create the dhcp port as part of subnet creation. There is no point in
creating a subnet with no IP addresses. Better to change the tempest tests
to provide a CIDR with adequate addresses.
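
For illustration, a stdlib-only sketch of the address math behind the
warning (the CIDR comes from the test; which address neutron reserves
follows the report's description):

    import ipaddress

    cidr = ipaddress.ip_network("10.11.12.0/31")
    print(list(cidr))  # [IPv4Address('10.11.12.0'), IPv4Address('10.11.12.1')]
    # Per the report, only one of the two addresses is allocatable and it is
    # taken for gateway_ip, so the dhcp agent's later create_port finds no
    # free IP and neutron-server logs IpAddressGenerationFailure.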

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1760584

Title:
  IpAddressGenerationFailure warning while running  tempest test
  test_create_subnet_from_pool_with_subnet_cidr

Status in neutron:
  New
Status in tempest:
  New

Bug description:
  When I run tempest run

  
neutron.tests.tempest.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr

  I see the below warning in neutron-server:
  Apr 02 08:46:28 test.rdocloud neutron-server[12690]: WARNING 
neutron.api.rpc.handlers.dhcp_rpc [None 
req-4bfb3f1d-659f-49b8-8572-c74e7d955731 None None] Action create_port for 
network 8e90dae6-018e-4979-bfad-2cc96e281ea8 could not complete successfully: 
No more IP addresses available on network 
8e90dae6-018e-4979-bfad-2cc96e281ea8.: IpAddressGenerationFailure: No more IP 
addresses available on network 8e90dae6-018e-4979-bfad-2cc96e281ea8.

  This test tries to create a subnet with cidr 10.11.12.0/31, i.e. only
  one address to allocate (which will be taken for gateway_ip). This
  subnet creation will notify the dhcp agent, which will try to create a
  dhcp port but will fail as there are no addresses available. Still, the
  subnet create API will be successful, as port creation is triggered
  later from the dhcp agent.

  These tests may fail with vendors' drivers if their implementations try
  to create the dhcp port as part of subnet creation. There is no point in
  creating a subnet with no IP addresses. Better to change the tempest
  tests to provide a CIDR with adequate addresses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1760584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1753713] Re: Rally job on neutron CI broken

2018-03-06 Thread venkata anil
Same errors seen on glance rally job
http://logs.openstack.org/95/549695/4/check/legacy-rally-dsvm-glance/cb489e7/job-output.txt.gz#_2018-03-06_08_59_25_771502


** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: networking-ovn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1753713

Title:
  Rally job on neutron CI broken

Status in Glance:
  New
Status in networking-ovn:
  New
Status in neutron:
  New
Status in Rally:
  New

Bug description:
  Rally jobs on neutron and networking-ovn CI are failing with the below errors:
  2018-03-06 07:11:07.165947 | primary | 2018-03-06 07:11:07.165 |   File "/opt/stack/new/rally/tests/ci/osresources.py", line 172, in list_hosts
  2018-03-06 07:11:07.167823 | primary | 2018-03-06 07:11:07.167 |     return self.client.hosts.list()
  2018-03-06 07:11:07.170413 | primary | 2018-03-06 07:11:07.169 | AttributeError: 'Client' object has no attribute 'hosts'

  failure log in networking-ovn -
  
http://logs.openstack.org/60/512560/12/check/networking-ovn-rally-dsvm/e4e0fb2/job-output.txt.gz#_2018-03-06_07_11_07_170413

  failure log in neutron -
  
http://logs.openstack.org/76/548976/2/check/neutron-rally-neutron/be32104/job-output.txt.gz#_2018-03-06_00_47_42_529805

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1753713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1727262] [NEW] L3 agent config option to pass provider:physical_network while creating HA network

2017-10-25 Thread venkata anil
Public bug reported:

In a customer environment, they are using sriov and openvswitch like below:
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = ovs:1300:1500,sriov:1300:1500
but sriov is enabled only on compute nodes and openvswitch on controllers.
When the first HA router is created on a tenant, the l3 agent creates a new
HA tenant network with a segmentation_id from the ovs range.
But when no more vlan ids are available on ovs, it picks up a
segmentation_id from sriov for later HA tenant networks.
As the ovs agent on the controllers supports only ovs and not sriov, binding
for the HA router network port (i.e. device_owner router_ha_interface) fails.
As HA network creation and HA router creation were successful but keepalived
was not spawned, admins are confused about why it was not spawned.

So we need to enhance the l3 agent to pass "provider:physical_network" while
creating the HA network.
In this case, if the l3 agent were able to pass provider:physical_network=ovs
and no free vlan ids were available, then both HA network and HA router
creation would fail and the admin could debug the failure easily.
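
A hypothetical sketch of what that would look like: the provider attributes
are Neutron's standard provider extension, while having the l3 agent supply
them for the HA network is exactly this RFE (the function and its arguments
are illustrative, not existing code):

    def build_ha_network_body(tenant_id, physical_network):
        # With an explicit physical_network (e.g. "ovs"), running out of
        # free VLAN ids on that physnet fails the network create itself,
        # instead of silently allocating an sriov segment that the
        # controllers cannot bind.
        return {
            "network": {
                "name": "HA network tenant %s" % tenant_id,
                "tenant_id": tenant_id,
                "provider:network_type": "vlan",
                "provider:physical_network": physical_network,
            }
        }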


Binding failure errors

2017-10-24 04:47:04.835 411054 DEBUG 
neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver 
[req-c50a05a9-d889-4abc-a267-dc3098ad854c - - - - -] Attempting to bind port 
95f5d893-3490-410b-8c3e-cd82e4831f34 on network 
d439f80d-ce31-496b-b048-a1056ed3f8b7 bind_port 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:111
2017-10-24 04:47:04.836 411054 DEBUG 
neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver 
[req-c50a05a9-d889-4abc-a267-dc3098ad854c - - - - -] Refusing to bind due to 
unsupported vnic_type: normal bind_port 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:116
2017-10-24 04:47:04.836 411054 ERROR neutron.plugins.ml2.managers 
[req-c50a05a9-d889-4abc-a267-dc3098ad854c - - - - -] Failed to bind port 
95f5d893-3490-410b-8c3e-cd82e4831f34 on host corea-controller0.mtcelab.com for 
vnic_type normal using segments [{'segmentation_id': 1321, 'physical_network': 
u'sriov_a', 'id': u'1ec5240e-84c1-4fad-b9ff-a40cea8ec14e', 'network_type': 
u'vlan'}]


[stack@txwlvcpdirector04 ~]$ neutron net-show d439f80d-ce31-496b-b048-a1056ed3f8b7
+---------------------------+----------------------------------------------------+
| Field                     | Value                                              |
+---------------------------+----------------------------------------------------+
| admin_state_up            | True                                               |
| availability_zone_hints   |                                                    |
| availability_zones        |                                                    |
| created_at                | 2017-10-17T22:17:29Z                               |
| description               |                                                    |
| id                        | d439f80d-ce31-496b-b048-a1056ed3f8b7               |
| ipv4_address_scope        |                                                    |
| ipv6_address_scope        |                                                    |
| mtu                       | 9200                                               |
| name                      | HA network tenant ca7ceaf971014d01992b455119ca5990 |
| port_security_enabled     | True                                               |
| project_id                |                                                    |
| provider:network_type     | vlan                                               |
| provider:physical_network | sriov_a                                            |
| provider:segmentation_id  | 1321                                               |
| qos_policy_id             |                                                    |
| revision_number           | 5                                                  |
| router:external           | False                                              |
| shared                    | False                                              |
| status                    | ACTIVE                                             |
| subnets                   | 4bdaed16-43c7-4b5e-acf2-0e9e55f44304               |
| tags                      |                                                    |
| tenant_id                 |                                                    |
| updated_at                | 2017-10-17T22:17:29Z                               |
+---------------------------+----------------------------------------------------+

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727262

Title:
  L3 agent config option to pass 

[Yahoo-eng-team] [Bug 1726370] [NEW] Trace in fetch_and_sync_all_routers

2017-10-23 Thread venkata anil
Public bug reported:

I am seeing the below trace in fetch_and_sync_all_routers for an HA router


2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
[req-19638c71-4ad9-412f-b5d7-dc9cb84eca4f - - - - -] Error during 
L3NATAgentWithStateReport.periodic_sync_routers_task
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task Traceback (most 
recent call last):
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task task(self, 
context)
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 568, in 
periodic_sync_routers_task
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
self.fetch_and_sync_all_routers(context, ns_manager)
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 603, in 
fetch_and_sync_all_routers
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task r['id'], 
r.get(l3_constants.HA_ROUTER_STATE_KEY))
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha.py", line 120, in 
check_ha_state_for_router
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task if ri and 
current_state != TRANSLATION_MAP[ri.ha_state]:
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 81, in 
ha_state
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
ha_state_path = self.keepalived_manager.get_full_config_file_path(
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task AttributeError: 
'NoneType' object has no attribute 'get_full_config_file_path'
2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-ha

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726370

Title:
  Trace in fetch_and_sync_all_routers

Status in neutron:
  New

Bug description:
  I am seeing the below trace in fetch_and_sync_all_routers for an HA router

  
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
[req-19638c71-4ad9-412f-b5d7-dc9cb84eca4f - - - - -] Error during 
L3NATAgentWithStateReport.periodic_sync_routers_task
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task Traceback 
(most recent call last):
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task task(self, 
context)
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 568, in 
periodic_sync_routers_task
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
self.fetch_and_sync_all_routers(context, ns_manager)
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 603, in 
fetch_and_sync_all_routers
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task r['id'], 
r.get(l3_constants.HA_ROUTER_STATE_KEY))
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha.py", line 120, in 
check_ha_state_for_router
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task if ri and 
current_state != TRANSLATION_MAP[ri.ha_state]:
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 81, in 
ha_state
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
ha_state_path = self.keepalived_manager.get_full_config_file_path(
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task 
AttributeError: 'NoneType' object has no attribute 'get_full_config_file_path'
  2017-10-12 16:17:03.425 12387 ERROR oslo_service.periodic_task

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1726370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723848] [NEW] Restarting l3 agent not spawning keepalived

2017-10-15 Thread venkata anil
Public bug reported:

When keepalived is killed manually and then the l3 agent is restarted, the
l3 agent does not spawn keepalived again.

When the l3 agent is restarted, it sets the HA network port status to DOWN
because of [1], with the assumption that (see the sketch after the
references below):

1) the server will notify the l2 agent of the port update, and
2) the l2 agent will then rewire the port and set its status to ACTIVE,
3) when the port status is set to ACTIVE, the server will notify the l3 agent,
4) when the port status is ACTIVE, the l3 agent will spawn keepalived.

But in the newton code base, I see step 1 not happening (i.e. the server
notifying the l2 agent of the port update); because of that, the next steps
also do not happen and keepalived is never respawned.

But in the upstream master code base, step 1 happens because of OVO, and
then all the next steps, resulting in keepalived being spawned.

Before submitting [1] I posted [2] in a mail thread regarding the status
update notification to the l2 agent. Kevin's suggestion was working on u/s
master (because of OVO) but not on stable branches. So, as a generic fix,
the server has to notify the l2 agent when port_update('status') is called.

[1] https://code.engineering.redhat.com/gerrit/#/c/108938/
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117557.html
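
Condensed as pseudocode (every name here is illustrative, not an actual
Neutron API; step 1 is the notification that is missing on newton):

    def restart_recovery_chain(server, l2_agent, l3_agent, ha_port):
        # Pseudocode of the 4-step assumption described above.
        server.set_port_status(ha_port, "DOWN")       # l3 agent restart, [1]
        server.notify_port_update(l2_agent, ha_port)  # step 1: missing on newton
        l2_agent.rewire(ha_port)                      # step 2...
        server.set_port_status(ha_port, "ACTIVE")     # ...port goes ACTIVE
        server.notify_l3_agent(l3_agent, ha_port)     # step 3
        l3_agent.spawn_keepalived(ha_port)            # step 4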

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723848

Title:
  Restarting l3 agent not spawning keepalived

Status in neutron:
  New

Bug description:
  When keepalived is killed manually and then the l3 agent is restarted,
  the l3 agent does not spawn keepalived again.

  When the l3 agent is restarted, it sets the HA network port status to
  DOWN because of [1], with the assumption that

  1) the server will notify the l2 agent of the port update, and
  2) the l2 agent will then rewire the port and set its status to ACTIVE,
  3) when the port status is set to ACTIVE, the server will notify the l3 agent,
  4) when the port status is ACTIVE, the l3 agent will spawn keepalived.

  But in the newton code base, I see step 1 not happening (i.e. the server
  notifying the l2 agent of the port update); because of that, the next
  steps also do not happen and keepalived is never respawned.

  But in the upstream master code base, step 1 happens because of OVO, and
  then all the next steps, resulting in keepalived being spawned.

  Before submitting [1] I posted [2] in a mail thread regarding the status
  update notification to the l2 agent. Kevin's suggestion was working on
  u/s master (because of OVO) but not on stable branches. So, as a generic
  fix, the server has to notify the l2 agent when port_update('status') is
  called.

  [1] https://code.engineering.redhat.com/gerrit/#/c/108938/
  [2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117557.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1723848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1721084] [NEW] openvswitch firewall driver is dropping packets when router migrated from DVR to HA

2017-10-03 Thread venkata anil
Public bug reported:

Openvswitch firewall driver is dropping packets when router is migrated
from DVR to HA.

I see the packet is dropped at table 72

cookie=0x6b90d3f7582969b5, duration=62.044s, table=72, n_packets=7,
n_bytes=518, idle_age=1, priority=50,ct_state=+inv+trk actions=drop

complete br-int flows are - http://paste.openstack.org/show/622528/
output of "ovs-ofctl show br-int" http://paste.openstack.org/show/622530/

But with iptables firewall driver this works fine.

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1721084

Title:
  openvswitch firewall driver is dropping packets when router migrated
  from DVR to HA

Status in neutron:
  In Progress

Bug description:
  Openvswitch firewall driver is dropping packets when router is
  migrated from DVR to HA.

  I see the packet is dropped at table 72

  cookie=0x6b90d3f7582969b5, duration=62.044s, table=72, n_packets=7,
  n_bytes=518, idle_age=1, priority=50,ct_state=+inv+trk actions=drop

  complete br-int flows are - http://paste.openstack.org/show/622528/
  output of "ovs-ofctl show br-int" http://paste.openstack.org/show/622530/

  But with iptables firewall driver this works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1721084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718585] [NEW] set floatingip status to DOWN during creation

2017-09-20 Thread venkata anil
Public bug reported:

floatingip status is not reliable as it is set to ACTIVE during creation
itself [1], rather than waiting for the agent [2] to update it once the
agent finishes adding SNAT/DNAT rules.
[1] https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L1234
[2] https://github.com/openstack/neutron/blob/master/neutron/agent/l3/agent.py#L131

A user can check the floatingip status after creation and initiate data
traffic before the agent finishes processing the floatingip, resulting in
connection failures. Fixing this can also help tempest tests initiate
connections only after the agent has finished floatingip processing, and
so avoid failures.

Also, the floatingip status has to be properly updated during router
migration.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-dvr-backlog l3-ha

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: l3-dvr-backlog l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718585

Title:
  set floatingip status to DOWN during creation

Status in neutron:
  New

Bug description:
  floatingip status is not reliable as it is set to ACTIVE during creation
  itself [1], rather than waiting for the agent [2] to update it once the
  agent finishes adding SNAT/DNAT rules.
  [1] https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L1234
  [2] https://github.com/openstack/neutron/blob/master/neutron/agent/l3/agent.py#L131

  A user can check the floatingip status after creation and initiate data
  traffic before the agent finishes processing the floatingip, resulting
  in connection failures. Fixing this can also help tempest tests initiate
  connections only after the agent has finished floatingip processing, and
  so avoid failures.

  Also, the floatingip status has to be properly updated during router
  migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718345] [NEW] ml2_distributed_port_bindings not cleared after migration from DVR

2017-09-19 Thread venkata anil
Public bug reported:

When a router is migrated from DVR to HA/legacy, the router interface
bindings in the ml2_distributed_port_bindings table are not cleared and
still exist after the migration.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718345

Title:
  ml2_distributed_port_bindings not cleared after migration from DVR

Status in neutron:
  New

Bug description:
  When a router is migrated from DVR to HA/legacy, the router interface
  bindings in the ml2_distributed_port_bindings table are not cleared and
  still exist after the migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1718345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715370] [NEW] Migration between DVR+HA and HA creating redundant "network:router_ha_interface" ports

2017-09-06 Thread venkata anil
Public bug reported:

When a router is migrated between DVR+HA and HA (i.e. DVR+HA->HA and
HA->DVR+HA), redundant "network:router_ha_interface" ports are created.

To reproduce the issue (a 2 node setup with "dvr" and "dvr-snat" modes is
sufficient), create a router dr1 in DVR+HA mode. Then repeatedly flip this
router's DVR+HA and HA flags. You can see redundant
"network:router_ha_interface" ports.

I have a 2 node devstack setup, the 1st l3-agent in "dvr" mode and the 2nd
one in "dvr-snat" mode.
Whenever the HA flag is set on a router, a port with device_owner
"network:router_ha_interface" should be created only for the 2nd node, i.e.
the l3 agent in "dvr-snat" mode.

Steps to reproduce:
1) create a network n1, and a subnet on this network with name sn1
2) create a DVR+HA router (with name 'dr1'), attach it to sn1 through router
interface add and set the gateway (router-gateway-set public)
3) boot a vm on n1 and associate a floating ip
4) set admin-state to False, i.e. neutron router-update --admin-state-up False dr1
5) Now update the router to HA, i.e.
   neutron router-update --distributed=False --ha=True
   set admin-state to True
6) There will be two "network:router_ha_interface" ports, though one will be
used by the qrouter-xx namespace
7) Again update the router to DVR+HA
8) There will be three "network:router_ha_interface" ports, though one will
be used by the snat-xx namespace

I observed that the "network:router_ha_interface" port created first will
always be used by qrouter-xx (when the router is HA) and snat-xx (when the
router is DVR+HA), and the ports created later are never used.

** Affects: neutron
 Importance: Undecided
     Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715370

Title:
  Migration between DVR+HA and HA creating redundant
  "network:router_ha_interface" ports

Status in neutron:
  New

Bug description:
  When a router is migrated between DVR+HA and HA (i.e. DVR+HA->HA and
  HA->DVR+HA), redundant "network:router_ha_interface" ports are created.

  To reproduce the issue (a 2 node setup with "dvr" and "dvr-snat" modes
  is sufficient), create a router dr1 in DVR+HA mode. Then repeatedly flip
  this router's DVR+HA and HA flags. You can see redundant
  "network:router_ha_interface" ports.

  I have a 2 node devstack setup, the 1st l3-agent in "dvr" mode and the
  2nd one in "dvr-snat" mode.
  Whenever the HA flag is set on a router, a port with device_owner
  "network:router_ha_interface" should be created only for the 2nd node,
  i.e. the l3 agent in "dvr-snat" mode.

  Steps to reproduce:
  1) create a network n1, and a subnet on this network with name sn1
  2) create a DVR+HA router (with name 'dr1'), attach it to sn1 through
  router interface add and set the gateway (router-gateway-set public)
  3) boot a vm on n1 and associate a floating ip
  4) set admin-state to False, i.e. neutron router-update --admin-state-up
  False dr1
  5) Now update the router to HA, i.e.
     neutron router-update --distributed=False --ha=True
     set admin-state to True
  6) There will be two "network:router_ha_interface" ports, though one
  will be used by the qrouter-xx namespace
  7) Again update the router to DVR+HA
  8) There will be three "network:router_ha_interface" ports, though one
  will be used by the snat-xx namespace

  I observed that the "network:router_ha_interface" port created first
  will always be used by qrouter-xx (when the router is HA) and snat-xx
  (when the router is DVR+HA), and the ports created later are never used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715371] [NEW] _ensure_vr_id_and_network is not used anywhere in the code

2017-09-06 Thread venkata anil
Public bug reported:

_ensure_vr_id_and_network is not used anywhere in the code, hence can be
removed.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715371

Title:
  _ensure_vr_id_and_network is not used anywhere in the code

Status in neutron:
  New

Bug description:
  _ensure_vr_id_and_network is not used anywhere in the code, hence can
  be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715163] [NEW] StaleDataError in _reconfigure_ha_resources while running tempest tests

2017-09-05 Thread venkata anil
ERROR neutron_lib.callbacks.manager with 
context.session.begin(subtransactions=True):
Sep 05 10:00:10.802932 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 824, 
in begin
Sep 05 10:00:10.803106 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager self, 
nested=nested)
Sep 05 10:00:10.803252 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 218, 
in __init__
Sep 05 10:00:10.803437 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
self._take_snapshot()
Sep 05 10:00:10.803575 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 327, 
in _take_snapshot
Sep 05 10:00:10.803713 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
self.session.flush()
Sep 05 10:00:10.803864 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2171, 
in flush
Sep 05 10:00:10.804017 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
self._flush(objects)
Sep 05 10:00:10.804170 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2291, 
in _flush
Sep 05 10:00:10.804302 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
transaction.rollback(_capture_exception=True)
Sep 05 10:00:10.804441 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
66, in __exit__
Sep 05 10:00:10.804589 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
compat.reraise(exc_type, exc_value, exc_tb)
Sep 05 10:00:10.804731 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2255, 
in _flush
Sep 05 10:00:10.804996 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
flush_context.execute()
Sep 05 10:00:10.805177 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
389, in execute
Sep 05 10:00:10.805323 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager rec.execute(self)
Sep 05 10:00:10.805451 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
548, in execute
Sep 05 10:00:10.805612 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager uow
Sep 05 10:00:10.805762 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
177, in save_obj
Sep 05 10:00:10.805893 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager mapper, table, 
update)
Sep 05 10:00:10.806023 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
760, in _emit_update_statements
Sep 05 10:00:10.806153 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 
(table.description, len(records), rows))
Sep 05 10:00:10.806357 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager StaleDataError: 
UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 
were matched.
Sep 05 10:00:10.806500 ubuntu-xenial-2-node-rax-iad-10771875 
neutron-server[30374]: ERROR neutron_lib.callbacks.manager 


[1] 
http://logs.openstack.org/84/500384/3/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/66eea59/logs/screen-q-svc.txt.gz

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venk

[Yahoo-eng-team] [Bug 1714802] [NEW] DVR and HA migration tests failing intermittently for gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv job

2017-09-03 Thread venkata anil
Public bug reported:

For the migration test failures Jakub has already created this etherpad:
https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures

My analysis is this -
DVR and HA migration tempest scenario tests are failing (or passing)
intermittently. In the existing tests, immediately after the router update
API returns we try ssh connectivity, without checking that the dependent
resources (like below) were created or updated properly.
1) new interfaces are created
2) existing interfaces are updated
3) interfaces are bound to agents
4) interface statuses are updated
5) agents create namespaces etc.

For example, during DVR to HA migration, as soon as the router update API
returns, the ssh test might try to use the old data plane created for the
DVR router, as the agents might not yet have synced (removed namespaces,
ovs flows and ip routes) with the server. If the ssh reply packets arrive
back before the old data plane is removed, then ssh can be successful. If
this data path is reconstructed (because of the migration) before the
packet arrives, then ssh can fail. Though ssh can retry, it may use the
existing connection track and try to follow the same old data path (just
my assumption).

When I updated the tests to check for the dependent resources before trying
ssh, the tests pass reliably. So we can have these checks before we try ssh
connectivity.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-dvr-backlog l3-ha tempest

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: l3-dvr-backlog l3-ha tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714802

Title:
  DVR and HA migration tests failing intermittently for gate-tempest-
  dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv job

Status in neutron:
  New

Bug description:
  For the migration test failures Jakub has already created this etherpad:
  https://etherpad.openstack.org/p/neutron-dvr-multinode-scenario-gate-failures

  My analysis is this -
  DVR and HA migration tempest scenario tests are failing (or passing)
  intermittently. In the existing tests, immediately after the router
  update API returns we try ssh connectivity, without checking that the
  dependent resources (like below) were created or updated properly.
  1) new interfaces are created
  2) existing interfaces are updated
  3) interfaces are bound to agents
  4) interface statuses are updated
  5) agents create namespaces etc.

  For example, during DVR to HA migration, as soon as the router update
  API returns, the ssh test might try to use the old data plane created
  for the DVR router, as the agents might not yet have synced (removed
  namespaces, ovs flows and ip routes) with the server. If the ssh reply
  packets arrive back before the old data plane is removed, then ssh can
  be successful. If this data path is reconstructed (because of the
  migration) before the packet arrives, then ssh can fail. Though ssh can
  retry, it may use the existing connection track and try to follow the
  same old data path (just my assumption).

  When I updated the tests to check for the dependent resources before
  trying ssh, the tests pass reliably. So we can have these checks before
  we try ssh connectivity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714796] [NEW] Router interface not updated during DVR->HA, HA->DVR migrations

2017-09-03 Thread venkata anil
Public bug reported:

While debugging migration test failures for the gate-tempest-dsvm-neutron-
dvr-multinode-scenario-ubuntu-xenial-nv job, I observed that device_owner
is not updated properly during migration.

When HA router is migrated to DVR(i.e HA->DVR), router's
DEVICE_OWNER_HA_REPLICATED_INT interface has to be updated to
DEVICE_OWNER_DVR_INTERFACE. Similarly for DVR to HA migration,
DEVICE_OWNER_DVR_INTERFACE has to be replaced with
DEVICE_OWNER_HA_REPLICATED_INT.

But the existing migration code [1] doesn't consider
DEVICE_OWNER_HA_REPLICATED_INT. As DVR and HA code depends on these
device_owner types, it has to be updated properly.

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L127-L137
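
A minimal sketch of the retargeting this asks for, assuming both constants
are available from neutron_lib.constants (the mapping itself is
illustrative, not the merged fix):

    from neutron_lib import constants

    # device_owner the router interfaces should end up with after migration;
    # per the report, the existing code at [1] does not consider the HA
    # replicated interface type.
    MIGRATION_INTERFACE_OWNER = {
        # DVR -> HA: distributed interfaces become HA replicated interfaces
        constants.DEVICE_OWNER_DVR_INTERFACE:
            constants.DEVICE_OWNER_HA_REPLICATED_INT,
        # HA -> DVR: and the reverse on the way back
        constants.DEVICE_OWNER_HA_REPLICATED_INT:
            constants.DEVICE_OWNER_DVR_INTERFACE,
    }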

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714796

Title:
  Router interface not updated during DVR->HA, HA->DVR migrations

Status in neutron:
  New

Bug description:
  While debugging migration test failures for the gate-tempest-dsvm-
  neutron-dvr-multinode-scenario-ubuntu-xenial-nv job, I observed that
  device_owner is not updated properly during migration.

  When HA router is migrated to DVR(i.e HA->DVR), router's
  DEVICE_OWNER_HA_REPLICATED_INT interface has to be updated to
  DEVICE_OWNER_DVR_INTERFACE. Similarly for DVR to HA migration,
  DEVICE_OWNER_DVR_INTERFACE has to be replaced with
  DEVICE_OWNER_HA_REPLICATED_INT.

  But the existing migration code [1] doesn't consider
  DEVICE_OWNER_HA_REPLICATED_INT. As DVR and HA code depends on these
  device_owner types, it has to be updated properly.

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L127-L137

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714251] [NEW] router_centralized_snat not removed when router migrated from DVR to HA

2017-08-31 Thread venkata anil
Public bug reported:

When a router is migrated from DVR to HA, all ports related to DVR should
be removed. But I still see a port with device_owner
router_centralized_snat that is not removed.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714251

Title:
  router_centralized_snat not removed when router migrated from DVR to
  HA

Status in neutron:
  New

Bug description:
  When a router is migrated from DVR to HA, all ports related to DVR
  should be removed. But I still see port with device_owner
  router_centralized_snat not removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1712388] [NEW] Keepalived v1.3.5 failing to assign IP for HA interface

2017-08-22 Thread venkata anil
Public bug reported:

From the below syslog, I see keepalived is unable to read the IP address
from the config file to configure the HA interface (though the config file
is properly configured). Need to check why this is happening. I also tested
Keepalived v1.3.5 on an ubuntu machine and saw the same error.


Steps to reproduce:
Install Keepalived v1.3.5 and run below functional test
neutron.tests.functional.sanity.test_sanity.SanityTestCaseRoot.test_keepalived_ipv6_support
 
This test will fail and you can see the errors in syslog.


Complete syslog -

Aug 22 19:52:01 vagrant6 Keepalived[14752]: Starting Keepalived v1.3.5 
(03/19,2017), git commit v1.3.5-6-g6fa32f2
Aug 22 19:52:01 vagrant6 Keepalived[14752]: Unable to resolve default script 
username 'keepalived_script' - ignoring
Aug 22 19:52:01 vagrant6 Keepalived[14752]: Opening file 
'/tmp/tmpxxwEQH/tmpeVAVKR/router1/keepalived.conf'.
Aug 22 19:52:01 vagrant6 Keepalived[14753]: Starting VRRP child process, 
pid=14754
Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Registering Kernel netlink 
reflector
Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Registering Kernel netlink 
command channel
Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Registering gratuitous ARP 
shared channel
Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Opening file 
'/tmp/tmpxxwEQH/tmpeVAVKR/router1/keepalived.conf'.
Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: (VR_1): Cannot find an IP 
address to use for interface ha-c896a1c
Aug 22 19:52:02 vagrant6 Keepalived_vrrp[14754]: Stopped
Aug 22 19:52:02 vagrant6 Keepalived[14753]: Keepalived_vrrp exited with 
permanent error CONFIG. Terminating
Aug 22 19:52:02 vagrant6 Keepalived[14753]: Stopping
Aug 22 19:52:07 vagrant6 Keepalived[14753]: Stopped Keepalived v1.3.5 
(03/19,2017), git commit v1.3.5-6-g6fa32f2

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1712388

Title:
  Keepalived v1.3.5 failing to assign IP for HA interface

Status in neutron:
  New

Bug description:
  From the below syslog, I see keepalived is unable to read the IP address
  from the config file to configure the HA interface (though the config
  file is properly configured). Need to check why this is happening. I
  also tested Keepalived v1.3.5 on an ubuntu machine and saw the same error.

  
  Steps to reproduce:
  Install Keepalived v1.3.5 and run below functional test
  
neutron.tests.functional.sanity.test_sanity.SanityTestCaseRoot.test_keepalived_ipv6_support
 
  This test will fail and you can see the errors in syslog.


  Complete syslog -

  Aug 22 19:52:01 vagrant6 Keepalived[14752]: Starting Keepalived v1.3.5 
(03/19,2017), git commit v1.3.5-6-g6fa32f2
  Aug 22 19:52:01 vagrant6 Keepalived[14752]: Unable to resolve default script 
username 'keepalived_script' - ignoring
  Aug 22 19:52:01 vagrant6 Keepalived[14752]: Opening file 
'/tmp/tmpxxwEQH/tmpeVAVKR/router1/keepalived.conf'.
  Aug 22 19:52:01 vagrant6 Keepalived[14753]: Starting VRRP child process, 
pid=14754
  Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Registering Kernel netlink 
reflector
  Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Registering Kernel netlink 
command channel
  Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Registering gratuitous ARP 
shared channel
  Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: Opening file 
'/tmp/tmpxxwEQH/tmpeVAVKR/router1/keepalived.conf'.
  Aug 22 19:52:01 vagrant6 Keepalived_vrrp[14754]: (VR_1): Cannot find an IP 
address to use for interface ha-c896a1c
  Aug 22 19:52:02 vagrant6 Keepalived_vrrp[14754]: Stopped
  Aug 22 19:52:02 vagrant6 Keepalived[14753]: Keepalived_vrrp exited with 
permanent error CONFIG. Terminating
  Aug 22 19:52:02 vagrant6 Keepalived[14753]: Stopping
  Aug 22 19:52:07 vagrant6 Keepalived[14753]: Stopped Keepalived v1.3.5 
(03/19,2017), git commit v1.3.5-6-g6fa32f2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1712388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672701] [NEW] DetachedInstanceError on subnet delete

2017-03-14 Thread venkata anil
Public bug reported:

While trying to delete subnets, some failed with this trace


2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 
[req-8a9aabc4-f02a-4533-aa09-e853daf82579 5227a890398042c5b769cd1ed632ec76 
e8ba07a25a7d43509b342265c7ac124b - - -] delete failed
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 532, in delete
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 136, in wrapper
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 554, in _delete
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 967, in 
delete_subnet
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource if a.port:
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 237, in 
__get__
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource return 
self.impl.get(instance_state(instance), dict_)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 578, in 
get
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource value = 
self.callable_(state, passive)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/strategies.py", line 502, in 
_load_for_state
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 
(orm_util.state_str(state), self.key)
2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 
DetachedInstanceError: Parent instance  is not bound 
to a Session; lazy load operation of attribute 'port' cannot proceed


complete log is here

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672701

Title:
  DetachedInstanceError on subnet delete

Status in neutron:
  New

Bug description:
  While trying to delete subnets, some failed with this trace

  
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 
[req-8a9aabc4-f02a-4533-aa09-e853daf82579 5227a890398042c5b769cd1ed632ec76 
e8ba07a25a7d43509b342265c7ac124b - - -] delete failed
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 532, in delete
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
  2017-03-09 17:23:56.953 3432 ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1671709] [NEW] test_add_list_remove_router_on_l3_agent intermittently failing for DVR+HA gate job

2017-03-09 Thread venkata anil
Public bug reported:

This test is intermittently failing [1] for dvr+ha gate job.
DVR+HA gate job will be running in a 3 node devstack setup. Two nodes will be 
running l3 agent in "dvr_snat" node and another in "dvr" mode. 
In this job, for dvr routers, we call add_router_interface[2] and then add this 
router to l3 agent [3].
1) add_router_interface will by default add router to one of the dvr_snat 
agents, for example agent1
2) when the test is again trying to add explicitly the router to agent,
   a) if adding to same agent i.e agent1, then neutron skips this request [4], 
hence test passes 
   b) if adding to other agent i.e agent2, then neutron raises exception [5], 
and test fails 

Till now we have only two nodes (one dvr and another dvr-snat) in gate
for the dvr multi node setup. So this test is passing as we are trying to
add to the same snat agent (and neutron skips this request). As we are not
really testing any functionality for dvr, we should skip this test for
dvr routers (a sketch follows the references below).

[1] 
http://logs.openstack.org/33/383833/3/experimental/gate-tempest-dsvm-neutron-dvr-ha-multinode-full-ubuntu-xenial-nv/8862b07/logs/testr_results.html.gz
[2] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_l3_agent_scheduler.py#L84
[3] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_l3_agent_scheduler.py#L114
[4] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L154
[5] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L159
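
A minimal sketch of the proposed skip, assuming tempest's usual test
conventions (client and attribute names are illustrative):

    # illustrative sketch inside the tempest test class
    router = self.admin_client.show_router(self.router['id'])['router']
    if router.get('distributed'):
        raise self.skipException(
            'explicit l3 agent scheduling does not apply to DVR routers')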

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671709

Title:
  test_add_list_remove_router_on_l3_agent intermittently failing for
  DVR+HA gate job

Status in neutron:
  New

Bug description:
  This test is intermittently failing [1] for dvr+ha gate job.
  DVR+HA gate job will be running in a 3 node devstack setup. Two nodes will 
be running the l3 agent in "dvr_snat" mode and another in "dvr" mode. 
  In this job, for dvr routers, we call add_router_interface [2] and then add 
this router to an l3 agent [3].
  1) add_router_interface will by default add the router to one of the dvr_snat 
agents, for example agent1
  2) when the test then explicitly tries to add the router to an agent,
 a) if adding to the same agent, i.e. agent1, then neutron skips this request 
[4], and the test passes 
 b) if adding to the other agent, i.e. agent2, then neutron raises an exception 
[5], and the test fails 

  Till now we have only two nodes (one dvr and another dvr-snat) in gate
  for the dvr multi node setup. So this test is passing as we are trying to
  add to the same snat agent (and neutron skips this request). As we are not
  really testing any functionality for dvr, we should skip this test for
  dvr routers.

  [1] 
http://logs.openstack.org/33/383833/3/experimental/gate-tempest-dsvm-neutron-dvr-ha-multinode-full-ubuntu-xenial-nv/8862b07/logs/testr_results.html.gz
  [2] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_l3_agent_scheduler.py#L84
  [3] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_l3_agent_scheduler.py#L114
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L154
  [5] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_agentschedulers_db.py#L159

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518392] Re: [RFE] Enable arp_responder without l2pop

2017-02-22 Thread venkata anil
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518392

Title:
  [RFE] Enable arp_responder without l2pop

Status in neutron:
  New

Bug description:
  Remove the dependency between arp_responder and l2_pop:
  it should be possible to enable arp_responder without enabling l2_pop,
  by setting up the ARP responder on the OVS integration bridge.

  To enable arp_responder, we only need the port's MAC and IP address and no 
tunnel IP (so no need for l2pop). 
  Currently agents use l2pop notifications to create ARP entries. With the new 
approach, agents can use port events (create, update and delete) to create ARP 
entries without l2pop notifications.

  The advantages with this approach for both linuxbridge and OVS agent -
  1) Users can enable arp_responder without l2pop 
  2) Support ARP for distributed ports(DVR and HA).
 Currently, ARP is not added for these ports.
 This is a fix for https://bugs.launchpad.net/neutron/+bug/1661717

  This allows creating ARP entries on the OVS integration bridge.

  Advantages for OVS agent, if ARP entries are setup on integration 
bridge(br-int) rather than on tunneling bridge(br-tun)
  1) It enables arp_responder for all network types(vlans, vxlan, etc)
 arp_responder based on l2pop is supported for only overlay networks.
  2) ARP can be resolved within br-int.
  3) ARP packets for local ports(ports connected to same br-int) will be 
resolved
 in br-int without broadcasting to actual ports connected to br-int.

  
  Already submitted https://review.openstack.org/#/c/248177/ for this.
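
  A minimal sketch of what the agent-side setup could look like (bridge,
  table and priority values are assumptions; the action list is the same
  style l2pop already programs on br-tun):

    # illustrative sketch: answer ARP for a known port on br-int
    import subprocess

    import netaddr

    def install_arp_responder(bridge, vlan, ip, mac):
        mac_hex = mac.replace(':', '')
        ip_hex = '%08x' % int(netaddr.IPAddress(ip))
        actions = ('move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],'
                   'mod_dl_src:%s,'
                   'load:0x2->NXM_OF_ARP_OP[],'
                   'move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],'
                   'move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],'
                   'load:0x%s->NXM_NX_ARP_SHA[],'
                   'load:0x%s->NXM_OF_ARP_SPA[],'
                   'in_port') % (mac, mac_hex, ip_hex)
        flow = ('table=0,priority=10,dl_vlan=%d,arp,arp_op=1,arp_tpa=%s,'
                'actions=%s' % (vlan, ip, actions))
        subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])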

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662804] [NEW] Agent is failing to process HA router if initialize() fails

2017-02-08 Thread venkata anil
t__
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 359, in call
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent return 
func(*args, **kwargs)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 744, 
in process
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
self._process_internal_ports(agent.pd)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 394, 
in _process_internal_ports
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
self.internal_network_added(p)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 275, in 
internal_network_added
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent 
self._disable_ipv6_addressing_on_interface(interface_name)
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 235, in 
_disable_ipv6_addressing_on_interface
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent if 
self._should_delete_ipv6_lladdr(ipv6_lladdr):
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 217, in 
_should_delete_ipv6_lladdr
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent if 
manager.get_process().active:
2017-02-06 18:34:18.549 26120 ERROR neutron.agent.l3.agent AttributeError: 
'NoneType' object has no attribute 'get_process'

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662804

Title:
  Agent is failing to process HA router if initialize() fails

Status in neutron:
  New

Bug description:
  When the HA router initialize() function fails for some reason (rabbitmq
  restart or no ha_port), the keepalived_manager or KeepalivedInstance won't
  be configured. In this case, _process_router_if_compatible fails with an
  exception, then _resync_router(update) will again try to process this
  router in a loop. As we try initialize() only once (which failed),
  every retry of _process_router_if_compatible will fail (no keepalived
  manager or instance) and the router is never configured (see the trace below).
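
  A minimal sketch of one possible guard (signatures simplified, attribute
  names as in the trace below): retry initialize() on resync instead of
  assuming keepalived was configured the first time.

    # illustrative sketch, not the merged fix
    from neutron.agent.l3 import router_info

    class HaRouter(router_info.RouterInfo):
        def process(self, agent):
            if self.keepalived_manager is None:
                # the first initialize() failed (rabbitmq restart,
                # missing ha_port, ...); try again before processing
                self.initialize(agent.process_monitor)
            super(HaRouter, self).process(agent)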

  2017-02-06 18:34:18.539 26120 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qrouter-114a72fe-02ae-4b87-a2e7-70f962df0951', 'ip', '-o', 'link', 'show', 
'qr-e6
  3406e1-e7'] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:101
  2017-02-06 18:34:18.544 26120 DEBUG neutron.agent.linux.utils [-]
  Command: ['ip', 'netns', 'exec', 
u'qrouter-114a72fe-02ae-4b87-a2e7-70f962df0951', 'ip', '-o', 'link', 'show', 
u'qr-e63406e1-e7']
  Exit code: 0
   execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:156
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info [-] 
'NoneType' object has no attribute 'get_process'
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 359, in call
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 744, 
in process
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
self._process_internal_ports(agent.pd)
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 394, 
in _process_internal_ports
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
self.internal_network_added(p)
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 275, in 
internal_network_added
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info 
self._disable_ipv6_addressing_on_interface(interface_name)
  2017-02-06 18:34:18.544 26120 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2

[Yahoo-eng-team] [Bug 1659019] [NEW] openstackclient is not allowing to set distributed=False for admin user

2017-01-24 Thread venkata anil
Public bug reported:

Admin can't use distributed=False while creating a router through
openstackclient. This will be needed when router_distributed = True is
set in neutron.conf and the admin wants to disable this flag for a
particular router. The neutron client, by contrast, allows --distributed
{True,False} flags.

** Affects: python-openstackclient
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1659019

Title:
  openstackclient is not allowing to set distributed=False for admin
  user

Status in python-openstackclient:
  New

Bug description:
  Admin can't use distributed=False while creating a router through
  openstackclient. This will be needed when router_distributed = True is
  set in neutron.conf and the admin wants to disable this flag for a
  particular router. The neutron client, by contrast, allows --distributed
  {True,False} flags.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1659019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659019] Re: openstackclient is not allowing to set distributed=False for admin user

2017-01-24 Thread venkata anil
** Project changed: neutron => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1659019

Title:
  openstackclient is not allowing to set distributed=False for admin
  user

Status in python-openstackclient:
  New

Bug description:
  Admin can't use distributed=False while creating a router through
  openstackclient. This will be needed when router_distributed = True is
  set in neutron.conf and the admin wants to disable this flag for a
  particular router. The neutron client, by contrast, allows --distributed
  {True,False} flags.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1659019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659012] [NEW] "openstack router create" for non-admin user failing in newton

2017-01-24 Thread venkata anil
Public bug reported:

Router create using OpenStackClient in newton (maybe also in mitaka and older
branches) is failing for non-admin users.

In newton(and old branches), when non-admin user calls, "openstack
router create test",  {"distributed": false} attribute is sent as part
of POST request

REQ: curl -g -i -X POST http://192.168.1.4:9696/v2.0/routers -H "User-
Agent: openstacksdk/0.8.3 keystoneauth1/2.4.1 python-requests/2.10.0
CPython/2.7.5" -H "Content-Type: application/json" -H "X-Auth-Token:
{SHA1}6c0fb8f5c2131c92084ad803595a08cf81e3" -d '{"router":
{"distributed": false, "name": "test-router-3", "admin_state_up":
true}}'

and then neutron is raising "HttpException: Forbidden 403" exception
http://paste.openstack.org/show/596239/


In master branch, for the same command, distributed flag is not
sent(because of [1]) i.e

REQ: curl -g -i -X POST http://192.168.121.33:9696/v2.0/routers -H
"User-Agent: openstacksdk/0.9.10 keystoneauth1/2.16.0 python-
requests/2.12.4 CPython/2.7.6" -H "Content-Type: application/json" -H "X
-Auth-Token: {SHA1}df87846ed616e8d5b0454caa3af88aecf54d011d" -d
'{"router": {"name": "test2", "admin_state_up": true}}'

and command is able to successfully create the router

[1] https://review.openstack.org/#/c/397085/

** Affects: neutron
     Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: newton-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1659012

Title:
  "openstack router create" for non-admin user failing in newton

Status in neutron:
  New

Bug description:
  Router create using OpenStackClient in newton (maybe also in mitaka and
  older branches) is failing for non-admin users.

  In newton(and old branches), when non-admin user calls, "openstack
  router create test",  {"distributed": false} attribute is sent as part
  of POST request

  REQ: curl -g -i -X POST http://192.168.1.4:9696/v2.0/routers -H "User-
  Agent: openstacksdk/0.8.3 keystoneauth1/2.4.1 python-requests/2.10.0
  CPython/2.7.5" -H "Content-Type: application/json" -H "X-Auth-Token:
  {SHA1}6c0fb8f5c2131c92084ad803595a08cf81e3" -d '{"router":
  {"distributed": false, "name": "test-router-3", "admin_state_up":
  true}}'

  and then neutron is raising "HttpException: Forbidden 403" exception
  http://paste.openstack.org/show/596239/


  In master branch, for the same command, distributed flag is not
  sent(because of [1]) i.e

  REQ: curl -g -i -X POST http://192.168.121.33:9696/v2.0/routers -H
  "User-Agent: openstacksdk/0.9.10 keystoneauth1/2.16.0 python-
  requests/2.12.4 CPython/2.7.6" -H "Content-Type: application/json" -H
  "X-Auth-Token: {SHA1}df87846ed616e8d5b0454caa3af88aecf54d011d" -d
  '{"router": {"name": "test2", "admin_state_up": true}}'

  and command is able to successfully create the router

  [1] https://review.openstack.org/#/c/397085/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1659012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644437] [NEW] Osprofiler init failing for neutron

2016-11-23 Thread venkata anil
Public bug reported:

OSprofiler initialization is failing for neutron, resulting in neutron
traces not being captured.

During the introduction of the new OSprofiler initialization method [1],
an issue was introduced - OSprofiler expects the context to be a dict, not
a Context object. We need to change the context to a dict.

[1] https://review.openstack.org/#/c/342505/
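
A minimal sketch of the kind of change needed (helper name is
hypothetical): convert the oslo.context object to a plain dict before
handing it to OSprofiler.

    # illustrative sketch
    from oslo_context import context as oslo_context

    def to_profiler_context(ctxt):
        # OSprofiler expects a plain dict; oslo.context objects
        # provide to_dict() for exactly this purpose
        if isinstance(ctxt, oslo_context.RequestContext):
            return ctxt.to_dict()
        return ctxt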

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644437

Title:
  Osprofiler init failing for neutron

Status in neutron:
  New

Bug description:
  OSprofiler initialization is failing for neutron, resulting in neutron
  traces not being captured.

  During the introduction of the new OSprofiler initialization method [1],
  an issue was introduced - OSprofiler expects the context to be a dict, not
  a Context object. We need to change the context to a dict.

  [1] https://review.openstack.org/#/c/342505/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505316] Re: compute port lose fixed_ips on restart l3-agent if subnet is prefix delegated

2016-10-12 Thread venkata anil
https://bugs.launchpad.net/neutron/+bug/1570122 is not related to this bug. Bug 
1505316 is about
1) ipam deleting the IPv4 address while adding an IPv6 PD subnet to fixed_ips
2) ipam not adding the PD subnet for ip allocation
So IMHO, we can close this bug (1505316), and can have separate bugs for any 
other IPv6 PD related issues.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505316

Title:
  compute port lose fixed_ips on restart l3-agent if subnet is prefix
  delegated

Status in neutron:
  Fix Released

Bug description:
  I've created two networks: net04_ext and slaac-auto. First is
  dualstack public external network and second is tenant network with
  static private IPv4 subnet and prefix delegated IPv6 subnet.

  I have ISC DHCPv6 prefix delegation server, that serves GUA's:
  prefix6 2001:0470:739e:fe00:: 2001:0470:739e:feff:: /64;

  | id                                   | name       | subnets                                                      |
  | 848cb239-d423-40fb-89fb-3f22212a83f6 | net04_ext  | 62a101e5-26fe-451c-a3a1-cf7ebdefc308 2001:470:739e::/64      |
  |                                      |            | 1996fd93-d890-4500-a552-1602f5e1114e 172.16.0.0/24           |
  | 147aa018-315d-4344-a26e-ca1bd3f22afe | slaac-auto | ebd9bb05-c180-4466-bb92-f6ab6efa58a8 2001:470:739e:fefe::/64 |
  |                                      |            | 6031c529-12d8-4ba5-be4e-90b52330c4e0 192.168.114.0/24        |

  I have a router named slaac-router, that connects slaac-auto to
  net04_ext. Prefix 2001:470:739e:fefe::/64 was automatically delegated
  by prefix delegation.

  | id                                   | name         | distributed | ha    |
  | 5c758a05-5771-4d8b-a60c-62549ca9fc34 | slaac-router | False       | False |

  external_gateway_info: {"network_id": "848cb239-d423-40fb-89fb-3f22212a83f6",
  "enable_snat": true, "external_fixed_ips": [{"subnet_id":
  "1996fd93-d890-4500-a552-1602f5e1114e", "ip_address": "172.16.0.143"},
  {"subnet_id": "62a101e5-26fe-451c-a3a1-cf7ebdefc308", "ip_address":
  "2001:470:739e:0:f816:3eff:fefc:c99b"}]}

  I have an instance in slaac-auto network.

  When I restarted neutron-l3-agent, the router re-bound its GUA
  address, but the instance lost both its IPv4 and IPv6 fixed_ips.

  | id                                   | name | mac_address       | fixed_ips                                                                                    |
  | be16f0fd-d185-4cc7-b4a5-129b7ed9537a |      | fa:16:3e:98:69:8e |                                                                                              |
  | c559bdcd-76f0-4914-84b5-6dca82371668 |      | fa:16:3e:ba:ea:ea | {"subnet_id": "ebd9bb05-c180-4466-bb92-f6ab6efa58a8", "ip_address": "2001:470:739e:fefe::1"} |
  | f839c7d2-da54-40d4-96f7-28bf0c64b48b |      | fa:16:3e:df:c2:ea | {"subnet_id": "6031c529-12d8-4ba5-be4e-90b52330c4e0", "ip_address": "192.168.114.2"}         |

  To reproduce:
  1. Setup OpenStack with prefix delegation
  2. Create public dualstack network
  3. Create private network with prefix delegated subnet
  4. Create router and hook it up to new subnets
  5. Create instance
  6. Restart neutron-l3-agent
  7. Check out instance fixed_ips (both in nova and neutron)

  Neutron from master
  https://github.com/openstack/neutron/commit/b060e5b

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630981] [NEW] Implement l2pop driver functionality in l2 agent

2016-10-06 Thread venkata anil
Public bug reported:

Existing functionality of l2pop:
L2pop driver on server side -
1) when port binding or status or ip_address(or mac) is updated, notify this 
port's FDB(i.e port's 
   ip address, mac, hosting agent's tunnel ip) to all agents
2) also checks if it is the first or last port on the hosting agent's network, 
if so
   a) notifies all agents to create/delete tunnels and flood flows to hosting 
agent
   b) notifies hosting agent about all the existing network ports on other 
agents(so that hosting 
  agent can create tunnels, flood and unicast flows to these ports).
L2pop on agent side, after receiving notification from l2pop driver, 
creates/deletes tunnel endpoints, flood, unicast and ARP ovs flows to other 
agent's ports.


New Implementation:
But the same functionality can be achieved without l2pop. Whenever a port is 
created/updated/deleted, l2 agents get that port (ip, mac and host_id) through 
port_update and port_delete RPC notifications. Agents can get the hostname[1] 
and tunnel_ip of other agents through tunnel_update (and agents can locally save 
this mapping).
As we are getting the port's FDB through these APIs, we can build the ovs flows 
without the l2pop FDB.


The l2pop driver uses the port context's original_port and new_port to identify 
changes to a port's FDB. In the new implementation, the agent can save the 
port's FDB (only the required fields), so that a new port update can always be 
compared with the saved FDB to identify changes.
We can use the port's revision_number to maintain the order of port updates.
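
A minimal sketch (all names hypothetical) of such an agent-local cache:
each port_update carries the fields needed to program flows, and the
revision_number keeps notifications ordered without the l2pop driver.

    # illustrative sketch
    class PortFdbCache(object):
        def __init__(self):
            self._fdb = {}  # port_id -> (revision, mac, ip, host)

        def update(self, port):
            entry = (port['revision_number'],
                     port['mac_address'],
                     port['fixed_ips'][0]['ip_address'],
                     port['binding:host_id'])
            old = self._fdb.get(port['id'])
            if old is not None and old[0] >= entry[0]:
                return None  # stale or out-of-order update, ignore
            self._fdb[port['id']] = entry
            return old, entry  # caller diffs old vs new to adjust flows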


When the l2 agent adds the first port on a network (i.e. provisions a local 
VLAN for the network), it can ask the server, via a new RPC call, for the FDBs 
of all the network's ports on other agents. Then it can create flows to all 
these existing ports.


DVR implications:
Currently when DVR router port is bound, it notifies all agents[2]. But server 
will set port's host_id  to ''[3]. Need to change it to calling host and check 
the implications.

HA implications:
Port's host_id will always be set to master host. This allows other agents to 
create flows to master host only. Need to update HA to use existing DVR 
multiple portbinding approach, to allow other agents to create flows to all HA 
agents.

Other TODO:
1) In existing implementation, port status updates(update_port_status) won't 
notify agent. Need to enhance it.


Advantages
1) Existing l2pop driver code to identify 'active port count' on agent with 
multiple servers can be
   buggy[4].
2) L2pop driver identifies first port on a agent and notifies it other port's 
FDB through RPC.
   If this RPC is not reaching the agent for any reason(example, that rpc 
worker dead),
   then agents can never establish tunnels and flows to other agents.
3) We can remove additional l2pop mechanism driver on the server. Also can 
avoid separate l2pop
   RPC consumer threads on agent and related concurrency issues[5].


[1] need to patch type_tunnel.py to send hostname as argument for tunnel_update.
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1564
[3] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L542
[4] Got stabilized with https://review.openstack.org/#/c/300539/
[5] https://bugs.launchpad.net/neutron/+bug/1611308

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l2-pop linuxbridge ovs

** Tags added: l2-pop linuxbridge ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630981

Title:
  Implement l2pop driver functionality in l2 agent

Status in neutron:
  New

Bug description:
  Existing functionality of l2pop:
  L2pop driver on server side -
  1) when port binding or status or ip_address(or mac) is updated, notify this 
port's FDB(i.e port's 
 ip address, mac, hosting agent's tunnel ip) to all agents
  2) also checks if it is the first or last port on the hosting agent's 
network, if so
 a) notifies all agents to create/delete tunnels and flood flows to hosting 
agent
 b) notifies hosting agent about all the existing network ports on other 
agents(so that hosting 
agent can create tunnels, flood and unicast flows to these ports).
  L2pop on agent side, after receiving notification from l2pop driver, 
creates/deletes tunnel endpoints, flood, unicast and ARP ovs flows to other 
agent's ports.

  
  New Implementation:
  But the same functionality can be achieved without l2pop. Whenever a port is 
created/updated/deleted, l2 agents get that port (ip, mac and host_id) through 
port_update and port_delete RPC notifications. Agents can get the hostname[1] 
and tunnel_ip of other agents through tunnel_update (and agents can locally save 
this mapping).
  As we are getting the port's FDB through these APIs, we can build the ovs flows 
without the l2pop FDB.

  
  L2pop driver uses port context's original_port and new_port, to identify 

[Yahoo-eng-team] [Bug 1628996] [NEW] subnet service types not working with sqlite3 version 3.7.17

2016-09-29 Thread venkata anil
Public bug reported:

Subnet service types are not working with sqlite3 version 3.7.17, but they
work with sqlite3 version 3.8.0 and above.

Because of this, the subnet service type unit tests fail with sqlite3
version 3.7.17.

Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron/tests/base.py", line 125, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
245, in test_create_port_no_device_owner_no_fallback
self.test_create_port_no_device_owner(fallback=False)
  File "neutron/tests/base.py", line 125, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
242, in test_create_port_no_device_owner
self._assert_port_res(port, '', subnet, fallback)
  File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
173, in _assert_port_res
self.assertEqual(error, res['NeutronError']['type'])
KeyError: 'NeutronError'

_query_filter_service_subnets [1] is behaving differently in 3.7.17 and 3.8.0 
for these tests
[1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L597

I have seen this on centos7 setup, which by default uses sqlite3 version
3.7.17.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628996

Title:
  subnet service types not working with sqlite3 version 3.7.17

Status in neutron:
  New

Bug description:
  Subnet service types are not working with sqlite3 version 3.7.17, but they
  work with sqlite3 version 3.8.0 and above.

  Because of this, the subnet service type unit tests fail with sqlite3
  version 3.7.17.

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
245, in test_create_port_no_device_owner_no_fallback
  self.test_create_port_no_device_owner(fallback=False)
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
242, in test_create_port_no_device_owner
  self._assert_port_res(port, '', subnet, fallback)
File "neutron/tests/unit/extensions/test_subnet_service_types.py", line 
173, in _assert_port_res
  self.assertEqual(error, res['NeutronError']['type'])
  KeyError: 'NeutronError'

  _query_filter_service_subnets [1] is behaving differently in 3.7.17 and 3.8.0 
for these tests
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L597

  I have seen this on centos7 setup, which by default uses sqlite3
  version 3.7.17.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596976] Re: optimize refresh firewall on ipset member update

2016-07-11 Thread venkata anil
*** This bug is a duplicate of bug 1371435 ***
https://bugs.launchpad.net/bugs/1371435

** This bug has been marked a duplicate of bug 1371435
   Remove unnecessary iptables reload when L2 agent enable ipset

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596976

Title:
  optimize refresh firewall  on ipset member update

Status in neutron:
  New

Bug description:
  Before ipset, a port was creating explicit firewall rules to other 
ports (members of the same security group), i.e.
  the port's firewall rules without ipset:
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.17/32 -j RETURN
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.18/32 -j RETURN
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.15/32 -j RETURN
  with ipset:
  -A neutron-openvswi-i92605eaf-b -m set --match-set ${ipset_name} src -j RETURN

  With ipset, when a new port comes up on a remote ovs agent, then on the
  local ovs agent only the kernel ipset has to be updated and no firewall
  rules need to change. When a port on a remote agent is deleted, it has
  to be deleted from the local agent's ipset, and the corresponding
  connection tracking entries have to be deleted. In both scenarios, the
  ovs agent shouldn't update firewall rules.

   But the current implementation tries to update the firewall rules (this
  removes all in-memory firewall rules and creates them again, even though
  no iptables rules are actually updated on the system). This consumes a
  lot of the agent's time. We can optimize this by avoiding updates to the
  in-memory firewall rules for this scenario, and make the firewall
  refresh for securitygroup-member-update faster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1596976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598734] [NEW] Avoid duplicate ipset processing for security groups

2016-07-04 Thread venkata anil
Public bug reported:

While applying firewall rules for ports, the existing implementation
iterates through each port and applies ipset for its security groups.
With this, when ports share a security group, ipset for the same security 
group is applied again and again while iterating through the ports.
Instead, if we prepare the list of security groups for all ports and apply 
ipset on them once before applying the firewall rules, we can avoid duplicate
ipset processing for security groups (see the sketch below).
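
A minimal sketch of the proposed ordering (helper name hypothetical):

    # illustrative sketch: one ipset refresh per security group,
    # not one per port
    def prepare_ipsets(ports, refresh_ipset_for_group):
        sec_groups = set()
        for port in ports:
            sec_groups.update(port['security_groups'])
        for sg_id in sec_groups:
            refresh_ipset_for_group(sg_id)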

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress


** Tags: loadimpact sg-fw

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: loadimpact sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598734

Title:
  Avoid duplicate ipset processing for security groups

Status in neutron:
  In Progress

Bug description:
  While applying firewall rules for ports, the existing implementation
  iterates through each port and applies ipset for its security groups.
  With this, when ports share a security group, ipset for the same security 
  group is applied again and again while iterating through the ports.
  Instead, if we prepare the list of security groups for all ports and apply 
  ipset on them once before applying the firewall rules, we can avoid
  duplicate ipset processing for security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596976] [NEW] optimize refresh firewall on ipset member update

2016-06-28 Thread venkata anil
Public bug reported:

Before ipset, a port was creating explicit firewall rules to other 
ports (members of the same security group), i.e.
the port's firewall rules without ipset:
-A neutron-openvswi-i92605eaf-b -s 192.168.83.17/32 -j RETURN
-A neutron-openvswi-i92605eaf-b -s 192.168.83.18/32 -j RETURN
-A neutron-openvswi-i92605eaf-b -s 192.168.83.15/32 -j RETURN
with ipset:
-A neutron-openvswi-i92605eaf-b -m set --match-set ${ipset_name} src -j RETURN

With ipset, when a new port comes up on a remote ovs agent, then on the local
ovs agent only the kernel ipset has to be updated and no firewall rules need
to change. When a port on a remote agent is deleted, it has to be deleted from
the local agent's ipset, and the corresponding connection tracking entries
have to be deleted. In both scenarios, the ovs agent shouldn't update
firewall rules.

 But the current implementation tries to update the firewall rules (this
removes all in-memory firewall rules and creates them again, even though no
iptables rules are actually updated on the system). This consumes a lot of
the agent's time. We can optimize this by avoiding updates to the in-memory
firewall rules for this scenario, and make the firewall refresh for
securitygroup-member-update faster (see the sketch below).
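
A minimal sketch of the intended fast path, assuming the agent knows the
kernel ipset name for the security group: only the ipset membership is
refreshed; iptables is left untouched.

    # illustrative sketch
    import subprocess

    def refresh_ipset_members(set_name, added_ips, removed_ips):
        for ip in added_ips:
            subprocess.check_call(['ipset', '-exist', 'add', set_name, ip])
        for ip in removed_ips:
            subprocess.check_call(['ipset', '-exist', 'del', set_name, ip])
            # conntrack entries for removed members still need clearing,
            # e.g. conntrack -D -s <ip>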

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596976

Title:
  optimize refresh firewall  on ipset member update

Status in neutron:
  New

Bug description:
  Before ipset, a port was creating explicit firewall rules to other 
ports (members of the same security group), i.e.
  the port's firewall rules without ipset:
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.17/32 -j RETURN
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.18/32 -j RETURN
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.15/32 -j RETURN
  with ipset:
  -A neutron-openvswi-i92605eaf-b -m set --match-set ${ipset_name} src -j RETURN

  With ipset, when a new port comes up on a remote ovs agent, then on the
  local ovs agent only the kernel ipset has to be updated and no firewall
  rules need to change. When a port on a remote agent is deleted, it has
  to be deleted from the local agent's ipset, and the corresponding
  connection tracking entries have to be deleted. In both scenarios, the
  ovs agent shouldn't update firewall rules.

   But the current implementation tries to update the firewall rules (this
  removes all in-memory firewall rules and creates them again, even though
  no iptables rules are actually updated on the system). This consumes a
  lot of the agent's time. We can optimize this by avoiding updates to the
  in-memory firewall rules for this scenario, and make the firewall
  refresh for securitygroup-member-update faster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1596976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595043] [NEW] Make DVR portbinding implementation useful for HA ports

2016-06-21 Thread venkata anil
Public bug reported:

Make DVR portbinding implementation generic so that it will be useful
for all distributed router ports(for example, HA router ports).

Currently HA interface port binding is implemented as a normal port
binding i.e it uses only ml2_port_bindings table, with host set to
master host. When a new host becomes master, this binding will be
updated. But this approach has issues as explained in
https://bugs.launchpad.net/neutron/+bug/1522980

As HA router ports(DEVICE_OWNER_HA_REPLICATED_INT, DEVICE_OWNER_ROUTER_SNAT for 
DVR+HA) are distributed ports like DVR, we will follow DVR approach of port 
binding for HA router ports.
So we make DVR port binding generic, so that it can be used for all distributed 
router ports.

To make DVR port binding generic for all distributed router ports, we need to
1) rename ml2_dvr_port_bindings table to ml2_distributed_port_bindings 
2) rename functions updating/accessing this table
3) Replace 'if condition' for dvr port with distributed port, for example, 
replace
   if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
  with
   if distributed_router_port(port):
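
A minimal sketch of such a helper (the import path is an assumption; the
device owners are the ones named above):

    # illustrative sketch
    from neutron.common import constants as const

    DISTRIBUTED_PORT_OWNERS = (
        const.DEVICE_OWNER_DVR_INTERFACE,
        const.DEVICE_OWNER_HA_REPLICATED_INT,
        const.DEVICE_OWNER_ROUTER_SNAT,
    )

    def distributed_router_port(port):
        return port['device_owner'] in DISTRIBUTED_PORT_OWNERS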

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress


** Tags: l2-pop l3-dvr-backlog l3-ha

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: l3-dvr-backlog

** Tags added: l2-pop l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595043

Title:
  Make DVR portbinding implementation useful for HA ports

Status in neutron:
  In Progress

Bug description:
  Make DVR portbinding implementation generic so that it will be useful
  for all distributed router ports(for example, HA router ports).

  Currently HA interface port binding is implemented as a normal port
  binding i.e it uses only ml2_port_bindings table, with host set to
  master host. When a new host becomes master, this binding will be
  updated. But this approach has issues as explained in
  https://bugs.launchpad.net/neutron/+bug/1522980

  As HA router ports(DEVICE_OWNER_HA_REPLICATED_INT, DEVICE_OWNER_ROUTER_SNAT 
for DVR+HA) are distributed ports like DVR, we will follow DVR approach of port 
binding for HA router ports.
  So we make DVR port binding generic, so that it can be used for all 
distributed router ports.

  To make DVR port binding generic for all distributed router ports, we need to
  1) rename ml2_dvr_port_bindings table to ml2_distributed_port_bindings 
  2) rename functions updating/accessing this table
  3) Replace 'if condition' for dvr port with distributed port, for example, 
replace
 if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
with
 if distributed_router_port(port):

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583519] [NEW] Add tempest to test-requirements.txt

2016-05-19 Thread venkata anil
Public bug reported:

Neutron is updated to use tempest instead of tempest-lib with the below
commit

https://github.com/openstack/neutron/commit/e3210bc880c1cda8a883cb5da05b279cd87aecd4

With this, neutron's tempest tests depend on "tempest" package, but this
package is not added to test-requirements.txt

Need to add tempest to test-requirements.txt

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583519

Title:
  Add tempest to test-requirements.txt

Status in neutron:
  New

Bug description:
  Neutron is updated to use tempest instead of tempest-lib with the
  below commit

  
https://github.com/openstack/neutron/commit/e3210bc880c1cda8a883cb5da05b279cd87aecd4

  With this, neutron's tempest tests depend on "tempest" package, but
  this package is not added to test-requirements.txt

  Need to add tempest to test-requirements.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583028] [NEW] [fullstack] Add new tests for router functionality

2016-05-18 Thread venkata anil
Public bug reported:

 Add fullstack tests for following router(legacy, HA, DVR, HA with DVR)
use cases

 1) test east west traffic
 2) test snat and floatingip

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583028

Title:
  [fullstack] Add new tests for router functionality

Status in neutron:
  New

Bug description:
   Add fullstack tests for following router(legacy, HA, DVR, HA with
  DVR) use cases

   1) test east west traffic
   2) test snat and floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581580] [NEW] Heavy cpu load seen when keepalived state change server gets wsgi_default_pool_size requests at same time

2016-05-13 Thread venkata anil
Public bug reported:

With wsgi_default_pool_size=100[1], if the keepalived state change
server gets 100 requests at the same time, while processing the requests
heavy load is seen on cpu, making the network node unresponsive. For
each request, keepalived state change server spawns a new meta data
proxy process(i.e neutron-ns-metadata-proxy). During heavy cpu load,
with "top" command, I can see many metadata proxy processes in "running"
state at same time(see the attachment).

When wsgi_default_pool_size=8, I see state change server spawning 8
metadata proxy processes at a time("top" command shows 8 meta data proxy
processes in "running" state at a time), cpu load is less and metadata
proxy processes(for example, 100) spawned for all requests without
failures.

We can keep wsgi_default_pool_size=100 for the neutron API server, and use
a separate configuration option for UnixDomainWSGIServer (for example
CONF.unix_domain_wsgi_default_pool_size); a sketch of registering such an
option follows the snippet below.

neutron/agent/linux/utils.py:
class UnixDomainWSGIServer(wsgi.Server):

    def _run(self, application, socket):
        """Start a WSGI service in a new green thread."""
        logger = logging.getLogger('eventlet.wsgi.server')
        eventlet.wsgi.server(socket,
                             application,
                             max_size=CONF.unix_domain_wsgi_default_pool_size,
                             protocol=UnixDomainHttpProtocol,
                             log=logger)

[1]
https://github.com/openstack/neutron/commit/9d573387f1e33ce85269d3ed9be501717eed4807
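
A minimal sketch of registering the proposed option (the name follows the
suggestion above; the default of 8 mirrors the experiment described here):

    # illustrative sketch
    from oslo_config import cfg

    OPTS = [
        cfg.IntOpt('unix_domain_wsgi_default_pool_size',
                   default=8,
                   help='Max number of greenthreads used by the keepalived '
                        'state change server to handle requests.'),
    ]
    cfg.CONF.register_opts(OPTS)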

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-ha

** Attachment added: "top.gif"
   https://bugs.launchpad.net/bugs/1581580/+attachment/4662237/+files/top.gif

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581580

Title:
  Heavy cpu load seen when keepalived state change server gets
  wsgi_default_pool_size requests at same time

Status in neutron:
  New

Bug description:
  With wsgi_default_pool_size=100[1], if the keepalived state change
  server gets 100 requests at the same time, while processing the
  requests heavy load is seen on cpu, making the network node
  unresponsive. For each request, keepalived state change server spawns
  a new meta data proxy process(i.e neutron-ns-metadata-proxy). During
  heavy cpu load, with "top" command, I can see many metadata proxy
  processes in "running" state at same time(see the attachment).

  When wsgi_default_pool_size=8, I see state change server spawning 8
  metadata proxy processes at a time("top" command shows 8 meta data
  proxy processes in "running" state at a time), cpu load is less and
  metadata proxy processes(for example, 100) spawned for all requests
  without failures.

  We can keep wsgi_default_pool_size=100 for the neutron API server, and use
  a separate configuration option for UnixDomainWSGIServer (for example
  CONF.unix_domain_wsgi_default_pool_size).

  neutron/agent/linux/utils.py:
  class UnixDomainWSGIServer(wsgi.Server):

      def _run(self, application, socket):
          """Start a WSGI service in a new green thread."""
          logger = logging.getLogger('eventlet.wsgi.server')
          eventlet.wsgi.server(socket,
                               application,
                               max_size=CONF.unix_domain_wsgi_default_pool_size,
                               protocol=UnixDomainHttpProtocol,
                               log=logger)

  [1]
  
https://github.com/openstack/neutron/commit/9d573387f1e33ce85269d3ed9be501717eed4807

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1575033] [NEW] iptables-restore fails with RuntimeError for ipset

2016-04-26 Thread venkata anil
ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.force_reraise()
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
six.reraise(self.type_, self.value, self.tb)
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py", line 538, in 
_apply_synchronized
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
run_as_root=True)
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 137, in execute
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent raise 
RuntimeError(msg)
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent RuntimeError: 
Exit code: 2; Stdin: # Generated by iptables_manager
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent *filter
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
:ovs_agent.py-ib272c437-0 - [0:0]

2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent -I 
ovs_agent.py-PREROUTING 2 -m physdev --physdev-in tapb272c437-05 -j CT --zone 1
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent COMMIT
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent # Completed by 
iptables_manager
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ; Stdout: ; 
Stderr: iptables-restore v1.4.21: Set NIPv42a80b4e9-6d6f-4847-9e62- doesn't 
exist.
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Error occurred 
at line: 10
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Try 
`iptables-restore -h' or 'iptables-restore --help' for more information.
2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1575033

Title:
  iptables-restore fails with RuntimeError for ipset

Status in neutron:
  New

Bug description:
  The following Trace is seen ovs_neutron_agent while running functional
  tests

  http://logs.openstack.org/59/307159/5/check/gate-neutron-dsvm-
  fullstack/e1f25d4/logs/TestDVRL3Agent.test_snat_and_floatingip
  /neutron-openvswitch-agent--2016-04-22--12-33-51-032511.log.txt.gz


  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-f775b822-c40d-4d94-a2ec-005fb8b038fb - - - - -] Error while processing VIF 
ports
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1992, in rpc_loop
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
ovs_restarted)
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1623, in process_network_ports
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 292, in 
setup_port_filters
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.open

[Yahoo-eng-team] [Bug 1571544] [NEW] continuous warning in q-svc "Dictionary resource_versions for agent is invalid"

2016-04-18 Thread venkata anil
Public bug reported:

The neutron server is continuously giving the warnings below:
2016-04-18 08:27:41.546 WARNING neutron.db.agents_db 
[req-49efe7cf-91fa-4a0a-b036-86aac35e09aa None None] Dictionary 
resource_versions for agent Metadata agent on host vagrant is invalid.
2016-04-18 08:27:41.546 WARNING neutron.db.agents_db 
[req-49efe7cf-91fa-4a0a-b036-86aac35e09aa None None] Dictionary 
resource_versions for agent DHCP agent on host vagrant is invalid.
2016-04-18 08:27:41.547 WARNING neutron.db.agents_db 
[req-49efe7cf-91fa-4a0a-b036-86aac35e09aa None None] Dictionary 
resource_versions for agent L3 agent on host vagrant is invalid.

It can be reproducible with single node setup on latest upstream
neutron.

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Confirmed


** Tags: mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571544

Title:
  continuous warning in q-svc "Dictionary resource_versions for agent is
  invalid"

Status in neutron:
  Confirmed

Bug description:
  The neutron server is continuously giving the warnings below:
  2016-04-18 08:27:41.546 WARNING neutron.db.agents_db 
[req-49efe7cf-91fa-4a0a-b036-86aac35e09aa None None] Dictionary 
resource_versions for agent Metadata agent on host vagrant is invalid.
  2016-04-18 08:27:41.546 WARNING neutron.db.agents_db 
[req-49efe7cf-91fa-4a0a-b036-86aac35e09aa None None] Dictionary 
resource_versions for agent DHCP agent on host vagrant is invalid.
  2016-04-18 08:27:41.547 WARNING neutron.db.agents_db 
[req-49efe7cf-91fa-4a0a-b036-86aac35e09aa None None] Dictionary 
resource_versions for agent L3 agent on host vagrant is invalid.

  It can be reproducible with single node setup on latest upstream
  neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569850] [NEW] Trace in update_dhcp_port when subnet is deleted concurrently

2016-04-13 Thread venkata anil
Public bug reported:

It looks like the patches in https://bugs.launchpad.net/neutron/+bug/1253344
are not handling the trace in update_dhcp_port when a subnet is deleted
concurrently.

Trace http://logs.openstack.org/16/281116/16/gate/gate-tempest-dsvm-
neutron-
full/e5974dd/logs/screen-q-svc.txt.gz?level=WARNING#_2016-04-12_18_58_48_195
is seen on gate while running tempest tests.

Both the plugin and the agent should be updated to handle this trace.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569850

Title:
  Trace in update_dhcp_port  when subnet is deleted concurrently

Status in neutron:
  New

Bug description:
  It looks like the patches in https://bugs.launchpad.net/neutron/+bug/1253344
  are not handling the trace in update_dhcp_port when a subnet is deleted
  concurrently.

  Trace http://logs.openstack.org/16/281116/16/gate/gate-tempest-dsvm-
  neutron-
  full/e5974dd/logs/screen-q-svc.txt.gz?level=WARNING#_2016-04-12_18_58_48_195
  is seen on gate while running tempest tests.

  Both the plugin and the agent should be updated to handle this trace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566676] [NEW] PD: Optimize get_ports query by filtering on subnet

2016-04-06 Thread venkata anil
Public bug reported:

The current code for prefix delegation gets all ports from the DB and then,
for each port, checks whether it has a PD subnet. With this, we fetch
unnecessary ports (ports without a PD subnet) and then still have to check
the subnets of each of these ports.

We can optimize this by querying the DB with a filter on the subnet.
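
A minimal sketch of the optimized query, assuming the plugin's standard
get_ports filter syntax (variable names are contextual):

    # illustrative sketch: fetch only ports with an allocation
    # on the PD subnet
    filters = {'fixed_ips': {'subnet_id': [pd_subnet['id']]}}
    ports = plugin.get_ports(context, filters=filters)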

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566676

Title:
  PD: Optimize get_ports query by filtering on subnet

Status in neutron:
  In Progress

Bug description:
  The current prefix delegation code gets all ports from the DB and then,
for each port, checks whether it has a PD subnet. With this, we fetch
unnecessary ports (ports without a PD subnet) and still need to check the
subnet of each port.

  We can optimize this by querying the DB with a filter on subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566194] [NEW] Make sure resources for HA router exist before the router creation

2016-04-05 Thread venkata anil
Public bug reported:

Before an HA router is used by an agent,
1) the HA network should be created
2) a vr_id has to be allocated
3) the HA router should be able to create a sufficient number of ports on the
HA network

If the scheduler (from an rpc worker) processes the HA router (as the router
is already available in the DB) before these resources are created, the
following races (between api and rpc workers) can happen:
1) a race for creating the HA network
2) vr_id not available for the agent, so it can't spawn the HA proxy process
3) if creating router ports in the api worker fails, the router is deleted;
the rpc worker then races against the deletion while it is binding the
router's HA ports to the agent.


To avoid this, the l3 scheduler should skip this router (while syncing for
the agent) if the above resources are not yet created.

To facilitate this, a new status ("ALLOCATING") is proposed for HA routers in
https://review.openstack.org/#/c/257059/
In this patch, the router is first created with its status set to ALLOCATING.
Once all the above resources are created, its status is changed to ACTIVE.
Proper checks are added (in the code) to skip using the router while its
status is ALLOCATING.
So with this patch we
1) create a new router status
2) carefully identify where the router can be accessed before its resources
are created
3) define how the code behaves (while accessing the router) when the status
transitions from ALLOCATING to ACTIVE
Alternatively, if we can create the HA router's resources before the HA
router itself, we can avoid the new status and new checks while keeping the
same functionality as https://review.openstack.org/#/c/257059/.
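
A rough sketch of the scheduler-side check described above (the constant and
function names are illustrative, not the actual patch):

    ALLOCATING = 'ALLOCATING'

    def get_routers_can_schedule(routers):
        # skip HA routers whose HA network, vr_id or HA ports are
        # still being created by the api worker
        return [router for router in routers
                if router.get('status') != ALLOCATING]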

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566194

Title:
  Make sure resources for HA router exist before the router creation

Status in neutron:
  New

Bug description:
  Before an HA router is used by an agent,
  1) the HA network should be created
  2) a vr_id has to be allocated
  3) the HA router should be able to create a sufficient number of ports on
the HA network

  If the scheduler (from an rpc worker) processes the HA router (as the
router is already available in the DB) before these resources are created,
the following races (between api and rpc workers) can happen:
  1) a race for creating the HA network
  2) vr_id not available for the agent, so it can't spawn the HA proxy process
  3) if creating router ports in the api worker fails, the router is deleted;
the rpc worker then races against the deletion while it is binding the
router's HA ports to the agent.

  To avoid this, the l3 scheduler should skip this router (while syncing for
the agent) if the above resources are not yet created.

  To facilitate this, a new status ("ALLOCATING") is proposed for HA routers
in https://review.openstack.org/#/c/257059/
  In this patch, the router is first created with its status set to
ALLOCATING. Once all the above resources are created, its status is changed
to ACTIVE. Proper checks are added (in the code) to skip using the router
while its status is ALLOCATING.
  So with this patch we
  1) create a new router status
  2) carefully identify where the router can be accessed before its resources
are created
  3) define how the code behaves (while accessing the router) when the status
transitions from ALLOCATING to ACTIVE
  Alternatively, if we can create the HA router's resources before the HA
router itself, we can avoid the new status and new checks while keeping the
same functionality as https://review.openstack.org/#/c/257059/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561040] [NEW] RuntimeError while deleting linux bridge by linux bridge agent

2016-03-23 Thread venkata anil
Public bug reported:

http://logs.openstack.org/14/275614/7/check/gate-neutron-dsvm-
fullstack/efae851/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VLANs_
/neutron-linuxbridge-agent--2016-03-23--04-07-30-395169.log.txt.gz

The linux bridge agent does not handle the RuntimeError exception raised when
it tries to delete a network's bridge that nova has deleted in parallel. The
fullstack test has a similar scenario: it creates the network's bridge for
the agent and deletes the bridge after the test, like nova does. The linux
bridge agent has to ignore the RuntimeError exception if the bridge doesn't
exist.
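
A minimal sketch of the intended handling, assuming a BridgeDevice-style
wrapper whose delbr() raises RuntimeError on failure (names illustrative):

    def delete_bridge(bridge_name):
        bridge = bridge_lib.BridgeDevice(bridge_name)
        try:
            bridge.delbr()
        except RuntimeError:
            # nova (or the fullstack test) may have removed the
            # bridge in parallel; only re-raise if it still exists
            if bridge.exists():
                raise
            LOG.debug("Bridge %s already removed", bridge_name)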

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561040

Title:
  RuntimeError while deleting linux bridge by linux bridge agent

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/14/275614/7/check/gate-neutron-dsvm-
  
fullstack/efae851/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VLANs_
  /neutron-linuxbridge-agent--2016-03-23--04-07-30-395169.log.txt.gz

  The linux bridge agent does not handle the RuntimeError exception raised
  when it tries to delete a network's bridge that nova has deleted in
  parallel. The fullstack test has a similar scenario: it creates the
  network's bridge for the agent and deletes the bridge after the test, like
  nova does. The linux bridge agent has to ignore the RuntimeError exception
  if the bridge doesn't exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555600] [NEW] with l2pop sometimes agents fail to create flood flows with multiple workers

2016-03-10 Thread venkata anil
Public bug reported:

When multiple api and rpc workers are enabled for neutron-server, ovs agents
sometimes fail to create flood flows to other ovs agents. This is frequently
reproducible during migrations and also during evacuations of instances from
one node to another. Sometimes tunnel ports are not created either.

In these scenarios, the l2pop driver does not notify the agent to create
tunnel ports and flood flows, hence the agent is unable to create flood flows
to other agents.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555600

Title:
  with l2pop sometimes agents fail to create flood flows with multiple
  workers

Status in neutron:
  In Progress

Bug description:
  When multiple api and rpc workers are enabled for neutron-server, ovs
  agents sometimes fail to create flood flows to other ovs agents. This is
  frequently reproducible during migrations and also during evacuations of
  instances from one node to another. Sometimes tunnel ports are not created
  either.

  In these scenarios, the l2pop driver does not notify the agent to create
  tunnel ports and flood flows, hence the agent is unable to create flood
  flows to other agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554519] [NEW] separate device owner flag for HA router interface port

2016-03-08 Thread venkata anil
Public bug reported:

Currently an HA router interface port uses DEVICE_OWNER_ROUTER_INTF as its
device owner (like a normal router interface). So, to check whether a port is
an HA router interface port, we have to perform a DB operation.


The neutron server, in many places (functions in plugin.py, rpc.py,
mech_driver.py [1]), may need to check whether a port is an HA router
interface port and perform a different set of operations, and today it has to
access the DB for this. If this information were instead available as the
port's device owner, we could avoid the DB access every time.


[1] ml2_db.is_ha_port(session, port) in below files
https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/plugin.py
https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/drivers/l2pop/mech_driver.py
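
A sketch of the difference (the proposed device owner constant is
illustrative; it does not exist yet):

    # today: every check costs a DB round trip
    if ml2_db.is_ha_port(session, port):
        handle_ha_router_port(port)

    # proposed: the information is already on the port dict
    if port['device_owner'] == constants.DEVICE_OWNER_HA_ROUTER_INTF:
        handle_ha_router_port(port)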

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554519

Title:
  separate device owner flag for HA router interface port

Status in neutron:
  New

Bug description:
  Currently an HA router interface port uses DEVICE_OWNER_ROUTER_INTF as its
  device owner (like a normal router interface). So, to check whether a port
  is an HA router interface port, we have to perform a DB operation.

  
  The neutron server, in many places (functions in plugin.py, rpc.py,
mech_driver.py [1]), may need to check whether a port is an HA router
interface port and perform a different set of operations, and today it has to
access the DB for this. If this information were instead available as the
port's device owner, we could avoid the DB access every time.

  
  [1] ml2_db.is_ha_port(session, port) in below files
  https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/plugin.py
  
https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/drivers/l2pop/mech_driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554414] [NEW] Avoid calling _get_subnet(s) multiple times in ipam driver

2016-03-08 Thread venkata anil
Public bug reported:

While allocating or updating ips for a port, _get_subnet and _get_subnets
are called multiple times.

For example, if update_port is called with the below fixed_ips
fixed_ips = [{'subnet_id': 'subnet1'},
             {'subnet_id': 'v6_dhcp_stateless_subnet'},
             {'subnet_id': 'v6_slaac_subnet'},
             {'subnet_id': 'v6_pd_subnet'},
             {'subnet_id': 'subnet4', 'ip_address': '30.0.0.3'},
             {'ip_address': '40.0.0.4'},
             {'ip_address': '50.0.0.5'}]
then through _test_fixed_ips_for_port(fixed_ips), "_get_subnet" [1] is called
4 times for subnet1, v6_dhcp_stateless_subnet, v6_slaac_subnet, v6_pd_subnet,
subnet4. "_get_subnets" [2] is called 2 times, for ip_address 40.0.0.4 and
50.0.0.5.


When _test_fixed_ips_for_port is called from _allocate_ips_for_port,
_get_subnets has already been called at [3] (so the call count increases
further). In the _allocate_ips_for_port case, if we save the subnets from [3]
in a local variable and use those in-memory subnets in further calls, we can
avoid the above DB calls.


Sometimes when a subnet is updated, update_subnet may trigger
update_port(fixed_ips) [4] for all ports on the subnet. If each port's
fixed_ips contain multiple subnets and ip_addresses, then _get_subnet and
_get_subnets will be called multiple times for each port, as in the example
above. With 10 ports on the subnet, update_subnet will then result in
10*6=60 DB accesses instead of 10.


When port_update is called for a PD subnet, it again calls get_subnet for
each fixed_ip [5], to check whether the subnet is a PD subnet (after
get_subnet and get_subnets were already called many times in
_test_fixed_ips_for_port).


In all of the above cases, _get_subnet and _get_subnets access the DB many
times. So instead of calling get_subnet or get_subnets for each fixed_ip of a
port (in multiple places), we can call get_subnets for the network once at the
beginning of _allocate_ips_for_port (for port create) and
_update_ips_for_port (for port update) and use the in-memory subnets in the
subsequent private functions.


[1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L311
[2] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L331
[3] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py#L192
[4] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L785
[5] 
https://review.openstack.org/#/c/241227/11/neutron/db/ipam_non_pluggable_backend.py
 Lines 284 and 334.
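
A condensed sketch of the proposed flow (the matching helper at the end is
hypothetical):

    # one DB query per port instead of one per fixed_ip
    subnets = self._get_subnets(context,
                                filters={'network_id': [network_id]})
    subnets_by_id = {subnet['id']: subnet for subnet in subnets}

    for fixed_ip in fixed_ips:
        if 'subnet_id' in fixed_ip:
            subnet = subnets_by_id[fixed_ip['subnet_id']]
        else:
            # hypothetical helper: pick the cached subnet whose cidr
            # contains the requested ip_address
            subnet = match_subnet_for_ip(subnets,
                                         fixed_ip['ip_address'])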

** Affects: neutron
 Importance: Undecided
     Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: ipv6 l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: ipv6 l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554414

Title:
  Avoid calling _get_subnet(s) multiple times in ipam driver

Status in neutron:
  New

Bug description:
  While allocating or updating ips for a port, _get_subnet and _get_subnets
  are called multiple times.

  For example, if update_port is called with the below fixed_ips
  fixed_ips = [{'subnet_id': 'subnet1'},
               {'subnet_id': 'v6_dhcp_stateless_subnet'},
               {'subnet_id': 'v6_slaac_subnet'},
               {'subnet_id': 'v6_pd_subnet'},
               {'subnet_id': 'subnet4', 'ip_address': '30.0.0.3'},
               {'ip_address': '40.0.0.4'},
               {'ip_address': '50.0.0.5'}]
  then through _test_fixed_ips_for_port(fixed_ips), "_get_subnet" [1] is
called 4 times for subnet1, v6_dhcp_stateless_subnet, v6_slaac_subnet,
v6_pd_subnet, subnet4. "_get_subnets" [2] is called 2 times, for ip_address
40.0.0.4 and 50.0.0.5.

  
  When _test_fixed_ips_for_port is called from _allocate_ips_for_port,
_get_subnets has already been called at [3] (so the call count increases
further). In the _allocate_ips_for_port case, if we save the subnets from [3]
in a local variable and use those in-memory subnets in further calls, we can
avoid the above DB calls.

  
  Sometimes when a subnet is updated, update_subnet may trigger
update_port(fixed_ips) [4] for all ports on the subnet. If each port's
fixed_ips contain multiple subnets and ip_addresses, then _get_subnet and
_get_subnets will be called multiple times for each port, as in the example
above. With 10 ports on the subnet, update_subnet will then result in
10*6=60 DB accesses instead of 10.

  
  When port_update is called for a PD subnet, it again calls get_subnet for
each fixed_ip [5], to check whether the subnet is a PD subnet (after
get_subnet and get_subnets were already called many times in
_test_fixed_ips_for_port).

  
  In all of the above cases, _get_subnet and _get_subnets access the DB many
times. So instead of calling get_subnet or get_subnets for each fixed_ip of a
port (in multiple places), we can call get_subnets for the network once at
the beginning of _allocate_ips_for_port (for port create) and
_update_ips_for_port (for port update) and use the in-memory subnets in the
subsequent private functions.

[Yahoo-eng-team] [Bug 1541859] Re: Router namespace missing after neutron L3 agent restarted

2016-02-11 Thread venkata anil
Unable to reproduce the bug on neutron.

** Changed in: neutron
 Assignee: venkata anil (anil-venkata) => (unassigned)

** No longer affects: neutron

** Changed in: networking-ovn
 Assignee: venkata anil (anil-venkata) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541859

Title:
  Router namespace missing after neutron L3 agent restarted

Status in networking-ovn:
  Confirmed

Bug description:
  After restarting the neutron L3 agent, the router namespace is
  deleted, but not recreated.

  Recreate scenario:
  1) Deploy OVN with the neutron L3 agent instead of the native L3 support.
  2) Follow http://docs.openstack.org/developer/networking-ovn/testing.html to 
test the deployment.
  3) Setup a floating IP address for one of the VMs.
  4) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  5) SSH to the VM via the floating IP address.

  $ ssh -i id_rsa_demo cirros@172.24.4.3
  $ exit
  Connection to 172.24.4.3 closed.

  6) Use screen to stop and then restart q-l3.
  7) Check the namespaces.

  $ sudo ip netns
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  8) Disassociate the floating IP address from the VM.  This seems to receate 
the namespace.  It's possible that other operations would do the same.
  9) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  10) Associate the floating IP address to the VM again and connectivity
  is restored to the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1541859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541859] Re: Router namespace missing after neutron L3 agent restarted

2016-02-11 Thread venkata anil
** Changed in: networking-ovn
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541859

Title:
  Router namespace missing after neutron L3 agent restarted

Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  After restarting the neutron L3 agent, the router namespace is
  deleted, but not recreated.

  Recreate scenario:
  1) Deploy OVN with the neutron L3 agent instead of the native L3 support.
  2) Follow http://docs.openstack.org/developer/networking-ovn/testing.html to 
test the deployment.
  3) Setup a floating IP address for one of the VMs.
  4) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  5) SSH to the VM via the floating IP address.

  $ ssh -i id_rsa_demo cirros@172.24.4.3
  $ exit
  Connection to 172.24.4.3 closed.

  6) Use screen to stop and then restart q-l3.
  7) Check the namespaces.

  $ sudo ip netns
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  8) Disassociate the floating IP address from the VM.  This seems to receate 
the namespace.  It's possible that other operations would do the same.
  9) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  10) Associate the floating IP address to the VM again and connectivity
  is restored to the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1541859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533216] [NEW] item allocator should return same value for same key

2016-01-12 Thread venkata anil
Public bug reported:

When ItemAllocator.allocate [1] is called with the same key again,
it returns a different value.

So in dvr, if allocate_rule_priority() is called again with the same key, it
returns a different priority value.
Adding the same ip rule again then succeeds, because we got a different
priority from allocate_rule_priority (for the same key).
As a consequence we end up with the same ip rule in the router namespace but
with different priorities.

[1]
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/item_allocator.py#L51
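
A minimal sketch of the expected idempotent behavior (attribute and helper
names follow the class linked above, but are illustrative):

    def allocate(self, key):
        # return the already-assigned item for a known key, so that
        # repeated allocate() calls are idempotent
        if key in self.allocations:
            return self.allocations[key]
        item = self.pool.pop()
        self.allocations[key] = item
        self._write_allocations()
        return item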

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533216

Title:
  item allocator should return same value for same key

Status in neutron:
  In Progress

Bug description:
  When ItemAllocator.allocate [1] is called with the same key again,
  it returns a different value.

  So in dvr, if allocate_rule_priority() is called again with the same key,
it returns a different priority value.
  Adding the same ip rule again then succeeds, because we got a different
priority from allocate_rule_priority (for the same key).
  As a consequence we end up with the same ip rule in the router namespace
but with different priorities.

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/item_allocator.py#L51

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2016-01-04 Thread venkata anil
I tried with one controller (with api workers = 10) and two compute nodes,
with l2pop on all three nodes.

With both "nova migrate" and "nova live-migration", I see l2pop working
properly(i.e tunnels are removed from old host after migration).

Note: "nova migrate" is a two step process.  So, only after "nova 
resize-confirm", l2pop is deleting tunnels from previous host.
http://osdir.com/ml/openstack-cloud-computing/2013-01/msg00522.html
  

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  Invalid
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:

  Setup : Neutron server  HA (3 nodes).
  Hypervisor – ESX with OVsvapp
  l2 POP is on Network node and off on Ovsvapp.

  Condition:
  Make L2 pop on OVs agent, api workers =10 in the controller. 

  On network node,the VXLAN tunnel is created with ESX2 and the Tunnel
  with ESX1 is not removed after migrating VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2016-01-04 Thread venkata anil
Looks like some corner case needs to be handled, investigating further.

** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  In Progress
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:

  Setup : Neutron server  HA (3 nodes).
  Hypervisor – ESX with OVsvapp
  l2 POP is on Network node and off on Ovsvapp.

  Condition:
  Make L2 pop on OVs agent, api workers =10 in the controller. 

  On network node,the VXLAN tunnel is created with ESX2 and the Tunnel
  with ESX1 is not removed after migrating VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509626] [NEW] prefix delegation deletes addresses for ports

2015-10-24 Thread venkata anil
Public bug reported:

If a network has IPv4 and IPv6 (prefix delegated) subnets,
prefix delegation deletes the port's IPv4 address and does not properly
update the IPv6 address.

1) create network
2) create ipv4 subnet with CIDR for the above network
3) create ipv6 slaac subnet with prefix delegation for above network
4) create router
5) add gateway to router
6) add above ipv6 subnet to router
Check whether the IPv6 subnet got the new prefix.
You can see the fixed_ip attribute blank for the dhcp port
(i.e. the "ipallocations" table has no entry for this dhcp port).

7) Boot a vm on this network.
"neutron port-show" will show ipv4 and ipv6 address for this network.
But this vm can't aquire ipv4 address as dhcp packets are dropped in linux 
bridge(because security groups related to dhcp are deleted along with dhcp 
port's fixed_ip delete)

8) delete the above ipv6 subnet from the router through
router-interface-delete.
This deletes the ipv4 address of the vm's port (check this with "neutron
port-show").

Expected behavior:
Prefix delegation should only update the IPv6 address if the network has both
IPv4 and IPv6 subnets.
It should update the IPv6 address with the proper prefix.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: ipv6 l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509626

Title:
  prefix delegation deletes addresses for ports

Status in neutron:
  New

Bug description:
  If a network has IPv4 and IPv6 (prefix delegated) subnets,
  prefix delegation deletes the port's IPv4 address and does not properly
update the IPv6 address.

  1) create network
  2) create ipv4 subnet with CIDR for the above network
  3) create ipv6 slaac subnet with prefix delegation for above network
  4) create router
  5) add gateway to router
  6) add above ipv6 subnet to router
  Check whether the IPv6 subnet got the new prefix.
  You can see the fixed_ip attribute blank for the dhcp port
  (i.e. the "ipallocations" table has no entry for this dhcp port).

  7) Boot a vm on this network.
  "neutron port-show" will show ipv4 and ipv6 address for this network.
  But this vm can't aquire ipv4 address as dhcp packets are dropped in linux 
bridge(because security groups related to dhcp are deleted along with dhcp 
port's fixed_ip delete)

  8) delete the above ipv6 subnet from the router through
router-interface-delete.
  This deletes the ipv4 address of the vm's port (check this with "neutron
port-show").

  Expected behavior:
  Prefix delegation should only update the IPv6 address if the network has
both IPv4 and IPv6 subnets.
  It should update the IPv6 address with the proper prefix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489268] Re: [VPNaaS] DVR unit tests in VPNaaS failing

2015-08-27 Thread venkata anil
There is a fix already in progress:
https://review.openstack.org/#q,I297f550a824785061d43237a98f079d7b0fa99ab,n,z

so marking this bug as invalid.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489268

Title:
  [VPNaaS] DVR unit tests in VPNaaS failing

Status in neutron:
  Invalid

Bug description:
  VPNaaS unit tests for DVR are failing with below error

  AttributeError: 'DvrEdgeRouter' object has no attribute
  'create_snat_namespace'

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
934, in setUp
  ipsec_process)
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
638, in setUp
  self._make_dvr_edge_router_info_for_test()
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
646, in _make_dvr_edge_router_info_for_test
  router.create_snat_namespace()
  AttributeError: 'DvrEdgeRouter' object has no attribute 
'create_snat_namespace'

  
  The following 12 test cases related to dvr_edge_router are failing

  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489268] [NEW] [VPNaaS] DVR unit tests in VPNaaS failing

2015-08-26 Thread venkata anil
Public bug reported:

VPNaaS unit tests for DVR are failing with below error

AttributeError: 'DvrEdgeRouter' object has no attribute
'create_snat_namespace'

Captured traceback:
~~~
Traceback (most recent call last):
  File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
934, in setUp
ipsec_process)
  File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
638, in setUp
self._make_dvr_edge_router_info_for_test()
  File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
646, in _make_dvr_edge_router_info_for_test
router.create_snat_namespace()
AttributeError: 'DvrEdgeRouter' object has no attribute 
'create_snat_namespace'


The following 12 test cases related to dvr_edge_router are failing

failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489268

Title:
  [VPNaaS] DVR unit tests in VPNaaS failing

Status in neutron:
  New

Bug description:
  VPNaaS unit tests for DVR are failing with below error

  AttributeError: 'DvrEdgeRouter' object has no attribute
  'create_snat_namespace'

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
934, in setUp
  ipsec_process)
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
638, in setUp
  self._make_dvr_edge_router_info_for_test()
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
646, in _make_dvr_edge_router_info_for_test
  router.create_snat_namespace()
  AttributeError: 'DvrEdgeRouter' object has no attribute 
'create_snat_namespace'

  
  The following 12 test cases related to dvr_edge_router are failing

  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure

[Yahoo-eng-team] [Bug 1482622] [NEW] functional test fails with RuntimeError: Second simultaneous read on fileno 6

2015-08-07 Thread venkata anil
Public bug reported:

Neutron functional test is failing with below error in gate


2015-07-28 17:14:04.164 | 2015-07-28 17:14:04.154 | 2015-07-28 17:13:58.550 
6878 ERROR neutron.callbacks.manager evtype, fileno, evtype, cb, 
bucket[fileno]))
2015-07-28 17:14:04.166 | 2015-07-28 17:14:04.156 | 2015-07-28 17:13:58.550 
6878 ERROR neutron.callbacks.manager RuntimeError: Second simultaneous read on 
fileno 6 detected.  Unless you really know what you're doing, make sure that 
only one greenthread can read any particular socket.  Consider using a 
pools.Pool. If you do know what you're doing and want to disable this error, 
call eventlet.debug.hub_prevent_multiple_readers(False) - MY THREAD=built-in 
method switch of greenlet.greenlet object at 0x7f52be93c5f0; THAT 
THREAD=FdListener('read', 6, built-in method switch of greenlet.greenlet 
object at 0x7f52bba02cd0, built-in method throw of greenlet.greenlet object 
at 0x7f52bba02cd0)


For full log
http://logs.openstack.org/36/200636/4/check/gate-neutron-vpnaas-dsvm-functional-sswan/1dcd044/console.html.gz

The functional test uses two l3 agents. During test cleanup, both agents try
to clean up the resources simultaneously and fail with the above error.

To avoid this issue I am using root_helper instead of root_helper_daemon in
the test
https://review.openstack.org/#/c/200636/5/neutron_vpnaas/tests/functional/common/test_scenario.py
Line 173

To reproduce this issue, please enable root_helper_daemon in the test
(disable line 173 in the above file).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1482622

Title:
  functional test fails with RuntimeError: Second simultaneous read on
  fileno 6 

Status in neutron:
  New

Bug description:
  Neutron functional test is failing with below error in gate

  
  2015-07-28 17:14:04.164 | 2015-07-28 17:14:04.154 | 2015-07-28 
17:13:58.550 6878 ERROR neutron.callbacks.manager evtype, fileno, evtype, 
cb, bucket[fileno]))
  2015-07-28 17:14:04.166 | 2015-07-28 17:14:04.156 | 2015-07-28 
17:13:58.550 6878 ERROR neutron.callbacks.manager RuntimeError: Second 
simultaneous read on fileno 6 detected.  Unless you really know what you're 
doing, make sure that only one greenthread can read any particular socket.  
Consider using a pools.Pool. If you do know what you're doing and want to 
disable this error, call eventlet.debug.hub_prevent_multiple_readers(False) - 
MY THREAD=built-in method switch of greenlet.greenlet object at 
0x7f52be93c5f0; THAT THREAD=FdListener('read', 6, built-in method switch of 
greenlet.greenlet object at 0x7f52bba02cd0, built-in method throw of 
greenlet.greenlet object at 0x7f52bba02cd0)

  
  For full log
  
http://logs.openstack.org/36/200636/4/check/gate-neutron-vpnaas-dsvm-functional-sswan/1dcd044/console.html.gz

  The functional test uses two l3 agents. During test cleanup, both agents
  try to clean up the resources simultaneously and fail with the above
  error.

  To avoid this issue I am using root_helper instead of root_helper_daemon
in the test
  
https://review.openstack.org/#/c/200636/5/neutron_vpnaas/tests/functional/common/test_scenario.py
Line 173

  To reproduce this issue, please enable root_helper_daemon in the test
  (disable line 173 in the above file).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1482622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478012] [NEW] VPNaaS: Support VPNaaS with L3 HA

2015-07-24 Thread venkata anil
Public bug reported:

Problem: Currently VPNaaS is not supported with L3 HA.
1) When a user tries to create an ipsec site connection, the vpn agent tries
to run the ipsec process on both the HA master and backup routers. Running
the ipsec process on the backup router fails as its router interfaces will be
down.

2) Running two separate ipsec processes for the same side of the connection
(East or West) is not allowed.

3) During HA router state transitions (master to backup and backup to
master), spawning and terminating of the vpn process is not handled. For
example, when the master transitions to backup, that vpn connection will be
lost forever (unless both agents hosting the HA routers are restarted).


Solution: When a VPN process is created for an HA router, it should run only
on the HA master node. On transition from master to backup, the vpn process
should be shut down on that agent (just as radvd/metadata proxy are
disabled). On transition from backup to master, the vpn process should be
enabled and running on that agent.


Advantages: Through this we get the advantages of the L3 HA router, i.e. no
need for user intervention to re-establish the vpn connection when the router
is down. When the existing master router goes down, the same vpn connection
will be established automatically on the new master router.
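
A rough sketch of the agent-side handling, modeled on how radvd/metadata
proxy react to HA state transitions (method and attribute names are
illustrative):

    def ha_state_change(self, context, router_id, state):
        # keep the ipsec process running only on the master
        process = self.ipsec_processes.get(router_id)
        if not process:
            return
        if state == 'master':
            process.enable()
        else:
            process.disable()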

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha rfe vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478012

Title:
   VPNaaS: Support VPNaaS with L3 HA

Status in neutron:
  New

Bug description:
  Problem: Currently VPNaaS is not supported with L3 HA.
  1) When a user tries to create an ipsec site connection, the vpn agent
tries to run the ipsec process on both the HA master and backup routers.
Running the ipsec process on the backup router fails as its router interfaces
will be down.

  2) Running two separate ipsec processes for the same side of the
  connection (East or West) is not allowed.

  3) During HA router state transitions (master to backup and backup to
  master), spawning and terminating of the vpn process is not handled. For
  example, when the master transitions to backup, that vpn connection will
  be lost forever (unless both agents hosting the HA routers are restarted).


  Solution: When a VPN process is created for an HA router, it should run
only on the HA master node. On transition from master to backup, the vpn
process should be shut down on that agent (just as radvd/metadata proxy are
disabled). On transition from backup to master, the vpn process should be
enabled and running on that agent.


  Advantages: Through this we get the advantages of the L3 HA router, i.e.
no need for user intervention to re-establish the vpn connection when the
router is down. When the existing master router goes down, the same vpn
connection will be established automatically on the new master router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476714] Re: Invalid test condition in test_ha_router_failover

2015-07-21 Thread venkata anil
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476714

Title:
  Invalid test condition in test_ha_router_failover

Status in neutron:
  Invalid

Bug description:
  In the test_ha_router_failover function, we have the below test conditions

  
  utils.wait_until_true(lambda: router1.ha_state == 'master')
  utils.wait_until_true(lambda: router2.ha_state == 'backup')

  self.fail_ha_router(router1)

  utils.wait_until_true(lambda: router2.ha_state == 'master')
  utils.wait_until_true(lambda: router1.ha_state != 'backup')

  
  I think the last test condition is incorrect and it should be

  utils.wait_until_true(lambda: router1.ha_state == 'backup')

  instead of

  utils.wait_until_true(lambda: router1.ha_state != 'backup')

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476714] [NEW] Invalid test condition in test_ha_router_failover

2015-07-21 Thread venkata anil
Public bug reported:

In the test_ha_router_failover function, we have the below test conditions


utils.wait_until_true(lambda: router1.ha_state == 'master')
utils.wait_until_true(lambda: router2.ha_state == 'backup')

self.fail_ha_router(router1)

utils.wait_until_true(lambda: router2.ha_state == 'master')
utils.wait_until_true(lambda: router1.ha_state != 'backup')


I think the last test condition is incorrect and it should be

utils.wait_until_true(lambda: router1.ha_state == 'backup')

instead of

utils.wait_until_true(lambda: router1.ha_state != 'backup')

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha low-hanging-fruit

** Tags added: low-hanging-fruit

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476714

Title:
  Invalid test condition in test_ha_router_failover

Status in neutron:
  New

Bug description:
  In test_ha_router_failover function, w have the below test conditions

  
  utils.wait_until_true(lambda: router1.ha_state == 'master')
  utils.wait_until_true(lambda: router2.ha_state == 'backup')

  self.fail_ha_router(router1)

  utils.wait_until_true(lambda: router2.ha_state == 'master')
  utils.wait_until_true(lambda: router1.ha_state != 'backup')

  
  I think the last test condition is incorrect and it should be

  utils.wait_until_true(lambda: router1.ha_state == 'backup')

  instead of

  utils.wait_until_true(lambda: router1.ha_state != 'backup')

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466326] [NEW] stack.sh fails while installing PyMySQL

2015-06-18 Thread venkata anil
Public bug reported:

This is the error message when I run ./stack.sh

[ERROR] /home/ubuntu/devstack/inc/python:142 Can't find package PyMySQL
in requirements


complete error log

2015-06-18 06:01:24.757 | [ERROR] /home/ubuntu/devstack/inc/python:142 Can't 
find package PyMySQL in requirements
2015-06-18 06:01:25.760 | + local 'clean_name=[Call Trace]
2015-06-18 06:01:25.761 | ./stack.sh:726:install_database_python
2015-06-18 06:01:25.761 | 
/home/ubuntu/devstack/lib/database:114:install_database_python_mysql
2015-06-18 06:01:25.761 | 
/home/ubuntu/devstack/lib/databases/mysql:165:pip_install_gr
2015-06-18 06:01:25.761 | 
/home/ubuntu/devstack/inc/python:63:get_from_global_requirements
2015-06-18 06:01:25.761 | /home/ubuntu/devstack/inc/python:142:die'
2015-06-18 06:01:25.761 | + pip_install '[Call' 'Trace]' 
./stack.sh:726:install_database_python 
/home/ubuntu/devstack/lib/database:114:install_database_python_mysql 
/home/ubuntu/devstack/lib/databases/mysql:165:pip_install_gr 
/home/ubuntu/devstack/inc/python:63:get_from_global_requirements 
/home/ubuntu/devstack/inc/python:142:die
2015-06-18 06:01:26.207 | + sudo -H http_proxy= https_proxy= no_proxy= 
PIP_FIND_LINKS=file:///opt/stack/.wheelhouse /usr/local/bin/pip install '[Call' 
'Trace]' ./stack.sh:726:install_database_python 
/home/ubuntu/devstack/lib/database:114:install_database_python_mysql 
/home/ubuntu/devstack/lib/databases/mysql:165:pip_install_gr 
/home/ubuntu/devstack/inc/python:63:get_from_global_requirements 
/home/ubuntu/devstack/inc/python:142:die
2015-06-18 06:01:27.141 | Exception:
2015-06-18 06:01:27.142 | Traceback (most recent call last):
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/basecommand.py, line 223, in main
2015-06-18 06:01:27.142 | status = self.run(options, args)
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/commands/install.py, line 268, in 
run
2015-06-18 06:01:27.142 | wheel_cache
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/basecommand.py, line 268, in 
populate_requirement_set
2015-06-18 06:01:27.142 | wheel_cache=wheel_cache
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py, line 207, in 
from_line
2015-06-18 06:01:27.142 | wheel_cache=wheel_cache)
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py, line 66, in 
__init__
2015-06-18 06:01:27.142 | req = pkg_resources.Requirement.parse(req)
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py, 
line 2960, in parse
2015-06-18 06:01:27.142 | reqs = list(parse_requirements(s))
2015-06-18 06:01:27.142 |   File 
/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py, 
line 2891, in parse_requirements
2015-06-18 06:01:27.142 | raise ValueError(Missing distribution spec, 
line)
2015-06-18 06:01:27.142 | ValueError: ('Missing distribution spec', '[Call')
2015-06-18 06:01:27.142 | 
2015-06-18 06:01:27.172 | + exit_trap
2015-06-18 06:01:27.172 | + local r=2
2015-06-18 06:01:27.172 | ++ jobs -p
2015-06-18 06:01:27.174 | + jobs=
2015-06-18 06:01:27.174 | + [[ -n '' ]]
2015-06-18 06:01:27.174 | + kill_spinner
2015-06-18 06:01:27.174 | + '[' '!' -z '' ']'
2015-06-18 06:01:27.174 | + [[ 2 -ne 0 ]]
2015-06-18 06:01:27.174 | + echo 'Error on exit'
2015-06-18 06:01:27.174 | Error on exit
2015-06-18 06:01:27.174 | + [[ -z /opt/stack/logs ]]
2015-06-18 06:01:27.174 | + /home/ubuntu/devstack/tools/worlddump.py -d 
/opt/stack/logs
2015-06-18 06:01:28.122 | sh: 1: kill: Usage: kill [-s sigspec | -signum | 
-sigspec] [pid | job]... or
2015-06-18 06:01:28.122 | kill -l [exitstatus]
2015-06-18 06:01:28.128 | + exit 2

** Affects: devstack
 Importance: Undecided
 Status: New

** Project changed: neutron => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466326

Title:
  stack.sh fails while installing PyMySQL

Status in devstack - openstack dev environments:
  New

Bug description:
  This is the error message when I run ./stack.sh

  [ERROR] /home/ubuntu/devstack/inc/python:142 Can't find package
  PyMySQL in requirements

  
  complete error log

  2015-06-18 06:01:24.757 | [ERROR] /home/ubuntu/devstack/inc/python:142 Can't 
find package PyMySQL in requirements
  2015-06-18 06:01:25.760 | + local 'clean_name=[Call Trace]
  2015-06-18 06:01:25.761 | ./stack.sh:726:install_database_python
  2015-06-18 06:01:25.761 | 
/home/ubuntu/devstack/lib/database:114:install_database_python_mysql
  2015-06-18 06:01:25.761 | 
/home/ubuntu/devstack/lib/databases/mysql:165:pip_install_gr
  2015-06-18 06:01:25.761 | 
/home/ubuntu/devstack/inc/python:63:get_from_global_requirements
  2015-06-18 06:01:25.761 | /home/ubuntu/devstack/inc/python:142:die'
  2015-06-18 06:01:25.761 | + pip_install 

[Yahoo-eng-team] [Bug 1456206] Re: Instance is not assigned with an IP address (Version 4 or 6) when the network attached to it has two subnets - IPv4 and IPv6 (IPv6 can be stateful or stateless)

2015-06-18 Thread venkata anil
The bug reporter was using dnsmasq version 2.66, which has address resolving
issues when the same mac is used for both IPv4 and IPv6 addresses. I was able
to reproduce the issue he reported with dnsmasq 2.66. With dnsmasq 2.67 the
issue is not seen, and that is the version recommended by openstack neutron
for IPv6.

Please use dnsmasq 2.67 to resolve this issue.

I see the same issue when a port is created with extra_dhcp_opts on a network
having an IPv6 stateless subnet and an IPv4 subnet.
I have raised a separate bug https://bugs.launchpad.net/neutron/+bug/1466144
to track the issue with extra_dhcp_opts

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: venkata anil (anil-venkata) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456206

Title:
  Instance is not assigned with an IP address (Version 4 or 6) when the
  network attached to it have two subnets  - IPv4 and IPv6 (IPv6 can be
  stateful or stateless)

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When creating a network with multiple subnets (one IPv4 subnet and a second 
IPv6 subnet, stateful or stateless),
  instances attached to this network do not get both IP addresses (IPv4 and IPv6).
  Reproducible all the time; sometimes the instance gets only one IP address, 
IPv4 or IPv6.

  version :
  OSP 7 on rhel 7.1
  # rpm -qa |grep neutron 
  openstack-neutron-fwaas-2015.1.0-2.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.3.11-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-1.el7ost.noarch
  python-neutron-fwaas-2015.1.0-2.el7ost.noarch
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  [root@puma15 ~(keystone_redhatadmin)]# rpm -qa |grep dnsmasq
  dnsmasq-2.66-13.el7_1.x86_64
  dnsmasq-utils-2.66-13.el7_1.x86_64

  Steps to reproduce:
  1. Create the network
 # neutron net-create internal_ipv4_6 --shared
  2. Create 2 subnets and assign them to the network
  # neutron subnet-create internal_ipv4_6_id 192.168.1.0/24 --name 
internal_ipv4_subnet
 # neutron subnet-create internal_ipv4_6_id 2001:db2:0::2/64 --name 
dhcpv6_stateless_subnet --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode 
dhcpv6-stateless --ip-version 6
  3. Create the router
  # neutron router-create Router
  # neutron router-interface-add Router internal_ipv4_subnet
  # neutron router-interface-add Router dhcpv6_stateless_subnet
  4. Launch a VM with the network that you created
 # nova boot --flavor 3 --image image-ID VM1 --nic net-id=net-id

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466144] [NEW] dhcp fails if extra_dhcp_opts for stateless subnet enabled

2015-06-17 Thread venkata anil
Public bug reported:

A VM on a network having IPv4 and IPv6 dhcpv6-stateless subnets
fails to get an IPv4 address when the VM uses a port with extra_dhcp_opts.

neutron creates an entry in the dhcp host file for each subnet of a port.
Each of these entries has the same MAC address as its first field,
and may have a client_id, fqdn, IPv4/IPv6 address (for dhcp/dhcpv6 stateful),
or a tag as its other fields.
For a dhcpv6-stateless subnet with extra_dhcp_opts,
the host file entry has only the MAC address and a tag.

If the last entry in the host file for the port with extra_dhcp_opts
is the one for the dhcpv6-stateless subnet, then dnsmasq tries to use that
entry to resolve the dhcp request even for IPv4, treats it as 'no address
found', and fails to send a DHCPOFFER.
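
For illustration, host-file entries for such a port might look like this
(abbreviated and hypothetical field values; the exact layout neutron writes
varies by release):

fa:16:3e:aa:bb:cc,host-10-0-0-5.openstacklocal,10.0.0.5
fa:16:3e:aa:bb:cc,set:port-tag

The first entry carries the IPv4 address; the stateless-IPv6 entry has no
address field at all, only the MAC and a tag, which is what gets misused when
it is matched first.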

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466144

Title:
  dhcp fails if extra_dhcp_opts for stateless subnet enabled

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  A VM on a network having IPv4 and IPv6 dhcpv6-stateless subnets
  fails to get an IPv4 address when the VM uses a port with extra_dhcp_opts.

  neutron creates an entry in the dhcp host file for each subnet of a port.
  Each of these entries has the same MAC address as its first field,
  and may have a client_id, fqdn, IPv4/IPv6 address (for dhcp/dhcpv6 stateful),
  or a tag as its other fields.
  For a dhcpv6-stateless subnet with extra_dhcp_opts,
  the host file entry has only the MAC address and a tag.

  If the last entry in the host file for the port with extra_dhcp_opts
  is the one for the dhcpv6-stateless subnet, then dnsmasq tries to use that
  entry to resolve the dhcp request even for IPv4, treats it as 'no address
  found', and fails to send a DHCPOFFER.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465330] [NEW] _read_hosts_file_leases shouldn't parse stateless IPv6

2015-06-15 Thread venkata anil
Public bug reported:

Error when _read_hosts_file_leases tries to parse stateless IPv6 entry
in hosts file
TRACE neutron.agent.dhcp.agent ip = host[2].strip('[]')
TRACE neutron.agent.dhcp.agent IndexError: list index out of range
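
A minimal defensive-parsing sketch (illustrative only, not the actual neutron
fix; the entry layouts are assumed from the bug 1466144 description above):

import collections

def read_hosts_file_leases(filename):
    """Collect (ip, mac) pairs, skipping entries with no address field."""
    leases = collections.deque()
    with open(filename) as f:
        for line in f:
            host = line.strip().split(',')
            # Stateless-IPv6 entries carry only a MAC and a tag, so there
            # is no host[2] address field to parse.
            if len(host) < 3 or host[2].startswith('set:'):
                continue
            ip = host[2].strip('[]')  # IPv6 addresses are written in brackets
            leases.append((ip, host[0]))
    return leases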

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465330

Title:
  _read_hosts_file_leases shouldn't parse stateless IPv6

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Error when _read_hosts_file_leases tries to parse stateless IPv6 entry
  in hosts file
  TRACE neutron.agent.dhcp.agent ip = host[2].strip('[]')
  TRACE neutron.agent.dhcp.agent IndexError: list index out of range

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1465330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460632] [NEW] [LBaaS] Create load balancer for each fixed_ip of a port

2015-06-01 Thread venkata anil
Public bug reported:

The spec

http://specs.openstack.org/openstack/neutron-specs/specs/juno-incubator/lbaas-api-and-objmodel-improvement.html

says - If a neutron port is encountered that has many fixed_ips then a load 
balancer should be created for each fixed_ip with each being a deep copy of 
each other.

Currently neutron LBaaS v2 creates only one load balancer even when the port
has many fixed_ips.
Enhance neutron LBaaS v2 to create a load balancer for each fixed_ip of a
port, as specified in the spec.
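
A rough sketch of the requested behaviour (helper names are hypothetical, not
the real neutron-lbaas API):

import copy

def create_lbs_for_port(port, base_lb, create_load_balancer):
    """Create one deep-copied load balancer per fixed_ip, per the spec."""
    for fixed_ip in port['fixed_ips']:
        lb = copy.deepcopy(base_lb)
        lb['vip_address'] = fixed_ip['ip_address']
        create_load_balancer(lb)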

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460632

Title:
  [LBaaS] Create load balancer for each fixed_ip of a port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The spec

  
http://specs.openstack.org/openstack/neutron-specs/specs/juno-incubator/lbaas-api-and-objmodel-improvement.html
  
  says - If a neutron port is encountered that has many fixed_ips then a load 
balancer should be created for each fixed_ip with each being a deep copy of 
each other.

  currently neutron LBaaS v2 is only creating one loadbalncer though the port 
is having many fixed_ip.
  Enhance neutron LBaaS v2, to create load balancer for each fixed_ip of a 
port, as specified in spec.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458838] [NEW] [IPV6][LBaaS] Adding default gw to vip port and clearing router arp cache failing for IPv6

2015-05-26 Thread venkata anil
Public bug reported:

LBaaS adding a default gateway to the vip port and clearing the router arp
cache are failing for IPv6.
This code, at
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/haproxy/namespace_driver.py#L305
is IPv4 specific and needs to be enhanced to support IPv6.

Error log, when trying to create listener for IPv6 subnet


2015-05-26 10:57:13.235 DEBUG neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093] Running comm
and (rootwrap daemon): ['ip', 'netns', 'exec', 
'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 'gw', 
'4001::1'] from (pid=19
994) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-05-26 10:57:13.303 ERROR neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093]
Command: ['ip', 'netns', 'exec', 
u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 
'gw', u'4001::1']
Exit code: 6
Stdin:
Stdout:
Stderr: 4001::1: Unknown host

2015-05-26 10:57:13.303 DEBUG neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093] Running command (rootwrap daemon): ['ip', 
'netns', 'exec', 'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', 
'-I', 'tap36377f20-9a', '-c', '3', '4001::f816:3eff:feed:1e66'] from 
(pid=19994) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-05-26 10:57:13.382 ERROR neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093]
Command: ['ip', 'netns', 'exec', 
u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', '-I', 
u'tap36377f20-9a', '-c', 3, u'4001::f816:3eff:feed:1e66']
Exit code: 2
Stdin:
Stdout:
Stderr: arping: unknown host 4001::f816:3eff:feed:1e66
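
A minimal sketch of an address-family-aware alternative (hypothetical helper,
not the actual fix): build the route command from the IP version instead of
hard-coding the IPv4-only 'route add default gw' form.

import netaddr

def default_route_cmd(gateway_ip):
    """Return an iproute2 command list valid for both IPv4 and IPv6."""
    version = netaddr.IPAddress(gateway_ip).version  # 4 or 6
    return ['ip', '-%d' % version, 'route', 'replace', 'default',
            'via', gateway_ip]

The arping call needs a different replacement as well, since arping is
ARP/IPv4-only; for IPv6 an unsolicited neighbor advertisement would be
required instead.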

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: ipv6 lb lbaas

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458838

Title:
  [IPV6][LBaaS] Adding default gw to vip port and clearing router arp
  cache failing for IPv6

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  LBaaS adding a default gateway to the vip port and clearing the router arp
  cache are failing for IPv6.
  This code, at
  https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/haproxy/namespace_driver.py#L305
  is IPv4 specific and needs to be enhanced to support IPv6.

  Error log, when trying to create listener for IPv6 subnet

  
  2015-05-26 10:57:13.235 DEBUG neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093] Running comm
  and (rootwrap daemon): ['ip', 'netns', 'exec', 
'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 'gw', 
'4001::1'] from (pid=19
  994) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:100
  2015-05-26 10:57:13.303 ERROR neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093]
  Command: ['ip', 'netns', 'exec', 
u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'route', 'add', 'default', 
'gw', u'4001::1']
  Exit code: 6
  Stdin:
  Stdout:
  Stderr: 4001::1: Unknown host

  2015-05-26 10:57:13.303 DEBUG neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093] Running command (rootwrap daemon): ['ip', 
'netns', 'exec', 'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', 
'-I', 'tap36377f20-9a', '-c', '3', '4001::f816:3eff:feed:1e66'] from 
(pid=19994) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:100
  2015-05-26 10:57:13.382 ERROR neutron.agent.linux.utils 
[req-2630492d-a319-44ed-94b2-71533bdd19c4 admin 
11babcdeb88542c38da7d02b34df3093]
  Command: ['ip', 'netns', 'exec', 
u'qlbaas-abeb6446-38db-4f9d-8645-b0e284175bab', 'arping', '-U', '-I', 
u'tap36377f20-9a', '-c', 3, u'4001::f816:3eff:feed:1e66']
  Exit code: 2
  Stdin:
  Stdout:
  Stderr: arping: unknown host 4001::f816:3eff:feed:1e66

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457796] [NEW] gate-neutron-vpnaas-pep8 failing for test_cisco_ipsec.py

2015-05-22 Thread venkata anil
Public bug reported:

2015-05-22 06:10:39.386 |   /home/jenkins/workspace/gate-neutron-vpnaas-pep8$ 
/home/jenkins/workspace/gate-neutron-vpnaas-pep8/.tox/pep8/bin/flake8 
2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:261:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
2015-05-22 06:10:40.192 | self.query_mock.return_value = [
2015-05-22 06:10:40.192 | ^
2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:266:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
2015-05-22 06:10:40.192 | self.query_mock.return_value = [
2015-05-22 06:10:40.192 | ^
2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:271:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
2015-05-22 06:10:40.192 | self.query_mock.return_value = [
2015-05-22 06:10:40.192 | ^
2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:290:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
2015-05-22 06:10:40.192 | self.query_mock.return_value = [
2015-05-22 06:10:40.193 | ^
2015-05-22 06:10:40.193 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:295:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
2015-05-22 06:10:40.193 | self.query_mock.return_value = [
2015-05-22 06:10:40.193 | ^
2015-05-22 06:10:40.193 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:300:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
2015-05-22 06:10:40.193 | self.query_mock.return_value = [
2015-05-22 06:10:40.193 | ^
2015-05-22 06:10:40.220 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-neutron-vpnaas-pep8/.tox/pep8/bin/flake8'
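
A self-contained illustration of the substitution N325 asks for (make_rows is
a stand-in name, not the real test code):

# before (fails N325 and breaks on Python 3):
#   rows = [{'id': i} for i in xrange(3)]
from six.moves import range  # works on both Python 2 and 3

def make_rows(n):
    return [{'id': i} for i in range(n)]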

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457796

Title:
  gate-neutron-vpnaas-pep8 failing for test_cisco_ipsec.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  2015-05-22 06:10:39.386 |   /home/jenkins/workspace/gate-neutron-vpnaas-pep8$ 
/home/jenkins/workspace/gate-neutron-vpnaas-pep8/.tox/pep8/bin/flake8 
  2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:261:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
  2015-05-22 06:10:40.192 | self.query_mock.return_value = [
  2015-05-22 06:10:40.192 | ^
  2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:266:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
  2015-05-22 06:10:40.192 | self.query_mock.return_value = [
  2015-05-22 06:10:40.192 | ^
  2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:271:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
  2015-05-22 06:10:40.192 | self.query_mock.return_value = [
  2015-05-22 06:10:40.192 | ^
  2015-05-22 06:10:40.192 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:290:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
  2015-05-22 06:10:40.192 | self.query_mock.return_value = [
  2015-05-22 06:10:40.193 | ^
  2015-05-22 06:10:40.193 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:295:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
  2015-05-22 06:10:40.193 | self.query_mock.return_value = [
  2015-05-22 06:10:40.193 | ^
  2015-05-22 06:10:40.193 | 
./neutron_vpnaas/tests/unit/services/vpn/service_drivers/test_cisco_ipsec.py:300:9:
 N325  Do not use xrange. Use range, or six.moves.range for large loops.
  2015-05-22 06:10:40.193 | self.query_mock.return_value = [
  2015-05-22 06:10:40.193 | ^
  2015-05-22 06:10:40.220 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-neutron-vpnaas-pep8/.tox/pep8/bin/flake8'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452205] Re: VPNaaS: ipsec addconn failed

2015-05-11 Thread venkata anil
As Wei Hu explained, please enable the libreswan driver
https://github.com/openstack/neutron-vpnaas/blob/master/etc/vpn_agent.ini#L16
Please see this patch
https://review.openstack.org//#/c/174299
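
For reference, enabling it means pointing vpn_device_driver at the libreswan
driver in the vpn_agent.ini linked above (class path as in recent
neutron-vpnaas releases; check the version you have installed):

[vpnagent]
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver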

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452205

Title:
  VPNaaS: ipsec addconn failed

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When creating an ipsec site connection

  2015-05-05 14:06:41.875 4555 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-a9e53c63-23fa-4544-9ad4-cdaa480eb5de', 'ipsec', 
'addconn', '--ctlbase', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/var/run/pluto.ctl',
 '--defaultroutenexthop', '10.62.72.1', '--config', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/etc/ipsec.conf', 
'94a916ff-375f-46e8-8c58-8231ce0eea1c'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:46
  2015-05-05 14:06:41.973 4555 ERROR neutron.agent.linux.utils [-] 
  2015-05-05 14:06:41.974 4555 ERROR neutron.services.vpn.device_drivers.ipsec 
[-] Failed to enable vpn process on router a9e53c63-23fa-4544-9ad4-cdaa480eb5de
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Traceback (most recent call last):
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 242, in enable
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   self.restart()
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 342, in restart
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   self.start()
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 395, in start
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   ipsec_site_conn['id']
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File 
/usr/lib/python2.7/site-packages/neutron/services/vpn/device_drivers/ipsec.py,
 line 314, in _execute
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   check_exit_code=check_exit_code)
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 
550, in execute
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
 File /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 84, 
in execute
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec  
   raise RuntimeError(m)
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
RuntimeError: 
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-a9e53c63-23fa-4544-9ad4-cdaa480eb5de', 'ipsec', 
'addconn', '--ctlbase', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/var/run/pluto.ctl',
 '--defaultroutenexthop', '10.62.72.1', '--config', 
'/var/lib/neutron/ipsec/a9e53c63-23fa-4544-9ad4-cdaa480eb5de/etc/ipsec.conf', 
'94a916ff-375f-46e8-8c58-8231ce0eea1c']
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Exit code: 255
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Stdout: ''
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec 
Stderr: 'connect(pluto_ctl) failed: No such file or directory\n'
  2015-05-05 14:06:41.974 4555 TRACE neutron.services.vpn.device_drivers.ipsec

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450094] [NEW] ipsec-site-connection-list showing status PENDING_CREATE though tunnel is up

2015-04-29 Thread venkata anil
Public bug reported:

ipsec-site-connection-list shows status PENDING_CREATE for the strongswan
driver, though the tunnel is up.

The tunnel is up, and I can see that the packets have esp as the protocol.
'ipsec status' also shows Security Associations.
'ip xfrm policy' and 'ip xfrm state' are also showing valid info.
Still, ipsec-site-connection-list shows the status as PENDING_CREATE.


Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-44872765-4b50-4ac9-badf-8d41432975ed', 'neutron-vpn-netns-wrapper', 
'--mount_paths=/etc:/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/etc,/var/run:/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/var/run',
 '--cmd=ipsec,status']
Exit code: 0
Stdin: 
Stdout: Command: ['mount', '--bind', 
'/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/etc', 
'/etc'] Exit code: 0 Stdout:  Stderr: Command: ['mount', '--bind', 
'/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/var/run', 
'/var/run'] Exit code: 0 Stdout:  Stderr: Command: ['ipsec', 'status'] Exit 
code: 0 Stdout: Routed Connections:
a044ebee-24e7-40a9-966a-42f348f36b30{1}:  ROUTED, TUNNEL
a044ebee-24e7-40a9-966a-42f348f36b30{1}:   10.2.0.0/24 === 10.1.0.0/24 
Security Associations (1 up, 0 connecting):
a044ebee-24e7-40a9-966a-42f348f36b30[3]: ESTABLISHED 36 minutes ago, 
172.24.4.6[172.24.4.6]...172.24.4.5[172.24.4.5]
a044ebee-24e7-40a9-966a-42f348f36b30{1}:  INSTALLED, TUNNEL, ESP SPIs: 
c5ac2539_i cdc26f87_o
a044ebee-24e7-40a9-966a-42f348f36b30{1}:   10.2.0.0/24 === 10.1.0.0/24 

 ubuntu@stack:~$ sudo ip netns exec 
qrouter-52e07469-908a-4d09-8c7e-118d447a76b4 ip xfrm policy
src 10.2.0.0/24 dst 10.1.0.0/24 
dir fwd priority 1859 
tmpl src 172.24.4.6 dst 172.24.4.5
proto esp reqid 1 mode tunnel
src 10.2.0.0/24 dst 10.1.0.0/24 
dir in priority 1859 
tmpl src 172.24.4.6 dst 172.24.4.5
proto esp reqid 1 mode tunnel
src 10.1.0.0/24 dst 10.2.0.0/24 
dir out priority 1859 
tmpl src 172.24.4.5 dst 172.24.4.6
proto esp reqid 1 mode tunnel

ubuntu@stack:~$ sudo ip netns exec qrouter-52e07469-908a-4d09-8c7e-118d447a76b4 
ip xfrm state
src 172.24.4.5 dst 172.24.4.6
proto esp spi 0xca3c62ad reqid 1 mode tunnel
replay-window 32 flag af-unspec
auth-trunc hmac(sha1) 0x16b3e73abbdf33710c85c83ffa3387b2152c771e 96
enc cbc(aes) 0xcbecf8d670e502367b71b202daafebde
src 172.24.4.6 dst 172.24.4.5
proto esp spi 0xc158abb3 reqid 1 mode tunnel
replay-window 32 flag af-unspec
auth-trunc hmac(sha1) 0x13a7135db1eb5b8debc47ece4ff98b2ff7fba2e8 96
enc cbc(aes) 0x76bee8300a87b65325bd6b5add956e39
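
Given that 'ipsec status' already reports the SA as established, the driver's
status poll could derive ACTIVE from that output; a hypothetical sketch (not
the actual strongswan driver code):

import re

def connection_is_active(ipsec_status_output):
    """Return True when 'ipsec status' reports at least one SA up."""
    # strongswan prints e.g. "Security Associations (1 up, 0 connecting)"
    match = re.search(r'Security Associations \((\d+) up', ipsec_status_output)
    return match is not None and int(match.group(1)) > 0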

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450094

Title:
  ipsec-site-connection-list showing status PENDING_CREATE though tunnel
  is up

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  ipsec-site-connection-list shows status PENDING_CREATE for the
  strongswan driver, though the tunnel is up.

  The tunnel is up, and I can see that the packets have esp as the protocol.
  'ipsec status' also shows Security Associations.
  'ip xfrm policy' and 'ip xfrm state' are also showing valid info.
  Still, ipsec-site-connection-list shows the status as PENDING_CREATE.

  
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-44872765-4b50-4ac9-badf-8d41432975ed', 'neutron-vpn-netns-wrapper', 
'--mount_paths=/etc:/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/etc,/var/run:/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/var/run',
 '--cmd=ipsec,status']
  Exit code: 0
  Stdin: 
  Stdout: Command: ['mount', '--bind', 
'/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/etc', 
'/etc'] Exit code: 0 Stdout:  Stderr: Command: ['mount', '--bind', 
'/opt/stack/data/neutron/ipsec/44872765-4b50-4ac9-badf-8d41432975ed/var/run', 
'/var/run'] Exit code: 0 Stdout:  Stderr: Command: ['ipsec', 'status'] Exit 
code: 0 Stdout: Routed Connections:
  a044ebee-24e7-40a9-966a-42f348f36b30{1}:  ROUTED, TUNNEL
  a044ebee-24e7-40a9-966a-42f348f36b30{1}:   10.2.0.0/24 === 10.1.0.0/24 
  Security Associations (1 up, 0 connecting):
  a044ebee-24e7-40a9-966a-42f348f36b30[3]: ESTABLISHED 36 minutes ago, 
172.24.4.6[172.24.4.6]...172.24.4.5[172.24.4.5]
  a044ebee-24e7-40a9-966a-42f348f36b30{1}:  INSTALLED, TUNNEL, ESP SPIs: 
c5ac2539_i cdc26f87_o
  a044ebee-24e7-40a9-966a-42f348f36b30{1}:   10.2.0.0/24 === 10.1.0.0/24 

   ubuntu@stack:~$ sudo ip netns exec 
qrouter

[Yahoo-eng-team] [Bug 1449405] [NEW] Allow VPNaaS service provider and device driver to be configurable in devstack

2015-04-28 Thread venkata anil
Public bug reported:

Add devstack plugin for neutron-vpnaas like neutron-lbaas
https://github.com/openstack/neutron-lbaas/commit/facaaf9470efe06d305df59bc28cab1cfabd2fed

Add devstack scripts(devstack/plugin.sh and devstack/settings) in
neutron-vpnaas like in neutron-lbaas, which devstack uses to configure
VPNaaS service provider and device driver during devstack setup.

Looks like devstack won't allow any changes to its repo for selecting
service provider/device driver settings for neutron advanced services.
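
An illustrative local.conf fragment showing the shape such a plugin hook
takes (repo URL assumed; the plugin did not exist yet when this was filed):

[[local|localrc]]
enable_plugin neutron-vpnaas https://git.openstack.org/openstack/neutron-vpnaas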

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449405

Title:
  Allow VPNaaS service provider and device driver  to be configurable in
  devstack

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Add devstack plugin for neutron-vpnaas like neutron-lbaas
  
https://github.com/openstack/neutron-lbaas/commit/facaaf9470efe06d305df59bc28cab1cfabd2fed

  Add devstack scripts(devstack/plugin.sh and devstack/settings) in
  neutron-vpnaas like in neutron-lbaas, which devstack uses to configure
  VPNaaS service provider and device driver during devstack setup.

  Looks like devstack won't allow any changes to its repo for selecting
  service provider/device driver settings for neutron advanced services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444017] [NEW] [VPNaas] NSS init failing for libreswan

2015-04-14 Thread venkata anil
Public bug reported:

I am running devstack on Fedora. VPNaas is not working on Fedora/centos
devstack.

neutron ipsec-site-connection-create command is failing

q-vpn log -
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-250faac2-167b-4861-9d0c-b5710bf02ee2', 'ipsec', 
'pluto', '--ctlbase', 
'/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-b5710bf02ee2/var/run/pluto',
 '--ipsecdir', 
'/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-b5710bf02ee2/etc', 
'--use-netkey', '--uniqueids', '--nat_traversal', '--secretsfile', 
'/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-b5710bf02ee2/etc/ipsec.secrets',
 '--virtual_private', '%v4:10.1.0.0/24,%v4:10.2.0.0/24', '--stderrlog']

FATAL: NSS readonly initialization
(/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-
b5710bf02ee2/etc) failed (err -8015)

Because of this error, the pluto daemon is not running.
So VPNaas is not working on Fedora/centos devstack.
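
pluto requires an initialized NSS database in its ipsec directory; one way to
create one by hand is (hypothetical workaround sketch, not necessarily the
merged fix):

certutil -N -d sql:/opt/stack/data/neutron/ipsec/<router-id>/etc --empty-password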

Fedora/centos uses Libreswan for ipsec.

From the wiki - Libreswan is a fork of the Openswan IPSEC VPN
implementation created by almost all of the openswan developers after a
lawsuit about the ownership of the Openswan name was filed against Paul
Wouters, then release manager of Openswan, in December 2012.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444017

Title:
  [VPNaas] NSS init failing for libreswan

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am running devstack on Fedora. VPNaas is not working on
  Fedora/centos devstack.

  neutron ipsec-site-connection-create command is failing

  q-vpn log -
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-250faac2-167b-4861-9d0c-b5710bf02ee2', 'ipsec', 
'pluto', '--ctlbase', 
'/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-b5710bf02ee2/var/run/pluto',
 '--ipsecdir', 
'/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-b5710bf02ee2/etc', 
'--use-netkey', '--uniqueids', '--nat_traversal', '--secretsfile', 
'/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-b5710bf02ee2/etc/ipsec.secrets',
 '--virtual_private', '%v4:10.1.0.0/24,%v4:10.2.0.0/24', '--stderrlog']

  FATAL: NSS readonly initialization
  (/opt/stack/data/neutron/ipsec/250faac2-167b-4861-9d0c-
  b5710bf02ee2/etc) failed (err -8015)

  Because of this error, the pluto daemon is not running.
  So VPNaas is not working on Fedora/centos devstack.

  Fedora/centos uses Libreswan for ipsec.

  From the wiki - Libreswan is a fork of the Openswan IPSEC VPN
  implementation created by almost all of the openswan developers after
  a lawsuit about the ownership of the Openswan name was filed against
  Paul Wouters, then release manager of Openswan, in December 2012.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426151] Re: PING health-monitor in LBaaS Haproxy sends a TCP request to members

2015-04-06 Thread venkata anil
Agree with Raseel. HAProxy doesn't support ICMP-based health monitoring,
so marking the bug as invalid.

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: venkata anil (anil-venkata) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426151

Title:
  PING health-monitor in LBaaS Haproxy sends a TCP request to members

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  There are different health-monitors in LBaaS:
  1) PING 2) TCP 3) HTTP and 4) HTTPS

  I was trying the PING health-monitor with Haproxy, but it seems to be
  sending TCP requests to the members instead of PING requests.

  
  varunlodaya@ubuntu:~$ neutron lb-healthmonitor-show 
fb5d0e4b-5763-4f38-bf2c-09f9f7ab2e49
  
++-+
  | Field  | Value  
 |
  
++-+
  | admin_state_up | True   
 |
  | delay  | 30 
 |
  | id | fb5d0e4b-5763-4f38-bf2c-09f9f7ab2e49   
 |
  | max_retries| 2  
 |
  | pools  | {"status": "ACTIVE", "status_description": null, 
"pool_id": "93d03b4e-05a5-4691-b36f-1416b96c3751"} |
  | tenant_id  | 6d560cf5767d4f17a61ebc11c14bc1cc   
 |
  | timeout| 5  
 |
  | type   | PING   
 |
  
++-+

  
  The Haproxy config it generates for backend is:

  *
  backend 93d03b4e-05a5-4691-b36f-1416b96c3751
  mode http
  balance roundrobin
  option forwardfor
  timeout check 5s
  server 07285f2e-9e9f-41ad-a343-2f1d7296f2d9 10.0.0.4:80 weight 1 
check inter 30s fall 2

  On the member, I opened tcpdump to check whats being received:

  sudo tcpdump -i eth0 -v icmp
  sudo: unable to resolve host ubuntu-vm
  tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 
bytes
  ^C
  0 packets captured
  0 packets received by filter
  0 packets dropped by kernel

  ubuntu@ubuntu-vm:~$ sudo tcpdump -i eth0 -n src host 10.0.0.5 -vv
  sudo: unable to resolve host ubuntu-vm
  tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 
bytes
  23:03:53.434592 IP (tos 0x0, ttl 64, id 4614, offset 0, flags [DF], proto TCP 
(6), length 60)
  10.0.0.5.55697 > 10.0.0.4.80: Flags [S], cksum 0xc228 (correct), seq 
3491668946, win 29200, options [mss 1460,sackOK,TS val 1552852 ecr 0,nop,wscale 
7], length 0
  23:03:58.441968 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.0.4 
tell 10.0.0.5, length 28
  23:04:23.439097 IP (tos 0x0, ttl 64, id 30112, offset 0, flags [DF], proto 
TCP (6), length 60)
  10.0.0.5.55704 > 10.0.0.4.80: Flags [S], cksum 0x3862 (correct), seq 
635615873, win 29200, options [mss 1460,sackOK,TS val 1560353 ecr 0,nop,wscale 
7], length 0
  ^C
  3 packets captured
  3 packets received by filter
  0 packets dropped by kernel

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436864] [NEW] [IPv6] [VPNaaS] Remove obsolete --defaultroutenexthop for ipsec addconn command

2015-03-26 Thread venkata anil
Public bug reported:

To load the connection into the pluto daemon, neutron calls the ipsec
addconn command.

When an IPv6 address is passed for this command's --defaultroutenexthop
option, like below,

'ipsec', 'addconn', '--defaultroutenexthop',
u'1001::f816:3eff:feb4:a2db'

we get the following error:
ignoring invalid defaultnexthop: non-ipv6 address may not contain `:'

As --defaultroutenexthop is obsolete (http://ftp.libreswan.org/CHANGES),
we should avoid passing it for IPv6 subnets.
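
A sketch of the shape such a fix could take (function and variable names are
hypothetical):

import netaddr

def addconn_cmd(ctlbase, config, conn_id, next_hop):
    cmd = ['ipsec', 'addconn', '--ctlbase', ctlbase]
    # --defaultroutenexthop is obsolete and mis-parses IPv6 addresses,
    # so pass it (if at all) only for IPv4 next hops.
    if netaddr.IPAddress(next_hop).version == 4:
        cmd += ['--defaultroutenexthop', next_hop]
    return cmd + ['--config', config, conn_id]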

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436864

Title:
  [IPv6] [VPNaaS] Remove obsolete --defaultroutenexthop for  ipsec
  addconn command

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  To load the connection into the pluto daemon, neutron calls the ipsec
  addconn command.

  When an IPv6 address is passed for this command's --defaultroutenexthop
  option, like below,

  'ipsec', 'addconn', '--defaultroutenexthop',
  u'1001::f816:3eff:feb4:a2db'

  we get the following error:
  ignoring invalid defaultnexthop: non-ipv6 address may not contain `:'

  As --defaultroutenexthop is obsolete (http://ftp.libreswan.org/CHANGES),
  we should avoid passing it for IPv6 subnets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436890] [NEW] [IPv6] [VPNaaS]Error when %defaultroute assigned to leftnexthop and rightnexthop for ipv6

2015-03-26 Thread venkata anil
Public bug reported:

In template/openswan/ipsec.conf.template, both leftnexthop and rightnexthop 
connection parameters are assigned like below,
leftnexthop=%defaultroute
rightnexthop=%defaultroute

With these settings, the ipsec addconn command fails for IPv6 addresses,
as below

2015-03-26 15:09:32.006 ERROR neutron.agent.linux.utils 
[req-ef46a8a3-75b9-4452-83df-051d49dc263d admin 
4546bfa7704845bf874241f1fb3a376b] 
Command: ['ip', 'netns', 'exec', 
u'qrouter-7f361721-74a6-4734-b021-388b4b64762e', 'ipsec', 'addconn', 
'--ctlbase', u'/opt/stack/data/neutron/ipsec/7f3
61721-74a6-4734-b021-388b4b64762e/var/run/pluto.ctl', '--defaultroutenexthop', 
u'1001::f816:3eff:feb4:a2db', '--config', u'/opt/stack/data/neutron/ips
ec/7f361721-74a6-4734-b021-388b4b64762e/etc/ipsec.conf', 
u'ef7409c5-395d-44eb-91d5-875059a3b3eb']
Exit code: 37
Stdin: 
Stdout: 023 address family inconsistency in this connection=10 host=10/nexthop=0
037 attempt to load incomplete connection

Looks like with IKEv1, parsing defaultroute for ipv6 addresses has
problems.

When an address is given for leftnexthop instead of %defaultroute, ipsec
addconn works for IPv6,
i.e. with the template modified like below:
leftnexthop={{vpnservice.external_ip}}
#rightnexthop (i.e. not using rightnexthop)

So neutron shouldn't use %defaultroute for leftnexthop and rightnexthop,
and should instead assign the IPv6 addresses from the vpnservice object.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436890

Title:
   [IPv6] [VPNaaS]Error when %defaultroute assigned to leftnexthop and
  rightnexthop for ipv6

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In template/openswan/ipsec.conf.template, both leftnexthop and rightnexthop 
connection parameters are assigned like below,
  leftnexthop=%defaultroute
  rightnexthop=%defaultroute

  With these settings, the ipsec addconn command fails for IPv6
  addresses, as below

  2015-03-26 15:09:32.006 ERROR neutron.agent.linux.utils 
[req-ef46a8a3-75b9-4452-83df-051d49dc263d admin 
4546bfa7704845bf874241f1fb3a376b] 
  Command: ['ip', 'netns', 'exec', 
u'qrouter-7f361721-74a6-4734-b021-388b4b64762e', 'ipsec', 'addconn', 
'--ctlbase', u'/opt/stack/data/neutron/ipsec/7f3
  61721-74a6-4734-b021-388b4b64762e/var/run/pluto.ctl', 
'--defaultroutenexthop', u'1001::f816:3eff:feb4:a2db', '--config', 
u'/opt/stack/data/neutron/ips
  ec/7f361721-74a6-4734-b021-388b4b64762e/etc/ipsec.conf', 
u'ef7409c5-395d-44eb-91d5-875059a3b3eb']
  Exit code: 37
  Stdin: 
  Stdout: 023 address family inconsistency in this connection=10 
host=10/nexthop=0
  037 attempt to load incomplete connection

  Looks like with IKEv1, parsing defaultroute for ipv6 addresses has
  problems.

  When an address is given for leftnexthop instead of %defaultroute, ipsec
  addconn works for IPv6,
  i.e. with the template modified like below:
  leftnexthop={{vpnservice.external_ip}}
  #rightnexthop (i.e. not using rightnexthop)

  So neutron shouldn't use %defaultroute for leftnexthop and rightnexthop,
  and should instead assign the IPv6 addresses from the vpnservice object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436263] [NEW] [IPv6] [VPNaaS]Add connaddrfamily to ipsec.conf.template

2015-03-25 Thread venkata anil
Public bug reported:

To set up an IPv6-based tunnel between two security gateways,
connaddrfamily=ipv6 should be set in ipsec.conf.
Without this, whack will refuse to recognize the given IP addresses as IPv6
addresses.

So the 'connaddrfamily' config parameter should be added to
template/openswan/ipsec.conf.template. This parameter takes either
'ipv4' or 'ipv6' as its value, and the ipsec device driver should set
connaddrfamily to ipv4 or ipv6 based on the addresses used for
the connection.

The same change is also needed for template/strongswan/ipsec.conf.template.
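
A minimal illustrative template fragment (the template variable name is
assumed), with the device driver passing connaddrfamily='ipv6' or 'ipv4'
when rendering:

conn %default
        connaddrfamily={{connaddrfamily}}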

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436263

Title:
   [IPv6] [VPNaaS]Add connaddrfamily to ipsec.conf.template

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  To set up an IPv6-based tunnel between two security gateways,
  connaddrfamily=ipv6 should be set in ipsec.conf.
  Without this, whack will refuse to recognize the given IP addresses as IPv6
  addresses.

  So the 'connaddrfamily' config parameter should be added to
  template/openswan/ipsec.conf.template. This parameter takes either
  'ipv4' or 'ipv6' as its value, and the ipsec device driver should set
  connaddrfamily to ipv4 or ipv6 based on the addresses used for
  the connection.

  The same change is also needed for template/strongswan/ipsec.conf.template.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399525] Re: Juno:port update with no security-group makes tenant VM's not accessible.

2015-02-12 Thread venkata anil
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399525

Title:
  Juno:port update with no security-group makes tenant VM's not
  accessible.

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Setup:
  +
  Ubuntu 14.04

  Steps to reproduce:

  1. Create a working Juno setup (single-node devstack) on Ubuntu 14.04 server.
  2. Create a custom security group "test" with ICMP ingress allowed.
  3. Create a network with a subnet to spawn the tenant VM.
  4. Spawn a tenant VM with the created security group and network.
  5. Ensure the VM is able to ping from the dhcp namespace.
  6. Create a floating IP and associate it to the VM port.
  7. Try to ping the VM from the public network (i.e. the floating subnet) ==> 
the VM pings, since ufw is disabled and the ICMP rule is associated to the port.
  8. Update the VM port with no-security-groups and then try to ping the VM's 
floating IP.
  9. The VM IP does not ping, but it should, because the VM port is unplugged 
from the ovs-firewall driver and falls under the system iptables.

  expected: it should ping because ufw is disabled on the compute node.

  Reference:
  +
  port_id:bd89a24b-eeaf-41f6-a97b-54d65263052d
  VM_id:392b62a1-dd75-4d23-9296-978ef4630caf
  Sec_group:d6c08ecf-eb66-410d-a763-75f9a707fd89

  IP-TABLE:
  +++

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401004] Re: policy.json prevents regular user from creating HA router

2015-02-04 Thread venkata anil
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401004

Title:
  policy.json prevents regular user from creating HA router

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This line in the policy.json prevents a regular user from creating an
  HA router.

  "create_router:ha": "rule:admin_only",

  There was a bug related to creating an l3ha router as a regular user, but
  that was fixed and backported to juno 2014.2.1. I have tested and confirmed
  that after updating policy.json I can successfully create an l3ha router as
  a regular user.

  Related-Bug: #1388716

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1409733] Re: adopt namespace-less oslo imports

2015-01-20 Thread venkata anil
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409733

Title:
  adopt namespace-less oslo imports

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Neutron:
  In Progress

Bug description:
  Oslo is migrating from oslo.* namespace to separate oslo_* namespaces
  for each library: https://blueprints.launchpad.net/oslo-
  incubator/+spec/drop-namespace-packages

  We need to adopt the new paths in neutron, specifically for
  oslo.config, oslo.middleware, oslo.i18n, oslo.serialization, and
  oslo.utils.
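
  The change is mechanical; for example, for oslo.config:

  # old, namespace-package import:
  #   from oslo.config import cfg
  # new, per-library namespace:
  from oslo_config import cfg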

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1409733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411312] Re: tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_multiple_nics_order intermittent fails in the gate

2015-01-16 Thread venkata anil
Change https://review.openstack.org/#/c/147775/ has been submitted.

** Changed in: nova
   Status: Confirmed => In Progress

** Project changed: nova => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1411312

Title:
  
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_multiple_nics_order
  intermittent fails in the gate

Status in Tempest:
  In Progress

Bug description:
  http://logs.openstack.org/36/147336/1/gate/gate-tempest-dsvm-neutron-
  full/a695425/console.html#_2015-01-15_01_13_40_304

  2015-01-15 01:40:06.862 | Traceback (most recent call last):
  2015-01-15 01:40:06.862 |   File 
tempest/api/compute/servers/test_create_server.py, line 181, in 
test_verify_multiple_nics_order
  2015-01-15 01:40:06.863 | self.assertEqual(expected_addr, addr)
  2015-01-15 01:40:06.863 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 348, in 
assertEqual
  2015-01-15 01:40:06.863 | self.assertThat(observed, matcher, message)
  2015-01-15 01:40:06.863 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 433, in 
assertThat
  2015-01-15 01:40:06.863 | raise mismatch_error
  2015-01-15 01:40:06.863 | MismatchError: ['19.80.0.2', '19.86.0.2'] != 
[u'19.80.0.3', u'19.86.0.3']

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdC5hcGkuY29tcHV0ZS5zZXJ2ZXJzLnRlc3RfY3JlYXRlX3NlcnZlci5TZXJ2ZXJzVGVzdE1hbnVhbERpc2sudGVzdF92ZXJpZnlfbXVsdGlwbGVfbmljc19vcmRlclwiIEFORCBtZXNzYWdlOlwiRkFJTEVEXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNS0wMS0wMVQxNjozNjowOSswMDowMCIsInRvIjoiMjAxNS0wMS0xNVQxNjozNjowOSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDIxMzM5ODE0ODE5fQ==

  14 hits in the last 10 days, going back to 1/8 for the first one.
  check, gate and experimental queues, all failures, looks like
  neutronv2 API only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1411312/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406438] Re: Failures in test_create_list_show_delete_interfaces

2015-01-05 Thread venkata anil
This is a tempest bug, and a fix has already been released for tempest, so
setting the status of the neutron bug to invalid.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1406438

Title:
  Failures in test_create_list_show_delete_interfaces

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  logstash showed several dozen examples of this in the last week,
  searching for

  u'port_state': u'BUILD'

  
  
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces[gate,network,smoke]
  2014-12-29 23:02:14.022 | 
---
  2014-12-29 23:02:14.022 | 
  2014-12-29 23:02:14.022 | Captured traceback:
  2014-12-29 23:02:14.022 | ~~~
  2014-12-29 23:02:14.022 | Traceback (most recent call last):
  2014-12-29 23:02:14.022 |   File tempest/test.py, line 112, in wrapper
  2014-12-29 23:02:14.022 | return f(self, *func_args, **func_kwargs)
  2014-12-29 23:02:14.022 |   File 
tempest/api/compute/servers/test_attach_interfaces.py, line 128, in 
test_create_list_show_delete_interfaces
  2014-12-29 23:02:14.023 | self._test_show_interface(server, ifs)
  2014-12-29 23:02:14.023 |   File 
tempest/api/compute/servers/test_attach_interfaces.py, line 81, in 
_test_show_interface
  2014-12-29 23:02:14.023 | self.assertEqual(iface, _iface)
  2014-12-29 23:02:14.023 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 348, in 
assertEqual
  2014-12-29 23:02:14.023 | self.assertThat(observed, matcher, message)
  2014-12-29 23:02:14.023 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 433, in 
assertThat
  2014-12-29 23:02:14.023 | raise mismatch_error
  2014-12-29 23:02:14.023 | MismatchError: !=:
  2014-12-29 23:02:14.023 | reference = {u'fixed_ips': [{u'ip_address': 
u'10.100.0.4',
  2014-12-29 23:02:14.023 |  u'subnet_id': 
u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
  2014-12-29 23:02:14.023 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
  2014-12-29 23:02:14.023 |  u'net_id': 
u'5820024b-ce9d-4175-a922-2fc197f425e9',
  2014-12-29 23:02:14.024 |  u'port_id': 
u'49bc5869-1716-49a6-812a-90a603e4f8f3',
  2014-12-29 23:02:14.024 |  u'port_state': u'ACTIVE'}
  2014-12-29 23:02:14.024 | actual= {u'fixed_ips': [{u'ip_address': 
u'10.100.0.4',
  2014-12-29 23:02:14.024 |  u'subnet_id': 
u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
  2014-12-29 23:02:14.024 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
  2014-12-29 23:02:14.024 |  u'net_id': 
u'5820024b-ce9d-4175-a922-2fc197f425e9',
  2014-12-29 23:02:14.024 |  u'port_id': 
u'49bc5869-1716-49a6-812a-90a603e4f8f3',
  2014-12-29 23:02:14.024 |  u'port_state': u'BUILD'}
  2014-12-29 23:02:14.024

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1406438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406438] Re: Failures in test_create_list_show_delete_interfaces

2014-12-30 Thread venkata anil
Submitted change https://review.openstack.org/#/c/144434/

** Changed in: neutron
   Status: New => In Progress

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Changed in: tempest
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1406438

Title:
  Failures in test_create_list_show_delete_interfaces

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tempest:
  In Progress

Bug description:
  logstash showed several dozen examples of this in the last week,
  searching for

  u'port_state': u'BUILD'

  
  
  tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesTestJSON.test_create_list_show_delete_interfaces[gate,network,smoke]
  2014-12-29 23:02:14.022 | ---
  2014-12-29 23:02:14.022 | 
  2014-12-29 23:02:14.022 | Captured traceback:
  2014-12-29 23:02:14.022 | ~~~
  2014-12-29 23:02:14.022 | Traceback (most recent call last):
  2014-12-29 23:02:14.022 |   File "tempest/test.py", line 112, in wrapper
  2014-12-29 23:02:14.022 | return f(self, *func_args, **func_kwargs)
  2014-12-29 23:02:14.022 |   File "tempest/api/compute/servers/test_attach_interfaces.py", line 128, in test_create_list_show_delete_interfaces
  2014-12-29 23:02:14.023 | self._test_show_interface(server, ifs)
  2014-12-29 23:02:14.023 |   File "tempest/api/compute/servers/test_attach_interfaces.py", line 81, in _test_show_interface
  2014-12-29 23:02:14.023 | self.assertEqual(iface, _iface)
  2014-12-29 23:02:14.023 |   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 348, in assertEqual
  2014-12-29 23:02:14.023 | self.assertThat(observed, matcher, message)
  2014-12-29 23:02:14.023 |   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 433, in assertThat
  2014-12-29 23:02:14.023 | raise mismatch_error
  2014-12-29 23:02:14.023 | MismatchError: !=:
  2014-12-29 23:02:14.023 | reference = {u'fixed_ips': [{u'ip_address': u'10.100.0.4',
  2014-12-29 23:02:14.023 |  u'subnet_id': u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
  2014-12-29 23:02:14.023 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
  2014-12-29 23:02:14.023 |  u'net_id': u'5820024b-ce9d-4175-a922-2fc197f425e9',
  2014-12-29 23:02:14.024 |  u'port_id': u'49bc5869-1716-49a6-812a-90a603e4f8f3',
  2014-12-29 23:02:14.024 |  u'port_state': u'ACTIVE'}
  2014-12-29 23:02:14.024 | actual    = {u'fixed_ips': [{u'ip_address': u'10.100.0.4',
  2014-12-29 23:02:14.024 |  u'subnet_id': u'14eede71-3e7c-45d0-ba9a-1e862971e73a'}],
  2014-12-29 23:02:14.024 |  u'mac_addr': u'fa:16:3e:b4:a6:5a',
  2014-12-29 23:02:14.024 |  u'net_id': u'5820024b-ce9d-4175-a922-2fc197f425e9',
  2014-12-29 23:02:14.024 |  u'port_id': u'49bc5869-1716-49a6-812a-90a603e4f8f3',
  2014-12-29 23:02:14.024 |  u'port_state': u'BUILD'}
  2014-12-29 23:02:14.024
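
  The reference/actual diff points to a race: the test compares the
  interface right after attaching it, while the port can still be
  transitioning from BUILD to ACTIVE. Below is a minimal sketch of the
  kind of wait that would close that race; the method name and client
  interface are hypothetical stand-ins, not the actual fix submitted in
  https://review.openstack.org/#/c/144434/ :

    import time

    def wait_for_interface_status(client, server_id, port_id,
                                  status='ACTIVE', timeout=60, interval=2):
        # Poll the server's attached interfaces until the given port
        # reports the expected port_state, or give up after 'timeout'.
        # 'client.list_interfaces' stands in for whatever compute
        # interfaces client the test actually uses.
        deadline = time.time() + timeout
        while time.time() < deadline:
            for iface in client.list_interfaces(server_id):
                if (iface['port_id'] == port_id
                        and iface['port_state'] == status):
                    return iface
            time.sleep(interval)
        raise AssertionError('port %s did not reach %s within %s seconds'
                             % (port_id, status, timeout))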

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1406438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403823] [NEW] tests in OpenDaylight CI failing for past 6 days

2014-12-18 Thread venkata anil
Public bug reported:

The last successful build on the OpenDaylight CI (
https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/ ) was 6
days ago. Since then, this OpenDaylight CI Jenkins job has been failing
for all patches.

Please change this job to the non-voting category.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: opendaylight

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403823

Title:
  tests in OpenDaylight CI failing for past 6 days

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The last successful build on the OpenDaylight CI (
  https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/ ) was 6
  days ago. Since then, this OpenDaylight CI Jenkins job has been
  failing for all patches.

  Please change this job to the non-voting category.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394962] Re: Incorrect IP on Router Interface to External Net

2014-12-16 Thread venkata anil
** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394962

Title:
  Incorrect IP on Router Interface to External Net

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  A user with sufficient permissions creates a new router through the
  Dashboard. Instead of assigning a gateway as normal, the user chooses
  to add a new interface connected to the external network. The user is
  given the option to enter an IP, but it can be left blank so that the
  system chooses “the first host IP address in the subnet” according to
  [1]. But instead of the expected behavior, OpenStack chooses an IP
  address that is neither in the subnet's allocation pool nor the first
  IP in the network.

  [1] http://docs.openstack.org/user-
  guide/content/dashboard_create_networks.html
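
  A rough reproduction sketch using python-neutronclient follows; the
  auth details and EXTERNAL_SUBNET_ID are placeholders, and this is an
  illustration rather than the exact steps from the report:

    from neutronclient.v2_0 import client

    # Placeholder credentials/endpoint for the environment under test.
    neutron = client.Client(auth_url='http://controller:5000/v2.0',
                            username='admin', password='secret',
                            tenant_name='admin')

    EXTERNAL_SUBNET_ID = '...'  # placeholder: subnet on the external net

    router = neutron.create_router(
        {'router': {'name': 'repro-router'}})['router']
    # Attach an interface to the external subnet without specifying an
    # IP; per [1], the first host IP in the subnet should be chosen.
    iface = neutron.add_interface_router(
        router['id'], {'subnet_id': EXTERNAL_SUBNET_ID})
    port = neutron.show_port(iface['port_id'])['port']
    print(port['fixed_ips'])  # compare with the subnet's first host IP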

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394962] Re: Incorrect IP on Router Interface to External Net

2014-12-09 Thread venkata anil
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394962

Title:
  Incorrect IP on Router Interface to External Net

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  A user with sufficient permissions creates a new router through the
  Dashboard. Instead of assigning a gateway as normal, the user chooses
  to add a new interface connected to the external network. The user is
  given the option to enter an IP, but it can be left blank so that the
  system chooses “the first host IP address in the subnet” according to
  [1]. But instead of the expected behavior, OpenStack chooses an IP
  address that is neither in the subnet's allocation pool nor the first
  IP in the network.

  [1] http://docs.openstack.org/user-
  guide/content/dashboard_create_networks.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399788] Re: neutron doesn't log tenant_id and user_id along side req-id in logs

2014-12-08 Thread venkata anil
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399788

Title:
  neutron doesn't log tenant_id and user_id along side req-id in logs

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  neutron logs: [req-94a39f87-e470-4032-82af-9a6b429b60fa None]
  while nova logs: [req-c0b4dfb9-8af3-40eb-b0dd-7b576cfd1d55 AggregatesAdminTestJSON-917687995 AggregatesAdminTestJSON-394398414]

  Nova uses the format: #logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

  Without knowing the user and tenant, it is hard to understand what the
  logs are telling you when multiple tenants are using the cloud.
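
  A hedged suggestion: assuming neutron consumes the same oslo logging
  options as nova, uncommenting and setting the same format string in
  neutron.conf's [DEFAULT] section should add the user identity to
  neutron's request lines (not verified against this neutron version):

    [DEFAULT]
    logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s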

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp