[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2016-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385917
Committed: https://git.openstack.org/cgit/openstack/masakari/commit/?id=03369d1ef45f0a2d3bbc43fcf4be43c251fdce60
Submitter: Jenkins
Branch: master

commit 03369d1ef45f0a2d3bbc43fcf4be43c251fdce60
Author: Deepak 
Date:   Thu Oct 13 16:53:34 2016 +0530

Drop MANIFEST.in - it's not needed by pbr

masakari already uses pbr:

    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True)

This patch removes the `MANIFEST.in` file, as pbr generates a
sensible manifest from the files tracked in git plus some standard
files, which removes the need for an explicit `MANIFEST.in` file.

Change-Id: I494f44d8358511bf80ccedc3043277d7c9a8ea9f
Closes-Bug: #1608980


** Changed in: masakari
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in craton:
  In Progress
Status in ec2-api:
  In Progress
Status in gce-api:
  Fix Released
Status in Karbor:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in Kosmos:
  New
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress
Status in Solum:
  In Progress
Status in Swift Authentication:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  In Progress
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file.
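
  For reference, the behavior described above can be sketched roughly like
  this (illustrative Python only, not pbr's actual code):

    import subprocess

    def sdist_files():
        # pbr's generated manifest is essentially "git ls-files" minus
        # .gitignore, .gitreview and *.pyc, plus the generated AUTHORS
        # and ChangeLog files.
        tracked = subprocess.check_output(
            ['git', 'ls-files']).decode().splitlines()
        skipped = ('.gitignore', '.gitreview')
        files = [f for f in tracked
                 if f not in skipped and not f.endswith('.pyc')]
        return files + ['AUTHORS', 'ChangeLog']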

To manage notifications about this bug go to:
https://bugs.launchpad.net/craton/+bug/1608980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436864] Re: [IPv6] [VPNaaS] Remove obsolete --defaultroutenexthop for ipsec addconn command

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436864

Title:
  [IPv6] [VPNaaS] Remove obsolete --defaultroutenexthop for ipsec
  addconn command

Status in neutron:
  Expired

Bug description:
  To load the connection into the pluto daemon, neutron calls the ipsec
  addconn command.

  When an IPv6 address is passed for the --defaultroutenexthop option of
  this command, like below,

  'ipsec', 'addconn', '--defaultroutenexthop',
  u'1001::f816:3eff:feb4:a2db'

  we get the following error:
  ignoring invalid defaultnexthop: non-ipv6 address may not contain `:'

  As --defaultroutenexthop is obsolete (http://ftp.libreswan.org/CHANGES),
  we should avoid passing it for IPv6 subnets.
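
  A minimal sketch of such a guard (illustrative only; the function and
  variable names are assumptions, not the actual neutron-vpnaas code):

    import netaddr

    def addconn_args(conn_name, gateway_ip):
        args = ['ipsec', 'addconn']
        # --defaultroutenexthop is obsolete in libreswan and rejects
        # IPv6 values, so only pass it for IPv4 gateways.
        if netaddr.IPAddress(gateway_ip).version == 4:
            args += ['--defaultroutenexthop', gateway_ip]
        args.append(conn_name)
        return args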

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436864/+subscriptions



[Yahoo-eng-team] [Bug 1434158] Re: snat_idx and FIP Rules may overlap

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1434158

Title:
  snat_idx and FIP Rules may overlap

Status in neutron:
  Expired

Bug description:
  FIP rules in agent/l3/dvr_fip_ns.py are given the range:

      FIP_PR_START = 32768
      FIP_PR_END = FIP_PR_START + 40000

  And snat_idx in agent/l3/dvr_router.py, also used for ip rules, is
  computed using:

      if snat_idx < 32768:
          snat_idx = snat_idx + MASK_30

  So the FIP rule range could overlap the snat_idx range in rare
  cases.

  The obvious solution is "if snat_idx < 32768 + 40001" (I think), but
  there's probably a better solution than hard-coding 40000.
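
  A small sketch of the overlap (the constants follow the snippet above;
  MASK_30 is assumed to be 0x40000000, as in the l3-agent code of that
  era):

    FIP_PR_START = 32768
    FIP_PR_END = FIP_PR_START + 40000
    MASK_30 = 0x40000000

    def adjusted_snat_idx(snat_idx):
        # Only values below 32768 are shifted out of the way, so a raw
        # snat_idx of, say, 40000 stays inside [FIP_PR_START, FIP_PR_END)
        # and can collide with a FIP rule priority.
        if snat_idx < 32768:
            snat_idx = snat_idx + MASK_30
        return snat_idx

    assert FIP_PR_START <= adjusted_snat_idx(40000) < FIP_PR_END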

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1434158/+subscriptions



[Yahoo-eng-team] [Bug 1440650] Re: VPNaas: IPsec site connection is still active even if the IPsec service on the host OS is stopped, and VMs across the sites are still able to ping each other

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440650

Title:
  VPNaas: IPsec site connection is still active even if the IPsec
  service on the host OS is stopped, and VMs across the sites are still
  able to ping each other

Status in neutron:
  Expired

Bug description:
  In a devstack setup with VPNaaS enabled:

  1. Establish an IPsec site connection between 2 devstack clouds.
  2. Verify that the connection is active from both ends.
  3. Now run "service ipsec stop" on either of the clouds.
  4. Now check the status of the IPsec site connection: it will still show
  active on both ends, and the VMs launched on both clouds are still
  accessible using the private IP (issue 1).
  5. If we also kill the pluto process, then the IPsec site connection
  goes down.
  6. If the IPsec service was stopped before the IPsec site connection was
  created, the connection does not become active even after the IPsec
  service is started (issue 2).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440650/+subscriptions



[Yahoo-eng-team] [Bug 1146586] Re: Return 501 for not implemented action

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1146586

Title:
  Return 501 for not implemented action

Status in neutron:
  Expired

Bug description:
  We don't have a consistent return code for non-implemented actions.

  QuotaSetsController.create returns 500
  AgentPluginBase.create_agent returns 404
  ExtensionController.delete/create return 404
  NetworkSchedulerController.update/show return 500
  RouterSchedulerController.update/show return 500
  DhcpAgentsHostingNetworkController.create/delete/update/show return 500
  L3AgentsHostingRouterController.create/delete/update/show return 500

  As discussed at
  https://review.openstack.org/#/c/23406/5/quantum/extensions/quotasv2.py

  We prefer to use 501 in this case.
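
  A minimal sketch of the preferred behavior (assuming a webob-based
  controller, as in neutron's wsgi layer; the class and message here are
  illustrative):

    import webob.exc

    class QuotaSetsController(object):
        def create(self, request, **kwargs):
            # Report "not implemented" explicitly instead of leaking a
            # 500 or returning a misleading 404.
            raise webob.exc.HTTPNotImplemented(
                explanation="create is not implemented for quota sets")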

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1146586/+subscriptions



[Yahoo-eng-team] [Bug 1510415] Re: Linuxbridge agent failed to create some bridges after the os was rebooted

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510415

Title:
  Linuxbridge agent failed to create some bridges after the os was
  rebooted

Status in neutron:
  Expired

Bug description:
  When I rebooted the operating system hosting the Linuxbridge agent,
  the agent tried to recreate all the bridges according to the updated
  tap devices.

  But I found that some bridges were not created, along with some weird
  log messages.

  In l3-agent.log, the tap device appears to have been created at
  2015-10-27 00:53:15:

  2015-10-27 00:53:15.002 5135 DEBUG neutron.agent.linux.utils [-]
  Running command: ['sudo', '/usr/bin/neutron-rootwrap',
  '/etc/neutron/rootwrap.conf', 'ip', 'link', 'add', 'tap5f17438b-6e',
  'type', 'veth', 'peer', 'name', 'qr-5f17438b-6e', 'netns', 'qrouter-
  f3f17112-aadc-4649-9662-6bf25cad569d'] create_process
  /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:84

  But in linuxbridge-agent.log, the Linuxbridge agent thinks the tap
  device has not been created yet and gives up creating the related
  bridge at 2015-10-27 00:53:27:

  2015-10-27 00:53:27.774 5088 DEBUG 
neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent 
[req-c156ba1e-aafb-4e4e-ae15-ba9384bb1673 ] Port tap5f17438b-6e added 
treat_devices_added_updated 
/usr/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py:865
  2015-10-27 00:53:27.774 5088 INFO 
neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent 
[req-c156ba1e-aafb-4e4e-ae15-ba9384bb1673] Device tap5f17438b-6e not defined on 
plugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1510415/+subscriptions



[Yahoo-eng-team] [Bug 1161907] Re: port created with {"security_groups": null} is associated with the default security_group

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1161907

Title:
  port created with {"security_groups": null} is associated with the
  default security_group

Status in neutron:
  Expired

Bug description:
  I specified {"security_groups": null} in a port-create request, but the
  created port is associated with the default security group.
  If the security_groups attribute is None (i.e., null), is_attr_set() in
  attributes.py returns False, so the default security group is applied
  to the port.
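
  A sketch of the helper's behavior as described (simplified, not the
  verbatim neutron code):

    ATTR_NOT_SPECIFIED = object()

    def is_attr_set(attribute):
        # None is treated the same as "attribute not supplied", so an
        # explicit {"security_groups": null} falls back to the default
        # security group instead of meaning "no security groups".
        return attribute is not None and attribute is not ATTR_NOT_SPECIFIED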

  DEBUG: quantumclient.client
  REQ: curl -i http://10.56.51.252:9696/v2.0/ports.json -X POST -H "User-Agent: 
python-quantumclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 2893b4a0abcb401689cff3a7dd610b20" -d 
'{"port": {"network_id": "0a6930c9-8bcb-4c7b-9c85-32bcf364cd59", 
"security_groups": null, "admin_state_up": true}}'

  DEBUG: quantumclient.client RESP:{'date': 'Fri, 29 Mar 2013 14:36:55
  GMT', 'status': '201', 'content-length': '443', 'content-type':
  'application/json; charset=UTF-8'} {"port": {"status": "DOWN", "name":
  "", "admin_state_up": true, "network_id": "0a6930c9-8bcb-4c7b-
  9c85-32bcf364cd59", "tenant_id": "ffc9febde5ae48c9a7c50ab3d8a35706",
  "device_owner": "", "mac_address": "fa:16:3e:c3:6a:bd", "fixed_ips":
  [{"subnet_id": "b16fce6a-30e7-4084-be67-8c05f73b9172", "ip_address":
  "10.0.0.5"}], "id": "421cc09c-c13b-444d-a2cb-5c835231005d",
  "security_groups": ["6d1f7746-88c1-416c-8bc0-83bbd559b389"],
  "device_id": ""}}

  Created a new port:
  +-----------------+----------------------------------------------------------------------------------+
  | Field           | Value                                                                            |
  +-----------------+----------------------------------------------------------------------------------+
  | admin_state_up  | True                                                                             |
  | device_id       |                                                                                  |
  | device_owner    |                                                                                  |
  | fixed_ips       | {"subnet_id": "b16fce6a-30e7-4084-be67-8c05f73b9172", "ip_address": "10.0.0.5"} |
  | id              | 421cc09c-c13b-444d-a2cb-5c835231005d                                             |
  | mac_address     | fa:16:3e:c3:6a:bd                                                                |
  | name            |                                                                                  |
  | network_id      | 0a6930c9-8bcb-4c7b-9c85-32bcf364cd59                                             |
  | security_groups | 6d1f7746-88c1-416c-8bc0-83bbd559b389                                             |
  | status          | DOWN                                                                             |
  | tenant_id       | ffc9febde5ae48c9a7c50ab3d8a35706                                                 |
  +-----------------+----------------------------------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1161907/+subscriptions



[Yahoo-eng-team] [Bug 1495584] Re: VPNaaS: Help reduce cross project breakage

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495584

Title:
  VPNaaS: Help reduce cross project breakage

Status in neutron:
  Expired

Bug description:
  One issue that has been occurring is that a neutron project commit may
  change a method/attribute that the neutron-vpnaas project uses,
  resulting in breakage in the neutron-vpnaas project (which may not be
  detected for days).

  To help reduce this probability, there is a desire to have neutron
  commits run VPN unit and functional tests. This bug is to document the
  need for running VPN functional tests on neutron commits.

  Code review 203201 upstreamed (before this bug was created) a pair of
  neutron jobs that run VPN functional tests in the experimental queue.
  These jobs need to move to the check queue, and eventually the gate
  queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495584/+subscriptions



[Yahoo-eng-team] [Bug 1416427] Re: VPNaaS: Create functional tests for OpenSwan implementation

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416427

Title:
  VPNaaS: Create functional tests for OpenSwan implementation

Status in neutron:
  Expired

Bug description:
  Currently, there are no functional tests for the OpenSwan reference
  implementation of VPNaaS. We should develop tests to exercise this
  implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416427/+subscriptions



[Yahoo-eng-team] [Bug 1316731] Re: VPNAAS: Updating the peer id from an IP address to an email id leaves the ipsec site connection forever down; VMs across the sites are not able to ping each other

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316731

Title:
  VPNAAS: Updating the peer id from an IP address to an email id leaves
  the ipsec site connection forever down; VMs across the sites are not
  able to ping each other

Status in neutron:
  Expired

Bug description:
  Steps to Reproduce:
  1. Create two sites, each with a vpn service, vpn ike policy, ipsec
  policy and ipsec site connection.
  2. Make sure the VMs across the sites are able to ping each other with
  successful tunnel creation.
  3. Check the status of the operation on both sites:

  neutron ipsec-site-connection-list
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | id                                   | name           | peer_address   | peer_cidrs     | route_mode | auth_mode | status |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | 8af2322c-aaac-4de1-b026-d5a2afdc3845 | vpnconnection1 | $peer_address2 | "11.11.1.0/24" | static     | psk       | ACTIVE |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+

  neutron vpn-service-list
  +--------------------------------------+--------+----------------------------------+--------+
  | id                                   | name   | router_id                        | status |
  +--------------------------------------+--------+----------------------------------+--------+
  | 58caaf89-ecc2-4cf4-a86c-374b2d22dc35 | myvpn1 | 336c444b-22d1-40a8-ad9c-540635e2 | ACTIVE |
  +--------------------------------------+--------+----------------------------------+--------+

  neutron vpn-service-list
  +--------------------------------------+--------+--------------------------------------+--------+
  | id                                   | name   | router_id                            | status |
  +--------------------------------------+--------+--------------------------------------+--------+
  | 9408fed3-35e3-48c6-ae1c-23324eb9b108 | myvpn1 | cfd9c896-c56f-4da1-93b5-3591fc0a7841 | ACTIVE |
  +--------------------------------------+--------+--------------------------------------+--------+

  neutron ipsec-site-connection-list
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | id                                   | name           | peer_address   | peer_cidrs     | route_mode | auth_mode | status |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | 465cca84-49a4-4170-b15b-64d9a9664e90 | vpnconnection1 | $peer_address1 | "10.10.1.0/24" | static     | psk       | ACTIVE |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+

  neutron ipsec-site-connection-show 465cca84-49a4-4170-b15b-64d9a9664e90
  +----------------+----------------------------------------------------+
  | Field          | Value                                              |
  +----------------+----------------------------------------------------+
  | admin_state_up | True                                               |
  | auth_mode      | psk                                                |
  | description    |                                                    |
  | dpd            | {"action": "hold", "interval": 30, "timeout": 120} |
  | id             | 465cca84-49a4-4170-b15b-64d9a9664e90               |
  | ikepolicy_id   | 6159a86b-38f2-415e-b583-bca27b6b8c15               |
  | initiator      | bi-directional                                     |
  | ipsecpolicy_id | e63d8cef-56a0-4b13-9094-940256ce7cc8               |
  | mtu            | 1500                                               |
  | name           | vpnconnection1                                     |
  | peer_address   | $peer_address1                                     |
  | peer_cidrs     | 10.10.1.0/24                                       |
  | peer_id        | $peer_address1                                     |
  | psk            | secret                                             |
  | route_mode     | static                                             |
  | status         | ACTIVE                                             |
  | tenant_id      | d209c7ac08304ff48c59a53c2c47516c                   |
  | vpnservice_id  | 9408fed3-35e3-48c6-ae1c-23324eb9b108               |
  +----------------+----------------------------------------------------+

  Make sure the VMs across the sites can ping each other.

  4. Now update the peer id onto one of the 

[Yahoo-eng-team] [Bug 1441789] Re: VPNaaS: Confirm OpenSwan <--> StrongSwan interop

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441789

Title:
  VPNaaS: Confirm OpenSwan <--> StrongSwan interop

Status in neutron:
  Expired

Bug description:
  Some early testing showed a problem getting a VPN IPsec connection up
  and passing traffic when using StrongSwan on one end and OpenSwan on
  the other end (with the same, default, configuration). It worked fine
  when the same Swan flavor was used on each end.

  We need to investigate whether or not this works and, if it does not,
  research the root cause.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441789/+subscriptions



[Yahoo-eng-team] [Bug 1499224] Re: lb not deployable but still added into instance_mapping when lbaas agent restarts

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499224

Title:
  lb not deployable but still added into instance_mapping when lbaas
  agent restarts

Status in neutron:
  Expired

Bug description:
  A load balancer that is not deployable is still added into
  instance_mapping when the LBaaS agent restarts and reloads the load
  balancer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499224/+subscriptions



[Yahoo-eng-team] [Bug 1478778] Re: VPNaas: strongswan: cannot add more than one subnet to ipsec

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478778

Title:
  VPNaas: strongswan: cannot add more than one subnet to ipsec

Status in neutron:
  Expired

Bug description:
  I used the patch from the bug below (VPNaaS: Fedora support for
  StrongSwan) for VPNaaS on CentOS:
  https://bugs.launchpad.net/neutron/+bug/1441788

  1. I used a single node (kilo-vpnaas-centos71) with 2 routers and
  created the ike policy/ipsec policy/vpn-service/site connection; the
  tunnels came up fine:

  10.10.10.x/24 --- R1 --- R2 --- 20.20.20.x/24

  R1 to R2 on 192.168.122.202, 192.168.122.203.

  2. When I added one more interface to r1 and r2 (30.30.30.x and
  40.40.40.x respectively) and created the ike policy/ipsec
  policy/vpn-service/site connection, it did not create a new conn in the
  ipsec.conf file; rather, it overwrote the existing (10.10.10.x) conn in
  the ipsec.conf file.

  [root@ceos71 ~]# cat /var/lib/neutron/ipsec/70e88c46-c6b2-4c8d-afad-76ebd77b55cb/etc/strongswan/ipsec.conf
  # Configuration for vpn10
  config setup

  conn %default
  ikelifetime=60m
  keylife=20m
  rekeymargin=3m
  keyingtries=1
  authby=psk
  mobike=no

  conn 221c6d37-e7a1-4afc-8d0f-4de32df3818b   ### this is for 10.10.10.x
  keyexchange=ikev2
  left=192.168.122.202
  leftsubnet=10.10.10.0/24
  leftid=192.168.122.202
  leftfirewall=yes
  right=192.168.122.203
  rightsubnet=20.20.20.0/24
  rightid=192.168.122.203
  auto=route

  ### added 1 more subnet 30.30.30.x

  [root@ceos71 ~]# cat /var/lib/neutron/ipsec/70e88c46-c6b2-4c8d-afad-76ebd77b55cb/etc/strongswan/ipsec.conf
  # Configuration for vpn30
  config setup

  conn %default
  ikelifetime=60m
  keylife=20m
  rekeymargin=3m
  keyingtries=1
  authby=psk
  mobike=no

  conn 7b57fc83-3581-4e86-a193-e14474eef295   ### this is for 30.30.30.x; it overwrote the 10.10.10.x conn
  keyexchange=ikev2
  left=192.168.122.202
  leftsubnet=30.30.30.0/24 <
  leftid=192.168.122.202
  leftfirewall=yes
  right=192.168.122.203
  rightsubnet=40.40.40.0/24
  rightid=192.168.122.203
  auto=route

  3. My understanding is that it should add a new conn to the ipsec.conf
  file rather than overwriting the existing conn. Am I right?
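
  For illustration, the expected result would keep both sections side by
  side, roughly like this (a sketch assembled from the two configs above,
  with unchanged settings elided):

    # Configuration for vpn10 and vpn30
    config setup

    conn %default
        ...

    conn 221c6d37-e7a1-4afc-8d0f-4de32df3818b
        leftsubnet=10.10.10.0/24
        rightsubnet=20.20.20.0/24
        ...

    conn 7b57fc83-3581-4e86-a193-e14474eef295
        leftsubnet=30.30.30.0/24
        rightsubnet=40.40.40.0/24
        ...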

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478778/+subscriptions



[Yahoo-eng-team] [Bug 1449286] Re: lb's operating_status is not in DISABLED state when a user creates a loadbalancer with admin_state_up field as 'False'

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449286

Title:
  lb's operating_status is not in DISABLED state when a user creates a
  loadbalancer with admin_state_up field as 'False'

Status in neutron:
  Expired

Bug description:
  When I create a loadbalancer with the following body, the
  'operating_status' still shows 'ONLINE'; it should be 'DISABLED'.

  Steps to reproduce:

  POST http://:9696/v2.0/lbaas/loadbalancers with the required
  headers:

  Body:

  {
  "loadbalancer": {
  "vip_subnet_id": "",
  "admin_state_up": false
  }
  }

  GET: http://:9696/v2.0/lbaas/loadbalancers/

  Response:

  {
    "loadbalancers": [
      {
        "description": "",
        "admin_state_up": false,
        "tenant_id": "aad7bae2df174c1291bf994a8b8fac89",
        "provisioning_status": "ACTIVE",
        "listeners": [],
        "vip_address": "10.0.0.5",
        "vip_port_id": "59104203-e503-4d67-93ff-70a8df3c53c4",
        "provider": "haproxy",
        "vip_subnet_id": "94672fbb-0f7e-4c54-a538-a9826bd616d1",
        "id": "e1976562-b1f6-45cd-8e32-5b961f80fa24",
        "operating_status": "ONLINE",
        "name": ""
      }
    ]
  }
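
  A sketch of the expected mapping (illustrative only, not the actual
  neutron-lbaas code):

    def expected_operating_status(admin_state_up):
        # An administratively-down load balancer should not report
        # ONLINE; the expected value is DISABLED.
        return 'ONLINE' if admin_state_up else 'DISABLED'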

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449286/+subscriptions



[Yahoo-eng-team] [Bug 1484637] Re: security groups are not applied to an instance until a new instance is launched

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484637

Title:
  security groups are not applied to an instance until a new instance
  is launched

Status in neutron:
  Expired

Bug description:
  After creating a new security group in Horizon and applying it to
  existing instances, I notice it still does not permit traffic until a
  new instance is launched. The deployment was done using Packstack with
  the Juno release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1484637/+subscriptions



[Yahoo-eng-team] [Bug 1493524] Re: IPv6 support for DVR routers

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493524

Title:
  IPv6 support for DVR routers

Status in neutron:
  Expired

Bug description:
  This bug would capture all the IPv6 related work on DVR routers going
  forward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493524/+subscriptions



[Yahoo-eng-team] [Bug 1489183] Re: When a port is unbound from a compute node, the DVR scheduler needs to check whether the router can be deleted on the L3-agent

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489183

Title:
  When a port is unbound from a compute node, the DVR scheduler needs to
  check whether the router can be deleted on the L3-agent

Status in neutron:
  Expired

Bug description:
  My environment has a compute node and a controller node. On the
  compute node the L3-agent mode is 'dvr'; on the controller node the
  L3-agent mode is 'dvr-snat'. Nova-compute is only running on the
  compute node.

  Start: the compute node has no VMs running, and there are no namespaces
  on the compute node.

  1. Created a network and a router:
     neutron net-create demo-net
     neutron subnet-create sb-demo-net demo-net 10.1.2.0/24
     neutron router-create demo-router
     neutron router-interface-add demo-router sb-demo-net
     neutron router-gateway-set demo-router public

  demo-net's UUID is 0d3f0103-43e9-45a2-8ca2-b29700039297
  demo-router's UUID is 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b

  2. Created a port: 
  stack@Dvr-Ctrl2:~/DEVSTACK/demo$ neutron port-create demo-net
  The port's UUID is 278743d7-b057-4797-8b2b-faaf5fe13a4a

  Note: the port is not associated with a floating IP.

  3. Boot up a VM using the port:
  nova boot --flavor 1 --image  --nic 
port-id=278743d7-b057-4797-8b2b-faaf5fe13a4a  demo-p11vm01

  Wait for the VM to come up on the compute node.

  4. Deleted the VM.

  5. The port still exists and is now unbound from the compute node
  (device_owner and binding:host_id are now empty):

  stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron port-show 278743d7-b057-4797-8b2b-faaf5fe13a4a
  +-----------------------+----------------------------------------------------------------------------------+
  | Field                 | Value                                                                            |
  +-----------------------+----------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                             |
  | allowed_address_pairs |                                                                                  |
  | binding:host_id       |                                                                                  |
  | binding:profile       | {}                                                                               |
  | binding:vif_details   | {}                                                                               |
  | binding:vif_type      | unbound                                                                          |
  | binding:vnic_type     | normal                                                                           |
  | device_id             |                                                                                  |
  | device_owner          |                                                                                  |
  | extra_dhcp_opts       |                                                                                  |
  | fixed_ips             | {"subnet_id": "b45d41ca-134f-4274-bb05-50fab100315e", "ip_address": "10.1.2.4"} |
  | id                    | 278743d7-b057-4797-8b2b-faaf5fe13a4a                                             |
  | mac_address           | fa:16:3e:a6:f7:d1                                                                |
  | name                  |                                                                                  |
  | network_id            | 0d3f0103-43e9-45a2-8ca2-b29700039297                                             |
  | port_security_enabled | True                                                                             |
  | security_groups       | 8b68d1c9-cae7-4f0b-8fb5-6adb5a515246                                             |
  | status                | DOWN                                                                             |
  | tenant_id             | a7950bd5a61548ee8b03145cacf90a53                                                 |
  +-----------------------+----------------------------------------------------------------------------------+

  The router is still scheduled on the compute node:

  stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron l3-agent-list-hosting-router 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b
  +--------------------------------------+-------------+----------------+-------+----------+
  | id                                   | host        | admin_state_up | alive | ha_state |
  +--------------------------------------+-------------+----------------+-------+----------+
  | 2fc1f65b-4c05-4cec-95eb-93dda39a6eec | Dvr-Ctrl2   | True           | :-)   |          |
  | dae065fb-b140-4ece-8824-779cf6426337 | DVR-Compute

[Yahoo-eng-team] [Bug 1496201] Re: DVR: router namespace can't be deleted if VMs are bulk deleted

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496201

Title:
  DVR: router namespace can't be deleted if VMs are bulk deleted

Status in neutron:
  Expired

Bug description:
  With a DVR router, if we bulk delete VMs from a compute node, the
  router namespace will remain (this doesn't always happen, but it does
  for the most part).

  Reproduce steps:
  1. Create a DVR router and add a subnet to this router.
  2. Create two VMs on one compute node; note that these are the only
  two VMs on this compute node.
  3. Bulk delete these two VMs through the Nova API.

  The router namespace will remain most of the time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496201/+subscriptions



[Yahoo-eng-team] [Bug 1494098] Re: Add devstack manual into devref

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494098

Title:
  Add devstack manual into devref

Status in neutron:
  Expired

Bug description:
  Currently, neutron has a devstack script in tree, but the rules for
  maintaining it are unclear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494098/+subscriptions



[Yahoo-eng-team] [Bug 1441790] Re: Simplify and modernize model_query()

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441790

Title:
  Simplify and modernize model_query()

Status in neutron:
  Expired

Bug description:
  From zzzeek on IRC, 2015-04-08:

  this thing:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L486

  this is in a few places.   the model_query() for neutron is broken up
  into these three awkward phases, and several of these plugins put an
  unnecessary and expensive OUTER JOIN on all queries

  this should be an INNER JOIN and only when the filter_hook is actually
  in use

  now it's hard for me to change this because everyone will be like, it
  works great and nobody uses that thing so who cares
  but i really want to fix up how we build queries to be cleaner, using
  newer techniques

  there's a quick change we can make right there that will probably
  correct the outerjoin, we can do query.join() right in the
  _ml2_port_result_filter_hook for now
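
  A minimal sketch of that suggestion (illustrative; the models module
  and filter key are assumptions, not the actual ml2 code):

    def _ml2_port_result_filter_hook(query, filters):
        values = filters and filters.get('binding:host_id', [])
        if not values:
            return query
        # INNER JOIN, and only when the filter is actually in use,
        # instead of an unconditional OUTER JOIN on every model_query().
        return query.join(models.PortBinding).filter(
            models.PortBinding.host.in_(values))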

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441790/+subscriptions



[Yahoo-eng-team] [Bug 1505354] Re: oslo.db dependency changes breaks testing in neutron-lbaas

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505354

Title:
  oslo.db dependency changes breaks testing in neutron-lbaas

Status in neutron:
  Expired

Bug description:
  An oslo.db update removed testresources from its requirements. We
  must now require it ourselves.

  https://bugs.launchpad.net/nova/+bug/1503501

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505354/+subscriptions



[Yahoo-eng-team] [Bug 1513758] Re: dhcp-agent with reserved_dhcp_port raise cannot find tap device error

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513758

Title:
  dhcp-agent with reserved_dhcp_port raise cannot find tap device error

Status in neutron:
  Expired

Bug description:
  my env
  ======
  Upstream code.
  2 dhcp-agents, with dhcp_agents_per_network set to 2.
  Optional: check out [1] https://review.openstack.org/#/c/239264/ .

  steps to reproduce
  ==================
  1. Create a private net and its subnet; enable_dhcp is True by default.
  2. Verify that both dhcp-agents host the net (via "ip netns"), and note
  that dhcp-port tapA is used by dhcp-agent-1 and dhcp-port tapB by
  dhcp-agent-2.
  3. Stop/kill the two dhcp-agents.
  4. Update the two dhcp-ports' device_id from the previous value to
  "reserved_dhcp_port":
  >>neutron port-update --device_id='reserved_dhcp_port' PORT-ID
  5. Start the two dhcp-agents again; when dhcp-agent-1 tries to set up
  tapB and dhcp-agent-2 tries to set up tapA, an error like 'Cannot find
  device "tapX"' is raised.

  explanation
  -----------
  1. Step 4 simulates the remove_networks_from_down_agents case: when we
  stop/kill a dhcp-agent, even though we can check that it is no longer
  alive by "neutron agent-status", the dhcp-port it used will still not
  update its device_id to "reserved_dhcp_port" for a while; modifying it
  manually makes things quick.
  2. The patch in [1] is optional: even without it, this issue can still
  occur. Sometimes, because stale ports exist, the issue does not occur,
  but that is not a good reason to keep stale dhcp-ports. That patch
  helps clean up stale ports and makes this issue easier to see.

  TRACE log
  =========
  2015-11-06 05:46:41.634 DEBUG neutron.agent.linux.dhcp 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] Reloading allocations for network: 
79673257-aa5e-4d19-91b5-225391b2691c from (pid=20965) reload_allocations 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:466
  2015-11-06 05:46:41.635 DEBUG neutron.agent.linux.utils 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] Running command (rootwrap daemon): ['ip', 
'netns', 'exec', 'qdhcp-79673257-aa5e-4d19-91b5-225391b2691c', 'ip', 'route', 
'list', 'dev', 'tapbcd64879-be'] from (pid=20965) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:99
  2015-11-06 05:46:41.664 ERROR neutron.agent.linux.utils 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] 
  Command: ['ip', 'netns', 'exec', 
u'qdhcp-79673257-aa5e-4d19-91b5-225391b2691c', 'ip', 'route', 'list', 'dev', 
'tapbcd64879-be']
  Exit code: 1
  Stdin: 
  Stdout: 
  Stderr: Cannot find device "tapbcd64879-be"

  2015-11-06 05:46:41.665 ERROR neutron.agent.dhcp.agent 
[req-6e9631c6-84b4-4283-a975-cc40819b638d admin 
b7adf07ab24c40cc98f0f4835bb2e43d] Unable to reload_allocations dhcp for 
79673257-aa5e-4d19-91b5-225391b2691c.
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 467, in 
reload_allocations
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent 
self.device_manager.update(self.network, self.interface_name)
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1227, in update
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent 
self._set_default_route(network, device_name)
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1005, in 
_set_default_route
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent gateway = 
device.route.get_gateway()
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 710, in get_gateway
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent route_list_lines = 
self._run(options, tuple(args)).split('\n')
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 303, in _run
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent return 
self._parent._run(options, self.COMMAND, args)
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 67, in _run
  2015-11-06 05:46:41.665 TRACE neutron.agent.dhcp.agent return 

[Yahoo-eng-team] [Bug 1441982] Re: Localization framework in neutron is not working correctly.

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441982

Title:
  Localization framework in neutron is not working correctly.

Status in neutron:
  Expired

Bug description:
  Currently only the neutron-log-info.po file is present for the neutron
  code, but the code uses 'neutron' as the gettext domain.
  Also, the locale directory to use is fetched from an environment
  variable which is not set, so the locale files are not located
  successfully using the gettext utility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441982/+subscriptions



[Yahoo-eng-team] [Bug 1510411] Re: neutron-sriov-nic-agent raises UnsupportedVersion security_groups_provider_updated

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510411

Title:
  neutron-sriov-nic-agent raises UnsupportedVersion
  security_groups_provider_updated

Status in neutron:
  Expired

Bug description:
  neutron-sriov-nic-agent raises following exception:
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 1.3. 
Attempted method: security_groups_provider_updated
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _dispatch
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 1.3. Attempted 
method: security_groups_provider_updated
  2015-10-26 04:46:18.297 116015 ERROR oslo_messaging.rpc.dispatcher

  This VM was built with an SR-IOV port (macvtap).

   jenkins@cnt-14:~$ sudo virsh list --all
    Id    Name               State
   ------------------------------------
    10    instance-0003      paused

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1510411/+subscriptions



[Yahoo-eng-team] [Bug 1516296] Re: Use constants for protocol values in tests in load balancer components

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516296

Title:
  Use constants for protocol values in tests in load balancer components

Status in neutron:
  Expired

Bug description:
  I see literal protocol values used while testing the listener and
  pool. It would be good to use the constants which are already defined,
  in functional and unit tests.

  I was trying to implement case insensitivity for protocols, but since
  the protocol value is hardcoded in most tests, all the tests were
  breaking.

  The changes will be easy if we use the already-defined constants.
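
  A small sketch of the idea (the constant and helper names here are
  assumptions following the usual neutron-lbaas conventions, not
  verbatim test code):

    from neutron_lbaas.services.loadbalancer import constants as lb_const

    def test_create_listener_protocol(self):
        # Using the shared constant means a behavior change such as
        # case-insensitive protocol matching only touches one place.
        listener = self._create_listener(protocol=lb_const.PROTOCOL_HTTP)
        self.assertEqual(lb_const.PROTOCOL_HTTP, listener['protocol'])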

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1516296/+subscriptions



[Yahoo-eng-team] [Bug 1438321] Re: Fix process management for neutron-server

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438321

Title:
  Fix process management for neutron-server

Status in neutron:
  Expired

Bug description:
  The following commit to oslo-incubator [1], which was supposed to
  optimize waiting for child processes to exit, breaks neutron-server
  behavior (i.e. signal handling).

  1. In neutron-server, eventlet monkey-patching (including patching the
  os module) is done in the parent process. That is why os.waitpid(0, 0)
  in the _wait_child method also gets monkey-patched and eventlet goes
  crazy. Attaching strace to the parent process shows that
  os.waitpid(0, os.WNOHANG) is called, yet it is difficult to say what
  is really happening because the process does not react to termination
  signals (SIGTERM, SIGHUP, SIGINT).
  2. Due to the fact that neutron-server initializes two instances of
  ProcessLauncher in one parent process, calling
  eventlet.greenthread.sleep(self.wait_interval) seems to be the only
  way for the process to switch contexts and allow another instance of
  ProcessLauncher to call _wait_child. It is important to mention that
  ProcessLauncher is not supposed to be used in this way (2 instances in
  one parent process) at all.

  This bug is intended to track fixing the outlined problems on Neutron
  side.

  [1] https://github.com/openstack/oslo-
  incubator/commit/bf92010cc9d4c2876eaf6092713aafa94dcc8b35
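
  A minimal illustration of the first problem (a sketch, not oslo code):
  once eventlet monkey-patches the os module in the parent process,
  os.waitpid is the green version rather than the blocking libc call the
  launcher logic expects.

    import eventlet
    eventlet.monkey_patch(os=True)

    import os
    from eventlet.green import os as green_os

    # Expected to print True: the patcher has replaced os.waitpid with
    # the cooperative (green) implementation.
    print(os.waitpid is green_os.waitpid)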

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438321/+subscriptions



[Yahoo-eng-team] [Bug 1437996] Re: LBV2: when the number of listeners is large, count method of quota executes very slowly

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437996

Title:
  LBV2: when the number of listeners is large, count method of quota
  executes very slowly

Status in neutron:
  Expired

Bug description:
  When I created 800+ listeners, it took about 1 minute to create a new
  listener.
  CAUSE: the count method used by quotas is slow, since
  get_listeners_count is not implemented in the lb plugin, so it
  retrieves the list of all listeners instead.
  The same problem exists for loadbalancer, pool and healthmonitor.
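
  A sketch of the missing fast path (illustrative names; the common-db
  helper is assumed to be available on the plugin, as in other neutron
  plugins of that era):

    def get_listeners_count(self, context, filters=None):
        # Emit a SQL COUNT instead of materializing every listener row.
        return self._get_collection_count(context, models.Listener,
                                          filters=filters)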

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437996/+subscriptions



[Yahoo-eng-team] [Bug 1461102] Re: cascade in orm relationships shadows ON DELETE CASCADE

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461102

Title:
  cascade in orm relationships shadows ON DELETE CASCADE

Status in neutron:
  Expired

Bug description:
  In [1] there is a good discussion on how the 'cascade' property
  specified for sqlachemy.orm.relationship interacts with the 'ON DELETE
  CASCADE' specified in DDL.

  I stumbled on this when I was doing some DB access profiling and
  noticed multiple DELETE statements were emitted for a delete subnet
  operation [2], whereas I expected a single DELETE statement only; I
  expected that the cascade behaviour configured on db tables would have
  taken care of DNS servers, host routes, etc.

  What is happening is that sqlalchemy is performing orm-level cascading
  rather than relying on the database foreign key cascade options. And
  it's doing this because we told it to do so. As the SQLAlchemy
  documentation points out [3], there is no need to add the complexity
  of orm relationships if foreign keys are correctly configured on the
  database, and the passive_deletes option should be used.

  Enabling this option in place of all the cascade options for the
  relationship caused a single DELETE statement to be issued [4].
  This is not a massive issue (possibly the time spent in extra queries
  is just .5ms), but surely it is something worth doing - if nothing
  else because it seems Neutron is not using SQLAlchemy in the correct
  way.

  As someone who's been making this mistake for ages, for what it's
  worth this has been for me a moment where I realized that sometimes
  it's good to be told RTFM.

  
  [1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
  [2] http://paste.openstack.org/show/256289/
  [3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
  [4] http://paste.openstack.org/show/256301/
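
  A minimal sketch of the recommended pattern (table and column names
  here are illustrative, assuming the foreign keys are declared with
  ondelete='CASCADE' as in the neutron schema):

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(String(36), primary_key=True)
        # passive_deletes=True lets the database's ON DELETE CASCADE do
        # the work, so deleting a subnet emits one DELETE statement
        # instead of one per child row loaded into the session.
        dns_nameservers = relationship('DNSNameServer',
                                       passive_deletes=True)

    class DNSNameServer(Base):
        __tablename__ = 'dnsnameservers'
        address = Column(String(128), primary_key=True)
        subnet_id = Column(String(36),
                           ForeignKey('subnets.id', ondelete='CASCADE'),
                           primary_key=True)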

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461102/+subscriptions



[Yahoo-eng-team] [Bug 1452529] Re: Lbaas object query doesn't support sorting or paging

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452529

Title:
  Lbaas object query doesn't support sorting or paging

Status in neutron:
  Expired

Bug description:
  Lbaas object query doesn't support sorting or paging, while query of
  the basic Neutron objects like network, subnet and port does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452529/+subscriptions



[Yahoo-eng-team] [Bug 1451492] Re: DELETE /v2.0/ports/uuid.json causes SQL error in log

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451492

Title:
  DELETE /v2.0/ports/uuid.json causes SQL error in log

Status in neutron:
  Expired

Bug description:
  
  I'm running 20 tempest test_server_basic_ops scenario tests at the
  same time, and after a few iterations it fails during teardown with
  this stack:

  2015-05-04 08:57:25.769 20904 ERROR neutron.api.v2.resource 
[req-15065032-6513-4433-81b8-89bc53ea8c6a None] delete failed
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 476, in delete
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 1019, in 
delete_port
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource context, id, 
do_notify=False)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_dvr_db.py", line 203, in 
disassociate_floatingips
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
do_notify=do_notify)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 1185, in 
disassociate_floatingips
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource context, 
port_id)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/l3_db.py", line 915, in 
disassociate_floatingips
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 'router_id': 
None})
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 470, in 
__exit__
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self.rollback()
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in 
__exit__
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 467, in 
__exit__
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource self.commit()
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 377, in 
commit
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self._prepare_impl()
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 357, in 
_prepare_impl
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self.session.flush()
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1919, in 
flush
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
self._flush(objects)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2037, in 
_flush
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in 
__exit__
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2001, in 
_flush
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
flush_context.execute()
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource 
rec.execute(self)
  2015-05-04 08:57:25.769 20904 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 526, in 
execute
  2015-05-04 

[Yahoo-eng-team] [Bug 1449775] Re: Got server fault when set admin_state_up=false for health monitor

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449775

Title:
  Got server fault when set admin_state_up=false for health monitor

Status in neutron:
  Expired

Bug description:
  This happens in the ddt tempest tests for the neutron_lbaas v2 API.

  The error happens when the admin_state_up for the health_monitor is set
  to false; it causes the following errors:

  {0}
  
neutron_lbaas.tests.tempest.v2.ddt.test_health_monitor_admin_state_up.CreateHealthMonitorAdminStateTest.test_create_health_monitor_with_scenarios(lb_T,listener_T,pool_T,healthmonitor_F)
  [105.469598s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"neutron_lbaas/tests/tempest/v2/ddt/test_health_monitor_admin_state_up.py", 
line 122, in test_create_health_monitor_with_scenarios
  self.check_operating_status()
File "neutron_lbaas/tests/tempest/v2/ddt/base_ddt.py", line 203, in 
check_operating_status
  (self.load_balancer_id))
File "neutron_lbaas/tests/tempest/v2/clients/load_balancers_client.py", 
line 76, in get_load_balancer_status_tree
  resp, body = self.get(url)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 274, in get
  return self.request('GET', url, extra_headers, headers)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 646, in request
  resp, resp_body)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 760, in _error_checker
  message=message)
  tempest_lib.exceptions.ServerFault: Got server fault
  Details: Request Failed: internal server error while processing your 
request.

  
  {0} 
neutron_lbaas.tests.tempest.v2.ddt.test_health_monitor_admin_state_up.UpdateHealthMonitorAdminStateTest.test_update_health_monitor_with_admin_state_up(healthmonitor_to_flag_F,lb_T,listener_T,pool_T,healthmonitor_T)
 [117.025847s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"neutron_lbaas/tests/tempest/v2/ddt/test_health_monitor_admin_state_up.py", 
line 194, in test_update_health_monitor_with_admin_state_up
  self.check_operating_status()
File "neutron_lbaas/tests/tempest/v2/ddt/base_ddt.py", line 203, in 
check_operating_status
  (self.load_balancer_id))
File "neutron_lbaas/tests/tempest/v2/clients/load_balancers_client.py", 
line 76, in get_load_balancer_status_tree
  resp, body = self.get(url)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 274, in get
  return self.request('GET', url, extra_headers, headers)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 646, in request
  resp, resp_body)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 760, in _error_checker
  message=message)
  tempest_lib.exceptions.ServerFault: Got server fault
  Details: Request Failed: internal server error while processing your 
request.

  
  {0} 
neutron_lbaas.tests.tempest.v2.ddt.test_health_monitor_admin_state_up.UpdateHealthMonitorAdminStateTest.test_update_health_monitor_with_admin_state_up(healthmonitor_to_flag_F,lb_T,listener_T,pool_T,healthmonitor_F)
 [116.775858s] ... FAILED
  Captured traceback-1:
  ~
  Traceback (most recent call last):
File 
"neutron_lbaas/tests/tempest/v2/ddt/test_health_monitor_admin_state_up.py", 
line 194, in test_update_health_monitor_with_admin_state_up
  self.check_operating_status()
File "neutron_lbaas/tests/tempest/v2/ddt/base_ddt.py", line 203, in 
check_operating_status
  (self.load_balancer_id))
File "neutron_lbaas/tests/tempest/v2/clients/load_balancers_client.py", 
line 76, in get_load_balancer_status_tree
  resp, body = self.get(url)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 274, in get
  return self.request('GET', url, extra_headers, headers)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 646, in request
  resp, resp_body)
File 
"/usr/local/lib/python2.7/dist-packages/tempest_lib/common/rest_client.py", 
line 760, in _error_checker
  message=message)
  tempest_lib.exceptions.ServerFault: Got server fault
  Details: Request Failed: internal server error while processing your 
request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1515232] Re: LBaaS v2 Radware driver fails to provision when no private key passphrase supplied for TLS certificate

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515232

Title:
  LBaaS v2 Radware driver fails to provision when no private key
  passphrase supplied for TLS certificate

Status in neutron:
  Expired

Bug description:
  When no passphrase exists for the TLS certificate associated with a
  listener, the Radware provider fails.
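
  A minimal sketch of the kind of guard that would avoid this, assuming the
  driver receives the certificate via an object exposing getter methods
  (all names here are illustrative, not the actual Radware driver API):

      # Hypothetical guard: treat a missing passphrase as "no passphrase"
      # instead of passing None through to the backend.
      def build_tls_config(cert):
          cfg = {
              'certificate': cert.get_certificate(),
              'private_key': cert.get_private_key(),
          }
          passphrase = cert.get_private_key_passphrase()
          if passphrase:  # only send the field when one actually exists
              cfg['passphrase'] = passphrase
          return cfg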

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465837] Re: Linux bridge: Dnsmasq is being passed None as an interface

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465837

Title:
  Linux bridge: Dnsmasq is being passed None as an interface

Status in neutron:
  Expired

Bug description:
  Command: ['ip', 'netns', 'exec', u'qdhcp-8faea227-4ae4-498d-
  bfc4-d6e656c77f81', 'dnsmasq', '--no-hosts', '--no-resolv', '--strict-
  order', '--bind-interfaces', '--interface=None', '--except-
  interface=lo', u'--pid-file=/opt/stack/data/neutron/dhcp/8faea227-4ae4
  -498d-bfc4-d6e656c77f81/pid', u'--dhcp-
  hostsfile=/opt/stack/data/neutron/dhcp/8faea227-4ae4-498d-
  bfc4-d6e656c77f81/host', u'--addn-
  hosts=/opt/stack/data/neutron/dhcp/8faea227-4ae4-498d-
  bfc4-d6e656c77f81/addn_hosts', u'--dhcp-
  optsfile=/opt/stack/data/neutron/dhcp/8faea227-4ae4-498d-
  bfc4-d6e656c77f81/opts', u'--dhcp-
  leasefile=/opt/stack/data/neutron/dhcp/8faea227-4ae4-498d-
  bfc4-d6e656c77f81/leases', '--dhcp-
  range=set:tag0,10.100.0.0,static,86400s', '--dhcp-
  range=set:tag1,2003::,static,64,86400s', '--dhcp-lease-max=16777216',
  '--conf-file=', '--domain=openstacklocal']

  http://logs.openstack.org/35/187235/3/experimental/check-tempest-dsvm-
  neutron-linuxbridge/35c6dac/logs/screen-q-dhcp.txt.gz?level=ERROR

  
  DevStack Configuration:

  http://git.openstack.org/cgit/openstack-infra/project-
  config/tree/jenkins/jobs/neutron.yaml#n182
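
  The `--interface=None` in the command above strongly suggests a missing
  interface name was formatted straight into the dnsmasq command line. A
  minimal sketch of a guard that would surface the real problem earlier
  (illustrative only, not the actual DHCP driver code):

      # Hypothetical check before building the dnsmasq command: fail
      # loudly instead of emitting --interface=None.
      def build_interface_arg(network):
          if not network.interface:
              raise RuntimeError(
                  'No DHCP interface plugged for network %s' % network.id)
          return '--interface=%s' % network.interface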

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1465837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473194] Re: Grenade tests fail for *aaS migrations

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473194

Title:
  Grenade tests fail for *aaS migrations

Status in neutron:
  Expired

Bug description:
  I found that when Grenade runs (check-grenade-dsvm-function), it runs
  the migration for *aaS repos BEFORE the upgrade, but not as part of the
  upgrade.
  In my case, the migration modified a table and the migration never
  ran. This fails a bunch of tests as a result.

  http://logs.openstack.org/70/199670/2/check/check-grenade-dsvm-
  neutron/8000a62/

  Applies to all the advanced services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489085] Re: Inconsistent path naming convention in API calls from neutronclient

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489085

Title:
  Inconsistent path naming convention in API calls from neutronclient

Status in neutron:
  Expired

Bug description:
  Some of the path name calls from the neutronclient use underscores
  (bandwidth_limit_rules), and some use a hyphen/dash (rbac-policies, for
  example). They need to be made consistent.
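
  A one-place normalization would make this consistent; a minimal sketch,
  assuming collection names start out as underscore-style Python
  identifiers (illustrative, not the actual neutronclient code):

      # Hypothetical helper: derive every URL path from the Python-style
      # collection name so hyphen/underscore choices cannot diverge.
      def resource_path(collection, resource_id=None):
          path = '/%s' % collection.replace('_', '-')
          if resource_id is not None:
              path += '/%s' % resource_id
          return path

      # resource_path('bandwidth_limit_rules') -> '/bandwidth-limit-rules'
      # resource_path('rbac_policies', 'abc')  -> '/rbac-policies/abc'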

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475096] Re: Host and device info need to get migrated to the VM host paired port that is found on the FIP table

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475096

Title:
  Host and device info need to get migrated to the VM host paired port
  that is found on the FIP table

Status in neutron:
  Expired

Bug description:
  When a VM host is created, a port is bound to this VM. Later on,
  if an FIP agent gateway port gets paired with this VM host port,
  it is bound to this VM. However, this FIP port's host and device
  information remains empty as of today. Moreover, while performing
  port disassociation on FIP table, this FIP port would get deleted
  as it can't be recognized as a DVR serviceable port.

  Host and device info need to be migrated during the assignment
  process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489650] Re: Prefix delegation testing issues

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489650

Title:
  Prefix delegation testing issues

Status in neutron:
  Expired

Bug description:
  The pd, dibbler and agent side changes lack functional tests. There is
  no test that validates that the entire feature works (Full stack or
  Tempest).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472634] Re: Improve netns_cleanup functional test

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472634

Title:
  Improve netns_cleanup functional test

Status in neutron:
  Expired

Bug description:
  Currently the functional test for the netns_cleanup utility does not
  verify that processes spawned by the DHCP agent get killed.
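
  A sketch of the missing assertion, assuming the test can list the PIDs
  of processes still running inside a namespace (the helper passed in is
  hypothetical):

      # Hypothetical assertion: after running netns_cleanup, no dnsmasq or
      # metadata-proxy processes should survive in the namespace.
      def assert_no_processes_left(test, namespace, list_pids_in_namespace):
          pids = list_pids_in_namespace(namespace)
          test.assertEqual([], pids,
                           'processes still running in %s' % namespace)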

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463589] Re: rules referencing security group members expose VMs in overlapping IP scenarios

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463589

Title:
  rules referencing security group members expose VMs in overlapping IP
  scenarios

Status in neutron:
  Expired

Bug description:
  create SG1 and SG2 that only allow traffic to members of their own group
  create two networks with same 10.0.0.0/24 CIDR
  create port1 in SG1 on net1 with IP 10.0.0.1
  create port2 in SG1 on net2 with IP 10.0.0.2
  create port3 in SG2 on net1 with IP 10.0.0.2

  port1 can communicate with port3 because of the allow rule for port2's
  IP

  This violates the constraints of the configured security groups.

  Another incarnation of the bug happens if you:

  (graphic representation: 
https://bugs.launchpad.net/neutron/+bug/1463589/+attachment/4416693/+files/sg-disjoint-networks-bug-with-router.png)
  create SG1 and SG2, which only allow traffic to members of their own group
  create two network segments (N1, N2)
  create another network segment (N3)
  add a router R that connects N1 to N3

  then add IPa, IPb to SG1 on N1
  add IPc, IPd to SG1 on N2

  then add IPc and IPd to SG2 on N3

  IPa, and IPb will accept traffic from ports with IPc and IPd on SG2
  even if they should not.
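
  The root cause is that remote-group rules are expanded into bare IP
  addresses, so the network a member lives on is lost. A minimal sketch of
  the collision (illustrative data structures, not the actual firewall
  driver):

      # Members are tracked as plain IPs per security group, so two ports
      # on different networks sharing 10.0.0.2 become indistinguishable.
      sg_members = {
          'SG1': {'10.0.0.1', '10.0.0.2'},   # port1 (net1), port2 (net2)
          'SG2': {'10.0.0.2'},               # port3 (net1)
      }
      # An "allow from SG1" rule on port1 compiles to "allow from
      # 10.0.0.1, 10.0.0.2" -- which also matches port3 on net1,
      # violating the security group isolation.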

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480039] Re: User cannot delete the port which is used by lb vip, but user can update it device_owner field

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1480039

Title:
  User cannot delete the port which is used by lb vip, but user can
  update its device_owner field

Status in neutron:
  Expired

Bug description:
  There is a port used by a load balancer VIP. My environment did not set
up the lb-agent, so I just wanted to delete the port.
  The server responded with "cannot be deleted directly via the port API:
has device owner neutron:LOADBALANCER". However, the user can update this
port's device_owner to anything, and after re-running port-delete the
server responds with "Request Failed: internal server error while
processing your request".
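
  A sketch of the kind of validation that would close the bypass, assuming
  it runs in update_port before the DB write (illustrative, not the actual
  ML2 code):

      # Hypothetical guard: refuse device_owner edits on ports that
      # neutron services own, so delete protection cannot be sidestepped.
      PROTECTED_PREFIX = 'neutron:'

      def validate_device_owner_update(original_port, new_attrs):
          if ('device_owner' in new_attrs and
                  original_port['device_owner'].startswith(PROTECTED_PREFIX)):
              raise ValueError('device_owner of service-owned ports '
                               'cannot be changed via the port API')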

  And the err log from (neutron)server.log is :
   from (pid=4719) _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:224
  2015-07-31 11:20:50.238 DEBUG neutron.plugins.ml2.managers 
[req-9f8e0ef0-14b8-418f-9213-c5fd1df46cdb admin 
e921522145ec4c4082844237991d5d01] Extended port dict for driver 'port_security' 
from (pid=4719) extend_port_dict 
/opt/stack/neutron/neutron/plugins/ml2/managers.py:821
  2015-07-31 11:20:50.239 INFO neutron.wsgi 
[req-9f8e0ef0-14b8-418f-9213-c5fd1df46cdb admin 
e921522145ec4c4082844237991d5d01] 10.250.10.88 - - [31/Jul/2015 11:20:50] "GET 
/v2.0/ports.json?fields=id=d7c270ef-2a37-413f-99a3-8299aa96dc01 HTTP/1.1" 
200 272 0.094210
  2015-07-31 11:20:50.262 DEBUG neutron.plugins.ml2.managers 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Extended port dict for driver 'port_security' 
from (pid=4719) extend_port_dict 
/opt/stack/neutron/neutron/plugins/ml2/managers.py:821
  2015-07-31 11:20:50.263 DEBUG neutron.plugins.ml2.plugin 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Deleting port 
d7c270ef-2a37-413f-99a3-8299aa96dc01 from (pid=4719) _pre_delete_port 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py:1253
  2015-07-31 11:20:50.263 DEBUG neutron.callbacks.manager 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Notify callbacks for port, before_delete from 
(pid=4719) _notify_loop /opt/stack/neutron/neutron/callbacks/manager.py:135
  2015-07-31 11:20:50.264 DEBUG neutron.callbacks.manager 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Calling callback 
neutron_lbaas.db.loadbalancer.loadbalancer_db._prevent_lbaas_port_delete_callback
 from (pid=4719) _notify_loop 
/opt/stack/neutron/neutron/callbacks/manager.py:141
  2015-07-31 11:20:50.280 DEBUG neutron.callbacks.manager 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Calling callback 
neutron.db.l3_db._prevent_l3_port_delete_callback from (pid=4719) _notify_loop 
/opt/stack/neutron/neutron/callbacks/manager.py:141
  2015-07-31 11:20:50.307 DEBUG neutron.plugins.ml2.managers 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Extended port dict for driver 'port_security' 
from (pid=4719) extend_port_dict 
/opt/stack/neutron/neutron/plugins/ml2/managers.py:821
  2015-07-31 11:20:50.324 DEBUG neutron.plugins.ml2.managers 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Extended network dict for driver 
'port_security' from (pid=4719) extend_network_dict 
/opt/stack/neutron/neutron/plugins/ml2/managers.py:807
  2015-07-31 11:20:50.328 DEBUG neutron.plugins.ml2.db 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] For port 
d7c270ef-2a37-413f-99a3-8299aa96dc01, host allinone, got binding levels [] from 
(pid=4719) get_binding_levels /opt/stack/neutron/neutron/plugins/ml2/db.py:177
  2015-07-31 11:20:50.333 DEBUG neutron.plugins.ml2.plugin 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] Calling delete_port for 
d7c270ef-2a37-413f-99a3-8299aa96dc01 owned by bzhao from (pid=4719) delete_port 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py:1317
  2015-07-31 11:20:50.337 ERROR neutron.api.v2.resource 
[req-fdb7d324-b110-4123-b48f-d390071a21fd admin 
e921522145ec4c4082844237991d5d01] delete failed
  2015-07-31 11:20:50.337 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-07-31 11:20:50.337 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-07-31 11:20:50.337 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-31 11:20:50.337 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
  2015-07-31 11:20:50.337 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-07-31 11:20:50.337 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1486150] Re: Neutron port-update fails to roll back the binding:profile data in ports table in neutron DB if a MechanismDriverError thrown by a mechanism driver

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486150

Title:
  Neutron port-update fails to roll back the binding:profile data in
  ports table in neutron DB if a MechanismDriverError thrown by a
  mechanism driver

Status in neutron:
  Expired

Bug description:
  Steps to reproduce:
  1. Override update_port_precommit(self, context) in the vendor
mechanism driver
  2. Throw MechanismDriverError in this method based on validation

  Expected output:
  None of the updated port parameters should be persisted to the ports
table or the ml2_port_bindings table.

  Actual output:
  Even though the vendor mechanism driver throws MechanismDriverError, the
ML2 plugin still updates the value in the DB.
  Attached the logs in paste site:
  http://paste.openstack.org/show/420233/
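
  For reference, the reproduction boils down to a driver like the
  following; per the ML2 contract, raising in *_precommit should abort the
  surrounding transaction, so nothing should be persisted (a minimal
  sketch under that assumption; import paths match the Kilo-era tree and
  may differ by release):

      from neutron.plugins.ml2 import driver_api as api
      from neutron.plugins.ml2.common import exceptions as ml2_exc

      class VendorMechanismDriver(api.MechanismDriver):
          def initialize(self):
              pass

          def update_port_precommit(self, context):
              # Raising here is supposed to roll back the port update;
              # the bug is that binding:profile changes survive anyway.
              raise ml2_exc.MechanismDriverError(
                  method='update_port_precommit')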

  Neutron port-update REST API output:

  Earlier neutron port was created with local_link_information:
  “port_id” as Ten-GigabitEthernet1/0/36 using port-create command

  sdn@IronicVM:/opt/stack/logs$ curl -g -i -X PUT \
"http://localhost:9696/v2.0/ports/a3f10e8b-ee32-4e39-85cc-9ae34335302f" \
-H "User-Agent: python-neutronclient" -H "Content-Type: application/json" \
-H "Accept: application/json" -H "X-Auth-Token: fab9f04cb1eb47b48848fe25b61678cf" \
-d '{"port": {"binding:profile": {"local_link_information": [{"switch_id": "44:31:92:61:89:d2", "port_id": "Ten-GigabitEthernet1/0/38"}], "bind_requested": false}, "binding:vnic_type": "baremetal", "binding:host_id": "baremetal", "name": "P1", "admin_state_up": true}}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json; charset=UTF-8
  Content-Length: 108
  X-Openstack-Request-Id: req-2d6a5ea7-1389-40f3-abb9-86b898c256e5
  Date: Fri, 15 Apr 2016 11:30:30 GMT

  {"NeutronError": {"message": "update_port_precommit failed.", "type": 
"MechanismDriverError", "detail": ""}}sdn@IronicVM:/opt/stack/logs$
  sdn@IronicVM:/opt/stack/logs$

  sdn@IronicVM:/opt/stack/logs$ neutron port-show p1
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | 
  |
  | binding:host_id   | baremetal   
  |
  | binding:profile   | {"local_link_information": [{"port_id": 
"Ten-GigabitEthernet1/0/38", "switch_id": "44:31:92:61:89:d2"}], 
"bind_requested": false} |
  | binding:vif_details   | {}  
  |
  | binding:vif_type  | unbound 
  |
  | binding:vnic_type | baremetal   
  |
  | device_id | 
  |
  | device_owner  | 
  |
  | extra_dhcp_opts   | 
  |
  | fixed_ips | 
  |
  | id| a3f10e8b-ee32-4e39-85cc-9ae34335302f
  |
  | mac_address   | fa:16:3e:60:c7:b8   
  |
  | name  | P1  
  |
  | network_id| 

[Yahoo-eng-team] [Bug 1469500] Re: lbaasV2. kill haproxy process - process is not recover automatically

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469500

Title:
  lbaasV2. kill haproxy process - process is not recover automatically

Status in neutron:
  Expired

Bug description:
  I killed the haproxy process and the process did not recover automatically. 
  We believe we should make it recover automatically. 
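
  Neutron already ships a ProcessMonitor abstraction for exactly this; a
  sketch of wiring haproxy into it (the constructor and register
  signatures are from memory of the Kilo-era code and may differ; conf,
  loadbalancer_id and haproxy_process_manager are placeholders for objects
  the agent already holds):

      from neutron.agent.linux import external_process

      # Hypothetical wiring: let the agent's ProcessMonitor respawn
      # haproxy if the process dies, instead of leaving the LB down.
      pm = external_process.ProcessMonitor(config=conf,
                                           resource_type='loadbalancer')
      pm.register(uuid=loadbalancer_id,
                  service_name='haproxy',
                  monitored_process=haproxy_process_manager)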

  
  kilo + rhel7.1
  openstack-neutron-common-2015.1.0-10.el7ost.noarch
  python-neutron-lbaas-2015.1.0-5.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-10.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-10.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490900] Re: Update onlink routes when subnet is added to an external network

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490900

Title:
  Update onlink routes when subnet is added to an external network

Status in neutron:
  Expired

Bug description:
  When adding a new subnet to an external network that is connected to a
router, the onlink route is not added.
  After restarting the Neutron L3 agent, the route is added.

  Please refer to 
  https://bugs.launchpad.net/neutron/+bug/1312467 
  for additional information regarding the onlink routes.

  When adding an external network with multiple subnets to a router the
  routes are added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486199] Re: Fullstack tests sometimes crash OVS, causing subsequent tests to fail

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486199

Title:
  Fullstack tests sometimes crash OVS, causing subsequent tests to fail

Status in neutron:
  Expired

Bug description:
  I've observed both in the gate and locally, both in Ubuntu 14.04 with
  OVS 2.0 and Fedora 22 with OVS 2.3.1, that sometimes a full stack test
  can crash OVS. Subsequent tests in the same run will obviously fail.

  To get around this issue locally I restart the OVS service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488771] Re: multiple deletes in firewall tempest case: "test_create_show_delete_firewall" cause l3-agent throws unexpected exception: "FirewallNotFound".

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488771

Title:
  multiple deletes in firewall tempest case:
  "test_create_show_delete_firewall" cause l3-agent throws unexpected
  exception: "FirewallNotFound".

Status in neutron:
  Expired

Bug description:
  In the kilo or icehouse release: multiple deletes in the firewall
  tempest case "test_create_show_delete_firewall" cause the l3-agent to
  throw an unexpected exception: "FirewallNotFound".

  I am running tempest against the kilo release; after running the neutron
  case "test_create_show_delete_firewall", my l3-agent reports the
  following errors and exceptions:

  In this tempest case:
  I found that delete_firewall is called twice: the second call (in the
addCleanup method) runs immediately after the first
(self.client.delete_firewall).
  This looks like an async-call locking problem; I don't know whether the
current log/implementation/behavior is expected or not.

  ==
  Tempest test case in the file: tempest/api/network/test_fwaas_extensions.py:
  ==
  def test_create_show_delete_firewall(self):
  ...
  self.addCleanup(self._try_delete_firewall, firewall_id)
  ...
  self.client.delete_firewall(firewall_id)
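
  Since the cleanup path makes double deletion legitimate, the RPC
  callback could treat a missing firewall as already deleted instead of
  raising. A minimal sketch of an idempotent variant (names follow the
  fwaas_plugin code visible in the traceback below; _finish_delete is a
  hypothetical helper standing in for the rest of the method):

      # Hypothetical idempotent firewall_deleted(): a second delete for
      # the same id becomes a no-op instead of a traceback.
      def firewall_deleted(self, context, firewall_id, **kwargs):
          try:
              fw_db = self.plugin._get_firewall(context, firewall_id)
          except fw_ext.FirewallNotFound:
              LOG.debug('Firewall %s already gone; ignoring', firewall_id)
              return True
          return self._finish_delete(context, fw_db)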

  ==
  my l3-agent log:
  ==
  2015-08-25 08:34:00.420 31255 INFO neutron.wsgi 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] 10.133.5.167 - - [25/Aug/2015 
08:34:00] "DELETE /v2.0/fw/firewalls/2b3102d9-1925-47b3-bca3-a8cd0296cc8c 
HTTP/1.1" 204 168 0.237354  <- First Delete FW call
  ...
  2015-08-25 08:34:00.725 31255 INFO neutron.wsgi 
[req-795bcbcf-5fde-43d6-8a66-5e2b3fdad44f ] 10.133.5.167 - - [25/Aug/2015 
08:34:00] "DELETE /v2.0/fw/firewalls/2b3102d9-1925-47b3-bca3-a8cd0296cc8c 
HTTP/1.1" 204 168 0.299331  <- Second Delete FW call
  ...
  2015-08-25 08:34:01.069 31255 DEBUG neutron_fwaas.db.firewall.firewall_db 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] delete_firewall() called 
delete_firewall 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:318  
<- First Delete FW database operation
  ...
  2015-08-25 08:34:01.098 31255 ERROR oslo_messaging.rpc.dispatcher 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] Exception during message handling: 
Firewall 2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be found.  <-- Second 
Delete FW throw exception
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py",
 line 67, in firewall_deleted
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher fw_db = 
self.plugin._get_firewall(context, firewall_id)
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py", 
line 101, in _get_firewall
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher raise 
fw_ext.FirewallNotFound(firewall_id=id)
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher 
FirewallNotFound: Firewall 2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be 
found.
  2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher
  2015-08-25 08:34:01.098 31255 ERROR oslo_messaging._drivers.common 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] Returning exception Firewall 
2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be found. to caller
  2015-08-25 08:34:01.099 31255 ERROR oslo_messaging._drivers.common 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply\nexecutor_callback))\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch\nexecutor_callback)\n', '  File 

[Yahoo-eng-team] [Bug 1469498] Re: LbaasV2 session persistence- Create and update

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469498

Title:
  LbaasV2 session persistence- Create and update

Status in neutron:
  Expired

Bug description:
  When we create a Lbaas pool with session persistence it configured OK

  neutron lbaas-pool-create --session-persistence type=HTTP_COOKIE  
--lb-algorithm LEAST_CONNECTIONS --listener 
4658a507-dccc-41f9-87d7-913d31cab3a1 --protocol HTTP 
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | a626dc28-0126-48f7-acd3-f486827a89c1   |
  | lb_algorithm| LEAST_CONNECTIONS  |
  | listeners   | {"id": "4658a507-dccc-41f9-87d7-913d31cab3a1"} |
  | members ||
  | name||
  | protocol| HTTP   |
  | session_persistence | {"cookie_name": null, "type": "HTTP_COOKIE"}   |
  | tenant_id   | ae0954b9cf0c438e99211227a7f3f937   |

  BUT, when we create a pool without session persistence and update it
  to do session persistence, the action is different and not user
  friendly.

  [root@puma09 ~(keystone_redhat)]# neutron lbaas-pool-create --lb-algorithm 
LEAST_CONNECTIONS --listener 4658a507-dccc-41f9-87d7-913d31cab3a1 --protocol 
HTTP 
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | b9048a69-461a-4503-ba6b-8a2df281f804   |
  | lb_algorithm| LEAST_CONNECTIONS  |
  | listeners   | {"id": "4658a507-dccc-41f9-87d7-913d31cab3a1"} |
  | members ||
  | name||
  | protocol| HTTP   |
  | session_persistence ||
  | tenant_id   | ae0954b9cf0c438e99211227a7f3f937   |
  +-++
  [root@puma09 ~(keystone_redhat)]# neutron lbaas-pool-update 
b9048a69-461a-4503-ba6b-8a2df281f804 --session-persistence type=HTTP_COOKIE
  name 'HTTP_COOKIE' is not defined
  [root@puma09 ~(keystone_redhat)]# 


  we need to configure it in the following way- 
  neutron lbaas-pool-update b9048a69-461a-4503-ba6b-8a2df281f804 
--session-persistence type=dict type=HTTP_COOKIE
  Updated pool: b9048a69-461a-4503-ba6b-8a2df281f804

  The create and update operations should accept the same syntax.

  Kilo+ rhel 7.1
  openstack-neutron-common-2015.1.0-10.el7ost.noarch
  python-neutron-lbaas-2015.1.0-5.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-10.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-10.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463375] Re: Use fanout RPC message to notify the security group's change

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463375

Title:
  Use fanout RPC message to notify the security group's change

Status in neutron:
  Expired

Bug description:
  When a security group's members or rules change, if the server only
notifies the l2 agents with 'security_groups_member_updated' or
'security_groups_rule_updated', all related l2 agents then need to fetch
the security group details from neutron-server over RPC; when the number
of l2 agents is large, the load on neutron-server is heavy.
  We can instead use a fanout RPC message carrying the changed SG details
to notify the l2 agents; agents that host related devices update the SG
information in their memory and no longer need to fetch the details over
RPC.
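
  For illustration, a fanout cast with oslo.messaging looks like the
  following; the topic name and payload are made up here, not the actual
  neutron agent topics:

      import oslo_messaging
      from oslo_config import cfg

      transport = oslo_messaging.get_transport(cfg.CONF)
      target = oslo_messaging.Target(topic='sg-updates')  # hypothetical
      client = oslo_messaging.RPCClient(transport, target)

      # fanout=True delivers the full SG detail to every listening agent,
      # so agents no longer have to call back to the server for it.
      client.prepare(fanout=True).cast(
          {}, 'security_groups_rule_updated', sg_detail={'id': 'SG1'})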

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482519] Re: Missing functional tests for ovs_lib ofctl calls

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1482519

Title:
  Missing functional tests for ovs_lib ofctl calls

Status in neutron:
  Expired

Bug description:
  ovs_lib functions related to ofctl calls do not have functional tests.
  There are unit tests that mock these functions, but separate functional
  tests are required to verify the flow-management functions (e.g.
  add/delete/dump flows).
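
  A sketch of what such a test could look like against a real bridge;
  class and method names (BaseSudoTestCase, add_flow, dump_flows_for_table,
  delete_flows) follow the code base of that era but should be verified
  against the current API:

      from neutron.agent.common import ovs_lib
      from neutron.tests.functional import base

      class OVSFlowTestCase(base.BaseSudoTestCase):
          def test_add_and_delete_flow(self):
              br = ovs_lib.OVSBridge('test-ofctl-br')
              br.create()
              self.addCleanup(br.destroy)
              # Install a drop flow, then verify it via a real dump.
              br.add_flow(table=0, priority=10, proto='ip', actions='drop')
              self.assertIn('drop', str(br.dump_flows_for_table(0)))
              br.delete_flows(table=0)
              self.assertNotIn('drop', str(br.dump_flows_for_table(0)))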

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1482519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498151] Re: neutron-server on restart is triggering AVCs on a number of files

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498151

Title:
  neutron-server on restart is triggering AVCs on a number of files

Status in neutron:
  Expired

Bug description:
  Using Delorean packages installed via packstack --allinone, I noticed
  that if we restart neutron-server it generates a number of AVCs for
  getattr on 87 files

  neutron==7.0.0.0b4.dev223

  sample of a few entries from /var/log/audit.log from centos 7

  type=AVC msg=audit(1442855709.922:10594): avc:  denied  { getattr } for  
pid=16273 comm="neutron-server" path="/usr/bin/hostname" dev="dm-0" 
ino=67231056 scontext=system_u:system_r:neutron_t:s0 
tcontext=system_u:object_r:hostname_exec_t:s0 tclass=file
  type=AVC msg=audit(1442855709.922:10595): avc:  denied  { getattr } for  
pid=16273 comm="neutron-server" path="/usr/bin/fusermount" dev="dm-0" 
ino=70253714 scontext=system_u:system_r:neutron_t:s0 
tcontext=system_u:object_r:fusermount_exec_t:s0 tclass=file
  type=AVC msg=audit(1442855709.922:10596): avc:  denied  { getattr } for  
pid=16273 comm="neutron-server" path="/usr/bin/glance-api" dev="dm-0" 
ino=69439463 scontext=system_u:system_r:neutron_t:s0 
tcontext=system_u:object_r:glance_api_exec_t:s0 tclass=file
  type=AVC msg=audit(1442855709.922:10597): avc:  denied  { getattr } for  
pid=16273 comm="neutron-server" path="/usr/bin/glance-registry" dev="dm-0" 
ino=69439474 scontext=system_u:system_r:neutron_t:s0 
tcontext=system_u:object_r:glance_registry_exec_t:s0 tclass=file
  type=AVC msg=audit(1442855709.923:10598): avc:  denied  { getattr } for  
pid=16273 comm="neutron-server" path="/usr/bin/glance-scrubber" dev="dm-0" 
ino=69439476 scontext=system_u:system_r:neutron_t:s0 
tcontext=system_u:object_r:glance_scrubber_exec_t:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494959] Re: OVS driver.plug() execution time is O(n)

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494959

Title:
  OVS driver.plug() execution time is O(n)

Status in neutron:
  Expired

Bug description:
  router_info's looping through new_ports takes an increasing amount of
  time as the number of routers scheduled to a network node increases.
  Ideally, this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493662] Re: Xen ovs-agent-plugin polling manager is reported as not active

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493662

Title:
  Xen ovs-agent-plugin polling manager is reported as not active

Status in neutron:
  Expired

Bug description:
  My environment is XenServer + Neutron with the ML2 plugin, OVS as the
driver, VLAN type.
  This is a single-box environment installed by devstack.
  When it began to run, I found q-agt always had error logs like this:

  2015-09-09 05:15:23.653 ERROR neutron.agent.linux.ovsdb_monitor [req-
  2dfc6939-277b-4d54-83d5-d0aecbfdc07c None None] Interface monitor is
  not active

  Digging into the code, I found the call stack traces from
  OVSNeutronAgent.rpc_loop() to remove_abs_path() in
  neutron/agent/linux/utils.py. So I temporarily added a debug log and
  found:

  2015-09-09 05:15:23.653 DEBUG neutron.agent.linux.utils 
[req-2dfc6939-277b-4d54-83d5-d0aecbfdc07c None None] cmd_matches_expected, 
cmd:['/usr/bin/python', '/usr/local/bin/neutron-rootwrap-xen-dom0', 
'/etc/neutron/rootwrap.conf', 'ovsdb-client', 'monitor', 'Interface', 
'name,ofport,external_ids', '--format=json'], expect:['ovsdb-client', 
'monitor', 'Interface', 'name,ofport,external_ids', '--format=json'] from 
(pid=11595) cmd_matches_expected 
/opt/stack/neutron/neutron/agent/linux/utils.py:303
   
  So it's clear that even after removing the absolute path, the command
still does not match, which leads to the ERROR log "Interface monitor is
not active".
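
  The expected command lacks the rootwrap prefix that the Xen dom0 wrapper
  prepends, so a suffix match (or stripping known wrapper prefixes) is
  needed. A minimal sketch of that relaxation (illustrative, not the
  actual utils.py fix):

      # Hypothetical relaxation of cmd_matches_expected: accept the
      # command if the expected command matches its tail, so wrapper
      # prefixes such as neutron-rootwrap-xen-dom0 do not break matching.
      def cmd_matches_expected(cmd, expected_cmd):
          n = len(expected_cmd)
          return len(cmd) >= n and cmd[-n:] == expected_cmd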

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492398] Re: VXLAN Overlay ping issue when Gateway IP is set to one of local NIC's IP address

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492398

Title:
  VXLAN Overlay ping issue when Gateway IP is set to one of local NIC's
  IP address

Status in neutron:
  Expired
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  There's an issue when a VXLAN overlay VM tries to ping an overlay IP
  address that is also the same as one of the host machine's local IP
  addresses. In my setup, I've tried pinging the overlay VM's router's
  IP address. Here are the details:

  VXLAN Id is 100 (this number is immaterial, what matters is that we
  use VXLAN for tenant traffic)

  Overlay VM:
  IP: 10.0.1.3/24
  GW: 10.0.1.1

  Host Info:
  enp21s0f0: 1.1.1.5/24 (This interface is used to contact the controller
as well as for encapsulated datapath traffic.)

  qbr89a962f7-9b: Linux Bridge to which the Overlay VM connects. No IP
  address on this one.

  brctl show:
  qbr89a962f7-9b  8000.56f6fefb9d5c   no  qvb89a962f7-9b
  tap89a962f7-9b

  ifconfig qbr89a962f7-9b
  qbr89a962f7-9b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
  inet6 fe80::54f6:feff:fefb:9d5c  prefixlen 64  scopeid 0x20<link>
  ether 56:f6:fe:fb:9d:5c  txqueuelen 0  (Ethernet)
  RX packets 916  bytes 27072 (26.4 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 10  bytes 780 (780.0 B)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  I am using a previously unused NIC named eno1 for this example. When
  eno1 has no IP address, ping from the overlay VM to the router is
  successful. ARP on the VM shows the correct MAC resolution. When I set
  eno1 to 10.0.1.1, ARP on the overlay VM shows qbr89a962f7-9b's MAC
  address and the ping never succeeds.

  When things work OK ARP for 10.0.1.1 is fa:16:3e:0c:52:6d

  When eno1 is set to 10.0.1.1, ARP resolution is incorrect: 10.0.1.1
  resolves to 56:f6:fe:fb:9d:5c and the ping never succeeds. I've deleted
  ARP entries to ensure that resolution is triggered. It appears as if the
  OVS br-int never receives the ARP request.

  Thanks,
  -Uday

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499371] Re: Openstack Kilo Nova-Docker: Deleting 45 docker instances got neutron errors from dashboard
2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499371

Title:
  Openstack Kilo Nova-Docker: Deleting 45 docker instances got neutron
  errors from dashboard

Status in neutron:
  Expired

Bug description:
  Bug description:
    [Summary]
    NOVA-DOCKER: Deleting 45 docker instances got neutron errors from the
dashboard (shutoff/Exited status)
    [Topo]
    Ubuntu Kilo 14.04 OS, Kilo docker setup, 1 controller, 2 network
nodes, 6 compute nodes

    [Reproducible or not]
    Can be reproduced.
    This issue happened on a high-load configuration only

[Recreate Steps]
  Reproduce step:

  1 set up openstack ubuntu kilo with nova-docker
  2 launch more than 45 docker instances on each compute node
  3 restart the nova-compute service on one compute node
  4 stop the docker service on one compute node, then start the docker service
  5 then all the instances are in shutoff/Exited status
  root@quasarsdn2:~# docker ps -a
  CONTAINER IDIMAGE   COMMAND   CREATED 
STATUS  PORTS   NAMES
  bdce12532d68leo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-11831bbd-0e10-4909-89de-a305bcc0fe0d
  95bb5cdc955bleo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-72be3b5b-3a37-418b-8692-7a213f2689d8
  df12546dfd1eleo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-31fc34ee-e24f-4a41-81de-8b8b1e113cc0
  402ed256b27dleo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-9b742eb4-c43d-4400-9713-069757f731c9
  9d224fd8574dleo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-3fdc7feb-d5ec-42ce-98b4-d815d0b94274
  9d2bf2b8f950leo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-cefbe9cb-25ad-4cb5-afb5-fb6d22efcf6a
  90f5b879bcbbleo_ubuntu  "/usr/sbin/sshd -D"   23 hours ago
Exited (0) 32 seconds ago   
nova-defa6253-1de2-4c56-aa1d-39835f32f56d

  6 then select all the shutoff/Exited instances to terminate from the
  dashboard; you will get the error "Error: Unable to connect to Neutron"

  [Log]
  attach the screenshot also

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509046] Re: Refactoring of L3 Scheduler

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509046

Title:
  Refactoring of L3 Scheduler

Status in neutron:
  Expired

Bug description:
  During Kilo we merged "DHCP Service LoadBalancing Scheduler" feature:

  * https://review.openstack.org/#/c/111210/ (neutron-specs)
  * https://review.openstack.org/#/c/137017/ (neutron)

  The implementation provided a simplified framework for writing
  scheduler functions. It would be nice if the L3 scheduler embraced
  this same framework. It would be neat and consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494975] Re: Scalability of the legacy L3 reference implementation (OVS)

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494975

Title:
  Scalability of the legacy L3 reference implementation (OVS)

Status in neutron:
  Expired

Bug description:
  Here is the description and how this problem can be reproduced:

  On a multi-node OpenStack instance with standard (legacy) l3 and no
  dvr:

  - Create a new tenant
  - Create a network/subnet, and a router for the tenant
  - Connect the network to the router (with the external network set as
the gateway as well)
  - Create a VM on the network and assign a floating ip to it
  - Ping the floating IP until the ping is successful
  Time the above.

  Repeat the above steps. As the number of tenants/networks/routers
  increases (to several hundreds) so does the time it takes for these
  operations to complete.

  Time to create the tenant and the VM remains fairly flat.

  The rest of the operations (setting up the networking) increase from
  a few seconds at the beginning to more than a minute when the number
  of routers grows to around a thousand.

  Note that the test does not stress the messaging or API servers as
  each iteration of above steps is executed after the previous iteration
  has resulted in an operational router and floating IP.
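
  A sketch of how the per-iteration timing can be captured with
  python-neutronclient (the client methods are the standard v2.0 ones; the
  credentials, URL and subnet_id are placeholders, and error handling is
  omitted):

      import time
      from neutronclient.v2_0 import client as neutron_client

      nc = neutron_client.Client(username='admin', password='secret',
                                 tenant_name='admin',
                                 auth_url='http://controller:5000/v2.0')

      subnet_id = '...'  # subnet created in the earlier steps

      start = time.time()
      router = nc.create_router({'router': {'name': 'scale-r'}})['router']
      nc.add_interface_router(router['id'], {'subnet_id': subnet_id})
      # ...create the floating IP and ping it until it answers...
      print('network setup took %.1fs' % (time.time() - start))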

  The breakdown of various l3 operations (which seem to contribute to
  the above) show the possible bugs reported here:

  https://bugs.launchpad.net/neutron/+bug/1494958
  https://bugs.launchpad.net/neutron/+bug/1494959
  https://bugs.launchpad.net/neutron/+bug/1494961
  https://bugs.launchpad.net/neutron/+bug/1494963

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556342] Re: Able to create pool with different protocol than listener protocol

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556342

Title:
  Able to create pool with different protocol than listener protocol

Status in neutron:
  Expired

Bug description:
  When creating a pool with a different protocol than the listener
protocol, a pool is created even though the protocols are not compatible.
  Previously, this would not display any pools in neutron lbaas-pool-list
since the protocols are not compatible.
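
  The validation is essentially a compatibility map; a minimal sketch of a
  check that should also prevent the orphaned DB row (the map below is an
  assumption for illustration, not the exact table in neutron-lbaas):

      # Hypothetical listener -> allowed pool protocol map.
      LISTENER_POOL_COMPAT = {
          'HTTP': {'HTTP'},
          'HTTPS': {'TCP'},
          'TCP': {'TCP'},
          'TERMINATED_HTTPS': {'HTTP'},
      }

      def validate_protocols(listener_protocol, pool_protocol):
          if pool_protocol not in LISTENER_POOL_COMPAT.get(
                  listener_protocol, set()):
              raise ValueError(
                  'Listener protocol %s and pool protocol %s are not '
                  'compatible.' % (listener_protocol, pool_protocol))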

  
  Initial state
  $ neutron lbaas-loadbalancer-list
  
+--+--+-+-+--+
  | id   | name | vip_address | 
provisioning_status | provider |
  
+--+--+-+-+--+
  | bf449f65-633d-4859-b417-28b35f4eaea2 | lb1  | 10.0.0.3| ERROR   
| octavia  |
  | c6bf0765-47a9-49d9-a2f2-dd3f1ea81a5c | lb2  | 10.0.0.13   | ACTIVE  
| octavia  |
  | e1210b03-f440-4bc1-84ca-9ba70190854f | lb3  | 10.0.0.16   | ACTIVE  
| octavia  |
  
+--+--+-+-+--+

  $ neutron lbaas-listener-list
  
+--+--+---+--+---++
  | id   | default_pool_id  
| name  | protocol | protocol_port | admin_state_up |
  
+--+--+---+--+---++
  | 4cda881c-9209-42ac-9c97-e1bfab0300b2 | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc 
| list2 | HTTP |80 | True   |
  
+--+--+---+--+---++

  $ neutron lbaas-pool-list
  +--+---+--++
  | id   | name  | protocol | admin_state_up |
  +--+---+--++
  | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | pool2 | HTTP | True   |
  +--+---+--++

  
  Create new listener with TCP protocol 
  $ neutron lbaas-listener-create --name list3 --loadbalancer lb3 --protocol 
TCP --protocol-port 22
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 9574801a-675b-4784-baf0-410d1a1fd941   |
  | loadbalancers | {"id": "e1210b03-f440-4bc1-84ca-9ba70190854f"} |
  | name  | list3  |
  | protocol  | TCP|
  | protocol_port | 22 |
  | sni_container_refs||
  | tenant_id | b24968d717804ffebd77803fce24b5a4   |
  +---++

  Create pool with HTTP protocol instead of TCP
  $ neutron lbaas-pool-create --name pool3 --lb-algorithm ROUND_ROBIN 
--listener list3 --protocol HTTP
  Listener protocol TCP and pool protocol HTTP are not compatible.

  Pool list shows pool3 even though the protocols are not compatible and should 
not be able to create pool
  $ neutron lbaas-pool-list
  +--+---+--++
  | id   | name  | protocol | admin_state_up |
  +--+---+--++
  | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | pool2 | HTTP | True   |
  | 7e6fbe67-60b0-40cd-afdd-44cddd8c60a1 | pool3 | HTTP | True   |
  +--+---+--++

  In MySQL, the pool table in the octavia DB shows no pool3, so the
  orphaned pool exists only on the neutron side.
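
  The error string above comes from a listener/pool protocol compatibility
  check. A minimal sketch of such a check in Python (the mapping below is
  illustrative, not the actual neutron-lbaas table):

  LISTENER_POOL_COMPAT = {
      'HTTP': {'HTTP'},
      'HTTPS': {'HTTPS', 'TCP'},
      'TCP': {'TCP'},
      'TERMINATED_HTTPS': {'HTTP'},
  }

  def check_protocols(listener_protocol, pool_protocol):
      # Raise before anything is persisted; the symptom above suggests the
      # neutron-side insert is not rolled back when the real check fires,
      # leaving an orphan pool row in the neutron DB only.
      if pool_protocol not in LISTENER_POOL_COMPAT.get(listener_protocol,
                                                       set()):
          raise ValueError('Listener protocol %s and pool protocol %s are '
                           'not compatible.'
                           % (listener_protocol, pool_protocol))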
  

[Yahoo-eng-team] [Bug 1557002] Re: isolated metadata proxy will not be updated when router interface add/delete

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557002

Title:
  isolated metadata proxy will not be updated when router interface
  add/delete

Status in neutron:
  Expired

Bug description:
  When a router interface is created or deleted, the isolated metadata
  proxy for the isolated network is not updated. This causes two issues.

  a) The isolated metadata proxy process lingers even when no subnet uses
  it any longer. It wastes host resources, especially when there are many
  networks.

  Reproduce:
  1) Set "enable_isolated_metadata = True" in configuration.
  2) Create a network.
  3) Create an ipv4 subnet for the network.
  4) Attach the subnet to a router.
  The isolated metadata proxy process is now useless, but it is still
  running. Even restarting the dhcp-agent does not kill it.

  b) The isolated metadata proxy process is not spawned when a subnet
  becomes isolated.

  Reproduce:
  1) Set "enable_isolated_metadata = True" in configuration.
  2) Create a network.
  3) Create an ipv4 subnet for the network.
  4) Attach the subnet to a router.
  5) Update the network with "neutron net-update test-net --admin_state_up False". The isolated metadata proxy should be killed at this point.
  6) Update the network with "neutron net-update test-net --admin_state_up True"
  7) Detach the subnet from the router. The subnet becomes isolated, but the isolated metadata proxy process is not spawned, and the isolated metadata service cannot be used.

  Bug [1] introduced a way to update the network on the dhcp agent when a
  router interface is created or deleted. The fix can build on that work,
  updating the metadata proxy process according to the change of the
  router interface.

  [1] https://bugs.launchpad.net/neutron/+bug/1554825
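
  A minimal sketch, in Python, of the lifecycle decision such a fix
  implies; all names are illustrative (spawn/kill stand in for the agent's
  real proxy management), not actual neutron code:

  def sync_isolated_metadata_proxy(network, proxy_running, spawn, kill):
      # A subnet reachable through a router port is not isolated.
      routed_subnets = {fixed_ip['subnet_id']
                        for port in network.router_ports
                        for fixed_ip in port['fixed_ips']}
      has_isolated = any(subnet['id'] not in routed_subnets
                         for subnet in network.subnets)
      if has_isolated and not proxy_running:
          spawn(network)   # covers issue (b): subnet became isolated
      elif not has_isolated and proxy_running:
          kill(network)    # covers issue (a): proxy no longer needed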

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593770] Re: Remove the deprecated quota driver "ConfDriver"

2016-10-16 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593770

Title:
  Remove the deprecated quota driver "ConfDriver"

Status in neutron:
  Expired

Bug description:
  The ConfDriver has been deprecated since Liberty [1][2]; it should be
  removed in Newton.

  [1] https://bugs.launchpad.net/neutron/+bug/1430523
  [2] https://review.openstack.org/#/c/179543/
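
  For deployments that still pin the old driver explicitly, the DB-backed
  driver is the replacement. A neutron.conf sketch (dotted path as of the
  post-Liberty quota engine layout, to the best of my knowledge):

  [QUOTAS]
  quota_driver = neutron.db.quota.driver.DbQuotaDriver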

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574113] Re: curtin/maas don't support multiple (derived) archives/repositories with custom keys

2016-10-16 Thread Launchpad Bug Tracker
This bug was fixed in the package curtin - 0.1.0~bzr425-0ubuntu1~16.04.1

---
curtin (0.1.0~bzr425-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  [ Scott Moser ]
  * debian/new-upstream-snapshot: add writing of debian changelog entries.

  [ Ryan Harper ]
  * New upstream snapshot.
- unittest,tox.ini: catch and fix issue with trusty-level mock of open
- block/mdadm: add option to ignore mdadm_assemble errors  (LP: #1618429)
- curtin/doc: overhaul curtin documentation for readthedocs.org
  (LP: #1351085)
- curtin.util: re-add support for RunInChroot  (LP: #1617375)
- curtin/net: overhaul of eni rendering to handle mixed ipv4/ipv6 configs
- curtin.block: refactor clear_holders logic into block.clear_holders and
  cli cmd
- curtin.apply_net should exit non-zero upon exception.  (LP: #1615780)
- apt: fix bug in disable_suites if sources.list line is blank.
- vmtests: disable Wily in vmtests
- Fix the unittests for test_apt_source.
- get CURTIN_VMTEST_PARALLEL shown correctly in jenkins-runner output
- fix vmtest check_file_strippedline to strip lines before comparing
- fix whitespace damage in tests/vmtests/__init__.py
- fix dpkg-reconfigure when debconf_selections was provided.
  (LP: #1609614)
- fix apt tests on non-intel arch
- Add apt features to curtin.  (LP: #1574113)
- vmtest: easier use of parallel and controlling timeouts
- mkfs.vfat: add force flag for formating whole disks  (LP: #1597923)
- block.mkfs: fix sectorsize flag  (LP: #1597522)
- block_meta: cleanup use of sys_block_path and handle cciss knames
  (LP: #1562249)
- block.get_blockdev_sector_size: handle _lsblock multi result return
  (LP: #1598310)
- util: add target (chroot) support to subp, add target_path helper.
- block_meta: fallback to parted if blkid does not produce output
  (LP: #1524031)
- commands.block_wipe:  correct default wipe mode to 'superblock'
- tox.ini: run coverage normally rather than separately
- move uefi boot knowledge from launch and vmtest to xkvm

 -- Ryan Harper   Mon, 03 Oct 2016 13:43:54 -0500

** Changed in: curtin (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1574113

Title:
  curtin/maas don't support multiple (derived) archives/repositories
  with custom keys

Status in cloud-init:
  Fix Released
Status in curtin:
  Fix Committed
Status in MAAS:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in curtin package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Committed
Status in curtin source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Curtin doesn't support multiple derived archive/repositories with
     custom keys as typically deployed in an offline Landscape deployment.
     Adding the custom key resulted in an error when processing the
     apt_source configuration as provided in this setup.

     Curtin has been updated to support the updated apt-source model
     implemented in cloud-init as well. With this, existing Landscape
     deployments for offline users can now supply an apt-source config
     that directs curtin to use the specified derived repository with a
     custom key.
 
  [Test Case]

   * Install proposed curtin package and deploy a system behind a
 Landscape Offline configuration with a derived repo.

PASS: Curtin will successfully accept the derived repo and install the
  system from the specified apt repository.

FAIL: Curtin will fail to install the OS with an error like:

W: GPG error: http://100.107.231.166 trusty InRelease:
The following signatures couldn't be verified because the public key
is not available: NO_PUBKEY 2C6F2731D2B38BD3
E: There are problems and -y was used without --force-yes

Unexpected error while running command.
Command: ['chroot', '/tmp/tmpcEfTLw/target', 'eatmydata', 'apt-get',
  '--quiet', '--assume-yes',
  '--option=Dpkg::options::=--force-unsafe-io',
  '--option=Dpkg::Options::=--force-confold', 'install',
  'lvm2', 'ifenslave']
Exit code: 100


  [Regression Potential]

   * Existing curtin 'apt_source' configurations from previous releases
     may not continue to work without re-formatting the apt_source
     configuration.

  
  [Original Description]

  In a customer environment I have to deploy using offline resources (no
  internet connection at all), so I created an apt mirror and a MAAS
  images mirror. I configured MAAS to use the local mirrors and I am able
  to commission the nodes, but I am not able to deploy the nodes because
  there is no way to add the gpg key of the local repo to the target
  before the 'late' stage.

  Using 
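
  (The example is cut off above. What follows is a hypothetical sketch of
  the apt config model curtin accepts after this fix; the mirror URL is
  taken from the error output earlier in this report, and the key body is
  a placeholder:)

  apt:
    preserve_sources_list: false
    sources:
      local-mirror.list:
        source: "deb http://100.107.231.166/ubuntu trusty main"
        key: |
          -----BEGIN PGP PUBLIC KEY BLOCK-----
          (custom archive key here)
          -----END PGP PUBLIC KEY BLOCK-----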

[Yahoo-eng-team] [Bug 1633385] Re: fwaas v2 installation with devstack set the wrong plugin class path in neutron.conf

2016-10-16 Thread zhaobo
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1633385

Title:
  fwaas v2 installation with devstack set the wrong plugin class path in
  neutron.conf

Status in neutron:
  Invalid

Bug description:
  This issue is hit during devstack installation if we use q-fwaas-v2.
  The automatically generated neutron.conf sets the fwaas v2 service
  plugin with the wrong class path.

  
  neutron.conf
  ---
  [DEFAULT]
  .
  service_plugins = neutron_fwaas.services.firewall.fwaas_plugin_v2.FirewallPluginV2

  endpoint.txt
  ---
  [neutron.service_plugins]
  firewall = neutron_fwaas.services.firewall.fwaas_plugin:FirewallPlugin
  firewall_v2 = neutron_fwaas.services.firewall.fwaas_plugin_v2:FirewallPluginV2
  neutron.services.firewall.fwaas_plugin.FirewallPlugin = neutron_fwaas.services.firewall.fwaas_plugin:FirewallPlugin
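
  For reference, the entry-point alias registered in endpoint.txt can be
  used in place of a dotted class path; a sketch (and, given the Invalid
  status above, the dotted path apparently loads as well):

  [DEFAULT]
  service_plugins = firewall_v2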

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1633385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633955] [NEW] live migrate rollback disconnect other's volume

2016-10-16 Thread Bin Zhou
Public bug reported:

I encountered this bug in my daily testing. I found that when volume
initialize connection fails on the destination host, the rollback process
can disconnect someone else's volume on that host.
My test steps are as follows:

1) create 2 Compute node (host#1 and host#2)
2) create 1 VM on host#1 with volume vol01(vm01)
3) live-migrate vm01 from host#1 to host#2
4) vol01 initialize connection failed on host#2
5) live-migrate rollback and disconnect volume on host#2
6) some volume on host#2 was disconnected by mistake

The issue is that during rollback, nova disconnects the volume using the
block_device_mapping table record, which is only updated for the
destination host host#2 when initialize connection succeeds there. In this
bug, initialize connection failed on destination host host#2, so the
record in the block_device_mapping table was not updated and still held
the original data created on source host host#1. The records for the
destination and source hosts can differ in the lun-id mapped on each host,
which is why another volume was disconnected by mistake on host#2.
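
A sketch, with hypothetical names only, of the safer rollback behaviour
this points at (dest_connect_results maps volume ids to the
connection_info that was actually established on the destination; this is
not actual nova code):

def rollback_dest_volume_connections(bdms, dest_connect_results, disconnect):
    for bdm in bdms:
        dest_info = dest_connect_results.get(bdm.volume_id)
        if dest_info is None:
            # initialize_connection never succeeded on the destination, so
            # there is nothing to tear down there; reusing the stale
            # source-host BDM record could hit another instance's LUN.
            continue
        disconnect(dest_info)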

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633955

Title:
  live migrate rollback disconnect other's volume

Status in OpenStack Compute (nova):
  New

Bug description:
  I encountered this bug in my daily testing. I found that when volume
  initialize connection fails on the destination host, the rollback
  process can disconnect someone else's volume on that host.
  My test steps are as follows:

  1) create 2 Compute node (host#1 and host#2)
  2) create 1 VM on host#1 with volume vol01(vm01)
  3) live-migrate vm01 from host#1 to host#2
  4) vol01 initialize connection failed on host#2
  5) live-migrate rollback and disconnect volume on host#2
  6) some volume on host#2 was disconnected by mistake

  The issue is that during rollback, nova disconnects the volume using
  the block_device_mapping table record, which is only updated for the
  destination host host#2 when initialize connection succeeds there. In
  this bug, initialize connection failed on destination host host#2, so
  the record in the block_device_mapping table was not updated and still
  held the original data created on source host host#1. The records for
  the destination and source hosts can differ in the lun-id mapped on
  each host, which is why another volume was disconnected by mistake on
  host#2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633941] [NEW] VPNaaS: peer-cidr validation is invalid

2016-10-16 Thread Hiroyuki Ito
Public bug reported:

When creating an ipsec-site-connection in VPNaaS, the peer-cidr validation
appears to be broken.
A cidr such as "10/8" should be rejected, as it is for subnet resources,
but it is accepted, as shown below:

$ neutron ipsec-site-connection-create --vpnservice-id service1 --ikepolicy-id ike1 --ipsecpolicy-id ipsec1 --peer-id 192.168.7.1 --peer-address 192.168.7.1 --peer-cidr 10/8 --psk pass
Created a new ipsec_site_connection:
+---++
| Field | Value  |
+---++
| admin_state_up| True   |
| auth_mode | psk|
| description   ||
| dpd   | {"action": "hold", "interval": 30, "timeout": 120} |
| id| 2bed308f-5462-45bb-ae79-5cb9003424ef   |
| ikepolicy_id  | be1f92ab-8064-4328-8862-777ae6878691   |
| initiator | bi-directional |
| ipsecpolicy_id| 09c67ae8-6ede-47ca-a15b-c52be1d7feaf   |
| local_ep_group_id ||
| local_id  ||
| mtu   | 1500   |
| name  ||
| peer_address  | 192.168.7.1|
| peer_cidrs| 10/8   |
| peer_ep_group_id  ||
| peer_id   | 192.168.7.1|
| project_id| 068a47c758ae4b5d9fab059539e57740   |
| psk   | pass   |
| route_mode| static |
| status| PENDING_CREATE |
| tenant_id | 068a47c758ae4b5d9fab059539e57740   |
| vpnservice_id | 4f82612c-5e3a-4699-aafa-bdfa5ede31fe   |
+---++

I think this is because the _validate_subnet_list_or_none method in
neutron_vpnaas.extensions.vpnaas does not return the validation result.
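
If that is the cause, the fix is essentially a one-word change; a sketch
(the validator name is assumed from the neutron attribute validators of
that era):

def _validate_subnet_list_or_none(data, key_specs=None):
    if data is not None:
        # Returning the message lets the API layer reject bad CIDRs such
        # as "10/8"; without 'return', the error is silently dropped.
        return validate_subnet_list(data, key_specs)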

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1633941

Title:
  VPNaaS: peer-cidr validation is invalid

Status in neutron:
  New

Bug description:
  When creating an ipsec-site-connection in VPNaaS, the peer-cidr
  validation appears to be broken.
  A cidr such as "10/8" should be rejected, as it is for subnet resources,
  but it is accepted, as shown below:

  $ neutron ipsec-site-connection-create --vpnservice-id service1 --ikepolicy-id ike1 --ipsecpolicy-id ipsec1 --peer-id 192.168.7.1 --peer-address 192.168.7.1 --peer-cidr 10/8 --psk pass
  Created a new ipsec_site_connection:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | auth_mode | psk|
  | description   ||
  | dpd   | {"action": "hold", "interval": 30, "timeout": 120} |
  | id| 2bed308f-5462-45bb-ae79-5cb9003424ef   |
  | ikepolicy_id  | be1f92ab-8064-4328-8862-777ae6878691   |
  | initiator | bi-directional |
  | ipsecpolicy_id| 09c67ae8-6ede-47ca-a15b-c52be1d7feaf   |
  | local_ep_group_id ||
  | local_id  ||
  | mtu   | 1500   |
  | name  ||
  | peer_address  | 192.168.7.1|
  | peer_cidrs| 10/8   |
  | peer_ep_group_id  ||
  | peer_id   | 192.168.7.1|
  | project_id| 068a47c758ae4b5d9fab059539e57740   |
  | psk   | pass   |
  | route_mode| static   

[Yahoo-eng-team] [Bug 1633878] [NEW] nova boot error

2016-10-16 Thread Dhanabalan Balasundaram
Public bug reported:

Hello,

 nova boot --flavor m1.tiny --image cirros --nic net-id=0d88f440-038f-442a-b37d-c7cc0f994838 \
>   --security-group default --key-name mykey public-instance
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-72511dda-de6b-4ed6-97cd-a95e3ba093ad)


Ubuntu 14.04 and Liberty

Please help me fix this issue, and let me know if more info is
required. Thank you

Best regards,
Dhanabalan

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "syslog"
   https://bugs.launchpad.net/bugs/1633878/+attachment/4762044/+files/syslog
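
To locate the underlying failure, the request ID from the client error can
be grepped in the API log (a sketch; the path assumes a default Ubuntu
packaging layout):

grep req-72511dda-de6b-4ed6-97cd-a95e3ba093ad /var/log/nova/nova-api.log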

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633878

Title:
   nova boot error

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello,

   nova boot --flavor m1.tiny --image cirros --nic net-id=0d88f440-038f-442a-b37d-c7cc0f994838 \
  >   --security-group default --key-name mykey public-instance
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-72511dda-de6b-4ed6-97cd-a95e3ba093ad)


  Ubuntu 14.04 and Liberty

  Please help me fix this issue, and let me know if more info is
  required. Thank you

  Best regards,
  Dhanabalan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2016-10-16 Thread Jeremy Liu
** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: sahara
 Assignee: (unassigned) => Jeremy Liu (liujiong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Aodh:
  Fix Released
Status in Barbican:
  In Progress
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in cloudkitty:
  In Progress
Status in congress:
  New
Status in Freezer:
  In Progress
Status in Glance:
  Fix Released
Status in Gnocchi:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  New
Status in neutron:
  Fix Released
Status in Panko:
  Fix Released
Status in Sahara:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress
Status in senlin:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the Protocol and hosts of the original request so that
  the receiving service can construct URLs to the loadbalancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling; however, exactly how this is done
  depends on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.
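
  In oslo.middleware the proxy-header handling is opt-in; a sketch of the
  service-side switch (group and option name per oslo.middleware, to the
  best of my knowledge):

  [oslo_middleware]
  enable_proxy_headers_parsing = True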

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633876] [NEW] nova boot fails

2016-10-16 Thread Dhanabalan Balasundaram
Public bug reported:

Hi All,

I am getting the error below.

OS: Ubuntu 14.04 and Liberty

root@controller:/tmp#  nova boot --flavor m1.tiny --image cirros --nic net-id=0d88f440-038f-442a-b37d-c7cc0f994838 \
>   --security-group default --key-name mykey public-instance
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-72511dda-de6b-4ed6-97cd-a95e3ba093ad)
root@controller:/tmp#

Please help me in fixing the issue


Best regards,
DB

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633876

Title:
  nova boot fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi All,

  I am getting the error below.

  OS: Ubuntu 14.04 and Liberty

  root@controller:/tmp#  nova boot --flavor m1.tiny --image cirros --nic net-id=0d88f440-038f-442a-b37d-c7cc0f994838 \
  >   --security-group default --key-name mykey public-instance
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-72511dda-de6b-4ed6-97cd-a95e3ba093ad)
  root@controller:/tmp#

  Please help me in fixing the issue

  
  Best regards,
  DB

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2016-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/384294
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=19c354aacd27f6941467e34826774c6199bc4f8f
Submitter: Jenkins
Branch:master

commit 19c354aacd27f6941467e34826774c6199bc4f8f
Author: Juan Antonio Osorio Robles 
Date:   Mon Oct 10 08:56:12 2016 +0300

Add http_proxy_to_wsgi to api-paste

This sets up the HTTPProxyToWSGI middleware in front of Neutron-API. The
purpose of this middleware is to set up the request URL correctly in
case there is a proxy (For instance, a loadbalancer such as HAProxy)
in front of Neutron.

So, for instance, when TLS connections are being terminated in the
proxy, and one tries to get the versions from the / resource of
Neutron, one will notice that the protocol is incorrect; It will show
'http' instead of 'https'. So this middleware handles such cases.
Thus helping Keystone discovery work correctly.

The HTTPProxyToWSGI is off by default and needs to be enabled via a
configuration value.

Change-Id: Ice9ee8f4e04050271d59858f92034c230325718b
Closes-Bug: #1590608
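
A sketch of the corresponding api-paste.ini wiring (filter section name
and pipeline abbreviated here; the factory path is the one oslo.middleware
exposes, to the best of my knowledge):

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware:HTTPProxyToWSGI.factory

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id catch_errors extensions neutronapiapp_v2_0
keystone = cors http_proxy_to_wsgi request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0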


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Aodh:
  Fix Released
Status in Barbican:
  In Progress
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in cloudkitty:
  In Progress
Status in congress:
  New
Status in Freezer:
  In Progress
Status in Glance:
  Fix Released
Status in Gnocchi:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  New
Status in neutron:
  Fix Released
Status in Panko:
  Fix Released
Status in OpenStack Search (Searchlight):
  In Progress
Status in senlin:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the Protocol and hosts of the original request so that
  the receiving service can construct URLs to the loadbalancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling; however, exactly how this is done
  depends on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1631466] Re: GET on /v2.0 fails with a 404

2016-10-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/384553
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1d01864c066155939e84b0402730f1266b091841
Submitter: Jenkins
Branch:master

commit 1d01864c066155939e84b0402730f1266b091841
Author: Sergey Belous 
Date:   Mon Oct 10 17:20:42 2016 +0300

Added trailing slash in link to Networking API v2.0

TrivialFix
Closes-bug: #1631466

Change-Id: I310ea62f210ec2d4250d0f93c3081356f429fc41


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1631466

Title:
  GET on /v2.0 fails with a 404

Status in neutron:
  Fix Released

Bug description:
  Following the networking API reference to list versions of the network
  API, I can list versions from the network endpoint like this:

  http://developer.openstack.org/api-ref/networking/v2/?expanded=list-
  api-versions-detail

  And get details on each version like this:

  http://developer.openstack.org/api-ref/networking/v2/?expanded=list-
  api-versions-detail,show-api-v2-details-detail#show-api-v2-details

  However, in practice, using master neutron:

  stack@osc:/opt/stack/neutron$ git log -1
  commit 80d4df144d62ce638ca7bdd228cdd116e34b3067
  Merge: 3ade301 fc93f7f
  Author: Jenkins 
  Date:   Wed Oct 5 15:36:06 2016 +

  Merge "Relocate Flavor and ServiceProfile DB models"
  stack@osc:/opt/stack/neutron$

  
  The 2nd route to get v2.0 details fails:

  stack@osc:~$ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/ | json_pp
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100   118  100   118    0     0  33541      0 --:--:-- --:--:-- --:--:-- 39333
  {
     "versions" : [
        {
           "id" : "v2.0",
           "links" : [
              {
                 "rel" : "self",
                 "href" : "http://9.5.127.82:9696/v2.0"
              }
           ],
           "status" : "CURRENT"
        }
     ]
  }

  stack@osc:~$ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/v2.0
  404 Not Found

  The resource could not be found.

  --

  So either the docs are wrong, or the API is busted.

  It looks like this is what should handle the /v2.0 route though:

  
https://github.com/openstack/neutron/blob/80d4df144d62ce638ca7bdd228cdd116e34b3067/neutron/api/v2/router.py#L45
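
  Given that the fix adds a trailing slash to the advertised link, the
  request that should succeed against the same endpoint is presumably:

  $ curl -g -H "X-Auth-Token: $OS_TOKEN" http://9.5.127.82:9696/v2.0/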

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1631466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp