[Yahoo-eng-team] [Bug 1818697] [NEW] neutron fullstack frequently times out waiting on qos ports

2019-03-05 Thread Doug Wiegley
Public bug reported:

ft1.1: neutron.tests.fullstack.test_qos.TestMinBwQoSOvs.test_bw_limit_qos_port_removed(egress,openflow-native)_StringException:
Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/common/utils.py", line 685, in wait_until_true
    eventlet.sleep(sleep)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack/lib/python3.5/site-packages/eventlet/greenthread.py", line 36, in sleep
    hub.switch()
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack/lib/python3.5/site-packages/eventlet/hubs/hub.py", line 297, in switch
    return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/base.py", line 174, in func
    return f(self, *args, **kwargs)
  File "/opt/stack/new/neutron/neutron/tests/fullstack/test_qos.py", line 690, in test_bw_limit_qos_port_removed
    vm, MIN_BANDWIDTH, self.direction)
  File "/opt/stack/new/neutron/neutron/tests/fullstack/test_qos.py", line 675, in _wait_for_min_bw_rule_applied
    lambda: vm.bridge.get_egress_min_bw_for_port(
  File "/opt/stack/new/neutron/neutron/common/utils.py", line 690, in wait_until_true
    raise WaitTimeout(_("Timed out after %d seconds") % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds
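
For reference, wait_until_true is just a polling helper; the sketch below
(plain time.sleep instead of eventlet, and illustrative names in the final
comment) shows the kind of loop that is hitting the 60-second limit:

    import time

    class WaitTimeout(Exception):
        """Raised when the predicate never becomes true within the timeout."""

    def wait_until_true(predicate, timeout=60, sleep=1):
        # Simplified stand-in for the real helper: keep re-evaluating the
        # predicate until it returns True or the timeout budget runs out.
        deadline = time.time() + timeout
        while not predicate():
            if time.time() >= deadline:
                raise WaitTimeout("Timed out after %d seconds" % timeout)
            time.sleep(sleep)

    # The failing fullstack check is essentially:
    # wait_until_true(lambda: bridge.get_egress_min_bw_for_port(port_id) == MIN_BANDWIDTH)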

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1818696] [NEW] frequent ci failures trying to delete qos port

2019-03-05 Thread Doug Wiegley
Public bug reported:

Lots of this error:
RuntimeError: OVSDB Error: {"details":"cannot delete QoS row 
03bc0e7a-bd4e-42a7-95e1-493fce7d6342 because of 1 remaining 
reference(s)","error":"referential integrity violation"}
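
A hedged illustration of the ordering constraint behind that error (the port
name and UUID are placeholders, and this is not the agent's actual code
path): the Port row must stop referencing the QoS row before, or in the same
transaction as, the row being destroyed.

    import subprocess

    def delete_port_qos(port_name, qos_uuid):
        # Clear the Port -> QoS reference and destroy the QoS row in one
        # ovs-vsctl transaction; destroying the row while the reference is
        # still in place is what raises the referential integrity violation.
        subprocess.check_call(
            ["ovs-vsctl", "--",
             "clear", "Port", port_name, "qos", "--",
             "destroy", "QoS", qos_uuid])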

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1817119] [NEW] [rfe] add rbac for security groups

2019-02-21 Thread Doug Wiegley
Public bug reported:

This change started as a small performance fix, allowing hundreds of
tenants to share one 3000+ rule group instead of each keeping its own copy
of it; an illustrative API call is sketched after the review links below.

Adds "security_group" as a supported RBAC type:

Neutron-lib:
https://review.openstack.org/635313

Neutron:
https://review.openstack.org/635311

Tempest tests:
https://review.openstack.org/635312

Client:
https://review.openstack.org/636760
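
For illustration only (endpoint, token and IDs below are placeholders, not
taken from the patches above): the new type rides on the existing
rbac-policies resource, so sharing one group with another project looks
roughly like this.

    import requests

    body = {"rbac_policy": {
        "object_type": "security_group",      # the newly supported type
        "object_id": "SECURITY-GROUP-UUID",
        "action": "access_as_shared",
        "target_tenant": "OTHER-PROJECT-ID",
    }}
    resp = requests.post("http://controller:9696/v2.0/rbac-policies",
                         json=body,
                         headers={"X-Auth-Token": "ADMIN-TOKEN"})
    print(resp.status_code)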

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: New


** Tags: rfe


[Yahoo-eng-team] [Bug 1816874] [NEW] l3 agent using return value from methods with no return

2019-02-20 Thread Doug Wiegley
Public bug reported:

* Module neutron.agent.l3.dvr_local_router
neutron/agent/l3/dvr_local_router.py:111:12: E: Assigning result of a function call, where the function has no return (assignment-from-no-return)
* Module neutron.agent.l3.router_info
neutron/agent/l3/router_info.py:380:16: E: Assigning result of a function call, where the function has no return (assignment-from-no-return)

Note that the "# pylint: disable=assignment-from-no-return" comments in
those two files, if still present, should be removed once this is fixed.
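
For anyone unfamiliar with the check, a minimal example (unrelated to the
l3 agent code) of what pylint flags here:

    def update_gateway():
        # No return statement anywhere in the function body.
        print("gateway updated")

    # E1111 assignment-from-no-return: result can only ever be None.
    result = update_gateway()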

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1816485] [NEW] [rfe] change neutron process names to match their role

2019-02-18 Thread Doug Wiegley
Public bug reported:

See the commit message description here:
https://review.openstack.org/#/c/637019/

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: In Progress


[Yahoo-eng-team] [Bug 1815629] [NEW] api and rpc worker defaults are problematic

2019-02-12 Thread Doug Wiegley
Public bug reported:

We default the number of api workers to the number of cores. At
approximately 2GB per neutron-server, sometimes that's more RAM than is
available, and the OOM killer comes out.

We default the number of rpc workers to 1, which seems to fall behind on
all but the smallest deployments.
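
One possible shape for a saner default, sketched only (the 2 GB figure is
the rough estimate above, and the sysconf calls make this Linux-only):

    import multiprocessing
    import os

    def default_api_workers():
        # Cap the per-core default by how many ~2 GB neutron-server workers
        # the host's RAM can actually hold, so the OOM killer stays away.
        cores = multiprocessing.cpu_count()
        ram_gb = (os.sysconf("SC_PAGE_SIZE") *
                  os.sysconf("SC_PHYS_PAGES")) // (1024 ** 3)
        return max(1, min(cores, ram_gb // 2))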

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: In Progress


[Yahoo-eng-team] [Bug 1810563] [NEW] adding rules to security groups is slow

2019-01-04 Thread Doug Wiegley
Public bug reported:

Sometime between liberty and pike, adding rules to SG's got slow, and
slower with every rule added.

Gerrit review with fixes is incoming.

You can repro with a vanilla devstack install on master, and this
script:

#!/bin/bash

OPENSTACK_TOKEN=$(openstack token issue | grep '| id' | awk '{print $4}')
export OPENSTACK_TOKEN

CCN1=10.210.162.2
CCN3=10.210.162.10
export ENDPOINT=localhost

make_rules() {
    iter=$1
    prefix=$2
    file="$3"

    echo "generating rules"

    # The here-documents that wrote the bulk security-group-rule JSON
    # payload into $file were lost when this message was archived.
    cat >$file <<EOF
    ...
EOF
}

hit_api() {
    # Reconstructed from the garbled text: POST the generated JSON to the
    # bulk rule-create API and print the elapsed seconds.
    json=$1
    start=$(perl -e "print time();")
    curl http://$ENDPOINT:9696/v2.0/security-group-rules.json \
        -H "User-Agent: python-neutronclient" \
        -H "Content-Type: application/json" \
        -H "Accept: application/json" \
        -H "X-Auth-Token: $OPENSTACK_TOKEN" \
        -d @${json} >/dev/null
    end=$(perl -e "print time();")
    echo $((end-start))
}

tmp=/tmp/sg-test.$$.tmp

echo "Doing test with 1000 rules in bulk"
openstack security group delete dw-test-1
uuid=$(openstack security group create dw-test-1 | grep '| id' | awk '{print $4}')
export SG_UUID="$uuid"
make_rules 100 4 $tmp
hit_api $tmp

echo "Doing loop test"
openstack security group delete dw-test-2
uuid=$(openstack security group create dw-test-2 | grep '| id' | awk '{print $4}')
export SG_UUID="$uuid"
elapsed=0
mm=0
while [ $mm -lt 20 ]; do
    make_rules 5 $(($mm+1)) $tmp
    n=$(hit_api $tmp | tail -1)
    elapsed=$((elapsed+n))
    mm=$((mm+1))
done
echo "Loop test took $elapsed seconds"

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: In Progress


[Yahoo-eng-team] [Bug 1721305] [NEW] fips between two provider nets can never work

2017-10-04 Thread Doug Wiegley
Public bug reported:

If you create two provider networks, mark one as shared, and the other
as external and shared, neutron will happily let you associate a
floating ip from the first to the second.

But, provider nets have gateways outside of neutron's control, so the
NAT on the neutron node can never happen.

But, neutron still tries to fire up an ip on the gateway ip, so it
sometimes works, based on who wins the arp race.

The workaround is to disable the gateway on the networks and put in a
static route for 0.0.0.0/gw instead.

But, umm, yuck.
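
For illustration, the workaround amounts to a plain subnet update (the
endpoint, token, IDs and gateway address below are placeholders):

    import requests

    body = {"subnet": {
        "gateway_ip": None,                    # disable the neutron gateway
        "host_routes": [{"destination": "0.0.0.0/0",
                         "nexthop": "203.0.113.1"}],  # real provider gateway
    }}
    requests.put("http://controller:9696/v2.0/subnets/SUBNET-UUID",
                 json=body, headers={"X-Auth-Token": "TOKEN"})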

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1633254] Re: [RFE]Add tag extension to Neutron-Lbaas Resources

2016-10-31 Thread Doug Wiegley
This was discussed at the summit, and the impetus for tags here was to pass
metadata from the API layer to the drivers, which is explicitly against the
listed use cases for tags, because it leaks implementation details to end
users.

The alternative discussed was finishing the flavors framework, which has a
notion of passing metadata to drivers in an end-user-agnostic manner (it is
very much not vendor-agnostic for the operator, but they can configure
their cloud however they want).

Closing after that discussion; please re-open if you have another use case
in mind.

** Changed in: neutron
   Status: New => Won't Fix

Title:
  [RFE]Add tag extension to Neutron-Lbaas Resources

Status in neutron:
  Won't Fix

Bug description:
  [Use-cases]
  - Supporting tag functionality as part of LBaaSv2
  Implement tag extension support for LBaaSv2 objects such as Listener, Pool 
and PoolMember objects.

  [Limitations]
  In the Mitaka release Neutron was introduced with the tag extension, but
unfortunately tags are limited to the Neutron project. From the documentation
and comments in the implementation code it is clear that the intent is to
extend tags to other Neutron modeled objects.

  [Enhancement]
  
- Add tag support to LBaaSv2 Objects
  Extend the tag supported resources of Neutron to LBaaSv2 objects such as 
Listener, Pool and PoolMember.

  - Extend existing API

  Add the support for tags to the Neutron-Lbaas objects API.


[Yahoo-eng-team] [Bug 1630439] Re: linuxbridge-agent fails to start on python3.4

2016-10-06 Thread Doug Wiegley
We don't yet support py3.  Feel free to contribute a fix, though.

** Changed in: neutron
   Status: In Progress => Invalid

Title:
  linuxbridge-agent fails to start on python3.4

Status in neutron:
  Invalid

Bug description:
  I'll attach a log with the failure, but to my eyes they seem like
  py2to3 errors (things missed or something)

  starts fine in python2.7


[Yahoo-eng-team] [Bug 1616282] [NEW] creating ipv6 subnet on ipv6 vm will cause loss of connectivity

2016-08-23 Thread Doug Wiegley
Public bug reported:

The gate is all clogged up: any DSVM test using neutron (most of them),
when it gets an IPv6-only OSIC node, seems to lose connectivity when
devstack creates the internal neutron IPv6 subnet.

Not sure of root cause yet, but starting this bug here.

Please see starting at 01:48:

http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-
infra.2016-08-24.log.html

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure ipv6


[Yahoo-eng-team] [Bug 1576475] Re: [RFE] Add OneView ML2 driver to Neutron

2016-04-29 Thread Doug Wiegley
This looks great, but vendor mech drivers are not in the neutron project
itself, so they don't need a neutron RFE approval.

** Changed in: neutron
   Status: New => Invalid

Title:
  [RFE] Add OneView ML2 driver to Neutron

Status in neutron:
  Invalid

Bug description:
  ..
   This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.

   http://creativecommons.org/licenses/by/3.0/legalcode

  ==================================================
  HP OneView Mechanism Driver for Neutron ML2 plugin
  ==================================================

  Launchpad blueprint:

  https://blueprints.launchpad.net/neutron/+spec/oneview-ml2-mechanism-driver

  This blueprint specifies the ML2 mechanism driver for HP OneView integration
  to OpenStack Neutron.

  Problem description
  ===================

  One of the most important initiatives in OpenStack currently focuses on
  solving the communication restrictions that exist in Ironic. Ironic nodes
  have severe restrictions on how they can interact with other nodes because
  they demand a networking environment composed only of flat networks, not
  allowing tenant networks to be correctly isolated as already happens when
  using Virtual Machines.

  This initiative integrates Ironic and Neutron, making it possible to create
  isolated networks in Neutron to be used by Ironic baremetal nodes. With this
  integration, better control of node communication is expected, improving
  system operation.

  To take advantage of this new interaction between OpenStack components, the
  integration of HP OneView and OpenStack might be extended to improve the
  management it offers, considering these new functionalities, and to be more
  aligned with the evolution of the OpenStack platform.

  Currently, OpenStack only supports integration with OneView in operations
  for provisioning Ironic baremetal instances. Initially, the Ironic/OneView
  driver only worked with 'pre-allocated' machines, meaning that an Ironic
  node needed to have a Server Profile already applied to the Server Hardware
  registered in OneView. A new version of the driver is being implemented to
  dynamically allocate Ironic nodes, avoiding the situation where nodes
  already available in Ironic but not released in OneView cannot be used by
  other OneView users.

  However, operations related to OneView's communication infrastructure
  remain unsupported, since there is no integration with current OpenStack
  Neutron actions. This limitation restricts OpenStack/OneView interaction,
  since it demands that all communication-infrastructure configuration be
  manually replicated on both sides to ensure correct server communication.

  The mechanism driver proposed here will interact with Neutron and OneView
  to dynamically reflect networking operations made by OpenStack on OneView.
  With these operations it is possible for a OneView administrator to know
  what is happening in the OpenStack system running in the data center, and
  it also automates some operations that previously had to be done manually.

  Proposed change
  ===============

  The diagram below provides an overview of how Neutron and OneView will
  interact using the Neutron-OneView Mechanism Driver. OneView Mechanism
  Driver uses the python-oneviewclient to provide communication between
  Neutron and OneView through OneView's Rest API.

  Flows:
  ::

      +---------------------------+
      |                           |
      |      Neutron Server       |
      |     (with ML2 plugin)     |
      |                           |
      |   +-------------------+   |
      |   |      OneView      |   |  Ironic API  +--------+
      |   |     Mechanism     +---+--------------+ Ironic |
      |   |      Driver       |   |              +--------+
      +---+---------+---------+---+
                    |
          REST API  |
                    |
            +-------+-------+
            |    OneView    |
            +---------------+

  OneView needs information about OpenStack Neutron networks and ports in
  order to manage its virtual networks. To send this information from the
  Neutron service, a new ML2 mechanism driver is required to post the
  precommit data to the OneView service. The OneView mechanism driver keeps
  OneView updated with port and network changes from Neutron.

  The OneView mechanism driver implements the following Neutron events
  (a minimal driver skeleton is sketched after this list):

- Port create/update/delete for compute instances;
- Network create/update/delete.
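
  As a rough, hedged illustration of those hook points (this is only a
  sketch, not the proposed driver; the import path shown is the neutron-lib
  location of the ML2 driver API, and the comments mark where OneView calls
  would go):

    from neutron_lib.plugins.ml2 import api

    class OneViewMechanismDriver(api.MechanismDriver):
        """Skeleton only: shows where OneView calls would be wired in."""

        def initialize(self):
            # e.g. set up the python-oneviewclient connection here
            pass

        def create_network_postcommit(self, context):
            # reflect the new Neutron network (context.current) in OneView
            pass

        def delete_network_postcommit(self, context):
            pass

        def create_port_postcommit(self, context):
            # reflect the Neutron port on the matching OneView connection
            pass

        def delete_port_postcommit(self, context):
            pass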

  When new networks are created in Neutron, OneView might also create this
  network to allow system 

[Yahoo-eng-team] [Bug 1487357] Re: No PoolInUse Check when creating VIP

2016-04-27 Thread Doug Wiegley
Submitter triaged their own bug. :-)

** Changed in: neutron
   Status: New => Invalid

Title:
  No PoolInUse Check when creating VIP

Status in neutron:
  Invalid

Bug description:
  From the lbaasv1 API, it seems to me that many VIPs could map to the same
  pool. After reading the code, it turns out that is not the case, deducing
  from the code snippet below:
  class LoadBalancerPluginDb(loadbalancer.LoadBalancerPluginBase,
                             base_db.CommonDbMixin):
      ...
      def create_vip(self, context, vip):
          ...
          if v['pool_id']:
              # fetching pool again
              pool = self._get_resource(context, Pool, v['pool_id'])
              # (NOTE): we rely on the fact that pool didn't change between
              # above block and here
              vip_db['pool_id'] = v['pool_id']
              pool['vip_id'] = vip_db['id']
              # explicitly flush changes as we're outside any transaction
              context.session.flush()
          ...
      ...
  (neutron_lbaas/db/loadbalancer/loadbalancer_db.py)
  The relationship between vip and pool should be 1:1. If this is the case,
  the code should check whether pool['vip_id'] is null and throw a PoolInUse
  exception when it is not null.
  Am I missing anything?
  Thanks,


[Yahoo-eng-team] [Bug 1574476] Re: lbaasv2 session_persistence or session-persistence?

2016-04-26 Thread Doug Wiegley
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Confirmed

** Changed in: python-neutronclient
   Importance: Undecided => Low

** Tags added: low-hanging-fruit

Title:
  lbaasv2 session_persistence or session-persistence?

Status in python-neutronclient:
  Confirmed

Bug description:
  The problem is in the Kilo neutron-lbaas branch.

  When we create an LBaaS pool with --session_persistence, it is configured
  OK; when we create an LBaaS pool with --session-persistence, it fails.

  But when we update an LBaaS pool with either --session-persistence or
  --session_persistence, it updates OK.

  
  [root@opencos2 ~(keystone_admin)]#
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session-persistence type=dict type='SOURCE_IP'
  Invalid values_specs type=SOURCE_IP
  [root@opencos2 ~(keystone_admin)]#
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session_persistence type=dict type='SOURCE_IP'
  Created a new pool:
  +---------------------+------------------------------------------------+
  | Field               | Value                                          |
  +---------------------+------------------------------------------------+
  | admin_state_up      | True                                           |
  | description         |                                                |
  | healthmonitor_id    |                                                |
  | id                  | 64bed1f2-ff02-4b12-bdfa-1904079786be           |
  | lb_algorithm        | SOURCE_IP                                      |
  | listeners           | {"id": "162c70aa-175d-473a-b13a-e3c335a0a9e1"} |
  | members             |                                                |
  | name                | pool500-1                                      |
  | protocol            | HTTP                                           |
  | session_persistence | {"cookie_name": null, "type": "SOURCE_IP"}     |
  | tenant_id           | be58eaec789d44f296a65f96b944a9f5               |
  +---------------------+------------------------------------------------+
  [root@opencos2 ~(keystone_admin)]#
  [root@opencos2 ~(keystone_admin)]#
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session_persistence type=dict type='HTTP_COOKIE'
  Updated pool: pool500-1
  [root@opencos2 ~(keystone_admin)]#
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session-persistence type=dict type='SOURCE_IP'
  Updated pool: pool500-1
  [root@opencos2 ~(keystone_admin)]#
  [root@opencos2 ~(keystone_admin)]#


[Yahoo-eng-team] [Bug 1573949] Re: lbaas: better to close a socket explicitly rather than implicitly when they are garbage-collected

2016-04-26 Thread Doug Wiegley
This is being reported against an lbaas v1 driver, which is deprecated
and pending removal in Newton. If you want to submit a code change, a
reviewer might look at it, but we're not accepting bugs/blueprints/specs
for lbaas v1.

** Changed in: neutron
   Status: New => Won't Fix

Title:
  lbaas: better to close a socket explicitly rather than implicitly when
  they are garbage-collected

Status in neutron:
  Won't Fix

Bug description:
  https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py#L205 :

      def _get_stats_from_socket(self, socket_path, entity_type):
          try:
              s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
              s.connect(socket_path)
              s.send('show stat -1 %s -1\n' % entity_type)
              raw_stats = ''
              chunk_size = 1024
              while True:
                  chunk = s.recv(chunk_size)
                  raw_stats += chunk
                  if len(chunk) < chunk_size:
                      break

              return self._parse_stats(raw_stats)
          except socket.error as e:
              LOG.warning(_LW('Error while connecting to stats socket: %s'), e)
              return {}

  In this function, a socket connection is created but never closed
  explicitly. It would be better to close it once everything has been done.
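
  A hedged, standalone sketch of the explicit-close variant being asked for
  here (not the actual driver patch):

    import contextlib
    import socket

    def read_haproxy_stats(socket_path, entity_type):
        # contextlib.closing() guarantees the UNIX socket is closed even if
        # recv() raises, instead of waiting for garbage collection.
        with contextlib.closing(
                socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)) as s:
            s.connect(socket_path)
            s.sendall(('show stat -1 %s -1\n' % entity_type).encode())
            chunks = []
            while True:
                chunk = s.recv(1024)
                chunks.append(chunk)
                if len(chunk) < 1024:
                    break
            return b''.join(chunks)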


[Yahoo-eng-team] [Bug 1574985] Re: Update security group using Heat

2016-04-26 Thread Doug Wiegley
Names are not unique for SGs, so it depends on whether it's a PUT or a POST.
Was this intended in the heat template?

** Project changed: neutron => heat

Title:
  Update security group using Heat

Status in heat:
  New

Bug description:
  I created a security group using the Horizon dashboard. Then I created a
  heat template with the same security group name and some new rules so
  that my security group would get updated with the new rules. However, the
  heat template created a new security group instead of updating the
  existing one.

  Is this a bug or an unsupported feature ?

  Below is my yaml file

  heat_template_version: 2013-05-23

  description: Create a security group

  parameters:
    sec_group:
      type: string
      default: test-secgroup

  resources:
    security_group:
      type: OS::Neutron::SecurityGroup
      properties:
        name: { get_param: sec_group }
        rules:
          - remote_ip_prefix: 0.0.0.0/0
            protocol: tcp
            port_range_min: 22
            port_range_max: 22
          - remote_ip_prefix: 0.0.0.0/0
            protocol: icmp


[Yahoo-eng-team] [Bug 1575180] Re: logging does not work

2016-04-26 Thread Doug Wiegley
** Project changed: neutron => python-openstackclient

Title:
  logging does not work

Status in python-openstackclient:
  New

Bug description:
  I followed the link
  http://docs.openstack.org/developer/python-openstackclient/configuration.html#logging-settings
  to enable openstackclient logging; here are my clouds.yaml contents:
  juno@bgpvpn:~$ cat /etc/openstack/clouds.yaml 
  clouds:
devstack:
  auth:
auth_url: http://192.168.122.102:35357
password: blade123
project_domain_id: default
project_name: demo
user_domain_id: default
username: demo
  identity_api_version: '3'
  region_name: RegionOne
  volume_api_version: '2'
  operation_log:
logging: TRUE
file: /tmp/openstackclient_admin.log
level: debug
devstack-admin:
  auth:
auth_url: http://192.168.122.102:35357
password: blade123
project_domain_id: default
project_name: admin
user_domain_id: default
username: admin
  identity_api_version: '3'
  region_name: RegionOne
  volume_api_version: '2'
  operation_log:
logging: TRUE
file: /tmp/openstackclient_admin.log
level: debug
devstack-alt:
  auth:
auth_url: http://192.168.122.102:35357
password: blade123
project_domain_id: default
project_name: alt_demo
user_domain_id: default
username: alt_demo
  identity_api_version: '3'
  region_name: RegionOne
  volume_api_version: '2'
  juno@bgpvpn:~$ 
  Then I create a network:
  juno@bgpvpn:~$ openstack --os-cloud devstack-admin network create  juno
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2016-04-26 12:48:49+00:00            |
  | description               |                                      |
  | headers                   |                                      |
  | id                        | fe8a5d06-beb9-4d8a-974e-def14596bc0d |
  | ipv4_address_scope        | None                                 |
  | ipv6_address_scope        | None                                 |
  | mtu                       | 1450                                 |
  | name                      | juno                                 |
  | port_security_enabled     | True                                 |
  | project_id                | 4503c1d4f54b48cdb941f4fa43cf4916     |
  | provider:network_type     | vxlan                                |
  | provider:physical_network | None                                 |
  | provider:segmentation_id  | 1029                                 |
  | router_external           | Internal                             |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tags                      | []                                   |
  | updated_at                | 2016-04-26 12:48:49+00:00            |
  +---------------------------+--------------------------------------+

  But no logs are generated.
  juno@bgpvpn:~$ cat /tmp/openstackclient_admin.log
  cat: /tmp/openstackclient_admin.log: No such file or directory
  juno@bgpvpn:~$


[Yahoo-eng-team] [Bug 1573197] Re: [RFE] Neutron API enhancement for visibility into multi-segmented networks

2016-04-21 Thread Doug Wiegley
Isn't this a dup of this spec, which is in progress?
https://review.openstack.org/#/c/225384/22/specs/newton/routed-networks.rst

** Changed in: neutron
   Status: New => Invalid

Title:
  [RFE] Neutron API enhancement for visibility into multi-segmented
  networks

Status in neutron:
  Invalid

Bug description:
  Neutron networks are, by default, assumed to be single segmented L2
  domains, represented by a single segmentation ID (e.g VLAN ID).
  Current neutron API (neutron net-show) works well with this model.
  However, with the introduction of HPB, this assumption is not true
  anymore. Networks are now multi-segmented. A given network could have
  anywhere from 3 to N number of segments depending upon the
  breadth/size of the data center topology. This will be true with the
  implementation of routed networks as well.

  In general, the segments, in multi-segmented networks, will be
  dynamically created.  As mentioned earlier, the number of these
  segments will grow and shrink dynamically representing the breadth of
  data center topology. Therefore, at the very least, admins would like
  to have visibility into these segments - e.g. which segmentation
  type/id is consumed in which segment of the network.

  Vendors and operators are forced to come up with their own hacks to get
  such visibility.
  This RFE proposes that we enhance neutron API to address this visibility 
issue in a vendor/implementation agnostic way - by either enhancing "neutron 
net-show" or by introducing additional commands such as "neutron 
net-segments-list/neutron net-segment-show". 

  This capability is needed for Neutron-Manila integration as well.
  Manila requires visibility into the segmentation IDs used in specific
  segments of a network. Please see Manila use case here -
  https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support


[Yahoo-eng-team] [Bug 1572867] Re: DEVICE_OWNER_PREFIXES not be defined in anywhere

2016-04-21 Thread Doug Wiegley
It's defined in neutron_lib.constants, and by deprecation link via
neutron.common.constants.

** Changed in: neutron
   Status: New => Invalid

Title:
  DEVICE_OWNER_PREFIXES not  be defined in anywhere

Status in neutron:
  Invalid

Bug description:
  In neutron/objects/qos/rule.py, the constant DEVICE_OWNER_PREFIXES is not
  defined anywhere.


[Yahoo-eng-team] [Bug 1572783] Re: Openswan/Libreswan: Check config changes before restart

2016-04-20 Thread Doug Wiegley
There is no user-visible change here beyond the connection not being
dropped, and no config or API changes. This is not DocImpact.

** Changed in: neutron
   Status: New => Invalid

Title:
  Openswan/Libreswan: Check config changes before restart

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/306899
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 814e3f0c7d7bd8b44be61d8badf127b1c60debbc
  Author: nick.zhuyj 
  Date:   Sun Apr 17 21:59:46 2016 -0500

  Openswan/Libreswan: Check config changes before restart
  
  Currently, when neutron-vpn-agent restarts, all the pluto processes in
  the router namespaces are restarted too. This is not actually required
  and impacts the VPN traffic. In this patch, we keep a backup of
  ipsec.conf and ipsec.secrets and compare the configurations on restart;
  if nothing has changed, the restart can be skipped.
  
  Note: this change is DocImpact
  
  Change-Id: I5a7fae909cb56721bd7e4d42999356c7f7464358
  Closes-Bug: #1571455


[Yahoo-eng-team] [Bug 1572655] Re: Neutron-LBaaS v2: Redundant protocols for Listener and Pool

2016-04-20 Thread Doug Wiegley
Just one place won't work. I get that if it's pass-through, just one is
fine. But if you're terminating HTTPS, you might want cleartext back to
the members to offload SSL. Or not, if you're just being a sneaky
middleman.

** Changed in: neutron
   Status: New => Won't Fix

Title:
  Neutron-LBaaS v2:  Redundant protocols for Listener and Pool

Status in neutron:
  Won't Fix

Bug description:
  (This is more of a feature request than a bug.)

  Examine the available protocols for Listener and Pool.

  Listener:  TCP,   HTTP,   HTTPS,  TERMINATED_HTTPS
  Pool: TCP,HTTP,   HTTPS

  This combination may be redundant:
  Listener -> Pool
  HTTPS -> HTTP
  TERMINATED_HTTPS -> HTTP

  It becomes quite complicated that we can create different combinations
  of protocols for pool and listener.

  I suggest having just one place to define a protocol - either in the
  pool or in the listener.


[Yahoo-eng-team] [Bug 1572342] Re: Neutron-LBaaS v2: LB stuck in PENDING_UPDATE when adding a member to a pool with no listeners

2016-04-19 Thread Doug Wiegley
Not an issue with other drivers, moving.

** Project changed: neutron => octavia

Title:
  Neutron-LBaaS v2:  LB stuck in PENDING_UPDATE when adding a member to
  a pool with no listeners

Status in octavia:
  New

Bug description:
  1.  Create a LB.
  2.  Create a Pool. (Do NOT create a listener.)
  3.  Add a member to the Pool.
  4.  Check LB status.

  Result: LB is stuck in provision_status: "PENDING_UPDATE" for more
  than an hour.

  Expected:  Either throw an error to the user when adding a member to a
  pool that has no listener or make it such that the state transitions
  quickly.

  note: I've tested this with a pool that has a listener.  Adding a
  member is relatively fast (< 1 minute).


[Yahoo-eng-team] [Bug 1560575] Re: Release networking-infoblox 2.0.1

2016-04-19 Thread Doug Wiegley
** Changed in: neutron
   Status: New => Fix Released

Title:
  Release networking-infoblox 2.0.1

Status in networking-infoblox:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please release version 2.0.1 of networking-infoblox:

  master @ HEAD (3771480eed656d08b1c48326bc17bf52dbcb1b69)


[Yahoo-eng-team] [Bug 1570892] Re: Openswan/Libreswan: support sha256 for auth algorithm

2016-04-19 Thread Doug Wiegley
** No longer affects: neutron

Title:
  Openswan/Libreswan: support sha256 for auth algorithm

Status in openstack-api-site:
  Confirmed

Bug description:
  https://review.openstack.org/303684
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit b73e1002555cfa70ccfea8ffe685672c0b679212
  Author: nick.zhuyj 
  Date:   Fri Apr 8 23:48:33 2016 -0500

  Openswan/Libreswan: support sha256 for auth algorithm
  
  Add support for sha256, as it is a requirement from a customer.
  Currently, there are no ike/esp fields in the strongswan ipsec.conf
  template, so sha256 is used by default. But for Openswan the auth
  algorithm is taken from the configuration, so only sha1 is supported.
  This patch enables Openswan/Libreswan to support sha256.
  
  Note: this change is DocImpact and APIImpact
  
  Change-Id: I02c80ec3494eb0edef2fdaa5d21ca0c3bbcacac2
  Closes-Bug: #1567846


[Yahoo-eng-team] [Bug 1571814] Re: Add an option for WSGI pool size

2016-04-19 Thread Doug Wiegley
** Project changed: neutron => openstack-manuals

Title:
  Add an option for WSGI pool size

Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/306187
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit eee9e58ed258a48c69effef121f55fdaa5b68bd6
  Author: Mike Bayer 
  Date:   Tue Feb 9 13:10:57 2016 -0500

  Add an option for WSGI pool size
  
  Neutron currently hardcodes the number of
  greenlets used to process requests in a process to 1000.
  As detailed in
  
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
  
  this can cause requests to wait within one process
  for available database connection while other processes
  remain available.
  
  By adding a wsgi_default_pool_size option functionally
  identical to that of Nova, we can lower the number of
  greenlets per process to be more in line with a typical
  max database connection pool size.
  
  DocImpact: a previously unused configuration value
 wsgi_default_pool_size is now used to affect
 the number of greenlets used by the server. The
 default number of greenlets also changes from 1000
 to 100.
  Change-Id: I94cd2f9262e0f330cf006b40bb3c0071086e5d71
  (cherry picked from commit 9d573387f1e33ce85269d3ed9be501717eed4807)
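
  Roughly what the option controls, shown as an illustrative sketch only
  (the option name and the old/new defaults come from the commit message
  above; everything else is made up for the example):

    import eventlet

    wsgi_default_pool_size = 100          # new default (was 1000)
    pool = eventlet.GreenPool(size=wsgi_default_pool_size)

    def handle(request_id):
        return request_id

    for i in range(5):
        pool.spawn(handle, i)             # each request takes a greenlet from the pool
    pool.waitall()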


[Yahoo-eng-team] [Bug 1565074] Re: Octavia duplicate config option bind_host with Neutron

2016-04-19 Thread Doug Wiegley
Neutron and Octavia are separate services; they don't have to match.
What am I missing?

** Changed in: neutron
   Status: New => Invalid

** Changed in: octavia
   Status: New => Invalid

Title:
  Octavia duplicate config option bind_host with Neutron

Status in neutron:
  Invalid
Status in octavia:
  Invalid

Bug description:
  As in Octavia stable/mitaka branch,
  
http://git.openstack.org/cgit/openstack/octavia/tree/octavia/common/config.py?h=stable/mitaka

  the cfgOpt is set to be "bind_host" IPOpt() type.

  But in Neutron stable/mitaka,
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/common/config.py?h=stable/mitaka

  The cfgOpt "bind_host" remains the old StrOpt() type.

  Oslo.config will raise the Duplicate Option exception.

  We may want to either change back to StrOpt() or have Neutron use the
  new IPOpt() type.
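
  A minimal reproduction of the clash, assuming both definitions end up
  registered on the same ConfigOpts object (which is the situation the
  report describes):

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('bind_host')])      # neutron-style definition
    # Registering the same name with a different option type is rejected:
    try:
        CONF.register_opts([cfg.IPOpt('bind_host')])   # octavia-style definition
    except cfg.DuplicateOptError as exc:
        print(exc)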


[Yahoo-eng-team] [Bug 1572222] Re: Neutron-LBaaS v2: Deleting pool that has members from load balancer takes more than 1 hour

2016-04-19 Thread Doug Wiegley
The neutron-lbaas API has zero delays, so I'm assuming this is an
octavia issue.

** Project changed: neutron => octavia

Title:
  Neutron-LBaaS v2: Deleting pool that has members from load balancer
  takes more than 1 hour

Status in octavia:
  New

Bug description:
  As an admin user:

  1.  Create a Load balancer.
  2.  Create a pool for the load balancer.   (do not create a listener)
  3.  Add a member to the pool.  (make member tenant id different than admin)
  4.  Delete the pool.  (make pool tenant id different than admin)
  5.  Check Load balancer status.

  Result:  Load balancer provisioning_status will be set to
  "PENDING_UPDATE" for more than 1 hour.

  Expected Result:  Load balancer provisioning_status should be
  immediately set to ACTIVE or "PENDING_UPDATE" should be a relatively
  short wait.

  note:   Deleting a pool from a load balancer, which has no members, is
  relatively fast.  It seems that when the pool has members, the delete
  pool takes a significant amount of time.


[Yahoo-eng-team] [Bug 1571907] Re: Neutron-LBaaS v2: Invalid tenant id accepted on "add member to pool"

2016-04-19 Thread Doug Wiegley
This was discussed in a neutron meeting about six months ago, with the
decision to not validate the tenant. I don't agree, but that was the
decision.

** Changed in: neutron
   Status: New => Won't Fix

** Tags added: lbaas

Title:
  Neutron-LBaaS v2: Invalid tenant id accepted on "add member to pool"

Status in neutron:
  Won't Fix

Bug description:
  1.  Create load balancer as an admin.
  2.  Create pool as an admin.
  3.  As an admin, add member to pool but using an invalid tenant id. (e.g., 
"$232!$pw" )

  Result:   API returns 201
  Expected:  API should return BadRequest 400

  Log:
  2016-04-19 00:51:53,500 3286 INFO [tempest.lib.common.rest_client] 
Request (MembersTestAdmin:test_create_member_invalid_tenant_id): 201 POST 
http://127.0.0.1:9696/v2.0/lbaas/pools/1bd85f26-1415-46f0-9a46-3630263fab5b/members
 0.625s
  2016-04-19 00:51:53,500 3286 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Accept': 'application/json', 'X-Auth-Token': '', 
'Content-Type': 'application/json'}
  Body: {"member": {"tenant_id": "$232!$pw", "address": "10.0.0.8", 
"subnet_id": "c0239aee-c594-42a8-beac-fc6faf980e21", "protocol_port": 80}}
  Response - Headers: {'content-type': 'application/json', 'date': 
'Tue, 19 Apr 2016 00:51:53 GMT', 'x-openstack-request-id': 
'req-89a0cc84-8ff5-401c-973b-b7b104687e51', 'content-length': '229', 'status': 
'201', 'connection': 'close'}
  Body: {"member": {"name": "", "weight": 1, "admin_state_up": 
true, "subnet_id": "c0239aee-c594-42a8-beac-fc6faf980e21", "tenant_id": 
"$232!$pw", "address": "10.0.0.8", "protocol_port": 80, "id": 
"597d46fd-1de8-41a5-93f5-cda5c84838e3"}}


[Yahoo-eng-team] [Bug 1571900] Re: Neutron-LBaaS v2: "Add member to pool" and "Create new health monitor" should be consistent

2016-04-19 Thread Doug Wiegley
You're right, but the ship has sailed, and backwards compat is king.  If
you can come up with a way of supporting the old way and a new
consistent way at the same time, please re-open as an RFE.

** Changed in: neutron
   Status: New => Won't Fix

** Tags added: lbaas

Title:
  Neutron-LBaaS v2:  "Add member to pool" and "Create new health
  monitor" should be consistent

Status in neutron:
  Won't Fix

Bug description:
  Since health monitors and members are sub-resources of pools, they
  should both be created/retrieved/updated/deleted the same way.

  See: 
  https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Create_a_Health_Monitor
  vs.
  
https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Add_a_New_Member_to_a_Pool

  
  Result: In members, the pool_id is part of the URI.  However, in 
health_monitors, it becomes an attribute of the health_monitor object.

  Expected:  CRUD should be similar for health monitors and members.


[Yahoo-eng-team] [Bug 1571990] Re: Getting Error: Invalid service catalog service: network from Horizon, when i click on Network services

2016-04-19 Thread Doug Wiegley
Service catalog errors point to an issue in keystone, not neutron. Take
a look there.

** Changed in: neutron
   Status: New => Invalid

Title:
  Getting Error: Invalid service catalog service: network  from Horizon,
  when i click on Network services

Status in neutron:
  Invalid

Bug description:
  When checked both the neutron-dhcp-agent.service and
  neutron-l3-agent.service are running and in active state

  [root@controller ~]# systemctl status neutron-dhcp-agent.service
  ● neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
 Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; 
enabled; vendor preset: disabled)
 Active: active (running) since Mon 2016-04-18 05:43:05 EDT; 20h ago
   Main PID: 1389 (neutron-dhcp-ag)
 CGroup: /system.slice/neutron-dhcp-agent.service
 └─1389 /usr/bin/python2 /usr/bin/neutron-dhcp-agent --config-file 
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf 
--config-fi...

  Apr 18 05:43:05 controller systemd[1]: Started OpenStack Neutron DHCP Agent.
  Apr 18 05:43:05 controller systemd[1]: Starting OpenStack Neutron DHCP 
Agent...
  Apr 18 05:43:16 controller neutron-dhcp-agent[1389]: No handlers could be 
found for logger "oslo_config.cfg"
  [root@controller ~]# systemctl status neutron-l3-agent.service
  ● neutron-l3-agent.service - OpenStack Neutron Layer 3 Agent
 Loaded: loaded (/usr/lib/systemd/system/neutron-l3-agent.service; enabled; 
vendor preset: disabled)
 Active: active (running) since Mon 2016-04-18 05:43:05 EDT; 20h ago
   Main PID: 1399 (neutron-l3-agen)
 CGroup: /system.slice/neutron-l3-agent.service
 └─1399 /usr/bin/python2 /usr/bin/neutron-l3-agent --config-file 
/usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent 
--config-fil...

  Apr 18 05:43:05 controller systemd[1]: Started OpenStack Neutron Layer 3 
Agent.
  Apr 18 05:43:05 controller systemd[1]: Starting OpenStack Neutron Layer 3 
Agent...
  Apr 18 05:43:15 controller neutron-l3-agent[1399]: No handlers could be found 
for logger "oslo_config.cfg"

  I am getting an error when I use Horizon as the admin user and go to
  "Project-->Network-->Networks"; the error message is "Error: Invalid
  service catalog service: network".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548604] Re: Can not modify default settings of lbaas haproxy template

2016-04-19 Thread Doug Wiegley
This is not a configurable item; you need to modify a special package if
you want this changed, in both places listed above.  If what you want is
a config toggle/API added for this, please reopen as an RFE.

** Changed in: neutron
   Status: New => Invalid

** Tags added: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548604

Title:
  Can not modify default settings of lbaas haproxy template

Status in neutron:
  Invalid

Bug description:
  I've changed the haproxy base template jinja file, setting the value of the
  "timeout connect" option in the "defaults" entry from 5000 to 4000.

  the file is located at 
  
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_base.j2.

  '''
  . 
  defaults
  log global
  retries 3
  option redispatch
  timeout connect 4000
  timeout client 5
  timeout server 5
  .
  '''

  Then I restarted the neutron-server and neutron-lbaas-agent services.

  Then I submitted a new lbaas create job. It generated the haproxy config
  file, /var/lib/neutron/lbaas/2a320b6d-
  bc86-4304-ab89-98438377ac83/conf,

  and the "timeout connect" value still shows 5000.

  '''
  . 
  defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 5
timeout server 5
  .
  '''

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551282] Re: devstack launches extra instance of lbaas agent

2016-04-19 Thread Doug Wiegley
neutron-legacy no longer has lbaas code.

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551282

Title:
  devstack launches extra instance of lbaas agent

Status in neutron:
  Fix Committed

Bug description:
  When using the lbaas devstack plugin, two lbaas agents will be launched:
  one by devstack neutron-legacy, and another by the neutron-lbaas devstack plugin.

  enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
  ENABLED_SERVICES+=,q-lbaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544729] [NEW] No grenade coverage for neutron-lbaas/octavia

2016-02-11 Thread Doug Wiegley
Public bug reported:

Stock neutron grenade no longer covers this, so we need a grenade plugin
for neutron-lbaas.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544729

Title:
  No grenade coverage for neutron-lbaas/octavia

Status in neutron:
  New

Bug description:
  Stock neutron grenade no longer covers this, so we need a grenade
  plugin for neutron-lbaas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541670] [NEW] lbaas tests gate on dib

2016-02-03 Thread Doug Wiegley
Public bug reported:

... but dib isn't gated anywhere in real-time like octavia is, since
nodepool uses async-built images. This causes octavia to be the first thing
to break on any dib breakage. This is too brittle.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541670

Title:
  lbaas tests gate on dib

Status in neutron:
  New

Bug description:
  ... but dib isn't gated anywhere in real-time like octavia is, since
  nodepool uses async-built images. This causes octavia to be the first thing
  to break on any dib breakage. This is too brittle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536852] [NEW] lbaas tempest code needs work

2016-01-21 Thread Doug Wiegley
Public bug reported:

At a minimum, carry forward amuller's work:

https://review.openstack.org/#/c/269941/
https://review.openstack.org/#/c/269771/

In addition, we should make our tempest tests runnable as a tempest
"plug in".

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536852

Title:
  lbaas tempest code needs work

Status in neutron:
  New

Bug description:
  At a minimum, carry forward amuller's work:

  https://review.openstack.org/#/c/269941/
  https://review.openstack.org/#/c/269771/

  In addition, we should make our tempest tests runnable as a tempest
  "plug in".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1536852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508243] Re: Store Private Key Passphrase in Neutron-LBaaS for TLS Terminations

2016-01-12 Thread Doug Wiegley
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508243

Title:
  Store Private Key Passphrase in Neutron-LBaaS for TLS Terminations

Status in neutron:
  Won't Fix

Bug description:
  The current workflow for TLS Termination on loadbalancers has a couple
  of interesting security vulnerabilities that need to be addressed
  somehow. The solution I propose is to encourage the use of passphrase
  encryption on private keys, and to store that passphrase in Neutron-
  LBaaS along with the Barbican href, instead of inside Barbican.

  Spec: https://review.openstack.org/237807

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483245] Re: LBaaS v2 plugin does not catch driver's LBConfigurationUnsupported exception

2015-12-03 Thread Doug Wiegley
At the moment we don't restrict/advise drivers on what exceptions they
throw, hence the generic error. And we don't currently have plans to
change that. What kind of use cases are you thinking of?
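
To make the current behavior concrete, here is a hypothetical sketch of what a
narrower catch could look like; this is not the actual neutron-lbaas plugin
code, which only special-cases agent exceptions and logs everything else
generically:

    import logging

    LOG = logging.getLogger(__name__)


    class LBConfigurationUnsupported(Exception):
        """Stands in for the driver-specific exception discussed here."""


    def call_driver_operation(context, driver_method, db_entity):
        try:
            driver_method(context, db_entity)
        except LBConfigurationUnsupported:
            # a known driver exception could be surfaced with a specific
            # message instead of the generic one
            LOG.error("Driver reported the configuration as unsupported")
            raise
        except Exception:
            LOG.exception("There was an error in the driver")
            raise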

** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483245

Title:
  LBaaS v2 plugin does not catch driver's LBConfigurationUnsupported
  exception

Status in neutron:
  Opinion

Bug description:
  As of now, the LBaaS v2 plugin does not explicitly catch the
  LBConfigurationUnsupported exception that can potentially be thrown by a
  specific driver's method.

  The _call_driver_operation method of the plugin explicitly catches agent
  exceptions only; any specific exception from the driver (like
  LBConfigurationUnsupported) is caught generically with a LOG message saying
  "There was an error in the driver".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487972] Re: LbbasV2+DVR- we see ERROR message in lbaas log every minute

2015-12-03 Thread Doug Wiegley
The agent/haproxy driver is not the reference. Does this occur with
octavia?

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487972

Title:
  LbbasV2+DVR- we see ERROR message in lbaas log every minute

Status in neutron:
  Won't Fix

Bug description:
  After configuring LBaaSv2 with DVR we see ERRORs every minute in the
  LBaaS log.

  Reproducible: 100%
  Steps to reproduce:
  1. AIO + compute node setup with lbaasV2 enabled - verify there is an active
lbaasv2 agent.
  2. Configure DVR on the setup and restart neutron - check the lbaas logs.

  IF NOT REPRODUCIBLE, BEFORE STEP 2 CREATE NETS, SUBNETS AND A ROUTER AND
  THEN DELETE THEM. THEN EXECUTE STEP 2.

  > IF WE FIRST ENABLE DVR AND THEN CONFIGURE LBAAS TO V2 WE DO NOT
  SEE THOSE ERRORS.

  The log:
  2015-08-24 09:12:38.779 16280 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve 
ready devices
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 152, in sync_state
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager ready_instances = 
set(self.plugin_rpc.get_ready_devices())
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py",
 line 36, in get_ready_devices
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return 
cctxt.call(self.context, 'get_ready_devices', host=self.host)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
339, in _send
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager result = 
self._waiter.wait(msg_id, timeout)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
243, in wait
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
149, in get
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 'to message ID %s' 
% msg_id)
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed 
out waiting for a reply to message ID 770300260dc94a218863238b5b49bbc8
  2015-08-24 09:12:38.779 16280 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager
  2015-08-24 09:12:38.780 16280 DEBUG neutron.openstack.common.periodic_task 
[-] Running periodic task LbaasAgentManager.collect_stats run_periodic_tasks 
/usr/lib/python2.7/site-packages/neutron/openstack/common/periodic_task.py:219
  2015-08-24 09:12:38.780 16280 WARNING neutron.openstack.common.loopingcall 
[-] task > run outlasted interval by 50.01 sec
  2015-08-24 09:12:39.011 16292 DEBUG oslo_messaging._drivers.amqp [-] 
UNIQUE_ID is c50ffdaafbe7459ebc9be07c4e1ea068. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:258
  2015-08-24 09:12:40.252 16292 DEBUG neutron.openstack.common.periodic_task 
[-] Running periodic task LbaasAgentManager.collect_stats run_periodic_tasks 

[Yahoo-eng-team] [Bug 1515454] Re: In LBaaS, DB seems to be updated, even though the actual operation may fail due to driver error

2015-12-03 Thread Doug Wiegley
The way the lbaas model works is that the object is created, the driver is
called, and the driver puts it ACTIVE when it's done, flags an error, or
leaves it pending in a catastrophic failure. In that respect, the above
behavior is working as designed.

If you want to change that to an object rollback, please submit that as an
overall RFE covering how all lbaas objects work. The all-in-one creation
call may also avoid this issue.
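
A minimal, self-contained sketch of that status flow (names and the in-memory
"DB" are purely illustrative, not neutron-lbaas code):

    class DriverError(Exception):
        pass

    db = {}  # member id -> provisioning status

    def create_member(member_id, driver_create):
        db[member_id] = "PENDING_CREATE"   # the row exists before the driver runs
        try:
            driver_create(member_id)       # driver call, asynchronous in practice
        except DriverError:
            db[member_id] = "ERROR"        # flagged, but not rolled back
            raise
        db[member_id] = "ACTIVE"           # driver marks it ACTIVE when done

    def failing_driver(member_id):
        raise DriverError("backend rejected the configuration")

    try:
        create_member("m-1", failing_driver)
    except DriverError:
        pass

    print(db)  # {'m-1': 'ERROR'}; the object still exists and shows up in listings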

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515454

Title:
  In LBaaS,DB seems to be updated, even though the actual  operation may
  fail due to driver error

Status in neutron:
  Won't Fix

Bug description:
  High Level Description:
  While working on LBaaS v2, I found a somewhat strange behavior (described below).

  Pre-conditions: Enable the LBaaS v2 extension
  Step-by-step reproduction:
  a) Verify all the members in the pool
  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-list testpool
  
+--+--+---++--++
  | id   | address  | protocol_port | 
weight | subnet_id| admin_state_up |
  
+--+--+---++--++
  | 2644b225-53df-4cdf-9ab3-dea5da1d402c | 172.24.4.120 |90 |  
1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True   |
  
+--+--+---++--++
  b) Create a new member; it fails due to a driver error

  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-create --subnet public-subnet --address 172.24.4.121 
--protocol-port 90 testpool
  An error happened in the driver

  c) List the members in the specified pool
  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-list testpool
  
+--+--+---++--++
  | id   | address  | protocol_port | 
weight | subnet_id| admin_state_up |
  
+--+--+---++--++
  | 2644b225-53df-4cdf-9ab3-dea5da1d402c | 172.24.4.120 |90 |  
1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True   |
  | 39d1017e-92ca-40fd-b02d-739189a4b8df | 172.24.4.121 |90 |  
1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True   |
  
+--+--+---++--++
  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$

  Expected Output: If a driver error occurs, the new member should not be added.
  Actual Output: The new member whose creation failed due to the driver error was
still added to the system, which is incorrect behavior.

  Version: Ubuntu 14.04, git for Neutron Client: 
3d736107f97c27a35cff2d7ed6c041521be5ab03
  git for neutron-lbaas:
  321da8f6263d46bf059163bcf7fd005cf68601bd

  Environment: Devstack installation of an All-In-One single node, with FWaaS, 
LBaaSv2 and octavia enabled.
  Perceived Severity: High (this is negative behaviour, because an inoperable
member is created and exists in the DB)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522142] [NEW] pylint needs to be re-enabled in neutron-lbaas

2015-12-02 Thread Doug Wiegley
Public bug reported:

Disabled when requirements bombed. Needs to be re-enabled correctly, or
the team needs to decide to nuke it.

** Affects: neutron
 Importance: Medium
 Assignee: Adam Harwell (adam-harwell)
 Status: Confirmed


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522142

Title:
  pylint needs to be re-enabled in neutron-lbaas

Status in neutron:
  Confirmed

Bug description:
  Disabled when requirements bombed. Needs to be re-enabled correctly,
  or the team needs to decide to nuke it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508243] Re: Store Private Key Passphrase in Neutron-LBaaS for TLS Terminations

2015-12-01 Thread Doug Wiegley
Isn't this a failure of the global lbaas creds to barbican? Lbaas
becomes a trusted source since it has global access, and that seems the
security fail, not passwords that we'd then have to store in a db
(double security fail.)

** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508243

Title:
  Store Private Key Passphrase in Neutron-LBaaS for TLS Terminations

Status in neutron:
  Opinion

Bug description:
  The current workflow for TLS Termination on loadbalancers has a couple
  of interesting security vulnerabilities that need to be addressed
  somehow. The solution I propose is to encourage the use of passphrase
  encryption on private keys, and to store that passphrase in Neutron-
  LBaaS along with the Barbican href, instead of inside Barbican.

  Spec: https://review.openstack.org/237807

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519493] [NEW] oslo_i18n cleanup needed

2015-11-24 Thread Doug Wiegley
Public bug reported:

As per the oslo_i18n documentation, neutron/i18n.py should be an
internal only module, named _i18n.py. Stuff needed:

- Rename file.
- Add i18n.py with debtcollector references, warning that each repo needs its 
own and should stop using this one.
- Begin migrating subprojects away from shared module.
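
For reference, the private module that oslo_i18n documents looks roughly like
this (a sketch based on the oslo_i18n usage docs; which marker functions a
repo keeps is up to that repo):

    # neutron/_i18n.py
    import oslo_i18n

    DOMAIN = "neutron"

    _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

    # The primary translation function using the well-known name "_"
    _ = _translators.primary

    # Translators for log levels
    _LI = _translators.log_info
    _LW = _translators.log_warning
    _LE = _translators.log_error
    _LC = _translators.log_critical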

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Doug Wiegley (dougwig)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519493

Title:
  oslo_i18n cleanup needed

Status in neutron:
  New

Bug description:
  As per the oslo_i18n documentation, neutron/i18n.py should be an
  internal only module, named _i18n.py. Stuff needed:

  - Rename file.
  - Add i18n.py with debtcollector references, warning that each repo needs its 
own and should stop using this one.
  - Begin migrating subprojects away from shared module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1519493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440221] [NEW] need ipv6 tests for lbaasv2

2015-04-03 Thread Doug Wiegley
Public bug reported:

All of our tests are ipv4, but we should support v6 at this point. Let's
test it.

** Affects: neutron
 Importance: Undecided
 Assignee: Franklin Naval (franknaval)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Franklin Naval (franknaval)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440221

Title:
  need ipv6 tests for lbaasv2

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  All of our tests are ipv4, but we should support v6 at this point.
  Let's test it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418704] [NEW] neutron l3_init unit test requires linux

2015-02-05 Thread Doug Wiegley
Public bug reported:

This test bombs on macs:
neutron.tests.unit.test_linux_interface.TestABCDriver.test_l3_init_with_ipv6


Captured traceback:
~~~
Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File neutron/tests/unit/test_linux_interface.py, line 138, in 
test_l3_init_with_ipv6
mock.call().addr.delete(6, '2001:db8:a::123/64')])
  File 
/Users/dougw/work/a10/neutron/.tox/py27/lib/python2.7/site-packages/mock.py, 
line 863, in assert_has_calls
'Actual: %r' % (calls, self.mock_calls)
AssertionError: Calls not found.
Expected: [call('tap0', 'sudo', 
namespace='12345678-1234-5678-90ab-ba0987654321'), 
call().addr.list(scope='global', filters=['permanent']), call().addr.add(6, 
'2001:db8:a::124/64', '2001:db8:a:0::::'), 
call().addr.delete(6, '2001:db8:a::123/64')]
Actual: [call(),
 call('tap0', 'sudo', namespace='12345678-1234-5678-90ab-ba0987654321'),
 call().addr.list(scope='global', filters=['permanent']),
 call().addr.add(6, '2001:db8:a::124/64', 
'2001:db8:a:::::'),
 call().addr.delete(6, '2001:db8:a::123/64'),
 call().route.list_onlink_routes(),
 call().route.list_onlink_routes().__iter__()]

Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout


A unit test should not require ip namespaces and linux commands.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1418704

Title:
  neutron l3_init unit test requires linux

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This test bombs on macs:
  neutron.tests.unit.test_linux_interface.TestABCDriver.test_l3_init_with_ipv6

  
  Captured traceback:
  ~~~
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout
  
  Traceback (most recent call last):
File neutron/tests/unit/test_linux_interface.py, line 138, in 
test_l3_init_with_ipv6
  mock.call().addr.delete(6, '2001:db8:a::123/64')])
File 
/Users/dougw/work/a10/neutron/.tox/py27/lib/python2.7/site-packages/mock.py, 
line 863, in assert_has_calls
  'Actual: %r' % (calls, self.mock_calls)
  AssertionError: Calls not found.
  Expected: [call('tap0', 'sudo', 
namespace='12345678-1234-5678-90ab-ba0987654321'), 
call().addr.list(scope='global', filters=['permanent']), call().addr.add(6, 
'2001:db8:a::124/64', '2001:db8:a:0::::'), 
call().addr.delete(6, '2001:db8:a::123/64')]
  Actual: [call(),
   call('tap0', 'sudo', namespace='12345678-1234-5678-90ab-ba0987654321'),
   call().addr.list(scope='global', filters=['permanent']),
   call().addr.add(6, '2001:db8:a::124/64', 
'2001:db8:a:::::'),
   call().addr.delete(6, '2001:db8:a::123/64'),
   call().route.list_onlink_routes(),
   call().route.list_onlink_routes().__iter__()]
  
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  
  A unit test should not require ip namespaces and linux commands.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1418704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418240] [NEW] we might need a way to switch lbaas v1/v2 commands in neutron client

2015-02-04 Thread Doug Wiegley
Public bug reported:

Referencing this review: https://review.openstack.org/#/c/111475/

This comment:
An open issue remains about the two command sets being confusing. Appending v2 
will be ugly, and will persist even when v1 is gone in a cycle or two. I can't 
think of any great solutions, but how about:
1. If /etc/neutron/neutron.conf exists, show only lb command sets for loaded
plugins (loadbalancer shows lb-*, loadbalancerv2 shows lbaas-*).
2. Look in the environment variable LBAAS_VERSION for "", "v1", "v2", or
"v1,v2", and show the command sets based on that (if they ask for both, they
get both.)
Other ideas? Or prefixes that aren't confusing and not ugly?
Both v1 and v2 sharing lb-* *at the same time* is somewhat infeasible/gross.

We did one of the suggestions, which was to call out lbaas v2 in the
help, but we may need to do one of the above to make it even easier.
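
A rough sketch of option 2 above (purely hypothetical; python-neutronclient
never grew this switch, and the command tables are stand-ins):

    import os

    LBAAS_V1_COMMANDS = {"lb-pool-list": "...", "lb-member-create": "..."}
    LBAAS_V2_COMMANDS = {"lbaas-loadbalancer-list": "...",
                         "lbaas-member-create": "..."}

    def lbaas_commands():
        # default to showing both command sets when the variable is unset
        versions = os.environ.get("LBAAS_VERSION", "v1,v2").split(",")
        commands = {}
        if "v1" in versions:
            commands.update(LBAAS_V1_COMMANDS)
        if "v2" in versions:
            commands.update(LBAAS_V2_COMMANDS)
        return commands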

** Affects: neutron
 Importance: Low
 Assignee: Doug Wiegley (dougwig)
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1418240

Title:
  we might need a way to switch lbaas v1/v2 commands in neutron client

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Referencing this review: https://review.openstack.org/#/c/111475/

  This comment:
  An open issue remains about the two command sets being confusing. Appending 
v2 will be ugly, and will persist even when v1 is gone in a cycle or two. I 
can't think of any great solutions, but how about:
  1. If /etc/neutron/neutron.conf exists, show only lb command sets for loaded
plugins (loadbalancer shows lb-*, loadbalancerv2 shows lbaas-*).
  2. Look in the environment variable LBAAS_VERSION for "", "v1", "v2", or
"v1,v2", and show the command sets based on that (if they ask for both, they
get both.)
  Other ideas? Or prefixes that aren't confusing and not ugly?
  Both v1 and v2 sharing lb-* *at the same time* is somewhat infeasible/gross.

  We did one of the suggestions, which was to call out lbaas v2 in the
  help, but we may need to do one of the above to make it even easier.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1418240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400370] [NEW] Need to split advanced services out of neutron (placeholder bug to skip tempest tests temporarily)

2014-12-08 Thread Doug Wiegley
Public bug reported:

Placeholder

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Doug Wiegley (dougwig)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1400370

Title:
  Need to split advanced services out of neutron (placeholder bug to
  skip tempest tests temporarily)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Placeholder

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1400370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391858] [NEW] run_tests.sh broken on mac OS X

2014-11-12 Thread Doug Wiegley
Public bug reported:

(Please assign this to me, as I have
https://review.openstack.org/#/c/106237/ out for review.)

$ ./run_tests.sh
grep: repetition-operator operand invalid
usage: dirname path
Running `tools/with_venv.sh python -m neutron.openstack.common.lockutils python 
setup.py testr --slowest --testr-args='--subunit  '`
Traceback (most recent call last):
  File 
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py,
 line 162, in _run_module_as_main
__main__, fname, loader, pkg_name)
  File 
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py,
 line 72, in _run_code
exec code in run_globals
  File /Users/dougw/work/a10/neutron/neutron/openstack/common/lockutils.py, 
line 31, in module
from neutron.openstack.common import fileutils
  File neutron/openstack/common/fileutils.py, line 21, in module
from oslo.utils import excutils
ImportError: No module named utils

--
Ran 0 tests in 0.000s

OK
Running flake8 ...

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Doug Wiegley (dougwig)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391858

Title:
  run_tests.sh broken on mac OS X

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  (Please assign this to me, as I have
  https://review.openstack.org/#/c/106237/ out for review.)

  $ ./run_tests.sh
  grep: repetition-operator operand invalid
  usage: dirname path
  Running `tools/with_venv.sh python -m neutron.openstack.common.lockutils 
python setup.py testr --slowest --testr-args='--subunit  '`
  Traceback (most recent call last):
File 
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py,
 line 162, in _run_module_as_main
  __main__, fname, loader, pkg_name)
File 
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py,
 line 72, in _run_code
  exec code in run_globals
File /Users/dougw/work/a10/neutron/neutron/openstack/common/lockutils.py, 
line 31, in module
  from neutron.openstack.common import fileutils
File neutron/openstack/common/fileutils.py, line 21, in module
  from oslo.utils import excutils
  ImportError: No module named utils

  --
  Ran 0 tests in 0.000s

  OK
  Running flake8 ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391500] Re: neutron tests getting bad IPs when overlapping_ips=False

2014-11-12 Thread Doug Wiegley
Sergey, agreed, converted existing bug.

** Summary changed:

- recent gw64 test breaks overlapping_ips=False
+ neutron tests getting bad IPs when overlapping_ips=False

** Project changed: neutron => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391500

Title:
  neutron tests getting bad IPs when overlapping_ips=False

Status in Tempest:
  Incomplete

Bug description:
  This change:

  commit 18cf59700bb637403f27fa9a5fc0b8e24b91673f
  Author: Sergey Shnaidman sshna...@cisco.com
  Date:   Tue Sep 2 22:05:00 2014 +0400

  Create subnet without gateway and explicit IP ver
  
  Now it's impossible to create subnet without gateway in network
  tests. This patch allows you to set gateway explicitly to None.
  Backward compatibility is supported: by default it creates
  subnet with default gateway as before. Also it adds possibility
  to create subnet with specific IP version when you need to create
  two subnets in one tenant of different IP version (dual-stack).
  Fixed attributes test for new requirements and added 2 anothers.
  
  Change-Id: I7aca5e07be436f20cba90339785b46182d97fead

  Adds this test:

  test_create_list_subnet_with_no_gw64_one_network

  Which breaks if overlapping IPs is disabled:

  ==
   Failed 1 tests - output below:
   ==
   
   
tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_list_subnet_with_no_gw64_one_network[gate,smoke]
   
---
   
   Captured traceback:
   ~~~
   Traceback (most recent call last):
 File tempest/api/network/test_networks.py, line 515, in 
test_create_list_subnet_with_no_gw64_one_network
   gateway=ipv6_gateway)
 File tempest/api/network/base.py, line 183, in create_subnet
   **kwargs)
 File tempest/services/network/network_client_base.py, line 151, in 
_create
   resp, body = self.post(uri, post_data)
 File tempest/services/network/network_client_base.py, line 74, in 
post
   return self.rest_client.post(uri, body, headers)
 File tempest/common/rest_client.py, line 234, in post
   return self.request('POST', url, extra_headers, headers, body)
 File tempest/common/rest_client.py, line 454, in request
   resp, resp_body)
 File tempest/common/rest_client.py, line 503, in _error_checker
   raise exceptions.BadRequest(resp_body)
   BadRequest: Bad request
   Details: {u'message': u'Invalid input for operation: Gateway is not 
valid on subnet.', u'type': u'InvalidInput', u'detail': u''}
   Traceback (most recent call last):
   _StringException: Empty attachments:
 stderr
 stdout
   
   pythonlogging:'': {{{
   2014-11-11 11:51:49,765 21322 DEBUG[tempest.common.rest_client] 
Request 
(NetworksIpV6TestJSON:test_create_list_subnet_with_no_gw64_one_network): 201 
POST http://127.0.0.1:9696/v2.0/networks 0.046s
   Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': 'omitted'}
   Body: {network: {name: network--321892543}}
   Response - Headers: {'status': '201', 'content-length': '240', 
'connection': 'close', 'date': 'Tue, 11 Nov 2014 11:51:49 GMT', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-92e358d2-fbe5-4980-a7c6-4418114d779c'}
   Body: {network: {status: ACTIVE, subnets: [], name: 
network--321892543, router:external: false, tenant_id: 
4cdbefa12fb341cb94a96c9f0940902a, admin_state_up: true, shared: false, 
id: c7c5a74b-68ca-4dab-b23d-d6c190f411db}}
   2014-11-11 11:51:49,872 21322 DEBUG[tempest.common.rest_client] 
Request 
(NetworksIpV6TestJSON:test_create_list_subnet_with_no_gw64_one_network): 400 
POST http://127.0.0.1:9696/v2.0/subnets 0.105s
   Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': 'omitted'}
   Body: {subnet: {ip_version: 6, network_id: 
c7c5a74b-68ca-4dab-b23d-d6c190f411db, cidr: 2003::/64, gateway_ip: 
2003::1}}
   Response - Headers: {'status': '400', 'content-length': '217', 
'connection': 'close', 'date': 'Tue, 11 Nov 2014 11:51:49 GMT', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-6c647816-2c07-43f3-8e74-ac989da02996'}
   Body: {NeutronError: {message: Invalid input for operation: 
Requested subnet with cidr: 2003::/64 for network: 
c7c5a74b-68ca-4dab-b23d-d6c190f411db overlaps with another subnet., type: 
InvalidInput, detail: }}
   2014-11-11 11:51:49,929 21322 DEBUG[tempest.common.rest_client] 
Request 

[Yahoo-eng-team] [Bug 1391500] [NEW] recent gw64 test breaks overlapping_ips=False

2014-11-11 Thread Doug Wiegley
Public bug reported:

This change:

commit 18cf59700bb637403f27fa9a5fc0b8e24b91673f
Author: Sergey Shnaidman sshna...@cisco.com
Date:   Tue Sep 2 22:05:00 2014 +0400

Create subnet without gateway and explicit IP ver

Now it's impossible to create subnet without gateway in network
tests. This patch allows you to set gateway explicitly to None.
Backward compatibility is supported: by default it creates
subnet with default gateway as before. Also it adds possibility
to create subnet with specific IP version when you need to create
two subnets in one tenant of different IP version (dual-stack).
Fixed attributes test for new requirements and added 2 anothers.

Change-Id: I7aca5e07be436f20cba90339785b46182d97fead

Adds this test:

test_create_list_subnet_with_no_gw64_one_network

Which breaks if overlapping IPs is disabled:

==
 Failed 1 tests - output below:
 ==
 
 
tempest.api.network.test_networks.NetworksIpV6TestJSON.test_create_list_subnet_with_no_gw64_one_network[gate,smoke]
 
---
 
 Captured traceback:
 ~~~
 Traceback (most recent call last):
   File tempest/api/network/test_networks.py, line 515, in 
test_create_list_subnet_with_no_gw64_one_network
 gateway=ipv6_gateway)
   File tempest/api/network/base.py, line 183, in create_subnet
 **kwargs)
   File tempest/services/network/network_client_base.py, line 151, in 
_create
 resp, body = self.post(uri, post_data)
   File tempest/services/network/network_client_base.py, line 74, in post
 return self.rest_client.post(uri, body, headers)
   File tempest/common/rest_client.py, line 234, in post
 return self.request('POST', url, extra_headers, headers, body)
   File tempest/common/rest_client.py, line 454, in request
 resp, resp_body)
   File tempest/common/rest_client.py, line 503, in _error_checker
 raise exceptions.BadRequest(resp_body)
 BadRequest: Bad request
 Details: {u'message': u'Invalid input for operation: Gateway is not valid 
on subnet.', u'type': u'InvalidInput', u'detail': u''}
 Traceback (most recent call last):
 _StringException: Empty attachments:
   stderr
   stdout
 
 pythonlogging:'': {{{
 2014-11-11 11:51:49,765 21322 DEBUG[tempest.common.rest_client] 
Request 
(NetworksIpV6TestJSON:test_create_list_subnet_with_no_gw64_one_network): 201 
POST http://127.0.0.1:9696/v2.0/networks 0.046s
 Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': 'omitted'}
 Body: {network: {name: network--321892543}}
 Response - Headers: {'status': '201', 'content-length': '240', 
'connection': 'close', 'date': 'Tue, 11 Nov 2014 11:51:49 GMT', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-92e358d2-fbe5-4980-a7c6-4418114d779c'}
 Body: {network: {status: ACTIVE, subnets: [], name: 
network--321892543, router:external: false, tenant_id: 
4cdbefa12fb341cb94a96c9f0940902a, admin_state_up: true, shared: false, 
id: c7c5a74b-68ca-4dab-b23d-d6c190f411db}}
 2014-11-11 11:51:49,872 21322 DEBUG[tempest.common.rest_client] 
Request 
(NetworksIpV6TestJSON:test_create_list_subnet_with_no_gw64_one_network): 400 
POST http://127.0.0.1:9696/v2.0/subnets 0.105s
 Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': 'omitted'}
 Body: {subnet: {ip_version: 6, network_id: 
c7c5a74b-68ca-4dab-b23d-d6c190f411db, cidr: 2003::/64, gateway_ip: 
2003::1}}
 Response - Headers: {'status': '400', 'content-length': '217', 
'connection': 'close', 'date': 'Tue, 11 Nov 2014 11:51:49 GMT', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-6c647816-2c07-43f3-8e74-ac989da02996'}
 Body: {NeutronError: {message: Invalid input for operation: 
Requested subnet with cidr: 2003::/64 for network: 
c7c5a74b-68ca-4dab-b23d-d6c190f411db overlaps with another subnet., type: 
InvalidInput, detail: }}
 2014-11-11 11:51:49,929 21322 DEBUG[tempest.common.rest_client] 
Request 
(NetworksIpV6TestJSON:test_create_list_subnet_with_no_gw64_one_network): 400 
POST http://127.0.0.1:9696/v2.0/subnets 0.056s
 Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': 'omitted'}
 Body: {subnet: {ip_version: 6, network_id: 
c7c5a74b-68ca-4dab-b23d-d6c190f411db, cidr: 2003:0:0:1::/64, 
gateway_ip: 2003::1}}
 Response - Headers: {'status': '400', 'content-length': '131', 
'connection': 'close', 'date': 'Tue, 11 Nov 2014 11:51:49 GMT', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 

[Yahoo-eng-team] [Bug 1353536] Re: lb-healthmonitor-create doesn't recognize the timeout parameter

2014-08-13 Thread Doug Wiegley
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353536

Title:
  lb-healthmonitor-create doesn't recognize the timeout parameter

Status in OpenStack Neutron (virtual network service):
  New
Status in Python client library for Neutron:
  Confirmed

Bug description:
  Using the CLI command 'neutron lb-healthmonitor-create --delay 3
  --type HTTP --max-retries 3 --timeout 3' doesn't work:

  $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 
--timeout 3
  usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
 [-c COLUMN] [--max-width integer]
 [--prefix PREFIX]
 [--request-format {json,xml}]
 [--tenant-id TENANT_ID]
 [--admin-state-down]
 [--expected-codes EXPECTED_CODES]
 [--http-method HTTP_METHOD]
 [--url-path URL_PATH] --delay DELAY
 --max-retries MAX_RETRIES --timeout
 TIMEOUT --type {PING,TCP,HTTP,HTTPS}
  neutron lb-healthmonitor-create: error: argument --timeout is required

  Multiple variations of this command were tried - no success.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp