[Yahoo-eng-team] [Bug 1647570] [NEW] l2 population fdb updates being sent to all agents

2016-12-05 Thread Arun Kumar
Public bug reported:

The l2population mechanism driver sends an fdb update to all registered
agents when a new VM is spawned on an agent for a network (i.e., when the
first port on that network is activated on that agent).

It should send fdb updates only to agents that have l2population enabled,
as the fanout affects performance in large-scale deployments.

** Affects: neutron
 Importance: Undecided
 Assignee: Arun Kumar (arooncoomar)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Arun Kumar (arooncoomar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647570

Title:
  l2 population fdb updates being sent to all agents

Status in neutron:
  In Progress

Bug description:
  The l2population mechanism driver sends an fdb update to all registered
  agents when a new VM is spawned on an agent for a network (i.e., when
  the first port on that network is activated on that agent).

  It should send fdb updates only to agents that have l2population
  enabled, as the fanout affects performance in large-scale deployments.
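
  For illustration, a minimal sketch of the proposed filtering, assuming
  hypothetical wiring (the real driver builds its agent list differently);
  OVS and Linux bridge agents report 'l2_population' in their
  configurations dict when the extension is enabled:

      def notify_fdb_add(notifier, context, fdb_entries, agents):
          # Cast only to agents that run the l2population extension,
          # instead of a fanout to every registered agent.
          for agent in agents:
              configurations = agent.get('configurations') or {}
              if configurations.get('l2_population'):
                  # Directed cast to a single host's topic.
                  notifier.add_fdb_entries(context, fdb_entries,
                                           host=agent['host'])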

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647541] [NEW] tox -e docs error

2016-12-05 Thread YAMAMOTO Takashi
Public bug reported:

/Users/yamamoto/git/neutron/doc/source/policies/neutron-teams.rst:68: WARNING: Block quote ends without a blank line; unexpected unindent.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647541

Title:
  tox -e docs error

Status in neutron:
  In Progress

Bug description:
  /Users/yamamoto/git/neutron/doc/source/policies/neutron-teams.rst:68: WARNING: Block quote ends without a blank line; unexpected unindent.
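
  For reference, the RST pattern that triggers this warning is an indented
  block followed by a less-indented line with no blank line in between,
  e.g.:

      A lead-in paragraph
          an indented line that opens a block quote
      a line back at the original indent   <- warning fires here

  Inserting a blank line before the unindented line silences the warning.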

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515796] Re: The neutron-sanity-check script should support Linux bridge

2016-12-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515796

Title:
  The neutron-sanity-check script should support Linux bridge

Status in neutron:
  Expired

Bug description:
  By default, the neutron-sanity-check script checks for Open vSwitch
  components and breaks on deployments using the Linux bridge agent. For
  example:

  $ neutron-sanity-check --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  No handlers could be found for logger "neutron.quota"
  2015-11-12 23:38:31.989 7523 INFO neutron.common.config [-] Logging enabled!
  2015-11-12 23:38:31.989 7523 INFO neutron.common.config [-] 
/openstack/venvs/neutron-master/bin/neutron-sanity-check version 8.0.0.dev157
  2015-11-12 23:38:31.994 7523 WARNING oslo_config.cfg [-] Option "verbose" 
from group "DEFAULT" is deprecated for removal.  Its value may be silently 
ignored in the future.
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl [-] Unable 
to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--may-exist', 'add-br', 'patchtest-6fae63', '--', 'set', 'Bridge', 
'patchtest-6fae63', 'datapath_type=system'].
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl Traceback 
(most recent call last):
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_vsctl.py",
 line 63, in run_vsctl
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl 
log_fail_as_error=False).rstrip()
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl   File 
"/openstack/venvs/neutron-master/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 157, in execute
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl raise 
RuntimeError(m)
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl 
RuntimeError:
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl Command: 
['sudo', '/openstack/venvs/neutron-master/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--oneline', 
'--format=json', '--', '--may-exist', 'add-br', 'patchtest-6fae63', '--', 
'set', 'Bridge', 'patchtest-6fae63', 'datapath_type=system']
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl Exit code: 
96
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl
  2015-11-12 23:38:32.394 7523 ERROR neutron.agent.ovsdb.impl_vsctl
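
  For illustration, a minimal sketch of gating the probes on the
  configured ML2 mechanism drivers (hypothetical check names; the real
  option registration lives in neutron.cmd.sanity_check):

      def select_checks(mechanism_drivers):
          """Pick sanity probes based on [ml2] mechanism_drivers."""
          checks = []
          if 'openvswitch' in mechanism_drivers:
              # OVS-only probes such as the patch-port check above.
              checks += ['ovs_patch', 'ovs_vxlan']
          if 'linuxbridge' in mechanism_drivers:
              # Hypothetical Linux bridge probes.
              checks += ['bridge_firewalling', 'vxlan_kernel_module']
          return checks

      print(select_checks(['linuxbridge']))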

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533034] Re: unclear error message returned when creating an ipv6 subnetpool associated with an ipv4 address scope

2016-12-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533034

Title:
  unclear error message returned when creating an ipv6 subnetpool
  associated with an ipv4 address scope

Status in neutron:
  Expired

Bug description:
  [Summary]
  An unclear error message is returned when creating an ipv6 subnetpool
  associated with an ipv4 address scope.
  [Topo]
  devstack all-in-one node

  [Description and expected result]
  A clear error message should be returned.

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) create an ipv4 address scope:
  root@45-59:/opt/stack/devstack# neutron address-scope-create --tenant-id ebbcdabd911340efa9a3430488c43304 scope1 --ip_version 4
  Created a new address_scope:
  +------------+--------------------------------------+
  | Field      | Value                                |
  +------------+--------------------------------------+
  | id         | 8902c850-5e2a-41fd-898e-204d9cf0429e |
  | ip_version | 4                                    |
  | name       | scope1                               |
  | shared     | False                                |
  | tenant_id  | ebbcdabd911340efa9a3430488c43304     |
  +------------+--------------------------------------+

  2) create an ipv6 subnetpool associated with the ipv4 address scope;
  an unclear error message is returned:
  root@45-59:/opt/stack/devstack# neutron subnetpool-create --pool-prefix 2::1/64 --address-scope scope1 pool1
  Illegal subnetpool association: subnetpool  cannot associate with address scope 8902c850-5e2a-41fd-898e-204d9cf0429e because subnetpool ip_version is not 4.
  root@45-59:/opt/stack/devstack#

  ISSUE: the message "subnetpool " omits the subnetpool identifier, which
  makes it unclear and unreadable for the user
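
  For illustration, a sketch of a clearer exception that names both the
  subnetpool and the conflicting IP versions (hypothetical exception
  class; the real one lives in neutron's exceptions module):

      class IllegalSubnetPoolAssociation(Exception):
          message = ("Illegal subnetpool association: subnetpool "
                     "%(pool_id)s (ip_version %(pool_ip_version)s) cannot "
                     "be associated with address scope %(scope_id)s "
                     "(ip_version %(scope_ip_version)s): the IP versions "
                     "do not match.")

          def __init__(self, **kwargs):
              super(IllegalSubnetPoolAssociation, self).__init__(
                  self.message % kwargs)

      try:
          raise IllegalSubnetPoolAssociation(
              pool_id='pool1', pool_ip_version=6,
              scope_id='8902c850-5e2a-41fd-898e-204d9cf0429e',
              scope_ip_version=4)
      except IllegalSubnetPoolAssociation as exc:
          print(exc)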

  [Configuration]
  reproducible bug, not needed

  [logs]
  reproducible bug, not needed

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609946] Re: nova-manage ignores --nouse-syslog

2016-12-05 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1609946

Title:
  nova-manage ignores --nouse-syslog

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I am debugging some database issues on a new node.

  I'm building Kolla containers, configured to be centos distro, off of
  latest.

  $ nova-manage --version
  14.0.0

  $ rpm -ql | grep nova
  rpm: no arguments given for query

  I wanted to have the progress for nova-manage to be output to the
  console instead of syslog.

  $ nova-manage --debug --verbose --nouse-syslog db sync
  Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value 
may be silently ignored in the future.

  Regardless of whether --nouse-syslog is present on the command line, it
  logs to syslog, as far as I can tell.
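
  For reference, a minimal standalone sketch of the expected behavior with
  oslo.config/oslo.log (not nova-manage's actual wiring): after
  registering the logging options and parsing the flag, CONF.use_syslog
  should be False, which log.setup() is expected to honor.

      from oslo_config import cfg
      from oslo_log import log

      CONF = cfg.CONF
      log.register_options(CONF)        # registers --use-syslog/--nouse-syslog
      CONF(['--nouse-syslog'], project='demo')
      log.setup(CONF, 'demo')
      print(CONF.use_syslog)            # False when the flag is honored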

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1609946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640504] Re: release notes and config guide missing new settings for Newton

2016-12-05 Thread Steve Martinelli
Closing this one from the keystone side, as it's fixed from our point of
view. Thanks for the bug report Matt, and thank you guoshan for fixing
it!

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1640504

Title:
  release notes and config guide missing new settings for Newton

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  OpenStack operators and folks who automate openstack deployments with
  tools like puppet rely on the release notes [1] and config guides [2]
  to highlight new, changed, deleted, and deprecated config options. For
  Keystone Newton both of these guides are missing many features, with
  one prime example being the new PCI-DSS features [3]. The release
  notes actually fail to mention these at all although they are
  documented elsewhere if you knew that you should be looking for them.

  The config reference only mentions 1 deprecation, beyond PCI-DSS there was 
probably more and they are all missing:
  
http://docs.openstack.org/newton/config-reference/tables/conf-changes/keystone.html

  Compare this to the new and updated section for kilo which was complete and 
useful:
  
http://docs.openstack.org/kilo/config-reference/content/keystone-conf-changes-kilo.html

  [1] - http://docs.openstack.org/releasenotes/keystone/newton.html
  [2] - http://docs.openstack.org/newton/config-reference/tables/conf-changes/keystone.html
  [3] - http://docs.openstack.org/developer/keystone/security_compliance.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1640504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641823] Re: Config reference: add PCI options

2016-12-05 Thread Steve Martinelli
*** This bug is a duplicate of bug 1640504 ***
https://bugs.launchpad.net/bugs/1640504

** This bug has been marked a duplicate of bug 1640504
   release notes and config guide missing new settings for Newton

** Changed in: keystone
Milestone: ocata-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641823

Title:
  Config reference: add PCI options

Status in OpenStack Identity (keystone):
  New

Bug description:
  Add configuration options to the config reference [1].

  
  [1] 
https://github.com/openstack/openstack-manuals/tree/master/doc/config-reference/source/identity

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647027] Re: Neutron migration unit test failure with alembic 0.8.9

2016-12-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406447
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d8055d52e5c6f3e0dfd49b857f715edae6520e03
Submitter: Jenkins
Branch: master

commit d8055d52e5c6f3e0dfd49b857f715edae6520e03
Author: Ihar Hrachyshka 
Date:   Fri Dec 2 18:14:57 2016 +

Support alembic 0.8.9 in test_autogen_process_directives

The test case validates that autogenerated alembic commands meet
our expectations.

The new alembic version adds a leading '# ' to each autogenerated
comment to make flake8 happy. This patch adapts the test case to handle
both new and older versions. This is achieved by switching from exact
match to using a regexp.

Change-Id: I9ca411e5b3d20412fffa05f6eb79659f6c56f3fd
Closes-Bug: #1647027


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647027

Title:
  Neutron migration unit test failure with alembic 0.8.9

Status in neutron:
  Fix Released

Bug description:
  alembic 0.8.9 causes unit test failures, as seen in e.g.

  http://logs.openstack.org/36/406436/1/check/gate-cross-neutron-python27-ubuntu-xenial/7746634/testr_results.html.gz

  The failure in particular is:

  
  ft287.14: 
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_directives_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/base.py", line 127, in func
  return f(self, *args, **kwargs)
File 
"/home/jenkins/workspace/gate-cross-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  return func(*args, **keywargs)
File "neutron/tests/unit/db/test_migration.py", line 690, in 
test_autogen_process_directives
  alembic_ag_api.render_python_code(expand.upgrade_ops)
File 
"/home/jenkins/workspace/gate-cross-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-cross-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = '''\
  ### commands auto generated by Alembic - please adjust! ###
  op.create_table('organization',
  sa.Column('id', sa.Integer(), nullable=False),
  sa.Column('name', sa.String(length=50), nullable=False),
  sa.PrimaryKeyConstraint('id')
  )
  op.add_column('user', sa.Column('organization_id', sa.Integer(), 
nullable=True))
  op.create_foreign_key('org_fk', 'user', 'organization', 
['organization_id'], ['id'])
  ### end Alembic commands ###'''
  actual= '''\
  # ### commands auto generated by Alembic - please adjust! ###
  op.create_table('organization',
  sa.Column('id', sa.Integer(), nullable=False),
  sa.Column('name', sa.String(length=50), nullable=False),
  sa.PrimaryKeyConstraint('id')
  )
  op.add_column('user', sa.Column('organization_id', sa.Integer(), 
nullable=True))
  op.create_foreign_key('org_fk', 'user', 'organization', 
['organization_id'], ['id'])
  # ### end Alembic commands ###'''
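
  For illustration, a minimal sketch of the regexp approach from the fix
  (hypothetical variable names), accepting the marker with or without the
  new leading '# ':

      import re

      MARKER = re.compile(
          r"^(# )?### commands auto generated by Alembic - "
          r"please adjust! ###$")

      old = "### commands auto generated by Alembic - please adjust! ###"
      new = "# ### commands auto generated by Alembic - please adjust! ###"
      assert MARKER.match(old) and MARKER.match(new)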

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646357] Re: When unsupported microversion requested, a microversion header returned with unsupported version

2016-12-05 Thread Dinesh Bhor
** Also affects: masakari
   Importance: Undecided
   Status: New

** Changed in: masakari
 Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646357

Title:
  When unsupported microversion requested, a microversion header
  returned with unsupported version

Status in masakari:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When an unsupported microversion is requested, a 406 is returned, but
  the response includes that unsupported microversion in the
  'Openstack-Api-Version' header.

  curl -g -i -X GET http://hp-pc:8774/v2.1/servers/detail -H "OpenStack-API-Version: compute 2.50" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 5e3c15ceb02f4e78a3b3a35b98a6d48a"

  HTTP/1.1 406 Not Acceptable
  Openstack-Api-Version: compute 2.50
  X-Openstack-Nova-Api-Version: 2.50
  Vary: OpenStack-API-Version
  Vary: X-OpenStack-Nova-API-Version
  Content-Type: application/json; charset=UTF-8
  Content-Length: 123
  X-Compute-Request-Id: req-1d511dd3-6882-443f-979b-e0812bd84f57
  Date: Thu, 01 Dec 2016 05:59:58 GMT

  That looks strange. We can remove that header when a 406 is returned.
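
  For illustration, a minimal sketch of dropping the version headers from
  a 406 response (hypothetical helper using webob; not nova's actual code
  path):

      import webob

      def scrub_version_headers(response):
          # A 406 means the requested version was never accepted, so
          # echoing it back in the version headers is misleading.
          if response.status_code == 406:
              for name in ('OpenStack-API-Version',
                           'X-OpenStack-Nova-API-Version'):
                  if name in response.headers:
                      del response.headers[name]
          return response

      resp = webob.Response(status=406)
      resp.headers['OpenStack-API-Version'] = 'compute 2.50'
      scrub_version_headers(resp)
      assert 'OpenStack-API-Version' not in resp.headers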

To manage notifications about this bug go to:
https://bugs.launchpad.net/masakari/+bug/1646357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1626093] Re: LBaaSV2: listener deletion causes LB port to be Detached "forever"

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1626093

Title:
  LBaaSV2: listener deletion causes LB port to be Detached "forever"

Status in octavia:
  New

Bug description:
  Case 1:
  Create a LBaaSv2 LB with a listener. Remove the listener. The port shows
  as Detached. Add a listener again. Nothing happens.

  Case 2:
  Create a LBaaSv2 LB with a listener. Add another listener. Remove one of
  the two. The port shows as Detached.

  This is merely an annoyance.

  neutron port-show shows nothing for device_id and device_owner.
  In Horizon the port shows as Detached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1626093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602974] Re: [stable/liberty] LBaaS v2 haproxy: need a way to find status of listener

2016-12-05 Thread Michael Johnson
Is this a duplicate of https://bugs.launchpad.net/octavia/+bug/1632054 ?

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602974

Title:
  [stable/liberty] LBaaS v2 haproxy: need a way to find status of
  listener

Status in octavia:
  Incomplete

Bug description:
  Currently we don't have an option to check the status of a listener.
  Below is the output for a listener, with no status shown.

  root@runner:~# neutron lbaas-listener-show 8c0e0289-f85d-4539-8970-467a45a5c191
  +---------------------------+------------------------------------------------+
  | Field                     | Value                                          |
  +---------------------------+------------------------------------------------+
  | admin_state_up            | True                                           |
  | connection_limit          | -1                                             |
  | default_pool_id           |                                                |
  | default_tls_container_ref |                                                |
  | description               |                                                |
  | id                        | 8c0e0289-f85d-4539-8970-467a45a5c191           |
  | loadbalancers             | {"id": "bda96c0a-0167-45ab-8772-ba92bc0f2d00"} |
  | name                      | test-lb-http                                   |
  | protocol                  | HTTP                                           |
  | protocol_port             | 80                                             |
  | sni_container_refs        |                                                |
  | tenant_id                 | ce1d087209c64df4b7e8007dc35def22               |
  +---------------------------+------------------------------------------------+
  root@runner:~#

  The problem arises when we try to configure a listener and a pool back
  to back without any delay: the pool create fails saying the listener is
  not ready.

  The workaround is to add a 3-second delay between listener and pool
  creation.
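
  As a stop-gap, a sketch of polling the root loadbalancer before creating
  the pool, assuming python-neutronclient's LBaaS v2 bindings (construct
  the client with your own credentials):

      import time

      def wait_for_lb_active(neutron, lb_id, timeout=60, interval=3):
          """neutron: a neutronclient.v2_0.client.Client instance."""
          deadline = time.time() + timeout
          while time.time() < deadline:
              lb = neutron.show_loadbalancer(lb_id)['loadbalancer']
              status = lb['provisioning_status']
              if status == 'ACTIVE':
                  return lb
              if status == 'ERROR':
                  raise RuntimeError('loadbalancer %s is in ERROR' % lb_id)
              time.sleep(interval)
          raise RuntimeError('timed out waiting for loadbalancer %s' % lb_id)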

  Logs:

  root@runner:~# neutron lbaas-loadbalancer-create --name test-lb vn-subnet; neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb --protocol HTTP --protocol-port 80; neutron lbaas-pool-create --name test-lb-pool-http --lb-algorithm ROUND_ROBIN --listener test-lb-http --protocol HTTP
  Created a new loadbalancer:
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | admin_state_up      | True                                 |
  | description         |                                      |
  | id                  | 3ed2ff4a-4d87-46da-8e5b-265364dd6861 |
  | listeners           |                                      |
  | name                | test-lb                              |
  | operating_status    | OFFLINE                              |
  | provider            | haproxy                              |
  | provisioning_status | PENDING_CREATE                       |
  | tenant_id           | ce1d087209c64df4b7e8007dc35def22     |
  | vip_address         | 20.0.0.62                            |
  | vip_port_id         | 4c33365e-64b9-428f-bc0b-bce6c08c9b20 |
  | vip_subnet_id       | 63cbeccd-6887-4dda-b4d2-b7503bce870a |
  +---------------------+--------------------------------------+
  Created a new listener:
  +---------------------------+------------------------------------------------+
  | Field                     | Value                                          |
  +---------------------------+------------------------------------------------+
  | admin_state_up            | True                                           |
  | connection_limit          | -1                                             |
  | default_pool_id           |                                                |
  | default_tls_container_ref |                                                |
  | description               |                                                |
  | id                        | 90260465-934a-44a4-a289-208e5af74cf5           |
  | loadbalancers             | {"id": "3ed2ff4a-4d87-46da-8e5b-265364dd6861"} |
  | name                      | test-lb-http                                   |
  | protocol                  | HTTP                                           |
  | protocol_port             | 80                                             |
  | sni_container_refs        |                                                |
  | tenant_id                 | ce1d087209c64df4b7e8007dc35def22               |
  +---------------------------+------------------------------------------------+
  Invalid state PENDING_UPDATE of loadbalancer resource 3ed2ff4a-4d87-46da-8e5b-265364dd6861
  root@runner:~#

  
  Neutron:


[Yahoo-eng-team] [Bug 1464241] Re: Lbaasv2 command logs not seen

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464241

Title:
  Lbaasv2 command logs not seen

Status in octavia:
  New

Bug description:
  I am testing incorrect and correct lbaasv2 deletion. Even if a command
  fails, we do not see it in /var/log/neutron/lbaasv2-agent.log.

  BUT
  we see that the lbaas (not lbaasv2) log is being updated with
  information and has errors.

  2015-06-11 03:03:34.352 21274 WARNING neutron.openstack.common.loopingcall 
[-] task > run outlasted interval by 50.10 sec
  2015-06-11 03:04:34.366 21274 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve 
ready devices
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 152, in sync_state
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager ready_instances = 
set(self.plugin_rpc.get_ready_devices())
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py",
 line 36, in get_ready_devices
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return 
cctxt.call(self.context, 'get_ready_devices', host=self.host)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
339, in _send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager result = 
self._waiter.wait(msg_id, timeout)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
243, in wait
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
149, in get
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 'to message ID %s' 
% msg_id)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed 
out waiting for a reply to message ID 73130a6bb5444f259dbf810cfb1003b3
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager

  
  Configure an lbaasv2 setup: loadbalancer, listener, member, pool,
  healthmonitor.

  See the lbaasv2 and lbaas logs:
   /var/log/neutron/lbaasv2-agent.log
   /var/log/neutron/lbaas-agent.log


  lbaasv2
  kilo
  rhel7.1 
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1464241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622946] Re: lbaas with haproxy backend creates the lbaas namespace without the members' subnet

2016-12-05 Thread Michael Johnson
Can you provide your lbaas agent logs?

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

** Changed in: octavia
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622946

Title:
  lbaas with haproxy backend creates the lbaas namespace without the
  members' subnet

Status in octavia:
  Incomplete

Bug description:
  When creating a new loadbalancer with haproxy, and the VIP and member
  subnets are different, the created lbaas namespace contains only the
  VIP subnet, so the members are unreachable.

  E.g.:
  neutron lbaas-loadbalancer-show 8e1c193a-ab63-4a1a-bc39-c663f2f9a0ee
  .
  .
  .
  | vip_subnet_id   | 23655977-d29f-4917-a519-de27951fde89   |

  neutron lbaas-member-list d3ebda43-53f8-4118-b4db-999c021c9680

  | 4fe79d5e-a517-4e4f-a145-3c80b414be08 |  | 192.168.168.8 | 22 |  1 | 0a4a1f3e-43cb-4f9c-9d51-c71f0c231a3e | True |

  Note that the two subnets are different.
  The created haproxy config is OK:
  .
  .
  .
  frontend 6821edd8-54ab-4fba-90e5-94831fcd0ec0
  option tcplog
  bind 10.97.37.1:22
  mode tcp

  backend d3ebda43-53f8-4118-b4db-999c021c9680
  mode tcp
  balance source
  timeout check 20
  server 4fe79d5e-a517-4e4f-a145-3c80b414be08 192.168.168.8:22 weight 1 check inter 10s fall 3

  But the namespace is not:
  ip netns exec qlbaas-8e1c193a-ab63-4a1a-bc39-c663f2f9a0ee ip addr
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: ns-f56b5f8d-ef@if11:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
  link/ether fa:16:3e:82:9d:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 10.97.37.1/25 brd 10.97.37.127 scope global ns-f56b5f8d-ef
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe82:9d9a/64 scope link 
 valid_lft forever preferred_lft forever

  
  The member subnet is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1622946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624097] Re: Neutron LBaaS CLI quota show includes l7policy and doesn't include member

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624097

Title:
  Neutron LBaaS CLI quota show includes l7policy and doesn't include
  member

Status in octavia:
  In Progress
Status in python-openstackclient:
  Fix Released

Bug description:
  When running devstack and executing "neutron quota-show" it lists an
  l7 policy quota, but does not show a member quota.  However, the help
  message for "neutron quota-update" includes a member quota, but not an
  l7 policy quota.  The show command should not have the l7 policy
  quota, but should have the member quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-12-05 Thread Michael Johnson
The neutron project with lbaas tag was for neutron-lbaas, but now that
we have merged the projects, I am removing neutron as it is all under
octavia project now.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  Triaged
Status in octavia:
  Triaged

Bug description:
  Please refer to the mail-list for comments from other developers,
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-
  detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is from
  
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
  .

  Following are a few observations:

  1. Even though the Listener was created with ERROR status, heat will
  still go ahead and mark it Creation Complete, because the heat code
  only checks whether the root Loadbalancer status changes from
  PENDING_UPDATE to ACTIVE, and the Loadbalancer status is changed to
  ACTIVE regardless of the Listener's status.

  2. As the heat engine doesn't know about the Listener's creation
  failure, it will continue to create Pool/Member/Healthmonitor on top of
  a Listener which actually doesn't exist. This causes a few undefined
  behaviors. As a result, those LBaaS resources in ERROR state cannot be
  cleaned up with either the normal neutron or heat API.

  3. The bug is introduced here:
  https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  It only checks the provisioning status of the root loadbalancer.
  However, the listener itself has its own provisioning status which may
  go into ERROR (see the sketch after this list).

  4. The same scenario applies not only to the listener but also to the
  pool, member, healthmonitor, etc., basically every lbaas resource
  except the loadbalancer.
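
  For illustration, a sketch of the stricter check point 3 asks for
  (hypothetical helper operating on the lbaas-loadbalancer-status tree,
  not heat's actual resource code):

      def listener_create_complete(status_tree, listener_id):
          """status_tree: body of `neutron lbaas-loadbalancer-status`."""
          lb = status_tree['loadbalancer']
          if lb['provisioning_status'] != 'ACTIVE':
              return False                      # root LB still settling
          for listener in lb.get('listeners', []):
              if listener['id'] != listener_id:
                  continue
              if listener['provisioning_status'] == 'ERROR':
                  raise RuntimeError(
                      'listener %s failed to provision' % listener_id)
              return listener['provisioning_status'] == 'ACTIVE'
          return False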

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640504] Re: release notes and config guide missing new settings for Newton

2016-12-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/405711
Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=260a31067d58d045477908de62bcc2e6798e1bae
Submitter: Jenkins
Branch: master

commit 260a31067d58d045477908de62bcc2e6798e1bae
Author: jolie 
Date:   Fri Dec 2 09:45:59 2016 +0800

release notes and config guide new settings

OpenStack operators and folks who automate openstack deployments with
tools like puppet rely on the release notes and config guides to
highlight new, changed, deleted, and deprecated config options.

Change-Id: I15abb241af8a41edc3dd3850b08be4ab7a31c9c5
Closes-bug:#1640504


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1640504

Title:
  release notes and config guide missing new settings for Newton

Status in OpenStack Identity (keystone):
  Confirmed
Status in openstack-manuals:
  Fix Released

Bug description:
  OpenStack operators and folks who automate openstack deployments with
  tools like puppet rely on the release notes [1] and config guides [2]
  to highlight new, changed, deleted, and deprecated config options. For
  Keystone Newton both of these guides are missing many features, with
  one prime example being the new PCI-DSS features [3]. The release
  notes actually fail to mention these at all although they are
  documented elsewhere if you knew that you should be looking for them.

  The config reference only mentions 1 deprecation, beyond PCI-DSS there was 
probably more and they are all missing:
  
http://docs.openstack.org/newton/config-reference/tables/conf-changes/keystone.html

  Compare this to the new and updated section for kilo which was complete and 
useful:
  
http://docs.openstack.org/kilo/config-reference/content/keystone-conf-changes-kilo.html

  [1] - http://docs.openstack.org/releasenotes/keystone/newton.html
  [2] - http://docs.openstack.org/newton/config-reference/tables/conf-changes/keystone.html
  [3] - http://docs.openstack.org/developer/keystone/security_compliance.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1640504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585250] Re: Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585250

Title:
  Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

Status in octavia:
  In Progress

Bug description:
  There is no indication on the CLI that creating an LBaaSv2 object
  (other than a "loadbalancer") has failed...

  stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 --loadbalancer MyLB1 --protocol HTTP --protocol-port 80
  Created a new listener:
  +---------------------------+------------------------------------------------+
  | Field                     | Value                                          |
  +---------------------------+------------------------------------------------+
  | admin_state_up            | True                                           |
  | connection_limit          | -1                                             |
  | default_pool_id           |                                                |
  | default_tls_container_ref |                                                |
  | description               |                                                |
  | id                        | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2           |
  | loadbalancers             | {"id": "549982d9-7f52-48ac-a4fe-a905c872d71d"} |
  | name                      | MyListener1                                    |
  | protocol                  | HTTP                                           |
  | protocol_port             | 80                                             |
  | sni_container_refs        |                                                |
  | tenant_id                 | 22000d943c5341cd88d27bd39a4ee9cd               |
  +---------------------------+------------------------------------------------+

  There is no indication of any issue here, and lbaas-listener-show
  produces the same output.  However, in reality, the listener is in an
  error state...

  mysql> select * from lbaas_listeners;
  +----------------------------------+--------------------------------------+-------------+-------------+----------+---------------+------------------+--------------------------------------+-----------------+----------------+---------------------+------------------+--------------------------+
  | tenant_id                        | id                                   | name        | description | protocol | protocol_port | connection_limit | loadbalancer_id                      | default_pool_id | admin_state_up | provisioning_status | operating_status | default_tls_container_id |
  +----------------------------------+--------------------------------------+-------------+-------------+----------+---------------+------------------+--------------------------------------+-----------------+----------------+---------------------+------------------+--------------------------+
  | 22000d943c5341cd88d27bd39a4ee9cd | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2 | MyListener1 |             | HTTP     |            80 |               -1 | 549982d9-7f52-48ac-a4fe-a905c872d71d | NULL            |              1 | ERROR               | OFFLINE          | NULL                     |
  +----------------------------------+--------------------------------------+-------------+-------------+----------+---------------+------------------+--------------------------------------+-----------------+----------------+---------------------+------------------+--------------------------+
  1 row in set (0.00 sec)

  
  How is a CLI user who doesn't have access to the Neutron DB supposed to know 
an error has occurred (other than "it doesn't work", obviously)?
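
  Until the show commands expose it, a sketch of surfacing the hidden
  states via the status tree, assuming python-neutronclient's LBaaS v2
  bindings:

      def listener_states(neutron, lb_id):
          """neutron: a neutronclient.v2_0.client.Client instance."""
          tree = neutron.retrieve_loadbalancer_status(lb_id)['statuses']
          return [(l['id'],
                   l.get('provisioning_status'),
                   l.get('operating_status'))
                  for l in tree['loadbalancer'].get('listeners', [])]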

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618559] Re: LBaaS v2 healthmonitor wrong status detection

2016-12-05 Thread Michael Johnson
Are you still having this issue? I cannot reproduce it on my devstack.

If you can reproduce this, can you provide the commands you used to set
up the load balancer (all of the steps), the output of neutron net-list,
the output of neutron subnet-list, and the output of "sudo ip netns"?


** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618559

Title:
  LBaaS v2 healthmonitor wrong status detection

Status in octavia:
  Incomplete

Bug description:
  Summary:
  After enabling a health monitor, the loadbalancer returns
  HTTP/1.0 503 Service Unavailable for any request.

  I have a loadbalancer with VIP 10.123.21.15, an HTTP listener, a pool,
  and a member with IP 10.123.21.12.

  I check the status of the web server with:
  curl -I -X GET http://10.123.21.15/owncloud/status.php
  ...
  HTTP/1.1 200 OK

  But when I add healthmonitor:
  neutron lbaas-healthmonitor-create \
--delay 5 \
--max-retries 2 \
--timeout 10 \
--type HTTP \
--url-path /owncloud/status.php \
--pool owncloud-app-lb-http-pool

  neutron lbaas-healthmonitor-show 
  +----------------+------------------------------------------------+
  | Field          | Value                                          |
  +----------------+------------------------------------------------+
  | admin_state_up | True                                           |
  | delay          | 5                                              |
  | expected_codes | 200                                            |
  | http_method    | GET                                            |
  | id             | cf3cc795-ab1f-44c7-a521-799281e1ff64           |
  | max_retries    | 2                                              |
  | name           |                                                |
  | pools          | {"id": "edcd43a2-41ad-4dd7-809d-10d3e45a08a7"} |
  | tenant_id      | b5d8bbe7742540c2b9b2e1b324ea854e               |
  | timeout        | 10                                             |
  | type           | HTTP                                           |
  | url_path       | /owncloud/status.php                           |
  +----------------+------------------------------------------------+

  I expect:
  curl -I -X GET http://10.123.21.15/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  But result:
  curl -I -X GET http://10.123.21.15/owncloud/status.php
  ...
  HTTP/1.0 503 Service Unavailable

  Direct request to member:
  curl -I -X GET http://10.123.21.12/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  There are no ERRORs in the neutron logs.

  Some details about the configuration:

  I have 3 controllers, installed by Fuel, with l2 population and DVR
  enabled.
  lbaas_agent.ini:
  interface_driver=openvswitch

  neutron lbaas-loadbalancer-status owncloud-app-lb
  {
  "loadbalancer": {
  "name": "owncloud-app-lb", 
  "provisioning_status": "ACTIVE", 
  "listeners": [
  {
  "name": "owncloud-app-lb-http", 
  "provisioning_status": "ACTIVE", 
  "pools": [
  {
  "name": "owncloud-app-lb-http-pool", 
  "provisioning_status": "ACTIVE", 
  "healthmonitor": {
  "provisioning_status": "ACTIVE", 
  "type": "HTTP", 
  "id": "cf3cc795-ab1f-44c7-a521-799281e1ff64", 
  "name": ""
  }, 
  "members": [
  {
  "name": "", 
  "provisioning_status": "ACTIVE", 
  "address": "10.123.21.12", 
  "protocol_port": 80, 
  "id": "8a588ed1-8818-44b2-80df-90debee59720", 
  "operating_status": "ONLINE"
  }
  ], 
  "id": "edcd43a2-41ad-4dd7-809d-10d3e45a08a7", 
  "operating_status": "ONLINE"
  }
  ], 
  "l7policies": [], 
  "id": "7521308a-15d1-4898-87c8-8f1ed4330b6c", 
  "operating_status": "ONLINE"
  }
  ], 
  "pools": [
  {
  "name": "owncloud-app-lb-http-pool", 
  "provisioning_status": "ACTIVE", 
  "healthmonitor": {
  "provisioning_status": "ACTIVE", 
  "type": "HTTP", 
  "id": "cf3cc795-ab1f-44c7-a521-799281e1ff64", 
  "name": ""
  

[Yahoo-eng-team] [Bug 1627393] Re: Neutron-LBaaS and Octavia out of synch if TLS container secret ACLs not set up correctly

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627393

Title:
  Neutron-LBaaS and Octavia out of synch if TLS container secret ACLs
  not set up correctly

Status in octavia:
  New

Bug description:
  I'm hoping this is something that will go away with the neutron-lbaas
  and Octavia merge.

  Create a self-signed certificate like so:

  openssl genrsa -des3 -out self-signed_encrypted.key 2048
  openssl rsa -in self-signed_encrypted.key -out self-signed.key
  openssl req -new -x509 -days 365 -key self-signed.key -out self-signed.crt

  As the admin user, grant the demo user the ability to create cloud
  resources on the demo project:

  openstack role add --project demo --user demo creator

  Now, become the demo user:

  source ~/devstack/openrc demo demo

  As the demo user, upload the self-signed certificate to barbican:

  openstack secret store --name='test_cert' --payload-content-type='text/plain' 
--payload="$(cat self-signed.crt)"
  openstack secret store --name='test_key' --payload-content-type='text/plain' 
--payload="$(cat self-signed.key)"
  openstack secret container create --name='test_tls_container' 
--type='certificate' --secret="certificate=$(openstack secret list | awk '/ 
test_cert / {print $2}')" --secret="private_key=$(openstack secret list | awk 
'/ test_key / {print $2}')"

  As the demo user, grant access to the above secrets BUT NOT THE
  CONTAINER to the 'admin' user. In my test, the admin user has ID:
  02c0db7c648c4714971219ae81817ba7

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret 
list | awk '/ test_cert / {print $2}')
  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret 
list | awk '/ test_key / {print $2}')

  Now, as the demo user, attempt to deploy a neutron-lbaas listener
  using the secret container above:

  neutron lbaas-loadbalancer-create --name lb1 private-subnet
  neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 
--protocol TERMINATED_HTTPS --name listener1 
--default-tls-container=$(openstack secret container list | awk '/ 
test_tls_container / {print $2}')

  The neutron-lbaas command succeeds, but the Octavia deployment fails
  since it can't access the secret container.

  This is fixed if you remember to grant access to the TLS container to
  the admin user like so:

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack
  secret container list | awk '/ test_tls_container / {print $2}')

  However, neutron-lbaas and octavia should have similar failure
  scenarios if the permissions aren't set up exactly right in any case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1627393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624145] Re: Octavia should ignore project_id on API create commands (except load_balancer)

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624145

Title:
  Octavia should ignore project_id on API create commands (except
  load_balancer)

Status in octavia:
  New

Bug description:
  Right now, the Octavia API allows the specification of the project_id
  on the create commands for the following objects:

  listener
  health_monitor
  member
  pool

  However, all of these objects should be inheriting their project_id
  from the ancestor load_balancer object. Allowing the specification of
  project_id when we create these objects could lead to a situation
  where the descendant object's project_id is different from said
  object's ancestor load_balancer project_id.

  We don't want to break our API's backward compatibility for at least
  two release cycles, so for now we should simply ignore this parameter
  if specified (and get it from the load_balancer object in the database
  directly), and insert TODO notes in the API code to remove the ability
  to specify project_id after a certain openstack release.
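
  For illustration, a minimal sketch of the ignore-and-inherit behavior
  (hypothetical names, plain dicts rather than the actual data model):

      def inherit_project_id(child, parent_lb):
          # Ignore any client-supplied project_id and always inherit the
          # ancestor load_balancer's project_id.
          # TODO: reject mismatches outright once the deprecation window
          # for specifying project_id on child objects has passed.
          child['project_id'] = parent_lb['project_id']
          return child

      listener = inherit_project_id(
          {'protocol': 'HTTP', 'project_id': 'someone-else'},
          {'id': 'lb-1', 'project_id': 'owner-project'})
      assert listener['project_id'] == 'owner-project'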

  We should also update the Octavia driver in neutron_lbaas to stop
  specifying the project_id on descendant object creation.

  This bug is related to https://bugs.launchpad.net/octavia/+bug/1624113

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596162] Re: lbaasv2:Member can be created with the same ip as vip in loadbalancer

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596162

Title:
  lbaasv2:Member can be created with the same ip as vip in loadbalancer

Status in octavia:
  In Progress

Bug description:
  Create a loadbalancer:
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-loadbalancer-show ebe0a748-7797-44fa-be09-1890ca2f5c1f
  +---------------------+------------------------------------------------+
  | Field               | Value                                          |
  +---------------------+------------------------------------------------+
  | admin_state_up      | True                                           |
  | description         |                                                |
  | id                  | ebe0a748-7797-44fa-be09-1890ca2f5c1f           |
  | listeners           | {"id": "3cfe5262-7e25-4433-a342-93eb118049f9"} |
  |                     | {"id": "a7c014d4-8c57-43ee-aeab-539847a37f43"} |
  |                     | {"id": "794efa5b-1e5d-4182-857a-6d8415973007"} |
  |                     | {"id": "6b64350e-335f-4aa5-b2dd-e86adcdbc0b3"} |
  | name                | lb1                                            |
  | operating_status    | ONLINE                                         |
  | provider            | zxveglb                                        |
  | provisioning_status | ACTIVE                                         |
  | tenant_id           | 6403670bcb0f45cba4cb732a9a936da4               |
  | vip_address         | 193.168.1.200                                  |
  | vip_port_id         | f401e0ae-2537-4018-9252-742c16fc22ef           |
  | vip_subnet_id       | 73bee51e-7ea3-44ea-8d98-cf778cd171e0           |
  +---------------------+------------------------------------------------+

  The VIP address is 193.168.1.200.
  Then create a listener and a pool.
  Then create a member whose IP is set to 193.168.1.200:
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-member-create --subnet 73bee51e-7ea3-44ea-8d98-cf778cd171e0 --address 193.168.1.200 --protocol-port 80 pool1
  Created a new member:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | address        | 193.168.1.200                        |
  | admin_state_up | True                                 |
  | id             | e377f7a5-e2d8-493d-ad61-c2ab25ed7c0b |
  | protocol_port  | 80                                   |
  | subnet_id      | 73bee51e-7ea3-44ea-8d98-cf778cd171e0 |
  | tenant_id      | 6403670bcb0f45cba4cb732a9a936da4     |
  | weight         | 1                                    |
  +----------------+--------------------------------------+
  It runs OK, although it should be rejected.
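
  For illustration, a minimal sketch of the missing validation
  (hypothetical names, plain dicts):

      def validate_member_address(member, loadbalancer):
          # Reject a member whose address is the parent loadbalancer's
          # VIP on the same subnet; traffic would loop back to the VIP.
          if (member['address'] == loadbalancer['vip_address'] and
                  member['subnet_id'] == loadbalancer['vip_subnet_id']):
              raise ValueError(
                  'member address %s collides with the VIP of '
                  'loadbalancer %s' % (member['address'],
                                       loadbalancer['id']))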

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1596162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583955] Re: provisioning_status of loadbalancer is always PENDING_UPDATE when following these steps

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583955

Title:
  provisioning_status of loadbalancer is always PENDING_UPDATE  when
  following these steps

Status in octavia:
  New

Bug description:
  The issue is in the kilo branch.

  Following these steps:
  1. update admin_state_up of the loadbalancer to False
  2. restart the lbaas agent
  3. update admin_state_up of the loadbalancer to True

  the provisioning_status of the loadbalancer is then stuck at
  PENDING_UPDATE.

  agent log is:
  2013-11-20 12:33:54.358 12601 ERROR oslo_messaging.rpc.dispatcher 
[req-add12f1f-f693-4f0b-9eae-5204d8a50a3f ] Exception during message handling: 
An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
282, in update_loadbalancer
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher driver 
= self._get_driver(loadbalancer.id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
168, in _get_driver
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher raise 
DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
DeviceNotFoundOnAgent: An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher
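
  For illustration, a sketch of a more forgiving agent-side handler
  (hypothetical manager shape; the real code raises DeviceNotFoundOnAgent
  as shown above):

      def update_loadbalancer(manager, context, old_lb, lb):
          driver_name = manager.instance_mapping.get(lb['id'])
          if driver_name is None:
              # The agent was restarted and lost its in-memory mapping;
              # re-sync instead of raising, which leaves the loadbalancer
              # in PENDING_UPDATE forever.
              manager.sync_state()
              return
          manager.device_drivers[driver_name].loadbalancer.update(old_lb, lb)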

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1583955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584209] Re: Neutron-LBaaS v2: PortID should be returned with Loadbalancer resource (API)

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Status: In Progress => Incomplete

** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584209

Title:
  Neutron-LBaaS v2: PortID should be returned with Loadbalancer resource
  (API)

Status in octavia:
  Incomplete

Bug description:
  When creating a new loadbalancer with lbaas v2 (Octavia provider), one
  often wants to create a floating IP attached to the VIP port of the
  loadbalancer. Currently you have to look up the port ID based on the IP
  address associated with the loadbalancer. It would greatly simplify the
  workflow if the port ID were returned in the loadbalancer API, similar
  to the vip API in lbaas v1.
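
  For illustration, a sketch of the simplified workflow once vip_port_id
  is exposed, assuming python-neutronclient (hypothetical helper):

      def attach_fip_to_vip(neutron, lb, external_net_id):
          """neutron: a neutronclient.v2_0.client.Client instance."""
          # No lookup of the port by IP address is needed: the VIP port
          # id comes straight from the loadbalancer resource.
          body = {'floatingip': {
              'floating_network_id': external_net_id,
              'port_id': lb['vip_port_id'],
          }}
          return neutron.create_floatingip(body)['floatingip']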

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1584209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551282] Re: devstack launches extra instance of lbaas agent

2016-12-05 Thread Michael Johnson
This was finished here: https://review.openstack.org/#/c/358255/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551282

Title:
  devstack launches extra instance of lbaas agent

Status in neutron:
  Fix Released

Bug description:
  when using the lbaas devstack plugin, two lbaas agents will be launched:
  one by devstack's neutron-legacy code, and another by the neutron-lbaas
  devstack plugin.

  enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
  ENABLED_SERVICES+=,q-lbaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552119] Re: NSXv LBaaS stats error

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552119

Title:
  NSXv LBaaS stats error

Status in neutron:
  Fix Released

Bug description:
  - OpenStack Kilo (2015.1.1-1)
  - NSXv 6.2.1

  I see the following errors in neutron.log after enabling LBaaS:

  
  2016-03-02 07:36:19.145 27350 INFO neutron.wsgi 
[req-28324239-c925-4602-91c3-24378466d8ae ] 192.168.0.2 - - [02/Mar/2016 
07:36:19] "GET /v2.0/lb/pools/ba3c7e8a-81bf-4459-ad85-224b9f92594f/stats.json 
HTTP/1.1" 500 378 2.441363
  2016-03-02 07:36:19.176 27349 INFO neutron.wsgi [-] (27349) accepted 
('192.168.0.2', 54704)
  2016-03-02 07:36:21.740 27349 ERROR neutron.api.v2.resource 
[req-94a3960b-b01f-4665-a733-1621d7f7cbfa ] stats failed
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 209, in 
_handle_action
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 336, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource stats_data = 
driver.stats(context, pool_id)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/vmware/edge_driver.py",
 line 199, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
self._nsxv_driver.stats(context, pool_id, pool_mapping)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/vshield/edge_loadbalancer_driver.py",
 line 786, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource pools_stats = 
lb_stats.get('pool', [])
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource AttributeError: 
'tuple' object has no attribute 'get'
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468457] Re: Invalid Tempest tests cause A10 CI to fail

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468457

Title:
  Invalid Tempest tests cause A10 CI to fail

Status in octavia:
  New

Bug description:
  The following tests will not pass in A10's CI due to what appear to be 
incorrect tests.
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_for_another_tenant[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_missing_tenant_id_for_admin[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_missing_tenant_id_for_other_tenant[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_using_empty_tenant_field[smoke]

  --
  I'm creating this bug so I have one to reference when I @skip the tests per 
dougwig.

  The empty tenant ID tests need to be modified to expect an error
  condition, but this is not possible as Neutron's request handling
  fills in missing tenant IDs with the tenant ID of the logged in user.
  This is an error condition and should be handled as such.  Fixing it
  in the request handling is going to require fixes in a lot more places
  in Neutron, I believe.  I'll look for other similar tests that would
  expose such functionality.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1468457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464229] Re: LbaasV2 Health monitor status

2016-12-05 Thread Michael Johnson
Currently you can view the health status by using the load balancer
status API/command.

neutron lbaas-loadbalancer-status lb1

I am setting this to wishlist as I think there is a valid point that the
show commands should include the operating status.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464229

Title:
  LbaasV2 Health monitor status

Status in octavia:
  In Progress

Bug description:
  lbaasv2 healthmonitor:

  We have no way to see if an LBaaSv2 health monitor succeeded or failed.
  Additionally, we have no way to see if a VM in an LBaaSv2 pool is up or
  down (from an LBaaSv2 point of view).

  neutron lbaas-pool-show should show the HealthMonitor status for VMs.

  kilo
  rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1464229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495430] Re: delete lbaasv2 can't delete lbaas namespace automatically.

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in octavia:
  In Progress

Bug description:
  Tried LBaaS v2 in my environment and found lots of orphaned lbaas
  namespaces. Looking at the code, the lbaas instance is undeployed when a
  listener is deleted, and everything is cleaned up except the namespace.
  However, deleting the load balancer itself does remove the namespace
  automatically.
  The behavior is inconsistent: the namespace should also be deleted when
  the listener is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498130] Re: LBaaSv2: Can't delete the Load balancer and also dependent entities if the load balancer provisioning_status is in PENDING_UPDATE

2016-12-05 Thread Michael Johnson
Marking this as invalid: it is by design that actions are not allowed on load 
balancers in PENDING_* states.
PENDING_* means an action against that load balancer (DELETE or UPDATE) is 
already in progress.

As for load balancers getting stuck in a PENDING_* state, many bugs have been 
cleaned up for that situation.  If you find a situation that leads to a load 
balancer stuck in a PENDING_* state, please report that as a new bug.
Operators can clear load balancers stuck in PENDING_* by manually updating the 
database record for the resource.
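
A hedged illustration of that manual cleanup (the table and column names are
assumptions based on the neutron-lbaas schema; verify them against your
deployment before touching the database):

  -- hypothetical example: mark a load balancer stuck in PENDING_UPDATE as
  -- ERROR so it can be deleted; run against the neutron database
  UPDATE lbaas_loadbalancers
     SET provisioning_status = 'ERROR'
   WHERE id = 'ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2';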

** Project changed: neutron => octavia

** Changed in: octavia
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498130

Title:
  LBaaSv2: Can't delete the Load balancer and also dependent entities
  if the load balancer provisioning_status is in PENDING_UPDATE

Status in octavia:
  Invalid

Bug description:
  If the load balancer provisioning_status is in PENDING_UPDATE, you
  cannot delete the loadbalancer or dependent entities like the
  listener or pool.

   neutron -v lbaas-listener-delete 6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 338 vary: 
X-Auth-Token connection: keep-alive date: Mon, 21 Sep 2015 18:35:55 GMT 
content-type: application/json x-openstack-request-id: 
req-952f21b0-81bf-4e0f-a6c8-b3fc13ac4cd2
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://9.197.47.200:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: neutronclient.neutron.v2_0.lb.v2.listener.DeleteListener 
run(Namespace(id=u'6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6', 
request_format='json'))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://9.197.47.200:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:9696/v2.0/lbaas/listeners.json?fields=id&id=6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP: [200] date: Mon, 21 Sep 2015 18:35:56 GMT 
connection: keep-alive content-type: application/json; charset=UTF-8 
content-length: 346 x-openstack-request-id: 
req-fd7ee22b-f776-4ebd-94c6-7548a5aff362
  RESP BODY: {"listeners": [{"protocol_port": 100, "protocol": "TCP", 
"description": "", "sni_container_ids": [], "admin_state_up": true, 
"loadbalancers": [{"id": "ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2"}], 
"default_tls_container_id": null, "connection_limit": 100, "default_pool_id": 
null, "id": "6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6", "name": "listener100"}]}

  DEBUG: keystoneclient.session REQ: curl -g -i -X DELETE 
http://9.197.47.200:9696/v2.0/lbaas/listeners/6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6.json
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Invalid state PENDING_UPDATE of loadbalancer resource 
ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2", "type": "StateInvalid", "detail": ""}}
  ERROR: neutronclient.shell Invalid state PENDING_UPDATE of loadbalancer 
resource ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 766, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 101, 
in run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/site-packages/neutronclient/neutron/v2_0/__init__.py", line 
581, in run
  obj_deleter(_id)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
932, in delete_listener
  return self.delete(self.lbaas_listener_path % (lbaas_listener))
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
289, in delete
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
  self._handle_fault_response(status_code, replybody)
File 

[Yahoo-eng-team] [Bug 1440285] Re: When neutron lbaas agent is not running, 'neutron lb*’ commands must display an error instead of "404 Not Found"

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440285

Title:
  When neutron lbaas agent is not running, 'neutron lb*’ commands must
  display an error instead of "404 Not Found"

Status in octavia:
  Confirmed

Bug description:
  When neutron lbaas agent is not running, all the ‘neutron lb*’
  commands display "404 Not Found". This makes the user think that
  something is wrong with the lbaas agent (when it is not even
  running!).

  Instead, when neutron lbaas agent is not running, an error like
  “Neutron Load Balancer Agent not running” must be displayed so the
  user knows that the lbaas agent must be started first.

  The ‘ps’ command below shows that the neutron lbaas agent is not
  running.

  $ ps aux | grep lb
  $

  $ neutron lb-healthmonitor-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-member-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-pool-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-vip-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-healthmonitor-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-listener-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-loadbalancer-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-pool-list
  404 Not Found
  The resource could not be found.

  $ neutron --version
  2.3.11

  =

  Below are the neutron verbose messages that show "404 Not Found".

  $ neutron -v lb-healthmonitor-list
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://192.168.122.205:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 341 vary: 
X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) 
connection: Keep-Alive date: Sat, 04 Apr 2015 04:37:54 GMT content-type: 
application/json x-openstack-request-id: 
req-95c6d1e1-02a7-4077-8ed2-0cb4f574a397
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://192.168.122.205:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('csv = 
cliff.formatters.commaseparated:CSVLister')
  DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
clifftablib.formatters:YamlFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
clifftablib.formatters:JsonFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('html = 
clifftablib.formatters:HtmlFormatter')
  DEBUG: neutronclient.neutron.v2_0.lb.healthmonitor.ListHealthMonitor 
get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, 
page_size=None, quote_mode='nonnumeric', request_format='json', 
show_details=False, sort_dir=[], sort_key=[]))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://192.168.122.205:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://192.168.122.205:9696/v2.0/lb/health_monitors.json -H "User-Agent: 
python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}23f2a54d0348e6bfc5364565ece4baf2e2148fa8"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: 404 Not Found

  The resource could not be found.

  ERROR: neutronclient.shell 404 Not Found

  The resource could not be found.

  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
760, in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
    File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
100, in run_command
  return cmd.run(known_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
29, in run
  return super(OpenStackCommand, self).run(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 91, in 
run
  column_names, data = self.take_action(parsed_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
35, in take_action
  return self.get_data(parsed_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 691, in get_data
  data = self.retrieve_list(parsed_args)
    File 

[Yahoo-eng-team] [Bug 1426248] Re: lbaas v2 member create should not require subnet_id

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426248

Title:
  lbaas v2 member create should not require subnet_id

Status in octavia:
  Incomplete

Bug description:
  subnet_id on a member is currently required.  It should be optional;
  if not provided, it can be assumed the member is reachable by
  the load balancer (through the loadbalancer's VIP subnet).

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1426248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603458] Re: Cannot Delete loadbalancers due to undeleteable pools

2016-12-05 Thread Michael Johnson
I agree with Brandon here, this is an lbaas-dashboard issue, so marking
the neutron side invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603458

Title:
  Cannot Delete loadbalancers due to undeleteable pools

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Invalid
Status in Neutron LBaaS Dashboard:
  New

Bug description:
  To delete an LBaaSv2 loadbalancer, you must remove all the members
  from the pool, then delete the pool, then delete the listener, then
  you can delete the loadbalancer. Currently in Horizon you can do all
  of those except delete the pool. Since you can't delete the pool, you
  can't delete the listener, and therefore can't delete the
  loadbalancer.

  Either deleting the listener should trigger the pool delete too (since
  they're 1:1) or the Horizon Wizard for Listener should have a delete
  pool capability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607061] Re: [RFE] Bulk LBaaS pool member operations

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607061

Title:
  [RFE] Bulk LBaaS pool member operations

Status in octavia:
  Triaged

Bug description:
  [Use-cases]
  - Configuration Management
  Perform administrative operations on a collection of members.

  [Limitations]
  Members must currently be created/modified/deleted one at a time.  This can 
be accomplished programmatically via neutron-api but is cumbersome through the 
CLI.

  [Enhancement]
  Extend neutron-api (CLI) and GUI to support management of a group of
  members via one operation.  Pitching a few ideas on how to do this; a
  hypothetical sketch follows the list.

  - Extend existing API
  Add an optional filter parameter to neutron-api to find and modify any
  member caught by the filter.

  - Create new API
  Create new lbaas-members-* commands that make it clear we're changing a
  collection, but leave the lbaas-pool-* commands, which organize
  collections, alone.

  - Base inheritance
  Create new lbaas-member-base-* commands to define default settings, then
  extend lbaas-member-* to reference the base.  Updating the base would
  update all members that have not overridden the defaults.
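
  A hedged sketch of what the first two ideas might look like on the CLI;
  none of these commands or flags exist today, and the names are purely
  illustrative:

    # hypothetical filter flag on the existing per-member command
    neutron lbaas-member-update mypool --filter "weight=1" --weight 5

    # hypothetical bulk-creation command for a collection of members
    neutron lbaas-members-create mypool --protocol-port 80 \
        --address 10.0.0.10 --address 10.0.0.11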

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1607061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607052] Re: [RFE] Per-server port for LBaaS Health Monitoring

2016-12-05 Thread Michael Johnson
Is this a duplicate of https://bugs.launchpad.net/octavia/+bug/1541579?

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607052

Title:
  [RFE] Per-server port for LBaaS Health Monitoring

Status in octavia:
  Triaged

Bug description:
  [Use-cases]
  - Hierarchical health monitoring
  The operator wants to monitor member health for the pool separately from 
application health.

  - Micro-service deployment
  An application is deployed as docker containers, which consume an ephemeral 
port.

  [Limitations]
  The LBaaSv2 health monitor is attached to the pool, but uses the
  protocol-port set in the member object.  Certain operators wish to monitor
  the health of the member itself separately from, but in addition to, the
  health of the service/application.  This model limits the granularity at
  which the operator can gauge the health of their cloud.

  [Enhancement]
  Add an optional application port field in the member object.  Default is 
.  Enhance health monitor creation with an optional parameter to use the 
service or application port.  Default is .
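
  A hedged sketch of the enhancement on the CLI; the --monitor-port
  parameter below is purely hypothetical and only illustrates the idea of a
  member-level monitoring port separate from the service port:

    neutron lbaas-member-create mypool --subnet SUBNET --address 10.0.0.5 \
        --protocol-port 8080 --monitor-port 9000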

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1607052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611509] Re: lbaasv2 doesn't support "https" keystone endpoint

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611509

Title:
  lbaasv2 doesn't support "https" keystone endpoint

Status in octavia:
  Confirmed

Bug description:
  I am trying to enable lbaasv2 using the octavia driver in one of our
  mitaka deployments, and we got the error below:
  {code}
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin 
[req-87d34869-7fec-4269-894b-81a4f1771736 6928cf223a0948699fab55612678cfdc 
10d7de26713241a2b623f2028c77e8eb - - -] There was an error in the driver
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin Traceback (most recent call last):
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 489, in _call_driver_operation
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin driver_method(context, db_entity)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 118, in func_wrapper
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin 
args[0].failed_completion(args[1], args[2])
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin self.force_reraise()
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin six.reraise(self.type_, 
self.value, self.tb)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 108, in func_wrapper
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin r = func(*args, **kwargs)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 220, in create
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin 
self.driver.req.post(self._url(lb), args)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 150, in post
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return self.request('POST', url, 
args)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py", 
line 131, in request
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin token = 
self.auth_session.get_token()
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 618, in 
get_token
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return 
(self.get_auth_headers(auth) or {}).get('X-Auth-Token')
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 597, in 
get_auth_headers
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return auth.get_headers(self, 
**kwargs)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/plugin.py", line 84, in 
get_headers
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin token = self.get_token(session)
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 89, in 
get_token
  neutron-server.log:2016-08-09 20:15:25.462 74450 ERROR 
neutron_lbaas.services.loadbalancer.plugin return 
self.get_access(session).auth_token
  

[Yahoo-eng-team] [Bug 1629066] Re: RFE Optionally bind load balancer instance to multiple IPs to increase available (source IP, source port) space to support > 64k connections to a single backend

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629066

Title:
  RFE Optionally bind load balancer instance to multiple IPs to increase
  available (source IP, source port) space to support > 64k connections
  to a single backend

Status in octavia:
  Triaged

Bug description:
  This limitation arose while testing Neutron LBaaS using the HAProxy
  namespace driver, but applies to other proxying-type backends,
  including Octavia.
  or amphora) can only establish as many concurrent TCP connections to a
  single pool member as there are available distinct source IP, source
  TCP port combinations on the load balancing instance (network
  namespace or amphora). The source TCP port range is limited by the
  configured ephemeral port range, but this can be tuned to include all
  the unprivileged TCP ports (1024 - 65535) via sysctl. The available
  source addresses are limited to IP addresses bound to the instance,
  for the load balancing instance must be able to receive the response
  from the pool member.

  In short the total number of concurrent TCP connections to any single
  backend is limited to 64k times the number of available source IP
  addresses. This is because each TCP connection is identified by the
  4-tuple: (src-ip, src-port, dst-ip, dst-port) and (dst-ip, dst-port)
  is used to define a specific pool member. TCP ports are limited by the
  16bit field in the TCP protocol definition. In order to further
  increase the number of possible connections from a load balancing
  instance to a single backend we must increase this tuple space by
  increasing the number of available source IP addresses.
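
  A quick back-of-the-envelope check of that arithmetic (plain Python,
  assuming the whole unprivileged port range is usable as ephemeral ports):

    # each extra source IP multiplies the usable (src-ip, src-port) space
    ephemeral_ports = 65535 - 1024 + 1  # 64512 unprivileged TCP ports
    for source_ips in (1, 2, 4):
        print('%d source IP(s) -> %d possible concurrent connections '
              'to a single backend' % (source_ips,
                                       source_ips * ephemeral_ports))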

  Therefore, I propose we offer an option to attach multiple fixed-ips
  in the same subnet to the Neutron port of the load balancing instance
  facing the pool member. This would increase the tuple space allowing
  more than 64k concurrent connections to a single backend.

  While this limitation could be addressed by increasing the number of
  listening TCP ports on the pool member and adding additional members
  with the same IP address and different TCP ports, not all applications
  are suitable to this modification.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1629066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585680] Re: neutron-lbaas doesn't have tempest plugin

2016-12-05 Thread Michael Johnson
This was fixed in: https://review.openstack.org/#/c/317862/

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585680

Title:
  neutron-lbaas doesn't have tempest plugin

Status in neutron:
  Fix Released

Bug description:
  Puppet OpenStack CI is interested to run neutron-lbaas Tempest tests
  but it's currently not working because neutron-lbaas is missing a
  Tempest plugin and its entry-point, so discovery of tests does not
  work.

  Right now, to run tempest we need to go into the neutron-lbaas directory
  and run tox there.
  That's not the way to go; other projects (Neutron itself included) already
  provide tempest plugins.

  This is an official RFE to have it in neutron-lbaas so we can run the
  tests in a consistent way with other projects.
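
  For reference, such a plugin is usually registered via a setup.cfg entry
  point along these lines (the module and class names here are assumptions,
  not the actual patch):

    [entry_points]
    tempest.test_plugins =
        neutron_lbaas = neutron_lbaas.tests.tempest.plugin:NeutronLbaasTempestPlugin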

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585890] Re: No check whether member address is in the member subnet

2016-12-05 Thread Michael Johnson
This could be a valid use case where the address is accessible via a
route on the specified subnet.

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585890

Title:
  No check whether member address is in the member subnet

Status in octavia:
  Confirmed

Bug description:
  The issue is in the kilo branch.

  The member subnet CIDR is 20.0.0.0/24 and the member address is 30.0.0.11,
  yet the member was created successfully.

  [root@opencos2 v2(keystone_admin)]# neutron subnet-show 
502be3ac-f8d8-43b3-af5b-f0feada72aed
  +---++
  | Field | Value  |
  +---++
  | allocation_pools  | {"start": "20.0.0.2", "end": "20.0.0.254"} |
  | cidr  | 20.0.0.0/24|
  | dns_nameservers   ||
  | enable_dhcp   | True   |
  | gateway_ip| 20.0.0.1   |
  | host_routes   ||
  | id| 502be3ac-f8d8-43b3-af5b-f0feada72aed   |
  | ip_version| 4  |
  | ipv6_address_mode ||
  | ipv6_ra_mode  ||
  | name  ||
  | network_id| 2e424980-14f0-4405-92dc-e4c57c32235a   |
  | subnetpool_id ||
  | tenant_id | be58eaec789d44f296a65f96b944a9f5   |
  +---++
  [root@opencos2 v2(keystone_admin)]# neutron lbaas-member-create pool101 
--subnet 502be3ac-f8d8-43b3-af5b-f0feada72aed --address 30.0.0.11 
--protocol-port 80
  Created a new member:
  ++--+
  | Field  | Value|
  ++--+
  | address| 30.0.0.11|
  | admin_state_up | True |
  | id | 1dcc-2f00-4fd7-9a68-6031a96a172b |
  | protocol_port  | 80   |
  | subnet_id  | 502be3ac-f8d8-43b3-af5b-f0feada72aed |
  | tenant_id  | be58eaec789d44f296a65f96b944a9f5 |
  | weight | 1|
  ++--+
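
  A minimal sketch of the kind of check that could be added (standard
  library only); as noted in the comment above, the address may still be
  reachable via a route on the subnet, so a real implementation might warn
  rather than reject:

    import ipaddress

    def member_in_subnet(address, cidr):
        return ipaddress.ip_address(address) in ipaddress.ip_network(cidr)

    print(member_in_subnet(u'30.0.0.11', u'20.0.0.0/24'))  # False, as above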

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595416] Re: Add new config attribute to Radware driver

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595416

Title:
  Add new config attribute to Radware driver

Status in octavia:
  In Progress

Bug description:
  Need to add a new configuration attribute, add_allowed_address_pairs,
  for the Radware LBaaS v2 driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1595416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539717] Re: [RFE] Add F5 plugin driver to neutron-lbaas

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539717

Title:
  [RFE] Add F5 plugin driver to neutron-lbaas

Status in octavia:
  Incomplete

Bug description:
  This is an RFE for adding a plugin driver to neturon-lbaas to support
  F5 Networks appliances. Our intent is to provide an LBaaSv2 driver
  that fully supports the LBaaS v2 design, and will be similar to other
  vendor implementations that are already part of neutron-lbaas (e.g.
  A10 Networks, Brocade, Kemp Technologies, etc.).  In doing so, F5
  Networks hopes to expand the use of OpenStack for load balancing
  services, and to provide a migration path to LBaaSv2 for customers
  currently using LBaaSv1.

  Note: by mistake we already created a blueprint request,
  https://blueprints.launchpad.net/neutron/+spec/f5-lbaasv2-driver, but
  understand that  this RFE needs to be discussed and accepted first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1539717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537169] Re: LBaaS should populate DNS Name on creating LoadBalancer

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537169

Title:
  LBaaS should populate DNS Name on creating LoadBalancer

Status in octavia:
  In Progress

Bug description:
  With the merge of
  https://blueprints.launchpad.net/neutron/+spec/external-dns-resolution
  (https://review.openstack.org/#/c/212213/)

  neutron supports a name parameter on a port. This can be used to
  create a DNS record for the port, both locally on the network and
  globally in Designate.

  When creating a loadbalancer, LBaaS should populate this field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1537169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523219] Re: [RFE] Add support X-Forwarded-For header in LBaaSv2

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523219

Title:
  [RFE] Add support X-Forwarded-For header in LBaaSv2

Status in octavia:
  In Progress

Bug description:
  X-Forwarded-For headers are used by proxies and load balancers to pass the
  original client's IP on to the server while NATing the request.
  This is very handy for some applications but has some overhead and
  therefore has to be configurable.
  The LBaaSv2 API doesn't offer support for enabling the XFF header.
  Without an XFF header, the members cannot determine which IP address
  originated the NATed request, e.g. for auditing purposes.
  The change required is the addition of a boolean property on the listener
  indicating that an XFF header should be appended to requests.
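
  For reference, in haproxy terms the boolean would toggle something like
  the following (hand-written fragment; "option forwardfor" is a real
  haproxy directive, the surrounding names are illustrative):

    listen lbaas_listener
        bind 10.0.0.10:80
        mode http
        option forwardfor    # append X-Forwarded-For: <client-ip>
        server member1 10.0.0.5:80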

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1523219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541579] Re: [RFE] Port based HealthMonitor in neutron_lbaas

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541579

Title:
  [RFE] Port based HealthMonitor in neutron_lbaas

Status in octavia:
  Triaged

Bug description:
  Summary:
  Neutron LBaaS lacks port-based monitoring.

  Description:
  The current HealthMonitor is attached to the pool and monitors the member
  port by default. But some use cases run their service monitoring on a
  different port than the service port; this type of ECV check is not
  possible with the current HealthMonitor object.

  Expected:
  We should have a new field called 'destination': 'ip:port', since most
  external LBs support this feature and organizations use it, and since a
  pool can have multiple HealthMonitors attached to it.

  'destination': {'allow_post': True, 'allow_put': True,
  'validate': {'type:string': None},
  'default': '*:*',
  'is_visible': True},

  Version: Kilo/Liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1541579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565511] Re: Loadbalancers should be rescheduled when a LBaaS agent goes offline

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565511

Title:
  Loadbalancers should be rescheduled when a LBaaS agent goes offline

Status in octavia:
  In Progress

Bug description:
  Currently, when an LBaaS agent goes offline, the loadbalancers remain
  under that agent.
  Following the same logic as 'allow_automatic_l3agent_failover', the
  neutron server should reschedule loadbalancers away from dead lbaas
  agents.

  This should be gated by an option as well, such as:
  allow_automatic_lbaas_agent_failover
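
  A sketch of the proposed neutron.conf setting, mirroring the existing L3
  option (the option below is this bug's proposal and does not exist yet):

    [DEFAULT]
    # proposed, not yet implemented
    allow_automatic_lbaas_agent_failover = True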

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1565511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585266] Re: [RFE] Can't specify an error type on LBaaS objects that fail to provision

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585266

Title:
  [RFE] Can't specify an error type on LBaaS objects that fail to
  provision

Status in octavia:
  Triaged

Bug description:
  LBaaSv2 objects have a provisioning_status field that can indicate
  when provisioning has failed, but there is no way to describe to the
  user what the error was.  The ability to specify an error message as a
  parameter to the BaseManagerMixin.failed_completion() function that
  can then be returned in "show" calls to the object would save users
  and administrators a lot of time when debugging issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457556] Re: [RFE] [LBaaS] ssh connection timeout

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457556

Title:
  [RFE] [LBaaS] ssh connection timeout

Status in octavia:
  In Progress
Status in python-neutronclient:
  Incomplete

Bug description:
  In the v2 API, we need a way to tune the LB connection timeouts so
  that we can have a pool of SSH servers with long-running TCP
  connections. SSH sessions can last days to weeks, and users get grumpy
  if the session times out while they are in the middle of doing something.
  Currently the timeouts are tuned to drop connections that run too long,
  regardless of whether there is traffic on the connection.
  This is good for HTTP, but bad for SSH.
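
  In haproxy terms this roughly means making the idle timeouts tunable per
  listener instead of fixed (hand-written fragment; the directives are real
  haproxy keywords, the values are only illustrative):

    defaults
        timeout connect 5s
        timeout client  2h    # long-lived idle SSH sessions need large
        timeout server  2h    # values here, unlike typical HTTP tuning
        timeout tunnel  24h   # applies once a connection is established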

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1457556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581876] Re: neutron lbaas v2: update of default "device_driver" inside lbaas_agent.ini

2016-12-05 Thread Michael Johnson
This is correct in the code: 
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/agent/agent_manager.py#L38-L45
Marking invalid for neutron

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581876

Title:
  neutron lbaas v2: update of default "device_driver" inside
  lbaas_agent.ini

Status in neutron:
  Invalid
Status in puppet-neutron:
  New

Bug description:
  Dear,

  As of Mitaka only v2 of lbaas is supported, so please update the default
  "device_driver" inside the config file /etc/neutron/lbaas_agent.ini from:

  device_driver =
  
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

  to

  device_driver =
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

  More inside this IRC log:

  http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2016-02-02.log.html

  Kind regards,
  Michal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643571] Re: lbaas data model is too recursive

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643571

Title:
  lbaas data model is too recursive

Status in octavia:
  New

Bug description:
  This is an example of pool to_api_dict():
  http://paste.openstack.org/show/589872/
  As you can see, it has too many copies of the same objects.
  Note: there are only 3 objects in the dict.

  While from_sqlalchemy_model has some recursion protection,
  it's better to make the result shallower.

  In particular, to_dict/to_api_dict should not recurse much.
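
  A minimal, self-contained sketch of the idea (not the actual neutron-lbaas
  data model): nested objects beyond a depth budget collapse to an
  {'id': ...} reference instead of being expanded again.

    class Ref(object):
        def __init__(self, **attrs):
            self.__dict__.update(attrs)

        def to_api_dict(self, depth=1):
            out = {}
            for name, value in self.__dict__.items():
                if isinstance(value, Ref):
                    out[name] = (value.to_api_dict(depth - 1)
                                 if depth > 0 else {'id': value.id})
                elif isinstance(value, list):
                    out[name] = [(v.to_api_dict(depth - 1)
                                  if depth > 0 else {'id': v.id})
                                 for v in value]
                else:
                    out[name] = value
            return out

    listener = Ref(id='l1', name='listener1')
    pool = Ref(id='p1', listeners=[listener],
               loadbalancer=Ref(id='lb1', listeners=[listener]))
    print(pool.to_api_dict(depth=1))  # nested listeners stay shallow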

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1643571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609352] Re: LBaaS: API doesn't return correctly

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609352

Title:
  LBaaS: API doesn't return correctly

Status in octavia:
  In Progress

Bug description:
  = Problem Description =
  I want to get only the ids of the LBaaS pools.

  I use this command:

  curl -g -i -X GET http://10.0.44.233:9696/v2.0/lbaas/pools.json?fields=id \
  -H "User-Agent: python-neutronclient" \
  -H "Accept: application/json" \
  -H "X-Auth-Token: a77ea1dd7fb748448d36142ef844802d"

  But the Neutron server didn't apply the filter. The response is:

  HTTP/1.1 200 OK
  Content-Type: application/json; charset=UTF-8
  Content-Length: 344
  X-Openstack-Request-Id: req-8ed9d992-6a4c-44ac-9c59-de65794e919f
  Date: Wed, 03 Aug 2016 10:56:18 GMT

  {"pools": [{"lb_algorithm": "ROUND_ROBIN", "protocol": "HTTP",
  "description": "", "admin_state_up": true, "session_persistence":
  null, "healthmonitor_id": null, "listeners": [{"id":
  "f8392236-e065-4aa2-a4ef-d6c6821cc038"}], "members": [{"id":
  "ea1292f4-fb6a-4594-9d13-9ff0dec865d8"}], "id": "b360fc75-b23d-
  46a3-b936-6c9480d35219", "name": ""}]}[root@server-233
  ~(keystone_admin)]

  The Neutron server returns all the info about the pools.

  In the request, I specified "fields=id" in the URL, but the Neutron
  server did not filter the response down to that field.
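
  For comparison, a fields-filtered response would be expected to look like
  this (hand-written from the data above, not actual server output):

    {"pools": [{"id": "b360fc75-b23d-46a3-b936-6c9480d35219"}]}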

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1609352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586225] Re: No check that healthmonitor delay should be >= timeout

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586225

Title:
  No check that healthmonitor delay should be >= timeout

Status in octavia:
  New

Bug description:
  The issue is in the kilo branch.

  The healthmonitor delay is 10 while the timeout is 12; a timeout larger
  than the polling delay does not make sense (see the sketch after the
  output below).


  [root@opencos2 ~(keystone_admin)]# neutron lbaas-healthmonitor-show 
6d29f448-1965-40b9-86e2-cf18d86ae6f8
  +++
  | Field  | Value  |
  +++
  | admin_state_up | True   |
  | delay  | 10 |
  | expected_codes | 305,205|
  | http_method| GET|
  | id | 6d29f448-1965-40b9-86e2-cf18d86ae6f8   |
  | max_retries| 10 |
  | pools  | {"id": "591be59b-eb81-4f1d-8ab7-b023df6cccfa"} |
  | tenant_id  | be58eaec789d44f296a65f96b944a9f5   |
  | timeout| 12 |
  | type   | PING   |
  | url_path   | /api/  |
  +++
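
  A minimal sketch of the missing validation (plain Python; field names
  follow the output above, both values in seconds):

    def validate_health_monitor(delay, timeout):
        if timeout > delay:
            raise ValueError('timeout (%s) must be <= delay (%s)'
                             % (timeout, delay))

    validate_health_monitor(10, 12)  # raises for the record shown above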

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1586225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622793] Re: LBaaS back-end pool connection limit is 10% of listener connection limit for reference and namespace drivers

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622793

Title:
  LBaaS back-end pool connection limit is 10% of listener connection
  limit for reference and namespace drivers

Status in octavia:
  Confirmed

Bug description:
  Both the reference Octavia driver and the namespace driver use haproxy
  to deliver load balancing services with LBaaSv2. When closely looking
  at the operation of the haproxy daemons with a utility like hatop (
  https://github.com/feurix/hatop ), one can see that the connection
  limit for back-ends is exactly 10% of whatever the connection limit is
  set for the pool's listener front-ends. This behavior could cause an
  unexpectedly low effective connection limit if the user has a small
  number of back-end servers in the pool.

  From the haproxy documentation, this is because the default value of a
  backend's "fullconn" parameter is set to 10% of the sum of all front-
  ends referencing it. Specifically:

  "Since it's hard to get this value right, haproxy automatically sets it to
  10% of the sum of the maxconns of all frontends that may branch to this
  backend (based on "use_backend" and "default_backend" rules). That way it's
  safe to leave it unset."

  (Source: https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#fullconn )

  The point of this calculation (according to the haproxy documentation)
  is to protect fragile back-end servers from spikes in load that might
  reach the front-ends' connection limits. However, for long-lasting but
  low-load connections to a small number of back-end servers through the
  load balancer, this means that the haproxy-based back-ends have an
  effective connection limit that is much smaller than what the user
  expects it to be.
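
  As a hedged illustration (hand-written haproxy fragment, not output from
  either driver), setting "fullconn" explicitly on the backend overrides
  the 10% heuristic:

    frontend lbaas_listener
        bind *:80
        maxconn 5000
        default_backend lbaas_pool

    backend lbaas_pool
        fullconn 5000    # without this, haproxy assumes 500 (10% of 5000)
        server member1 10.0.0.5:80 check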

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1622793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640265] Re: LBaaSv2 uses fixed MTU of 1500, leading to packet dropping

2016-12-05 Thread Michael Johnson
New patch is here: https://review.openstack.org/#/c/399945/

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640265

Title:
  LBaaSv2 uses fixed MTU of 1500, leading to packet dropping

Status in octavia:
  In Progress

Bug description:
  The LBaaSv2's HAProxy plugin sets up a VIF without specifying its MTU.
  Therefore, the VIF always gets the default MTU of 1500. When attaching
  the load balancer to a VXLAN-backed project (tenant) network, which by
  default has a MTU of 1450, this leads to packet dropping.

  Pre-conditions: A standard OpenStack + Neutron deployment. A project
  (tenant) network backed by VXLAN, GRE, or other protocol that reduces
  MTU to less than 1500.

  Step-by-step reproduction steps:
  * Create a SSL load balancer, OR a TCP load balancer terminated in a SSL 
server.
  * Try connecting to it: curl -kv https://virtual_ip

  Expected behaviour: connection attempts should succeed

  Actual behaviour: 25% to 50% connection attempts will fail to complete

  Log output: neutron-lbaasv2-agent.log displays:
  WARNING neutron.agent.linux.interface [-] No MTU configured for port 

  OpenStack version: stable/newton
  Linux distro: Ubuntu 16.04
  Deployment mechanism: OpenStack-Ansible
  Environment: multi-node

  Perceived severity: This issue causes LBaaSv2 with HAProxy to be
  unusable for SSL and other protocols which need to transfer large
  (>1450 bytes) packets, unless external network equipment is set up to
  clamp the MSS or unless the deployer is able to set path_mtu to values
  greater than 1550.
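
  As a hedged illustration of a manual workaround (the qlbaas- namespace
  prefix is used by the haproxy namespace driver; the device name and ids
  vary per deployment, placeholders below are illustrative):

    # inspect the VIF MTU inside the load balancer namespace, then align
    # it with the 1450-byte VXLAN network MTU
    ip netns exec qlbaas-<loadbalancer-id> ip link show
    ip netns exec qlbaas-<loadbalancer-id> ip link set <vif-device> mtu 1450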

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1640265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647491] [NEW] Missing documentation for glance-manage db_purge command

2016-12-05 Thread Alexander Bashmakov
Public bug reported:

glance-manage db purge is an advanced operator command for purging
deleted records from the database [1]. Documentation for the purpose and
usage of this command should be added here [2].

[1] https://github.com/openstack/glance/blob/master/glance/cmd/manage.py#L146
[2] http://docs.openstack.org/developer/glance/man/glancemanage.html
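
The documentation could include a usage example along these lines (option
names taken from manage.py; the values shown are, if I read the code
correctly, the defaults):

  glance-manage db purge --age_in_days 30 --max_rows 100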

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1647491

Title:
  Missing documentation for glance-manage db_purge command

Status in Glance:
  New

Bug description:
  glance-manage db purge is an advanced operator command for purging
  deleted records from the database [1]. Documentation for the purpose
  and usage of this command should be added here [2].

  [1] https://github.com/openstack/glance/blob/master/glance/cmd/manage.py#L146
  [2] http://docs.openstack.org/developer/glance/man/glancemanage.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1647491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565801] Re: Add process monitor for haproxy

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565801

Title:
  Add process monitor for haproxy

Status in octavia:
  In Progress

Bug description:
  Bug 1565511 aims to solve cases where the lbaas agent goes offline.
  To have a complete high-availability solution for the lbaas agent with
  haproxy running in a namespace, we would also want to handle the case
  where the haproxy process itself has stopped.

  This neutron spec [1] offers the following approach:
  "We propose monitoring those processes, and taking a configurable action, 
making neutron more resilient to external failures."
   
  [1] 
http://specs.openstack.org/openstack/neutron-specs/specs/juno/agent-child-processes-status.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1565801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569827] Re: LBaaS agent floods log when stats socket is not found

2016-12-05 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569827

Title:
  LBaaS agent floods log when stats socket is not found

Status in octavia:
  In Progress

Bug description:
  The LBaaS agent creates a lot of log messages when a new lb-pool is
  created.

  As soon as I create a lb-pool:
  neutron lb-pool-create --lb-method ROUND_ROBIN --name log-test-lb --protocol 
TCP --subnet-id a6ce9a77-53ca-4704-aaf4-fc255cc5fa74
  The log file /var/log/neutron/neutron-lbaas-agent.log starts to fill up with 
messages like these:
  2016-04-13 12:56:08.922 15373 WARNING 
neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket 
not found for pool 37cbf817-f1ac-4d47-9a04-93c911d0afdd

  The message is correct, as the file /var/lib/neutron/lbaas/37cbf817
  -f1ac-4d47-9a04-93c911d0afdd/sock is not present. But the message
  repeats every 10s.

  The messages stop as soon as the lb-pool gets a VIP. At this step the
  file /var/lib/neutron/lbaas/37cbf817-f1ac-4d47-9a04-93c911d0afdd/sock
  is present. I would expect the LBaaS agent to verify that the socket
  file can actually be expected to exist before issuing the message.

  Version:
  OpenStack Juno on SLES 11 SP3.
  The package version of openstack-neutron-lbaas-agent is 2014.2.2.dev26-0.11.2
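  A minimal sketch of the kind of guard being asked for here;
  pool_has_vip() is a hypothetical lookup, the real driver API may
  differ:

      import logging
      import os

      LOG = logging.getLogger(__name__)

      def maybe_warn_missing_socket(pool_id, socket_path, pool_has_vip):
          # The stats socket only appears once haproxy is running, and
          # haproxy is only started after the pool gets a VIP, so until
          # then a missing socket is expected and not worth a warning.
          if pool_has_vip(pool_id) and not os.path.exists(socket_path):
              LOG.warning('Stats socket not found for pool %s', pool_id)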

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1569827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647486] [NEW] sample-data makes incorrect credentials call

2016-12-05 Thread Adam Young
Public bug reported:


ADMIN_PASSWORD=keystone tools/sample_data.sh

... lots of stuff working fine ...

usage: openstack ec2 credentials create [-h]
[-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--noindent] [--prefix PREFIX]
[--project <project>] [--user <user>]
[--user-domain <user-domain>]
[--project-domain <project-domain>]
openstack ec2 credentials create: error: argument --user: expected one argument

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1647486

Title:
  sample-data makes incorrect credentials call

Status in OpenStack Identity (keystone):
  New

Bug description:
  
  ADMIN_PASSWORD=keystone tools/sample_data.sh

  ... lots of stuff working fine ...

  usage: openstack ec2 credentials create [-h]
  [-f {json,shell,table,value,yaml}]
  [-c COLUMN] [--max-width <integer>]
  [--noindent] [--prefix PREFIX]
  [--project <project>] [--user <user>]
  [--user-domain <user-domain>]
  [--project-domain <project-domain>]
  openstack ec2 credentials create: error: argument --user: expected one 
argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1647486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls

2016-12-05 Thread gordon chung
** No longer affects: ceilometer

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596829

Title:
  String interpolation should be delayed at logging calls

Status in congress:
  Fix Released
Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in heat:
  New
Status in Ironic:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in Glance Client:
  Fix Released
Status in python-neutronclient:
  Fix Released

Bug description:
  String interpolation should be delayed to be handled by the logging
  code, rather than being done at the point of the logging call.

  Wrong: LOG.debug('Example: %s' % 'bad')
  Right: LOG.debug('Example: %s', 'good')

  See the following guideline.

  * http://docs.openstack.org/developer/oslo.i18n/guidelines.html
  #adding-variables-to-log-messages

  The rule for it should be added to hacking checks.
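  For context, a minimal runnable sketch of why this matters; the Lazy
  class is an illustrative stand-in, not code from any affected project:

      import logging

      logging.basicConfig(level=logging.ERROR)
      LOG = logging.getLogger(__name__)

      class Lazy(object):
          """Stand-in for an object whose string conversion is costly."""
          def __str__(self):
              return ','.join(str(i) for i in range(1000))

      obj = Lazy()

      # Wrong: the '%' operator formats the message eagerly, calling
      # Lazy.__str__ even though DEBUG records are discarded here.
      LOG.debug('Example: %s' % obj)

      # Right: formatting is deferred to the logging machinery, so
      # Lazy.__str__ only runs if the record is actually emitted.
      LOG.debug('Example: %s', obj)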

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1596829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624064] Re: Bump up Glance API minor version to 2.4

2016-12-05 Thread Alexander Bashmakov
Fixed here: https://review.openstack.org/#/c/366973/

** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1624064

Title:
  Bump up Glance API minor version to 2.4

Status in Glance:
  Fix Released

Bug description:
  https://review.openstack.org/350809
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit a2b329c997b41632b29471b9ddacb7b19adfdb0d
  Author: Nikhil Komawar 
  Date:   Wed Aug 3 18:17:47 2016 -0400

  Bump up Glance API minor version to 2.4
  
  This is the minor version bump for Newton after some of the API
  impacting changes occur.
  
  APIImpact
  UpgradeImpact
  DocImpact
  
  Depends-On: Ie463e2f30db94cde7716c83a94ec2fb0c0658c91
  
  Change-Id: I5d1c4380682efa4c15ff0f294f269c800fe6762a

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1624064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646002] Re: periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate - libguestfs installed but not usable (/usr/bin/supermin exited with error status 1.

2016-12-05 Thread Andrea Frittoli
** Changed in: devstack
   Status: In Progress => Fix Released

** Changed in: tempest
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002

Title:
  periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate -
  libguestfs installed but not usable (/usr/bin/supermin exited with
  error status 1.

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Fix Committed

Bug description:
  The log is http://logs.openstack.org/periodic/periodic-tempest-dsvm-
  neutron-full-ssh-master/14ef08a/logs/

  test_create_server_with_personality failed like

  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_personality.py", line 63, in 
test_create_server_with_personality
  validatable=True)
File "tempest/api/compute/base.py", line 233, in create_test_server
  **kwargs)
File "tempest/common/compute.py", line 167, in create_test_server
  % server['id'])
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "tempest/common/compute.py", line 149, in create_test_server
  clients.servers_client, server['id'], wait_until)
File "tempest/common/waiters.py", line 75, in wait_for_server_status
  server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 
55df9d1c-3316-43a5-81fe-63ff10216b5e failed to build and is in ERROR status
  Details: {u'message': u'No valid host was found. There are not enough hosts 
available.', u'code': 500, u'created': u'2016-11-29T06:28:57Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1646002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310131] Re: Some non-supported actions in Ironic nova driver do not return errors to the user

2016-12-05 Thread Jay Faulkner
Yes, this is still an issue, but it has to be fixed by a major feature
update (capabilities) to Nova, so there's no Ironic action/code to fix.
Therefore marking this invalid against Ironic.

** Changed in: ironic
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310131

Title:
  Some non-supported actions in Ironic nova driver do not return errors
  to the user

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  While checking Nova API actions that I expected to fail when testing
  with the Ironic driver, I noticed that in some cases a positive
  response is returned, but the action then fails within the compute
  process when it is actually executed. When working with other drivers,
  I expect to see some type of immediate response to the initial request
  stating that the action isn't possible. The actions I've specifically
  verified this with are:

  - Pause

2014-04-19 21:47:30.940 ERROR oslo.messaging._drivers.common 
[req-10dedfe7-9fe2-4c0d-9a4e-a85abdd137df demo demo] ['Traceback (most recent 
call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File "/opt/stack/nova/nova/exception.py", line 88, in 
wrapped\npayload)\n', '  File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped\nreturn f(self, 
context, *args, **kw)\n', '  File "/opt/stack/nova/
 nova/compute/manager.py", line 276, in decorated_function\npass\n', '  
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 262, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 329, in decorated_function\n
 function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 305, in decorated_function\n
e, sys.exc_info())\n', '  File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 292, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 3659, in pause_instance\n
self.driver.pause(instance)\n', '  File "/opt/stack/nova/nova/virt/driver.py", 
line 521, in pause\nraise NotImplementedError()\n', 'NotImplementedError\n']

  - Rescue

screen-n-cpu.log:2014-04-19 21:56:29.518 DEBUG ironicclient.common.http 
[req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo demo]
screen-n-cpu.log:2014-04-19 21:56:29.523 ERROR 
oslo.messaging.rpc.dispatcher [req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo 
demo] Exception during message handling: Instance 
5b43d631-91e1-4384-9b87-93283b3ae958 cannot be rescued: Driver Error:
screen-n-cpu.log:2014-04-19 21:56:29.524 ERROR 
oslo.messaging._drivers.common [req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo 
demo] Returning exception Instance 5b43d631-91e1-4384-9b87-93283b3ae958 cannot 
be rescued: Driver Error:  to caller
screen-n-cpu.log:2014-04-19 21:56:29.524 ERROR 
oslo.messaging._drivers.common [req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo 
demo] ['Traceback (most recent call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File "/opt/stack/nova/nova/compute/manager.py", line 395, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', '  
File "/opt/stack/nova/nova/exception.py", line 88, in wrapped\npayload)\n', 
'  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__\nsix.reraise(self.type_, self
 .value, self.tb)\n', '  File "/opt/stack/nova/nova/exception.py", 

[Yahoo-eng-team] [Bug 1647395] Re: Unexpected API Error while launch an instance

2016-12-05 Thread Matt Riedemann
It's blowing up trying to connect to glance because you don't have
CONF.glance.api_servers set in nova.conf on your API node:

2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions for api_server 
in CONF.glance.api_servers:
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions TypeError: 
'NoneType' object is not iterable
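A minimal example of the missing setting, assuming a standard
single-controller layout with glance-api listening on port 9292:

    # /etc/nova/nova.conf on the API node
    [glance]
    api_servers = http://controller:9292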

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647395

Title:
  Unexpected API Error  while launch an instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Server Ubuntu (16.10), VirtualBox 5.1.10 (Controller, Compute, 2 NAT
service NICs)
  Compute QEMU

  Openstack: Newton

  The following error occurs when trying to launch any instance:


  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
631, in create
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1527, in create
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line , in 
_create_instance
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
image_id, boot_meta = self._get_image(context, image_href)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 777, in _get_image
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions image = 
self.image_api.get(context, image_href)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/api.py", line 93, in get
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
show_deleted=show_deleted)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 477, in show
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
_reraise_translated_image_exception(image_id)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 1069, in 
_reraise_translated_image_exception
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
six.reraise(new_exc, None, exc_trace)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 475, in show
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions image = 
self._client.call(context, 2, 'get', image_id)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 173, in call
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions version)
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 152, in 
_create_onetime_client
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
self.api_servers = get_api_servers()
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 114, in 
get_api_servers
  2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions for 
api_server in CONF.glance.api_servers:
  

[Yahoo-eng-team] [Bug 1647451] [NEW] Post live migration step could fail due to auth errors

2016-12-05 Thread Timofey Durakov
Public bug reported:

Description
===
When live migration finishes, it is possible that the keystone auth token
has already expired, which causes the post step to fail.

Steps to reproduce
==
there are 2 options to reproduce this issue:
1. run live-migration of heavy loaded instance, wait for token to expire, and 
after that try to execute live-migration-force-complete
2. set a breakpoint in _post_live_migration method of compute manager, once 
breakpoint is reached,
do openstack token revoke, continue nova execution normally

Expected result
===
the live migration finishes successfully

Actual result
=
the post step fails, and the overall migration fails as well

Environment
===
1. I've tested this case on Newton, but the issue should be valid for the
master branch too.

2. Libvirt + kvm

3. Ceph

4. Neutron vxlan

** Affects: nova
 Importance: Medium
 Assignee: Timofey Durakov (tdurakov)
 Status: In Progress


** Tags: live-migration

** Changed in: nova
 Assignee: (unassigned) => Timofey Durakov (tdurakov)

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647451

Title:
  Post live migration step could fail due to auth errors

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  When live migration finishes, it is possible that the keystone auth
token has already expired, which causes the post step to fail.

  Steps to reproduce
  ==
  there are 2 options to reproduce this issue:
  1. run live-migration of heavy loaded instance, wait for token to expire, and 
after that try to execute live-migration-force-complete
  2. set a breakpoint in _post_live_migration method of compute manager, once 
breakpoint is reached,
  do openstack token revoke, continue nova execution normally

  Expected result
  ===
  the live migration finishes successfully

  Actual result
  =
  the post step fails, and the overall migration fails as well

  Environment
  ===
  1. I've tested this case on Newton, but the issue should be valid for
the master branch too.

  2. Libvirt + kvm

  3. Ceph

  4. Neutron vxlan
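  One way to avoid depending on a fixed token is to build the clients
  used by the post step on a keystoneauth session created from full
  credentials, which re-authenticates transparently when the token
  expires; a minimal sketch (endpoint and credential values are
  placeholders):

      from keystoneauth1.identity import v3
      from keystoneauth1 import session

      auth = v3.Password(auth_url='http://controller:5000/v3',
                         username='nova', password='secret',
                         project_name='service',
                         user_domain_name='Default',
                         project_domain_name='Default')
      # The session requests a fresh token automatically when the old
      # one expires, so long-running operations survive token expiry.
      sess = session.Session(auth=auth)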

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646002] Re: periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate - libguestfs installed but not usable (/usr/bin/supermin exited with error status 1.

2016-12-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406914
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=9fb9d55ec55e4f5105de0cd6f19b530786ec91a2
Submitter: Jenkins
Branch:master

commit 9fb9d55ec55e4f5105de0cd6f19b530786ec91a2
Author: Andrea Frittoli 
Date:   Mon Dec 5 12:22:25 2016 +

Change personality inject path to /

The CirrOS image root disk is empty, it's only populated during
boot from the initrd image. So we can only safely inject files
before boot into '/' directly.

Closes-bug: #1646002

Depends-on: I405793b9e145308e51a08710d8e5df720aec6fde
Change-Id: I2092059acdeab0755215e7ae690e243b5b4df367


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002

Title:
  periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate -
  libguestfs installed but not usable (/usr/bin/supermin exited with
  error status 1.

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Fix Released

Bug description:
  The log is http://logs.openstack.org/periodic/periodic-tempest-dsvm-
  neutron-full-ssh-master/14ef08a/logs/

  test_create_server_with_personality failed like

  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_personality.py", line 63, in 
test_create_server_with_personality
  validatable=True)
File "tempest/api/compute/base.py", line 233, in create_test_server
  **kwargs)
File "tempest/common/compute.py", line 167, in create_test_server
  % server['id'])
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "tempest/common/compute.py", line 149, in create_test_server
  clients.servers_client, server['id'], wait_until)
File "tempest/common/waiters.py", line 75, in wait_for_server_status
  server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 
55df9d1c-3316-43a5-81fe-63ff10216b5e failed to build and is in ERROR status
  Details: {u'message': u'No valid host was found. There are not enough hosts 
available.', u'code': 500, u'created': u'2016-11-29T06:28:57Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1646002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643013] Re: admin dashboard policy check wrong

2016-12-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/399786
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=43e9df85ab286ddee96e9cff97f551781baf70d1
Submitter: Jenkins
Branch:master

commit 43e9df85ab286ddee96e9cff97f551781baf70d1
Author: David Lyle 
Date:   Fri Nov 18 15:02:20 2016 -0700

Rework hardcoded policy in admin dash

Since the content in a Dashboard is not hardcoded, having hardcoded
policy checks to specific services at the dashboard level is wrong.
The Dashboard was designed to evaluate all panels to determine policy
so this type of thing could be avoided. This patch moves the content
specific policy checks to the panels where they apply.

Additionally, this fix uncovered another bug where policy_rules are
wrapped in a list regardless of format. This patch adds a check and
only wraps where necessary.

Change-Id: I79314a45c3c552ebcb3bb7cc881c2467fa009c5d
Closes-Bug: #1643013
Closes-Bug: #1643074


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1643013

Title:
  admin dashboard policy check wrong

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The addition of all the hardcoded policy rules in 
dashboards/admin/dashboard.py is unnecessary and actually wrong because it 
imposes policy based on content for panels that may be disabled. The 
functionality is actually already built in and designed to be dynamic, see: 
https://github.com/openstack/horizon/blob/master/horizon/base.py#L648
  where the panels are iterated over to check for policy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1643013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643074] Re: policy check for panels and dashboards don't handle nested policy rules

2016-12-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/399786
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=43e9df85ab286ddee96e9cff97f551781baf70d1
Submitter: Jenkins
Branch:master

commit 43e9df85ab286ddee96e9cff97f551781baf70d1
Author: David Lyle 
Date:   Fri Nov 18 15:02:20 2016 -0700

Rework hardcoded policy in admin dash

Since the content in a Dashboard is not hardcoded, having hardcoded
policy checks to specific services at the dashboard level is wrong.
The Dashboard was designed to evaluate all panels to determine policy
so this type of thing could be avoided. This patch moves the content
specific policy checks to the panels where they apply.

Additionally, this fix uncovered another bug where policy_rules are
wrapped in a list regardless of format. This patch adds a check and
only wraps where necessary.

Change-Id: I79314a45c3c552ebcb3bb7cc881c2467fa009c5d
Closes-Bug: #1643013
Closes-Bug: #1643074


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1643074

Title:
  policy check for panels and dashboards don't handle nested policy
  rules

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The policy_rules property on Panel and Dashboard is intended to handle
  nested policy rules, where the top level rules are OR'd and the lower
  level is AND'd. Currently regardless of what is passed in, it's
  wrapped in another list. There should be a check that policy_rules is
  not already a list.
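  A minimal sketch of the missing check (names are illustrative, not the
  exact horizon code):

      def normalize_policy_rules(policy_rules):
          # A single rule is a tuple like
          # ('compute', 'os_compute_api:servers:index'); nested rules
          # are already a list of such tuples. Only wrap the former.
          if policy_rules and not isinstance(policy_rules[0],
                                             (list, tuple)):
              return [policy_rules]
          return list(policy_rules)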

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1643074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647431] [NEW] grenade job times out on Xenial

2016-12-05 Thread Daniel Alvarez
Public bug reported:

gate-grenade-dsvm-neutron-multinode-ubuntu-xenial job is failing on
neutron gate

I have checked some other patches, and the job does not fail on them, so
the failure appears to be nondeterministic.


From the logs: 

[1]
2016-12-05 09:07:46.832799 | ERROR: the main setup script run by this job 
failed - exit code: 124

[2]
2016-12-05 09:07:10.778 | + 
/opt/stack/new/grenade/projects/70_cinder/resources.sh:destroy:207 :   timeout 
30 sh -c 'while openstack server show cinder_server1 >/dev/null; do sleep 1; 
done'
2016-12-05 09:07:40.781 | + 
/opt/stack/new/grenade/projects/70_cinder/resources.sh:destroy:1 :   exit_trap
2016-12-05 09:07:40.782 | + /opt/stack/new/grenade/functions:exit_trap:103 :   
local r=124


[1] 
http://logs.openstack.org/40/402140/7/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/ad0cf41/console.html
[2] 
http://logs.openstack.org/40/402140/7/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/ad0cf41/logs/grenade.sh.txt.gz

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647431

Title:
  grenade job times out on Xenial

Status in neutron:
  Confirmed

Bug description:
  gate-grenade-dsvm-neutron-multinode-ubuntu-xenial job is failing on
  neutron gate

  I have checked some other patches, and the job does not fail on them,
  so the failure appears to be nondeterministic.

  
  From the logs: 

  [1]
  2016-12-05 09:07:46.832799 | ERROR: the main setup script run by this job 
failed - exit code: 124

  [2]
  2016-12-05 09:07:10.778 | + 
/opt/stack/new/grenade/projects/70_cinder/resources.sh:destroy:207 :   timeout 
30 sh -c 'while openstack server show cinder_server1 >/dev/null; do sleep 1; 
done'
  2016-12-05 09:07:40.781 | + 
/opt/stack/new/grenade/projects/70_cinder/resources.sh:destroy:1 :   exit_trap
  2016-12-05 09:07:40.782 | + /opt/stack/new/grenade/functions:exit_trap:103 :  
 local r=124

  
  [1] 
http://logs.openstack.org/40/402140/7/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/ad0cf41/console.html
  [2] 
http://logs.openstack.org/40/402140/7/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/ad0cf41/logs/grenade.sh.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647432] [NEW] Multiple SIGHUPs to keepalived might trigger re-election

2016-12-05 Thread John Schwarz
Public bug reported:

As the title says, multiple SIGHUPs that are sent to the keepalived
process might cause it to forfeit mastership and re-negotiate a new
master (which might be the original master). This means that when, for
example, associating/disassociating 2 floating IPs in quick succession
(each triggers a SIGHUP), the master node may forfeit mastership and
trigger re-election, causing it to switch to BACKUP, removing all the
remaining FIPs' IP addresses and severing connectivity.
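A generic debounce sketch showing one way to coalesce rapid reload
requests into a single HUP (illustrative only, not the fix proposed for
neutron):

    import os
    import signal
    import threading

    class HupDebouncer(object):
        """Send keepalived at most one SIGHUP per quiet period,
        instead of one per configuration change."""

        def __init__(self, pid, delay=2.0):
            self.pid = pid
            self.delay = delay
            self._timer = None
            self._lock = threading.Lock()

        def request_reload(self):
            with self._lock:
                if self._timer is not None:
                    self._timer.cancel()   # absorb the earlier request
                self._timer = threading.Timer(self.delay, self._send_hup)
                self._timer.start()

        def _send_hup(self):
            os.kill(self.pid, signal.SIGHUP)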

** Affects: neutron
 Importance: High
 Assignee: John Schwarz (jschwarz)
 Status: In Progress


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647432

Title:
  Multiple SIGHUPs to keepalived might trigger re-election

Status in neutron:
  In Progress

Bug description:
  As the title says, multiple SIGHUPs that are sent to the keepalived
  process might cause it to forfeit mastership and re-negotiate a new
  master (which might be the original master). This means that when, for
  example, associating/disassociating 2 floating IPs in quick succession
  (each triggers a SIGHUP), the master node may forfeit mastership and
  trigger re-election, causing it to switch to BACKUP, removing all the
  remaining FIPs' IP addresses and severing connectivity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647421] [NEW] Neutron attempts to schedule DHCP agents even when intentionally not in use

2016-12-05 Thread Russell Bryant
Public bug reported:

OVN has its own native support for DHCP, so the Neutron DHCP agent is
not in use.  When networks get created, we see warnings in the log about
Neutron still trying to schedule DHCP agents.

We should be able to disable this code path completely when the DHCP
agent is intentionally not in use.

2016-12-05 16:44:12.252 23149 WARNING neutron.scheduler.dhcp_agent_scheduler 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] No more DHCP agents
2016-12-05 16:44:12.253 23149 WARNING 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] Unable to schedule network 
45d29a60-a672-429f-b7d7-551ee985c8ca: no agents available; will retry on 
subsequent port and subnet creation events.

** Affects: networking-ovn
 Importance: Medium
 Status: Confirmed

** Affects: neutron
 Importance: Undecided
 Status: New

** Changed in: networking-ovn
   Status: New => Confirmed

** Changed in: networking-ovn
   Importance: Undecided => Medium

** Description changed:

  OVN has its own native support for DHCP, so the Neutron DHCP agent is
  not in use.  When networks get created, we see warnings in the log about
  Neutron still trying to schedule DHCP agents.
  
  We should be able to disable this code path completely when the DHCP
  agent is intentionally not in use.
+ 
+ 2016-12-05 16:44:12.252 23149 WARNING neutron.scheduler.dhcp_agent_scheduler 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] No more DHCP agents
+ 2016-12-05 16:44:12.253 23149 WARNING 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] Unable to schedule network 
45d29a60-a672-429f-b7d7-551ee985c8ca: no agents available; will retry on 
subsequent port and subnet creation events.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647421

Title:
  Neutron attempts to schedule DHCP agents even when intentionally not
  in use

Status in networking-ovn:
  Confirmed
Status in neutron:
  New

Bug description:
  OVN has its own native support for DHCP, so the Neutron DHCP agent is
  not in use.  When networks get created, we see warnings in the log
  about Neutron still trying to schedule DHCP agents.

  We should be able to disable this code path completely when the DHCP
  agent is intentionally not in use.

  2016-12-05 16:44:12.252 23149 WARNING neutron.scheduler.dhcp_agent_scheduler 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] No more DHCP agents
  2016-12-05 16:44:12.253 23149 WARNING 
neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api 
[req-038bedd2-fa56-4cfb-af68-8a36f39a2e9c 560fda6bf041441581ea756be353433c 
21888697d4914989af439a25ebda0b76 - - -] Unable to schedule network 
45d29a60-a672-429f-b7d7-551ee985c8ca: no agents available; will retry on 
subsequent port and subnet creation events.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1647421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582323] Re: Commissioning fails when competing cloud metadata resides on disk

2016-12-05 Thread Scott Moser
re-opening this for cloud-init as the change in cloud-init actually
regressed the behavior.


** Changed in: cloud-init (Ubuntu)
   Status: Fix Released => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Released => Confirmed

** Changed in: cloud-init
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1582323

Title:
  Commissioning fails when competing cloud metadata resides on disk

Status in cloud-init:
  Confirmed
Status in MAAS:
  Triaged
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  A customer reused hardware that had previously been deployed as a RHEL
  overcloud controller, which places metadata on the disk as a legitimate
  source that cloud-init looks at by default.  When the newly enlisted
  node appeared, it had the name "overcloud-controller-0" instead of
  maas-enlist, pulled from the on-disk metadata, which had overridden
  MAAS' metadata.  Commissioning continually failed on all of the nodes
  until the disk metadata was manually removed (KVM-boot an Ubuntu ISO
  and rm -f the data, or dd zeros to the disk).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1582323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647408] [NEW] Neutron port update with device id does not work with nova properly

2016-12-05 Thread Sridhar Venkat
Public bug reported:

Deploy a new VM (VM1) and attach a network interface to it. Deploy
another VM (VM2). Using the Neutron port update API, update the device
id of the network interface attached to VM1 to the nova instance id of
VM2. In neutron, the device id gets updated successfully. However, the
change is not reflected in nova: for example, VIF unplug of the port
from VM1 is not called (and subsequently VIF plug of the port into VM2
is not called).

Updating a port's device id in Neutron therefore does not integrate
properly with nova's networking. With the current code, the Neutron API
allows the device id of a port to be updated, but the update is not
carried over to nova appropriately.
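For reference, the reproduction drives the plain port-update call, e.g.
with python-neutronclient (a sketch; IDs and credentials are
placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    # Re-point the port originally attached to VM1 at VM2's instance
    # UUID.
    neutron.update_port('PORT_UUID',
                        {'port': {'device_id': 'VM2_INSTANCE_UUID'}})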

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Venkat (svenkat)
 Status: In Progress

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Venkat (svenkat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647408

Title:
  Neutron port update with device id does not work with nova properly

Status in neutron:
  In Progress

Bug description:
  Deploy a new VM (VM1) and attach a network interface to it. Deploy
  another VM (VM2). Using the Neutron port update API, update the device
  id of the network interface attached to VM1 to the nova instance id of
  VM2. In neutron, the device id gets updated successfully. However, the
  change is not reflected in nova: for example, VIF unplug of the port
  from VM1 is not called (and subsequently VIF plug of the port into VM2
  is not called).

  Updating a port's device id in Neutron therefore does not integrate
  properly with nova's networking. With the current code, the Neutron
  API allows the device id of a port to be updated, but the update is
  not carried over to nova appropriately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647399] [NEW] issue with non listing snapshots in images list in rebuild instance window

2016-12-05 Thread Bartlomiej
Public bug reported:

When trying to rebuild an instance from a snapshot, the snapshot does
not show up in the rebuild image list.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1647399

Title:
  issue with non listing snapshots in images list in rebuild instance
  window

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When trying to rebuild an instance from a snapshot, the snapshot does
  not show up in the rebuild image list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1647399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647395] [NEW] Unexpected API Error while launch an instance

2016-12-05 Thread Jens
Public bug reported:

Server Ubuntu (16.10), VirtualBox 5.1.10 (Controller, Compute, 2 NAT service NICs)
Compute QEMU

Openstack: Newton

The following error occurs when trying to launch any instance:


2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
631, in create
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
**create_kwargs)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1527, in create
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line , in 
_create_instance
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions image_id, 
boot_meta = self._get_image(context, image_href)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 777, in _get_image
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions image = 
self.image_api.get(context, image_href)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/api.py", line 93, in get
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
show_deleted=show_deleted)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 477, in show
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
_reraise_translated_image_exception(image_id)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 1069, in 
_reraise_translated_image_exception
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
six.reraise(new_exc, None, exc_trace)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 475, in show
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions image = 
self._client.call(context, 2, 'get', image_id)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 173, in call
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions version)
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 152, in 
_create_onetime_client
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions 
self.api_servers = get_api_servers()
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 114, in 
get_api_servers
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions for 
api_server in CONF.glance.api_servers:
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions TypeError: 
'NoneType' object is not iterable
2016-12-05 15:32:12.150 2998 ERROR nova.api.openstack.extensions
2016-12-05 15:32:12.155 2998 INFO nova.api.openstack.wsgi 
[req-adef7e01-3cca-4811-be11-c51f31908ce1 5c672198e2954c3880276929668a52b9 
e04babbaea404bb699f1a56d09a851b2 - default default] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.

2016-12-05 15:32:12.161 2998 INFO nova.osapi_compute.wsgi.server 
[req-adef7e01-3cca-4811-be11-c51f31908ce1 5c672198e2954c3880276929668a52b9 
e04babbaea404bb699f1a56d09a851b2 - default default] 10.0.0.11 "POST 

[Yahoo-eng-team] [Bug 1647370] [NEW] Resource tracker doesn't free resources on confirm resize

2016-12-05 Thread Ludovic Beliveau
Public bug reported:

Description
===

If the audit hasn't been triggered and confirm resize is executed, the
resources aren't dropped, because the itype stored in
self.tracked_migrations corresponds to the new flavor.

But if the audit has been executed, it corresponds to the old flavor,
and the resources get dropped properly.

Steps to reproduce
==

1) Resize a guest
2) Confirm the resize before the periodic audit gets triggered

Expected result
===

The guest's resources corresponding to the initial flavor should had
been freed.

Actual result
=

The guest's resources corresponding to the initial flavor aren't freed.

Environment
===

commit e83a3572344f9be39930ea9ead83a1f9b94ac7fe
Author: Timofey Durakov 
Date:   Thu Dec 1 11:58:18 2016 +0300

Fix for live-migration job

Commit 9293ac0 to devstack-plugin-ceph altered
CEPH_LOOPBACK_DISK_SIZE_DEFAULT variable initialization
This fix added source for setting this variable in correct way.

Closes-Bug: #1646418

Change-Id: I84c3b78c53cfa283e9bcb7cf4b70ec6c95044e9c

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647370

Title:
  Resource tracker doesn't free resources on confirm resize

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  If the audit hasn't been triggered and confirm resize is executed,
  the resources aren't dropped, because the itype stored in
  self.tracked_migrations corresponds to the new flavor.

  But if the audit has been executed, it corresponds to the old flavor,
  and the resources get dropped properly.

  Steps to reproduce
  ==

  1) Resize a guest
  2) Confirm the resize before the periodic audit gets triggered

  Expected result
  ===

  The guest's resources corresponding to the initial flavor should had
  been freed.

  Actual result
  =

  The guest's resources corresponding to the initial flavor aren't
  freed.

  Environment
  ===

  commit e83a3572344f9be39930ea9ead83a1f9b94ac7fe
  Author: Timofey Durakov 
  Date:   Thu Dec 1 11:58:18 2016 +0300

  Fix for live-migration job
  
  Commit 9293ac0 to devstack-plugin-ceph altered
  CEPH_LOOPBACK_DISK_SIZE_DEFAULT variable initialization
  This fix added source for setting this variable in correct way.
  
  Closes-Bug: #1646418
  
  Change-Id: I84c3b78c53cfa283e9bcb7cf4b70ec6c95044e9c

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647347] [NEW] image_meta code migration in finish_migraiton from older release

2016-12-05 Thread jichenjc
Public bug reported:

We had a problem when migrating from an older release (L) to N (with a
driver other than kvm).


We hit this error in the virt layer's finish_migration function, which
uses the following code:

image_meta = self._image_api.get(context, image_meta.id)

2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416] self.obj_load_attr(name)
2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416]   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 627, in 
obj_load_attr
2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416] _("Cannot load '%s' in the base 
class") % attrname)
2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416] NotImplementedError: Cannot load 'id' in 
the base class
2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416]

So the problem seems to be that image_meta.id is not set when it's an
old instance, because an old instance's image_meta comes from
system_metadata. I think this image id should be set whenever we create
ImageMeta, in any case?

e.g 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4006
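A hedged sketch of a defensive guard: obj_attr_is_set() is the standard
oslo.versionedobjects check, while falling back to instance.image_ref is
an assumption about what makes sense here:

    def load_image_meta(image_api, context, instance, image_meta):
        # Old instances built their ImageMeta from system_metadata,
        # which does not carry the image id, so fall back to the
        # instance's image_ref in that case.
        if image_meta.obj_attr_is_set('id'):
            return image_api.get(context, image_meta.id)
        return image_api.get(context, instance.image_ref)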

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647347

Title:
  image_meta code migration in finish_migraiton  from older release

Status in OpenStack Compute (nova):
  New

Bug description:
  We had a problem when migrating from an older release (L) to N (with
  a driver other than kvm).

  
  We hit this error in the virt layer's finish_migration function,
  which uses the following code:

  image_meta = self._image_api.get(context, image_meta.id)

  2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416] self.obj_load_attr(name)
  2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416]   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 627, in 
obj_load_attr
  2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416] _("Cannot load '%s' in the base 
class") % attrname)
  2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416] NotImplementedError: Cannot load 'id' in 
the base class
  2016-12-01 07:09:14.600 35918 ERROR nova.compute.manager [instance: 
c7c2adff-6e33-4b3f-b5e3-74327ea80416]

  So the problem seems to be that image_meta.id is not set when it's an
  old instance, because an old instance's image_meta comes from
  system_metadata. I think this image id should be set whenever we
  create ImageMeta, in any case?

  e.g 
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4006

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647345] [NEW] UUID field setting in InstanceMapping

2016-12-05 Thread jichenjc
Public bug reported:

We see this warning in the unit tests:


/home/jichen/git/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:352:
 FutureWarning: 46d5efdf 540b 4657 850b 28c5024a8ce5 is an invalid UUID. Using 
UUIDFields with invalid UUIDs is no longer supported, and will be removed in a 
future release. Please update your code to input valid UUIDs or accept 
ValueErrors for invalid UUIDs. See 
http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField
 for further details
  "for further details" % value, FutureWarning)


The problem is bigger than a warning once that code changes: we have
the following code in nova/cmd/manage.py, which will make the instance
uuid unacceptable in the real code.

1365 # Don't judge me. There's already an InstanceMapping with this 
UUID
1366 # so the marker needs to be non destructively modified.
1367 next_marker = next_marker.replace('-', ' ')
1368 objects.InstanceMapping(ctxt, instance_uuid=next_marker,
1369 project_id=marker_project_id).create()
1370 return 1
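Why the transformed marker is rejected: the space-separated form no
longer parses as a UUID at all, as a quick runnable check shows:

    import uuid

    good = '46d5efdf-540b-4657-850b-28c5024a8ce5'
    bad = good.replace('-', ' ')

    uuid.UUID(good)       # parses fine
    try:
        uuid.UUID(bad)    # raises ValueError
    except ValueError as exc:
        print(exc)        # badly formed hexadecimal UUID string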

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647345

Title:
  UUID field setting in InstanceMapping

Status in OpenStack Compute (nova):
  New

Bug description:
  We see this warning in the unit tests:

  
/home/jichen/git/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:352:
 FutureWarning: 46d5efdf 540b 4657 850b 28c5024a8ce5 is an invalid UUID. Using 
UUIDFields with invalid UUIDs is no longer supported, and will be removed in a 
future release. Please update your code to input valid UUIDs or accept 
ValueErrors for invalid UUIDs. See 
http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField
 for further details
"for further details" % value, FutureWarning)

  
  The problem is bigger than a warning once that code changes: we have
  the following code in nova/cmd/manage.py, which will make the instance
  uuid unacceptable in the real code.

  1365 # Don't judge me. There's already an InstanceMapping with 
this UUID
  1366 # so the marker needs to be non destructively modified.
  1367 next_marker = next_marker.replace('-', ' ')
  1368 objects.InstanceMapping(ctxt, instance_uuid=next_marker,
  1369 project_id=marker_project_id).create()
  1370 return 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646002] Re: periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate - libguestfs installed but not usable (/usr/bin/supermin exited with error status 1.

2016-12-05 Thread Andrea Frittoli
The root disk for the cirros image is blank before boot.

The boot process starts from initrd. The file system in initrd is then
copied to /dev/vda and boot continues from there. Injection happens
before boot, so there is no /etc folder to be found.

The test should inject to / instead.
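
A hedged sketch of that direction (the personality API takes a list of
path/contents pairs with base64-encoded contents; the helper name and the
file contents here are made up for illustration):

    import base64

    def personality_for_cirros(contents=b'This is a test file.'):
        """Build a personality entry whose target path exists in the
        cirros initrd at injection time (/ rather than /etc)."""
        return [{
            'path': '/test.txt',
            'contents': base64.b64encode(contents).decode('utf-8'),
        }]

    # Inside the tempest test, roughly:
    #   self.create_test_server(validatable=True,
    #                           personality=personality_for_cirros())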


** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002

Title:
  periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate -
  libguestfs installed but not usable (/usr/bin/supermin exited with
  error status 1.

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  New

Bug description:
  The log is http://logs.openstack.org/periodic/periodic-tempest-dsvm-
  neutron-full-ssh-master/14ef08a/logs/

  test_create_server_with_personality failed as follows:

  Traceback (most recent call last):
File "tempest/api/compute/servers/test_server_personality.py", line 63, in 
test_create_server_with_personality
  validatable=True)
File "tempest/api/compute/base.py", line 233, in create_test_server
  **kwargs)
File "tempest/common/compute.py", line 167, in create_test_server
  % server['id'])
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "tempest/common/compute.py", line 149, in create_test_server
  clients.servers_client, server['id'], wait_until)
File "tempest/common/waiters.py", line 75, in wait_for_server_status
  server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 
55df9d1c-3316-43a5-81fe-63ff10216b5e failed to build and is in ERROR status
  Details: {u'message': u'No valid host was found. There are not enough hosts 
available.', u'code': 500, u'created': u'2016-11-29T06:28:57Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1646002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647316] [NEW] scheduler report client sends allocations with value of zero, violating min_unit

2016-12-05 Thread Chris Dent
Public bug reported:


When a VM boots using non-local disk, it tries to send an allocation of 
'DISK_GB': 0. This violates the default min_unit of 1 and causes an error that 
looks like this:

[req-858cbed4-c113-45e8-94e3-1d8ee64f9de0 488c2b05a66b441199f4c1dca7accd5b 
3fa5b55ecc154427b636119f0920d252 - default default] Bad inventory
Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/placement/handlers/allocation.py",
 line 253, in set_allocations
allocations.create_all()
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
226, in wrapper
return fn(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 1050, in create_all
self._set_allocations(self._context, self.objects)
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 894, in wrapper
return fn(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 1011, in _set_allocations
before_gens = _check_capacity_exceeded(conn, allocs)
  File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 921, in _check_capacity_exceeded
resource_provider=rp_uuid)
InvalidAllocationConstraintsViolated: Unable to create allocation for 'DISK_GB' 
on resource provider 'f9398126-d0e8-4cf8-ae45-9103a88aa13d'. The requested 
amount would violate inventory constraints.

The code responsible is at
https://github.com/openstack/nova/blob/474c2ef28234dacc658e9a78762cac66ef7fe334/nova/scheduler/client/report.py#L105

The correct fix is probably to omit from the allocation dict any resource
class whose requested amount is zero; see the sketch below.
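
A hedged sketch of that fix (a hypothetical helper, not the actual nova
patch; the attributes used are the standard flavor fields, and the DISK_GB
formula is simplified):

    def allocations_for_instance(flavor):
        resources = {
            'VCPU': flavor.vcpus,
            'MEMORY_MB': flavor.memory_mb,
            # 0 for a boot-from-volume instance with no ephemeral disk
            'DISK_GB': flavor.root_gb + flavor.ephemeral_gb,
        }
        # Dropping zero-valued amounts keeps the request from violating
        # the default min_unit of 1 on the provider's inventory.
        return {rc: amount for rc, amount in resources.items() if amount > 0}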

** Affects: nova
 Importance: Medium
 Assignee: Chris Dent (cdent)
 Status: Triaged


** Tags: placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647316

Title:
  scheduler report client sends allocations with value of zero,
  violating min_unit

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  
  When a VM boots using non-local disk, it tries to send an allocation of 
'DISK_GB': 0. This violates the default min_unit of 1 and causes an error that 
looks like this:

  [req-858cbed4-c113-45e8-94e3-1d8ee64f9de0 488c2b05a66b441199f4c1dca7accd5b 
3fa5b55ecc154427b636119f0920d252 - default default] Bad inventory
  Traceback (most recent call last):
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/placement/handlers/allocation.py",
 line 253, in set_allocations
  allocations.create_all()
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
226, in wrapper
  return fn(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 1050, in create_all
  self._set_allocations(self._context, self.objects)
File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 894, in wrapper
  return fn(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 1011, in _set_allocations
  before_gens = _check_capacity_exceeded(conn, allocs)
File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", 
line 921, in _check_capacity_exceeded
  resource_provider=rp_uuid)
  InvalidAllocationConstraintsViolated: Unable to create allocation for 
'DISK_GB' on resource provider 'f9398126-d0e8-4cf8-ae45-9103a88aa13d'. The 
requested amount would violate inventory constraints.

  The code responsible is at
  https://github.com/openstack/nova/blob/474c2ef28234dacc658e9a78762cac66ef7fe334/nova/scheduler/client/report.py#L105

  The correct fix is probably to omit from the allocation dict any resource
  class whose requested amount is zero.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647263] Re: neutron service can not start

2016-12-05 Thread Ihar Hrachyshka
http://logs.openstack.org/56/394356/6/experimental/gate-rally-dsvm-
neutron-extensions-rally/e919153/logs/screen-q-svc.txt.gz

Also, LP is not a support forum.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647263

Title:
  neutron service can not start

Status in neutron:
  Invalid

Bug description:
  Hi all,
  We created a job for lbaas and fwaas in openstack-infra, but it fails
  because the neutron service does not start. Why is that?

  The log of the failed run:
  http://logs.openstack.org/56/394356/6/experimental/gate-rally-dsvm-
  neutron-extensions-rally/e919153/logs/devstacklog.txt.gz

  THANKS!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp