[Yahoo-eng-team] [Bug 1602974] Re: [stable/liberty] LBaaS v2 haproxy: need a way to find status of listener

2016-07-14 Thread Prashant Shetty
Thanks Brandon for looking into this. A couple of points:

1. This is the HAProxy driver implementation, not Octavia.
2. The VMs are spawned without any issue, and the LB and health monitor were
given enough time to provision the members of the pool.

I am assigning this bug back. Please let me know what you think.

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602974

Title:
  [stable/liberty] LBaaS v2 haproxy: need a way to find status of
  listener

Status in neutron:
  New

Bug description:
  Currently we don't have an option to check the status of a listener.
  Below is the output of a listener, which has no status field.

  root@runner:~# neutron lbaas-listener-show 
8c0e0289-f85d-4539-8970-467a45a5c191
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 8c0e0289-f85d-4539-8970-467a45a5c191   |
  | loadbalancers | {"id": "bda96c0a-0167-45ab-8772-ba92bc0f2d00"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   |
  +---++
  root@runner:~#

  The problem arises when we try to configure a listener and a pool back
  to back without any delay. The pool create fails, saying the listener
  is not ready.

  The workaround is to add a 3-second delay between listener and pool
  creation.
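
  A more robust workaround than a fixed sleep is to poll the load
  balancer's provisioning_status until it reports ACTIVE before creating
  the pool. The sketch below is only an illustration of that idea;
  get_provisioning_status() is a hypothetical helper standing in for
  whatever client or CLI call reads the status in your environment
  (e.g. wrapping `neutron lbaas-loadbalancer-show`).

  import time

  def wait_until_active(lb_id, get_provisioning_status,
                        timeout=60, interval=1):
      """Poll until the load balancer's provisioning_status is ACTIVE."""
      deadline = time.time() + timeout
      while time.time() < deadline:
          status = get_provisioning_status(lb_id)
          if status == 'ACTIVE':
              return
          if status == 'ERROR':
              raise RuntimeError('load balancer %s went to ERROR' % lb_id)
          time.sleep(interval)
      raise RuntimeError('timed out waiting for load balancer %s' % lb_id)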

  Logs:

  root@runner:~# neutron lbaas-loadbalancer-create --name test-lb vn-subnet; 
neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb 
--protocol HTTP --protocol-port 80; neutron lbaas-pool-create --name 
test-lb-pool-http  --lb-algorithm ROUND_ROBIN --listener test-lb-http  
--protocol HTTP
  Created a new loadbalancer:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | description |  |
  | id  | 3ed2ff4a-4d87-46da-8e5b-265364dd6861 |
  | listeners   |  |
  | name| test-lb  |
  | operating_status| OFFLINE  |
  | provider| haproxy  |
  | provisioning_status | PENDING_CREATE   |
  | tenant_id   | ce1d087209c64df4b7e8007dc35def22 |
  | vip_address | 20.0.0.62|
  | vip_port_id | 4c33365e-64b9-428f-bc0b-bce6c08c9b20 |
  | vip_subnet_id   | 63cbeccd-6887-4dda-b4d2-b7503bce870a |
  +-+--+
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 90260465-934a-44a4-a289-208e5af74cf5   |
  | loadbalancers | {"id": "3ed2ff4a-4d87-46da-8e5b-265364dd6861"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   

[Yahoo-eng-team] [Bug 1600788] Re: If a common message is not being used, they should each be treated separately with respect to choosing a marker function

2016-07-14 Thread weiweigu
** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600788

Title:
  If a common message is not being used, they should each be treated
  separately with respect to choosing a marker function

Status in Ceilometer:
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Triaged
Status in oslo.log:
  Fix Released
Status in Solum:
  New

Bug description:
  Follow the
  http://docs.openstack.org/developer/oslo.i18n/guidelines.html

  For example, do not do this:

  # WRONG
  LOG.exception(_('There was an error.'))
  raise LocalExceptionClass(_('An error occurred.'))

  Instead, use this style:

  # RIGHT
  LOG.exception(_LE('There was an error.'))
  raise LocalExceptionClass(_('An error occurred.'))

  And oslo.log has this problem; we should correct it.
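
  To make the distinction concrete, here is a minimal, self-contained
  sketch of both patterns from the guideline. It uses the standard
  logging module and stubs _ and _LE (in a real project they come from
  that project's i18n module), so nothing here is the actual oslo.log
  code.

  import logging

  LOG = logging.getLogger(__name__)
  _ = _LE = lambda msg: msg  # stand-ins for the project's i18n markers

  def fail_with_common_message():
      # A common message is used for both the log and the exception:
      # translate it once with _() and reuse it.
      msg = _('Unable to reach the backend.')
      LOG.error(msg)
      raise RuntimeError(msg)

  def fail_with_separate_messages():
      # Separate messages: each one picks its own marker function,
      # _LE for the error log and _ for the user-facing exception.
      LOG.exception(_LE('There was an error.'))
      raise RuntimeError(_('An error occurred.'))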

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599260] Re: Old version information should be configurable

2016-07-14 Thread Steve Martinelli
I believe this is a no-op for keystone and ironic now; we'll
automatically pick up the new oslosphinx and it'll be fixed. I'm marking
them as invalid.

** Changed in: ironic
   Status: New => Invalid

** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone
 Assignee: David Stanek (dstanek) => (unassigned)

** Changed in: keystone
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1599260

Title:
  Old version information should be configurable

Status in Ironic:
  Invalid
Status in OpenStack Identity (keystone):
  Invalid
Status in oslosphinx:
  Fix Released

Bug description:
  Have a look at http://developer.openstack.org//api-
  ref/identity/v2/index.html

  The version information here makes no sense; we do not have links to
  old documents for the API at all. These links should not appear. Can
  we make them configurable, please?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1599260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603292] [NEW] Neutron network tags should not be empty string

2016-07-14 Thread shihanzhang
Public bug reported:

Currently Neutron network tags can be an empty string, but I think there
is no use case for an empty-string tag, so we should add a check for
tags.

root@server201:~# neutron tag-add --resource-type network --resource test --tag 
'test_tag'
root@server201:~# neutron tag-add --resource-type network --resource test --tag 
'   '
root@server201:~# neutron net-show test
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2016-07-15T01:45:51  |
| description   |  |
| id| f1060382-c7fa-43d5-a214-e8525184e7f0 |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | test |
| port_security_enabled | True |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 26   |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tags  |  |
|   | test_tag |
| tenant_id | 9e211e5ad3c0407aaf6c5803dc307c27 |
| updated_at| 2016-07-15T01:45:51  |
+---+--+
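
A minimal sketch of the kind of check being suggested, assuming it runs
wherever the tag value is received (the function name and message are
illustrative, not the actual neutron code):

def validate_tag(tag):
    """Reject tags that are empty or contain only whitespace."""
    if not isinstance(tag, str) or not tag.strip():
        raise ValueError("network tags must be non-empty strings")
    return tag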

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603292

Title:
  Neutron network tags should not be empty string

Status in neutron:
  New

Bug description:
  Currently Neutron network tags can be an empty string, but I think
  there is no use case for an empty-string tag, so we should add a check
  for tags.

  root@server201:~# neutron tag-add --resource-type network --resource test 
--tag 'test_tag'
  root@server201:~# neutron tag-add --resource-type network --resource test 
--tag '   '
  root@server201:~# neutron net-show test
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2016-07-15T01:45:51  |
  | description   |  |
  | id| f1060382-c7fa-43d5-a214-e8525184e7f0 |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | test |
  | port_security_enabled | True |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 26   |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tags  |  |
  |   | test_tag |
  | tenant_id | 9e211e5ad3c0407aaf6c5803dc307c27 |
  | updated_at| 2016-07-15T01:45:51  |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603292/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1586268] Re: Unit test: self.assertNotEqual in unit.test_base.BaseTest.test_eq does not work in PY2

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/342035
Committed: 
https://git.openstack.org/cgit/openstack/ceilometer/commit/?id=5cebb31c09ba408baf6bdda075cd0cc2c754a388
Submitter: Jenkins
Branch:master

commit 5cebb31c09ba408baf6bdda075cd0cc2c754a388
Author: Ji-Wei 
Date:   Thu Jul 14 17:33:11 2016 +0800

base.Resource not define __ne__() built-in function

Class base.Resource defines __eq__() built-in function, but does
not define __ne__() built-in function, so self.assertEqual works
but self.assertNotEqual does not work at all in this test case in
python2. This patch fixes it.

Change-Id: I819cb27664661e0b67d1e886c28432a2d1134cb0
Closes-Bug: #1586268


** Changed in: ceilometer
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586268

Title:
  Unit test: self.assertNotEqual in  unit.test_base.BaseTest.test_eq
  does not work in PY2

Status in Ceilometer:
  Fix Released
Status in daisycloud-core:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in keystonemiddleware:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in python-barbicanclient:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  In Progress
Status in python-manilaclient:
  New
Status in python-muranoclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-smaugclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in tempest:
  In Progress

Bug description:
  Version: master(20160527)

  In the test case cinderclient.tests.unit.test_base.BaseTest.test_eq,
  self.assertNotEqual does not work.
  Class base.Resource defines the __eq__() special method but does not
  define __ne__(), so self.assertEqual works but self.assertNotEqual does
  not work at all in this test case.

  Steps:
  1. Clone the python-cinderclient code from master.
  2. Modify the unit test case cinderclient/tests/unit/test_base.py,
     lines 50 to 62:

     def test_eq(self):
         # Two resources with same ID: never equal if their info is not equal
         r1 = base.Resource(None, {'id': 1, 'name': 'hi'})
         r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
         self.assertNotEqual(r1, r2)

         # Two resources with same ID: equal if their info is equal
         r1 = base.Resource(None, {'id': 1, 'name': 'hello'})
         r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
         # self.assertEqual(r1, r2)
         self.assertNotEqual(r1, r2)

         # Two resources of different types: never equal
         r1 = base.Resource(None, {'id': 1})
         r2 = volumes.Volume(None, {'id': 1})
         self.assertNotEqual(r1, r2)

         # Two resources with no ID: equal if their info is equal
         r1 = base.Resource(None, {'name': 'joe', 'age': 12})
         r2 = base.Resource(None, {'name': 'joe', 'age': 12})
         # self.assertEqual(r1, r2)
         self.assertNotEqual(r1, r2)

     That is, change each self.assertEqual(r1, r2) to
     self.assertNotEqual(r1, r2).

  3. Run the unit tests; they return success.

  After that, I make a test:

  class Resource(object):
      def __init__(self, person):
          self.person = person

      def __eq__(self, other):
          return self.person == other.person

  r1 = Resource("test")
  r2 = Resource("test")
  r3 = Resource("test_r3")
  r4 = Resource("test_r4")

  print r1 != r2
  print r1 == r2
  print r3 != r4
  print r3 == r4

  The result is:
  True
  True
  True
  False

  Whether or not r1 is exactly the same as r2, self.assertNotEqual(r1,
  r2) returns true. So I think self.assertNotEqual doesn't work at all in
  Python 2 and should be modified.
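
  The usual Python 2 fix is to define __ne__ in terms of __eq__ so that
  assertNotEqual behaves as expected. A minimal sketch of that pattern
  (not the actual cinderclient/ceilometer patch):

  class Resource(object):
      def __init__(self, info):
          self.info = info

      def __eq__(self, other):
          if not isinstance(other, Resource):
              return NotImplemented
          return self.info == other.info

      def __ne__(self, other):
          # Python 2 does not derive __ne__ from __eq__ (Python 3 does),
          # so define it explicitly.
          result = self.__eq__(other)
          if result is NotImplemented:
              return result
          return not result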

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1586268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603267] [NEW] Possible typo in templates/stacks/_preview_details.htm

2016-07-14 Thread Jesper M
Public bug reported:

This commit seems to be missing an 's' in 'stacks':

https://github.com/openstack/horizon/commit/453ac5254c0d00e6bbb172ea5f1302dd82fe0af8#commitcomment-18254965

Otherwise this happens:
https://ask.openstack.org/en/question/94643/mitaka-horizon-preview-stack-throws-python-error/#94695

I'm just commenting here to raise awareness, in case it really is a typo
and not a misconfiguration on my part.

I followed the Mitaka guide.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon stacks

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1603267

Title:
  Possible typo in templates/stacks/_preview_details.htm

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This commit seems to be missing an 's' in 'stacks':

  https://github.com/openstack/horizon/commit/453ac5254c0d00e6bbb172ea5f1302dd82fe0af8#commitcomment-18254965

  Otherwise this happens:
  https://ask.openstack.org/en/question/94643/mitaka-horizon-preview-stack-throws-python-error/#94695

  I'm just commenting here to raise awareness, in case it really is a
  typo and not a misconfiguration on my part.

  I followed the Mitaka guide.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603268] [NEW] unstable grenade multinode

2016-07-14 Thread Armando Migliaccio
Public bug reported:

Grafana is showing the gate-grenade-dsvm-neutron-multinode being
unstable since July 13th [1]. More digging needed.

[1] http://grafana.openstack.org/dashboard/db/neutron-failure-
rate?panelId=5

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Tags added: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603268

Title:
  unstable grenade multinode

Status in neutron:
  Confirmed

Bug description:
  Grafana is showing the gate-grenade-dsvm-neutron-multinode being
  unstable since July 13th [1]. More digging needed.

  [1] http://grafana.openstack.org/dashboard/db/neutron-failure-
  rate?panelId=5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520321] Re: keystone-manage token_flush command fails

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/341165
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=21d868618139872454a1ca63485297a8b42d1cca
Submitter: Jenkins
Branch:master

commit 21d868618139872454a1ca63485297a8b42d1cca
Author: “Richard 
Date:   Tue Jul 12 19:51:55 2016 +

Improve user experience involving token flush

Currently with the use of memcache it is no longer necessary to use
the token_flush command. Running this command with KVS driver enabled
fails and throws a Traceback and NotImplemented errors. For a better
UX, we allow the implementation to pass and log a warning message

Change-Id: I95addc8df3a39135fb3fe3c63b6b21c1c279ace8
Closes-Bug: #1520321
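
A minimal sketch of the behaviour the commit message describes, i.e.
logging a warning instead of raising NotImplemented when the KVS token
driver is asked to flush tokens (an illustration only, not the exact
keystone patch):

import logging

LOG = logging.getLogger(__name__)

class KvsTokenPersistence(object):
    def flush_expired_tokens(self):
        # The KVS/memcache backend cannot enumerate expired tokens, so
        # instead of raising NotImplemented we log a warning and return,
        # letting `keystone-manage token_flush` exit cleanly.
        LOG.warning('Token flush is not supported by the KVS token '
                    'driver; expired tokens are evicted by the backend '
                    'itself, so there is nothing to do.')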


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1520321

Title:
  keystone-manage token_flush command fails

Status in Fuel for OpenStack:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  
  Description:
  ===
  The token flush command fails on MOS 8.0 build #207
  (launch by the crontab /etc/cron.hourly/keystone)

  To reproduce:
  =
  run this command on a controller node:

  su -c '/usr/bin/keystone-manage token_flush' keystone
  No handlers could be found for logger "oslo_config.cfg"

  
  Log: /var/log/keystone/keystone-manage.log

  2015-11-26 17:13:39.145 670 WARNING oslo_log.versionutils [-] Deprecated: 
direct import of driver is deprecated as of Liberty in favor of entrypoints and 
may be removed in N.
  2015-11-26 17:13:39.153 670 INFO keystone.common.kvs.core [-] Using default 
dogpile sha1_mangle_key as KVS region token-driver key_mangler
  2015-11-26 17:13:39.156 670 CRITICAL keystone [-] NotImplemented: The action 
you have requested has not been implemented.
  2015-11-26 17:13:39.156 670 ERROR keystone Traceback (most recent call last):
  2015-11-26 17:13:39.156 670 ERROR keystone   File "/usr/bin/keystone-manage", 
line 10, in 
  2015-11-26 17:13:39.156 670 ERROR keystone sys.exit(main())
  2015-11-26 17:13:39.156 670 ERROR keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/cmd/manage.py", line 47, in main
  2015-11-26 17:13:39.156 670 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2015-11-26 17:13:39.156 670 ERROR keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 685, in main
  2015-11-26 17:13:39.156 670 ERROR keystone CONF.command.cmd_class.main()
  2015-11-26 17:13:39.156 670 ERROR keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/cmd/cli.py", line 244, in main
  2015-11-26 17:13:39.156 670 ERROR keystone 
token_manager.flush_expired_tokens()
  2015-11-26 17:13:39.156 670 ERROR keystone   File 
"/usr/lib/python2.7/dist-packages/keystone/token/persistence/backends/kvs.py", 
line 356, in flush_expired_tokens
  2015-11-26 17:13:39.156 670 ERROR keystone raise 
exception.NotImplemented()
  2015-11-26 17:13:39.156 670 ERROR keystone NotImplemented: The action you 
have requested has not been implemented.
  2015-11-26 17:13:39.156 670 ERROR keyston

  Expected result:
  no error

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1520321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603238] [NEW] BOM error updating hostname on centos6.x

2016-07-14 Thread Joshua Harlow
Public bug reported:

Seeing the following:

Jul 14 15:42:38 cent6-example [CLOUDINIT] util.py[DEBUG]: Failed to
update the hostname to cent6-example.cloud.phx3.gdg
(cent6-example)#012Traceback (most recent call last):#012  File
"/usr/lib/python2.6/site-
packages/cloudinit/config/cc_update_hostname.py", line 39, in handle#012
cloud.distro.update_hostname(hostname, fqdn, prev_fn)#012  File
"/usr/lib/python2.6/site-packages/cloudinit/distros/__init__.py", line
214, in update_hostname#012prev_hostname =
self._read_hostname(prev_hostname_fn)#012  File "/usr/lib/python2.6
/site-packages/cloudinit/distros/rhel.py", line 172, in
_read_hostname#012(_exists, contents) =
rhel_util.read_sysconfig_file(filename)#012  File "/usr/lib/python2.6
/site-packages/cloudinit/distros/rhel_util.py", line 64, in
read_sysconfig_file#012return (exists, SysConf(contents))#012  File
"/usr/lib/python2.6/site-
packages/cloudinit/distros/parsers/sys_conf.py", line 61, in
__init__#012write_empty_values=True)#012  File "/usr/lib/python2.6
/site-packages/configobj.py", line 1219, in __init__#012
self._load(infile, configspec)#012  File "/usr/lib/python2.6/site-
packages/configobj.py", line 1272, in _load#012infile =
self._handle_bom(infile)#012  File "/usr/lib/python2.6/site-
packages/configobj.py", line 1422, in _handle_bom#012if not
line.startswith(BOM):#012UnicodeDecodeError: 'ascii' codec can't decode
byte 0xff in position 0: ordinal not in range(128)

$ rpm -qa | grep configobj
python-configobj-4.6.0-3.el6.noarch

This might be fixed in a newer configobj (probably is) but just wanted
to note this here.
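
A minimal workaround sketch, assuming the failure really is a BOM (or
other non-ASCII marker) at the start of the sysconfig file: read the
file defensively and strip a leading BOM before handing the contents to
configobj. This only illustrates the idea; it is not the cloud-init fix.

import codecs

def read_text_without_bom(path):
    """Read a file as text, stripping a leading BOM if present."""
    with open(path, 'rb') as fh:
        raw = fh.read()
    for bom, enc in ((codecs.BOM_UTF8, 'utf-8'),
                     (codecs.BOM_UTF16_LE, 'utf-16-le'),
                     (codecs.BOM_UTF16_BE, 'utf-16-be')):
        if raw.startswith(bom):
            return raw[len(bom):].decode(enc, 'replace')
    return raw.decode('utf-8', 'replace')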

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1603238

Title:
  BOM error updating hostname on centos6.x

Status in cloud-init:
  New

Bug description:
  Seeing the following:

  Jul 14 15:42:38 cent6-example [CLOUDINIT] util.py[DEBUG]: Failed to
  update the hostname to cent6-example.cloud.phx3.gdg
  (cent6-example)#012Traceback (most recent call last):#012  File
  "/usr/lib/python2.6/site-
  packages/cloudinit/config/cc_update_hostname.py", line 39, in
  handle#012cloud.distro.update_hostname(hostname, fqdn,
  prev_fn)#012  File "/usr/lib/python2.6/site-
  packages/cloudinit/distros/__init__.py", line 214, in
  update_hostname#012prev_hostname =
  self._read_hostname(prev_hostname_fn)#012  File "/usr/lib/python2.6
  /site-packages/cloudinit/distros/rhel.py", line 172, in
  _read_hostname#012(_exists, contents) =
  rhel_util.read_sysconfig_file(filename)#012  File "/usr/lib/python2.6
  /site-packages/cloudinit/distros/rhel_util.py", line 64, in
  read_sysconfig_file#012return (exists, SysConf(contents))#012
  File "/usr/lib/python2.6/site-
  packages/cloudinit/distros/parsers/sys_conf.py", line 61, in
  __init__#012write_empty_values=True)#012  File "/usr/lib/python2.6
  /site-packages/configobj.py", line 1219, in __init__#012
  self._load(infile, configspec)#012  File "/usr/lib/python2.6/site-
  packages/configobj.py", line 1272, in _load#012infile =
  self._handle_bom(infile)#012  File "/usr/lib/python2.6/site-
  packages/configobj.py", line 1422, in _handle_bom#012if not
  line.startswith(BOM):#012UnicodeDecodeError: 'ascii' codec can't
  decode byte 0xff in position 0: ordinal not in range(128)

  $ rpm -qa | grep configobj
  python-configobj-4.6.0-3.el6.noarch

  This might be fixed in a newer configobj (probably is) but just wanted
  to note this here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1603238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593127] Re: VIP delete event payload does not have sufficent information

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/340911
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e92b68dd8258eadd36f4d725681895d8ef3a68f0
Submitter: Jenkins
Branch:master

commit e92b68dd8258eadd36f4d725681895d8ef3a68f0
Author: Kumar Acharya 
Date:   Tue Jul 12 12:19:41 2016 +0530

delete event payload

This fix will allow the delete event to have the required data
in the notification payload.

Change-Id: I57a001ca2fddc2a750026e7da7980bfd8e5aab40
Closes-Bug: 1593127


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593127

Title:
  VIP delete event  payload does not have sufficent information

Status in neutron:
  Fix Released

Bug description:
  The existing context for vip.delete.start only contains the load
  balancer id as the payload. This will not help if we want to automate
  a few things from the Designate perspective, like deleting the records
  associated with the load balancer.

  Since this is a delete process, we have no other way to get the
  required info: when the VIP is deleted, all related information is
  deleted as well.

  The current payload:
  "payload": {
      "loadbalancer_id": "8c5c5886-2d5c-4826-8632-1678e1217a72"
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592000] Re: [RFE] Admin customized default security-group

2016-07-14 Thread Assaf Muller
I'd like to see this RFE discussed with the drivers team before it is
marked as Won't Fix.

** Changed in: neutron
   Status: Won't Fix => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592000

Title:
  [RFE] Admin customized default security-group

Status in neutron:
  Confirmed

Bug description:
  Allow the admin to decide which rules should be added (by default) to
  the tenant default security-group once created.

  At the moment, each tenant default security-group is created with specific 
set of rules: allow all egress and allow ingress from default sg.
  However, this is not the desired behavior for all deployments, as some would 
want to practice a “zero trust” model where all traffic is blocked unless 
explicitly decided otherwise, or on the other hand, allow all inbound+outbound 
traffic.
  It’s worth nothing that at some use cases the default behavior can be 
expressed with very specific sets of rules, which only the admin has the 
knowledge to define (e.g- allow connection to active directory endpoints), in 
such cases the impact on usability is even worse, as it requires the admin to 
create rules on every tenant default security-group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603236] [NEW] py35: TestCheckForMutableDefaultArgs fails

2016-07-14 Thread Eric Brown
Public bug reported:

The py35 gate fails on TestCheckForMutableDefaultArgs.


http://logs.openstack.org/52/337952/7/check/gate-keystone-python35-db-nv/2e9682b/testr_results.html.gz


ft125.1: 
keystone.tests.unit.test_hacking_checks.TestCheckForMutableDefaultArgs.test_StringException:
 Traceback (most recent call last):
  File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/keystone/tests/unit/test_hacking_checks.py",
 line 64, in test
self.assert_has_errors(code, expected_errors=errors)
  File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/keystone/tests/unit/test_hacking_checks.py",
 line 53, in assert_has_errors
self.assertItemsEqual(expected_errors or [], actual_errors)
  File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 1182, in assertItemsEqual
return self.assertSequenceEqual(expected, actual, msg=msg)
  File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 1014, in assertSequenceEqual
self.fail(msg)
  File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: Sequences differ: [(7, [201 chars] 'K001'), (28, 27, 'K001'), 
(29, 21, 'K001'), (32, 11, 'K001')] != [(7, [201 chars] 'K001'), (28, 26, 
'K001'), (29, 21, 'K001'), (32, 10, 'K001')]

First differing element 12:
(28, 27, 'K001')
(28, 26, 'K001')
  [(7, 10, 'K001'),
   (10, 15, 'K001'),
   (10, 29, 'K001'),
   (13, 15, 'K001'),
   (16, 15, 'K001'),
   (16, 31, 'K001'),
   (22, 14, 'K001'),
   (22, 31, 'K001'),
   (22, 53, 'K001'),
   (25, 14, 'K001'),
   (25, 36, 'K001'),
   (28, 10, 'K001'),
-  (28, 27, 'K001'),
?^
+  (28, 26, 'K001'),
?^
   (29, 21, 'K001'),
-  (32, 11, 'K001')]
?^
+  (32, 10, 'K001')]
?^
 

The root cause is a difference in the ast node col_offset value. Python
3.4 and earlier were incorrect, whereas 3.5 is fixed. It only affected
two of the function definitions in the code sample.

Here is a sample piece of code that illustrates the difference in the
ast module between Python 3.5 and earlier versions:

http://paste.openstack.org/show/532929/
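
In case the paste goes away, here is a small self-contained snippet in
the same spirit: it parses a function with mutable default arguments and
prints the (lineno, col_offset) pairs that the K001 hacking check
compares against its expected-error list. It is a re-creation, not
necessarily identical to the linked paste, and the exact offsets printed
can differ between interpreter versions.

import ast

code = "def f(arg1=[], arg2={}):\n    pass\n"

for node in ast.walk(ast.parse(code)):
    if isinstance(node, (ast.List, ast.Dict)):
        # These are the mutable default values the check flags.
        print(type(node).__name__, node.lineno, node.col_offset)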

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603236

Title:
  py35: TestCheckForMutableDefaultArgs fails

Status in OpenStack Identity (keystone):
  New

Bug description:
  The py35 gate fails on TestCheckForMutableDefaultArgs.

  
  
http://logs.openstack.org/52/337952/7/check/gate-keystone-python35-db-nv/2e9682b/testr_results.html.gz

  
  ft125.1: 
keystone.tests.unit.test_hacking_checks.TestCheckForMutableDefaultArgs.test_StringException:
 Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/keystone/tests/unit/test_hacking_checks.py",
 line 64, in test
  self.assert_has_errors(code, expected_errors=errors)
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/keystone/tests/unit/test_hacking_checks.py",
 line 53, in assert_has_errors
  self.assertItemsEqual(expected_errors or [], actual_errors)
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 1182, in assertItemsEqual
  return self.assertSequenceEqual(expected, actual, msg=msg)
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 1014, in assertSequenceEqual
  self.fail(msg)
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: Sequences differ: [(7, [201 chars] 'K001'), (28, 27, 'K001'), 
(29, 21, 'K001'), (32, 11, 'K001')] != [(7, [201 chars] 'K001'), (28, 26, 
'K001'), (29, 21, 'K001'), (32, 10, 'K001')]

  First differing element 12:
  (28, 27, 'K001')
  (28, 26, 'K001')
[(7, 10, 'K001'),
 (10, 15, 'K001'),
 (10, 29, 'K001'),
 (13, 15, 'K001'),
 (16, 15, 'K001'),
 (16, 31, 'K001'),
 (22, 14, 'K001'),
 (22, 31, 'K001'),
 (22, 53, 'K001'),
 (25, 14, 'K001'),
 (25, 36, 'K001'),
 (28, 10, 'K001'),
  -  (28, 27, 'K001'),
  ?^
  +  (28, 26, 'K001'),
  ?^
 (29, 21, 'K001'),
  -  (32, 11, 'K001')]
  ?^
  +  (32, 10, 'K001')]
  ?^
   

  The root cause is a difference in the ast node col_offset value.
  Python 3.4 and earlier were incorrect, whereas 3.5 is fixed. It only
  affected two of the function definitions in the code sample.

  Here is a sample piece of code that illustrates the difference in the
  ast module between Python 3.5 and earlier versions:
  http://paste.openstack.org/show/532929/

[Yahoo-eng-team] [Bug 1603235] [NEW] trunk plugin missing necessary callback

2016-07-14 Thread Isaku Yamahata
Public bug reported:

The trunk plugin now provides only a notification on add/delete_subports.
For SDN controller support (OpenDaylight specifically) more callbacks are
needed; a sketch of subscribing to them follows the list below.
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

- resource: TRUNK, event: PRECOMMIT/AFTER_CREATE/DELETE on trunk creation/deletion
- resource: TRUNK, event: PRECOMMIT/AFTER_UPDATE on add/delete subports
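
For reference, a rough sketch of how an out-of-tree driver would consume
such callbacks through the neutron callbacks registry once they exist.
The 'trunk' resource name and the precommit/after event names are
assumptions based on this request, and the import path is the Newton-era
one; treat all of it as illustrative.

from neutron.callbacks import registry

TRUNK = 'trunk'  # assumed resource name, not yet a registered constant

def trunk_event_handler(resource, event, trigger, **kwargs):
    # An SDN driver (e.g. networking-odl) would translate the trunk or
    # subport change and push it to the controller here.
    pass

for event in ('precommit_create', 'after_create',
              'precommit_delete', 'after_delete',
              'precommit_update', 'after_update'):
    registry.subscribe(trunk_event_handler, TRUNK, event)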

** Affects: neutron
 Importance: Undecided
 Assignee: Isaku Yamahata (yamahata)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603235

Title:
  trunk plugin missing necessary callback

Status in neutron:
  In Progress

Bug description:
  The trunk plugin now provides only a notification on
  add/delete_subports. For SDN controller support (OpenDaylight
  specifically) more callbacks are needed.
  https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

  - resource: TRUNK, event: PRECOMMIT/AFTER_CREATE/DELETE on trunk creation/deletion
  - resource: TRUNK, event: PRECOMMIT/AFTER_UPDATE on add/delete subports

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602974] Re: [stable/liberty] LBaaS v2 haproxy: need a way to find status of listener

2016-07-14 Thread Brandon Logan
This is probably because you're using Octavia in the backend. Octavia
spins up nova VMs to host the load balancers, and if VT-x (nested
virtualization) is not enabled, the VM provisioning time will be long.
There is an effort to have containers host the load balancers instead of
VMs via nova-lxd; however, this is some time off. If you could check
VT-x in your environment and verify, please do. In the meantime I'm
going to mark this as invalid. If you find it is not, feel free to mark
it back.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602974

Title:
  [stable/liberty] LBaaS v2 haproxy: need a way to find status of
  listener

Status in neutron:
  Invalid

Bug description:
  Currently we don't have an option to check the status of a listener.
  Below is the output of a listener, which has no status field.

  root@runner:~# neutron lbaas-listener-show 
8c0e0289-f85d-4539-8970-467a45a5c191
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 8c0e0289-f85d-4539-8970-467a45a5c191   |
  | loadbalancers | {"id": "bda96c0a-0167-45ab-8772-ba92bc0f2d00"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   |
  +---++
  root@runner:~#

  The problem arises when we try to configure a listener and a pool back
  to back without any delay. The pool create fails, saying the listener
  is not ready.

  The workaround is to add a 3-second delay between listener and pool
  creation.

  Logs:

  root@runner:~# neutron lbaas-loadbalancer-create --name test-lb vn-subnet; 
neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb 
--protocol HTTP --protocol-port 80; neutron lbaas-pool-create --name 
test-lb-pool-http  --lb-algorithm ROUND_ROBIN --listener test-lb-http  
--protocol HTTP
  Created a new loadbalancer:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | description |  |
  | id  | 3ed2ff4a-4d87-46da-8e5b-265364dd6861 |
  | listeners   |  |
  | name| test-lb  |
  | operating_status| OFFLINE  |
  | provider| haproxy  |
  | provisioning_status | PENDING_CREATE   |
  | tenant_id   | ce1d087209c64df4b7e8007dc35def22 |
  | vip_address | 20.0.0.62|
  | vip_port_id | 4c33365e-64b9-428f-bc0b-bce6c08c9b20 |
  | vip_subnet_id   | 63cbeccd-6887-4dda-b4d2-b7503bce870a |
  +-+--+
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 90260465-934a-44a4-a289-208e5af74cf5   |
  | loadbalancers | {"id": "3ed2ff4a-4d87-46da-8e5b-265364dd6861"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80

[Yahoo-eng-team] [Bug 1532220] Re: [api-ref]OS-EP-FILTER extension missing

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/341787
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=caa7faf160426fe5db7cd6414fde319c71006408
Submitter: Jenkins
Branch:master

commit caa7faf160426fe5db7cd6414fde319c71006408
Author: Gage Hugo 
Date:   Wed Jul 13 12:30:38 2016 -0500

Add OS-EP-FILTER to api-ref

Added the missing OS-EP-FILTER extension for Identity v3 to the
api-ref docs.

Change-Id: I29ef91ce1f37af5233c85168cafc08aee61a5a93
Closes-Bug: #1532220


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1532220

Title:
  [api-ref]OS-EP-FILTER extension missing

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Missing OS-EP-FILTER extension for Identity v3:
  http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
  api-v3-os-ep-filter-ext.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1532220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599473] Re: guest with direct port send broadcast request on dhcp renew in case guest and dhcp on the same physical node

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338252
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b6592c7372db39884a2282f048b6a29ef9fc2783
Submitter: Jenkins
Branch:master

commit b6592c7372db39884a2282f048b6a29ef9fc2783
Author: Edan David 
Date:   Wed Jul 6 09:17:48 2016 -0400

Add dhcp to Fdb extension's permitted device owners

Change-Id: I8c15f340b82424de44f5477ce36b67efe76dee59
Closes-Bug: #1599473


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599473

Title:
  guest with direct port send broadcast request on dhcp renew in case
  guest and dhcp on the same physical node

Status in neutron:
  Fix Released

Bug description:
  When a guest with a direct port is located on the same host as the
  network node, the DHCP renewal is sent as broadcast instead of unicast,
  causing unnecessary noise.
  The reason is that after the lease expires, the guest sends a renew
  message to the DHCP server; the PF then directs this message to the
  wire instead, because the FDB table is not yet updated for outgoing
  messages to the DHCP server (all previous messages sent were
  broadcast).

  The following is the tcpdump of the lease renewal:

  14:24:04.289620 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.0.3 
(ff:ff:ff:ff:ff:ff) tell 0.0.0.0, length 42
  14:24:04.931965 IP (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto 
UDP (17), length 328)
  10.0.0.3.68 > 255.255.255.255.67: [udp sum ok] BOOTP/DHCP, Request from 
fa:16:3e:9b:58:71, length 300, xid 0x2fb24d63, secs 49, Flags [none] (0x)
     Client-IP 10.0.0.3
     Client-Ethernet-Address fa:16:3e:9b:58:71
     Vendor-rfc1048 Extensions
   Magic Cookie 0x63825363
   DHCP-Message Option 53, length 1: Request
   Hostname Option 12, length 9: "localhost"
   Parameter-Request Option 55, length 13:
     Subnet-Mask, BR, Time-Zone, Classless-Static-Route
     Domain-Name, Domain-Name-Server, Hostname, YD
     YS, NTP, MTU, Option 119
     Default-Gateway
  14:24:04.932330 IP (tos 0xc0, ttl 64, id 19713, offset 0, flags [none], proto 
UDP (17), length 371)
  10.0.0.2.67 > 10.0.0.3.68: [udp sum ok] BOOTP/DHCP, Reply, length 343, 
xid 0x2fb24d63, secs 49, Flags [none] (0x)
     Client-IP 10.0.0.3
     Your-IP 10.0.0.3
     Server-IP 10.0.0.2
     Client-Ethernet-Address fa:16:3e:9b:58:71
     Vendor-rfc1048 Extensions
   Magic Cookie 0x63825363
   DHCP-Message Option 53, length 1: ACK
   Server-ID Option 54, length 4: 10.0.0.2
   Lease-Time Option 51, length 4: 120
   RN Option 58, length 4: 56
   RB Option 59, length 4: 101
   Subnet-Mask Option 1, length 4: 255.255.255.0
   BR Option 28, length 4: 10.0.0.255
   Domain-Name-Server Option 6, length 4: 10.0.0.2
   Domain-Name Option 15, length 14: "openstacklocal"
   Hostname Option 12, length 13: "host-10-0-0-3"
   Default-Gateway Option 3, length 4: 10.0.0.1
   Classless-Static-Route Option 121, length 14: 
(169.254.169.254/32:10.0.0.1),(default:10.0.0.1)
   MTU Option 26, length 2: 1500

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1599473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597532] Re: Containers/Swift has a LOT of padding

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335689
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=fdc17c5677b2ad6312c38b2f4f186bbbc16a6555
Submitter: Jenkins
Branch:master

commit fdc17c5677b2ad6312c38b2f4f186bbbc16a6555
Author: Diana Whitten 
Date:   Wed Jun 29 14:37:24 2016 -0700

Containers/Swift has unneccesary padding

* Added themable checkboxes so that all the checkboxes
  are now consistent.  There was already a single themable
  one in the contain detail view

* Removed unneccesary padding around swift breadcrumb

* Added themable checkboxes to hz-table

* Replaced 'empty table' with bootstrap well, which works
  quite well for this type of implememtation.

Closes-bug: #1597532
Change-Id: Ifff3f608d309ef0bd926c553be0a3a0e1d419096


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597532

Title:
  Containers/Swift has a LOT of padding

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Containers/Swift has a LOT of padding

  The two columns should align together ... and the empty state should
  use an info or well or something more bootstrappy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603212] [NEW] Validate that data is within valid_values

2016-07-14 Thread Pablo
Public bug reported:

Neutron-lib accepts a valid_values argument for some validator functions
but doesn't check in all of them that the data is actually among those
values.

Follow up from https://review.openstack.org/#/c/337237/
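
A minimal sketch of the kind of check being asked for, written in the
usual validator style where a validator returns an error-message string
on failure and None on success (the function body here is illustrative,
not a copy of neutron-lib):

def validate_values(data, valid_values=None):
    """Return an error string if data is not one of valid_values."""
    if valid_values is not None and data not in valid_values:
        return ("'%(data)s' is not in valid values %(values)s" %
                {'data': data, 'values': valid_values})
    # Returning None means the value passed validation.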

** Affects: neutron
 Importance: Undecided
 Assignee: Pablo (iranzo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Pablo (iranzo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603212

Title:
  Validate that data is within valid_values

Status in neutron:
  New

Bug description:
  Neutron-lib accepts a valid_values argument for some validator
  functions but doesn't check in all of them that the data is actually
  among those values.

  Follow up from https://review.openstack.org/#/c/337237/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602373] Re: cloud-init doesn't always land files that one expects

2016-07-14 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1256-0ubuntu1

---
cloud-init (0.7.7~bzr1256-0ubuntu1) yakkety; urgency=medium

  * New upstream snapshot.
- distros/debian.py: fix eni renderer to not render .link files
- fixes for execution in python2.6.
- ConfigDrive: fix writing of 'injected' files and legacy networking
  (LP: #1602373)
- improvements to /etc/network/interfaces rendering including rendering
  of 'lo' devices and sorting attributes within a interface section.
- fix mcollective module that was completely broken if using python3
  (LP: #1597699)

 -- Scott Moser   Thu, 14 Jul 2016 14:54:05 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1602373

Title:
  cloud-init doesn't always land files that one expects

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
   Begin SRU Template 
  [Impact]
  Injected files functionality of OpenStack's config drive is broken.

  [Test Case]
  == Reproduce broken functionality ==
  $ echo "hi mom" > my-file.txt
  $ cat > "user-data" <<"EOF"
  #!/bin/sh
  logfile=/run/my.log
  file="/my-file.txt"
  if [ -e "$file" ]; then
     ( echo "=== PASS: file $file " ; cat $file ) | tee -a $logfile
     exit 0
  else
     echo "=== FAIL: no file $file " | tee -a $logfile
     exit 1
  EOF

  openstack server create --key-name=brickies --flavor=m1.small \
    --config-drive=1 --image=e9e1dd6a-5e44-4126-81d5-fdd2ab5f9cb6 \
    --user-data=user-data --file=/my-file.txt=my-file.txt \
    injected-file0

  The launched system will have a file in /run/my.log that shows 'FAIL'
  and will not have /my-file.txt on disk.

  == See Fix ==
  # enable proposed
  $ cat > enable-proposed <<"EOF"
  #!/bin/sh
  set -e
  rel=$(lsb_release -sc)
  awk '$1 == "deb" && $2 ~ /ubuntu.com/ {
    printf("%s %s %s-proposed main universe\n", $1, $2, rel); exit(0) };
    ' "rel=$rel" /etc/apt/sources.list |
  tee /etc/apt/sources.list.d/proposed.list
  EOF
  $ sudo sh ./enable-proposed
  $ sudo apt-get update
  $ sudo apt-get install cloud-init

  # Remove /var/lib/cloud and /var/log/cloud-init* to remove state
  # and indicate this is a new instance on reboot
  $ sudo rm -Rf /var/lib/cloud /var/log/cloud-init*
  $ sudo reboot

  Now ssh back in after reboot, you should
  a.) have /my-file.txt
  b.) see PASS in /run/my.log
  c.) see mention of the 'injected file' in /var/log/cloud-init.log

  [Regression Potential]
  Regression potential on Ubuntu should be very small.

   End SRU Template 

  Trove launches instances using the servers.create() API with some
  files. Trove provides a dictionary of files that it wants on the
  instance and most of the time this works. Nova passes them to the
  launched VM as metadata on config drive.

  Sometimes though, it doesn't.

  When injection fails, I see a cloud-init.log that looks like this:

  https://gist.github.com/amrith/7566d8fef4b6e813cca77e5e3b1f1d5a

  When injection succeeds, I see a cloud-init.log that looks like this:

  https://gist.github.com/amrith/50d9e3050d88ec51e13b0a510bd718c3

  Observe that the one that succeeds properly injects three files:

  /etc/injected_files (which is something I added just for debugging)
  /etc/trove/conf.d/... two files here ...

  On a machine where this injection failed:

  root@m10:/tmp/zz/openstack/content# ls -l /etc/trove
  total 4
  drwxr-xr-x 2 amrith root 4096 Jul 12 16:55 conf.d
  root@m10:/tmp/zz/openstack/content# ls -l /etc/trove/conf.d/
  total 4
  root@m10:/tmp/zz/openstack/content#

  Clearly, no files made it over. Yet, the files are definitely there on
  the config drive ...

  I've mounted the config drive.

  root@m10:/tmp/zz/openstack/content# mount | grep zz
  /dev/sr0 on /tmp/zz type iso9660 (ro,relatime)

  root@m10:/tmp/zz/openstack/content# cd /tmp/zz
  root@m10:/tmp/zz# find .
  .
  ./ec2
  ./ec2/2009-04-04
  ./ec2/2009-04-04/meta-data.json
  ./ec2/latest
  ./ec2/latest/meta-data.json
  ./openstack
  ./openstack/2012-08-10
  ./openstack/2012-08-10/meta_data.json
  ./openstack/2013-04-04
  ./openstack/2013-04-04/meta_data.json
  ./openstack/2013-10-17
  ./openstack/2013-10-17/meta_data.json
  ./openstack/2013-10-17/vendor_data.json
  ./openstack/2015-10-15
  ./openstack/2015-10-15/meta_data.json
  ./openstack/2015-10-15/network_data.json
  ./openstack/2015-10-15/vendor_data.json
  ./openstack/2016-06-30
  ./openstack/2016-06-30/meta_data.json
  ./openstack/2016-06-30/network_data.json
  ./openstack/2016-06-30/vendor_data.json
  ./openstack/content
  ./openstack/content/
  ./openstack/content/0001
  ./openstack/content/0002
  ./openstack/latest
  

[Yahoo-eng-team] [Bug 1602880] Re: Material: Progress Bars should allow text

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/341852
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=1af2a966c5f70e02f01fd2d47c93a7382522fb4a
Submitter: Jenkins
Branch:master

commit 1af2a966c5f70e02f01fd2d47c93a7382522fb4a
Author: Diana Whitten 
Date:   Wed Jul 13 16:27:43 2016 -0700

Material: Progress Bars should allow text

Material progress bars are so thin, that when you add text to them, its
unplesant. We need to support a progress bar type containing text that
is bigger by default.

Used this opportunity to align other progress bar experiences, and in
addition, generalize the text progress bar and progress loader bar for
general use everywhere.

Change-Id: I3d51c6a4582e3dc043f30632b6635a9ff17f5fbf
Closes-bug: #1602880


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1602880

Title:
  Material: Progress Bars should allow text

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Material progress bars are so thin that when you add text to them,
  it's unpleasant. We need to support a progress bar type containing
  text that is bigger by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1602880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603103] Re: FreezeGun 0.3.5 fails keystone-coverage-db gate

2016-07-14 Thread Ron De Rose
The error was in my patch, where I was specifying the requirements.
Thus, this is no longer valid.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603103

Title:
  FreezeGun 0.3.5 fails keystone-coverage-db gate

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When utilizing FreezeGun in unit tests, the patch fails at the
  keystone-coverage-db gate.  Here is the code:

  https://review.openstack.org/#/c/340074/

  with freezegun.freeze_time(time) as frozen_datetime:
      ...
      frozen_datetime.tick(
          delta=datetime.timedelta(seconds=self.max_duration + 1))

  The logs show that it fails at frozen_datetime.tick, saying that
  NoneType doesn't have a method tick.  However, the tests run
  successfully when run manually and pass all other gates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1603103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603197] [NEW] neutron: VirtualInterface isn't cleaned up before rescheduling during allocation failure

2016-07-14 Thread Matt Riedemann
Public bug reported:

This is a follow-on to bug 1602357. That fixed the case where we delete
the VirtualInterface objects in the nova db when deallocating networks
for an instance or a single port.

But if we fail to allocate networking from the start, we also do a
cleanup on the ports we've created and/or updated, but we aren't
deleting the VIFs we've created, here:

https://github.com/openstack/nova/blob/92a388a1e34559b2ce69d31fdef996ff029495a6/nova/network/neutronv2/api.py#L847

That also needs to happen because we could do something like:

1. create/update port1, create vif1, ok
2. create/update port2 fails - we deallocate port1 and port2 but not
vif1 (a sketch of the missing cleanup follows below)
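
A hedged sketch of the extra cleanup: alongside deleting the ports we
created, also drop the VirtualInterface records already written for the
instance so a reschedule does not see stale VIFs. The object call used
below is an assumption about the nova object API of this era, not the
actual patch.

from nova import objects

def cleanup_after_failed_allocation(context, instance, created_port_ids,
                                    delete_port):
    """Best-effort cleanup when network allocation fails part-way."""
    for port_id in created_port_ids:
        try:
            delete_port(port_id)  # undo the ports we managed to create
        except Exception:
            pass  # keep cleaning up the remaining ports

    # Also remove the VirtualInterface rows created so far; this is the
    # gap described in this bug. delete_by_instance_uuid is assumed.
    objects.VirtualInterface.delete_by_instance_uuid(context,
                                                     instance.uuid)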

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603197

Title:
  neutron: VirtualInterface isn't cleaned up before rescheduling during
  allocation failure

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This is a follow-on to bug 1602357. That fixed the case where we delete
  the VirtualInterface objects in the nova db when deallocating networks
  for an instance or a single port.

  But if we fail to allocate networking from the start, we also do a
  cleanup on the ports we've created and/or updated, but we aren't
  deleting the VIFs we've created, here:

  
https://github.com/openstack/nova/blob/92a388a1e34559b2ce69d31fdef996ff029495a6/nova/network/neutronv2/api.py#L847

  That also needs to happen because we could do something like:

  1. create/update port1, create vif1, ok
  2. create/update port2, fails - we deallocate port1 and port2 but not vif1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600788] Re: If a common message is not being used, they should each be treated separately with respect to choosing a marker function

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/340894
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=14ccf5986e0423a2835c2c5be8c9a1246f0db2f3
Submitter: Jenkins
Branch:master

commit 14ccf5986e0423a2835c2c5be8c9a1246f0db2f3
Author: weiweigu 
Date:   Tue Jul 12 21:12:04 2016 +0800

Replace "LOG.warn(_" with "LOG.(_LW"

Follow http://docs.openstack.org/developer/oslo.i18n/guidelines.html:
If a common message is not being used, they should each be treated
separately with respect to choosing a marker function. So this patch
is to fix it.

Change-Id: Id122aa6395c534bee5287264c8951181f08d6f19
Closes-Bug: #1600788


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600788

Title:
  If a common message is not being used, they should each be treated
  separately with respect to choosing a marker function

Status in Ceilometer:
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Identity (keystone):
  New
Status in neutron:
  Triaged
Status in oslo.log:
  Fix Released
Status in Solum:
  New

Bug description:
  Follow the
  http://docs.openstack.org/developer/oslo.i18n/guidelines.html

  For example, do not do this:

  # WRONG
  LOG.exception(_('There was an error.'))
  raise LocalExceptionClass(_('An error occurred.'))

  Instead, use this style:

  # RIGHT
  LOG.exception(_LE('There was an error.'))
  raise LocalExceptionClass(_('An error occurred.'))

  And oslo.log has this problem, so we should correct it.
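
  As a self-contained sketch of the guideline above (the translator setup is
  what a project i18n module typically provides; the exception class here is
  just a stand-in):

      import oslo_i18n
      from oslo_log import log as logging

      _translators = oslo_i18n.TranslatorFactory(domain='myproject')
      _ = _translators.primary        # user-facing strings (exceptions)
      _LE = _translators.log_error    # error-level log messages

      LOG = logging.getLogger(__name__)

      def do_something():
          try:
              raise ValueError()
          except ValueError:
              # two separate messages, each with its own marker function
              LOG.exception(_LE('There was an error.'))
              raise RuntimeError(_('An error occurred.'))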

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603034] Re: pci whitelist exception will kill the periodic update of the hypervisor statistics

2016-07-14 Thread Matt Riedemann
I've got a fix here: https://review.openstack.org/#/c/342301/

** Changed in: nova
 Assignee: Raghuveer Shenoy (rshenoy) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: Triaged => In Progress

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => Confirmed

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Tags added: mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603034

Title:
  pci whitelist exception will kill the periodic update of the
  hypervisor statistics

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  An exception encountered in the PCI whitelist will cause the periodic
  hypervisor update loop to terminate and not run again. Retries
  should continue at the normal interval.

  Scenario 1:

  Update the nova.conf with the pci_whitelist as follows:
  pci_passthrough_whitelist = [ {"devname": "hed1", "physical_network": 
"physnet1"},{"physical_network": "physnet1", "address": 
"*:04:00.0"},{"physical_network": "physnet2", "address": "*:04:00.1"}]

  We get the following error in the nova compute log if hed1 is not
  present. But compute still shows up and the periodic hypervisor update
  stops working.

  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
[req-0e7e62d5-23c9-48f2-8ca4-b47b763c29df None None] Error updating resources 
for node padawan-cp1-comp0001-mgmt.
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/compute/manager.py",
 line 6472, in update_available_resource
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 531, in update_available_resource
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/oslo_concurrency/lockutils.py",
 line 271, in inner
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager return f(*args, 
**kwargs)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 564, in _update_available_resource
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager node_id=n_id)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/manager.py",
 line 68, in __init__
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager self.dev_filter 
= whitelist.Whitelist(CONF.pci_passthrough_whitelist)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/whitelist.py",
 line 78, in __init__
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager self.specs = 
self._parse_white_list_from_config(whitelist_spec)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/whitelist.py",
 line 59, in _parse_white_list_from_config
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager spec = 
devspec.PciDeviceSpec(ds)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/devspec.py",
 line 134, in __init__
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
self._init_dev_details()
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/devspec.py",
 line 155, in _init_dev_details
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager raise 
exception.PciDeviceNotFoundById(id=self.dev_name)
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
PciDeviceNotFoundById: PCI device hed1 not found
  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager
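
  A quick way to catch a typo like "hed1" before it breaks the periodic task
  is to sanity-check the devnames referenced by the whitelist against the
  host (illustrative helper, not part of nova):

      import json
      import os

      def check_whitelist_devnames(whitelist_value):
          # whitelist_value is the JSON value of pci_passthrough_whitelist
          specs = json.loads(whitelist_value)
          if isinstance(specs, dict):
              specs = [specs]
          for spec in specs:
              devname = spec.get('devname')
              if devname and not os.path.exists('/sys/class/net/' + devname):
                  print('whitelist devname %s not found on this host' % devname)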

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603162] [NEW] IP deallocation failed on external system with pluggable IPAM

2016-07-14 Thread Carl Baldwin
Public bug reported:

This bug is visible when pluggable IPAM is active.  It can be seen with
this patch [1].  It does not cause gate failures but it is still
something that should be understood.  This logstash query [2] seems to
find where they occur.  It is helpful to look at the DEBUG level logging
around the time of the error.  For example see this paste [3].

It seems that the session gets broken with an exception that causes a
rollback.  Then, the IPAM rollback attempts to use the same session for
rollback which fails.  Should the reference pluggable IPAM driver be
using a different session?  Or, should it call rollback?

[1] https://review.openstack.org/#/c/181023
[2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22IP%20deallocation%20failed%20on%20external%20system%5C%22
[3] http://paste.openstack.org/show/532891/

** Affects: neutron
 Importance: High
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => High

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603162

Title:
  IP deallocation failed on external system with pluggable IPAM

Status in neutron:
  New

Bug description:
  This bug is visible when pluggable IPAM is active.  It can be seen
  with this patch [1].  It does not cause gate failures but it is still
  something that should be understood.  This logstash query [2] seems to
  find where they occur.  It is helpful to look at the DEBUG level
  logging around the time of the error.  For example see this paste [3].

  It seems that the session gets broken with an exception that causes a
  rollback.  Then, the IPAM rollback attempts to use the same session
  for rollback which fails.  Should the reference pluggable IPAM driver
  be using a different session?  Or, should it call rollback?

  [1] https://review.openstack.org/#/c/181023
  [2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22IP%20deallocation%20failed%20on%20external%20system%5C%22
  [3] http://paste.openstack.org/show/532891/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599111] Re: HTTP exception thrown: Unexpected API Error

2016-07-14 Thread John Garbutt
nova-docker is not part of upstream Nova, nor is it supported by upstream Nova.
Moving to the nova-docker project.

** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599111

Title:
  HTTP exception thrown: Unexpected API Error

Status in OpenStack Compute (nova):
  Invalid
Status in nova-docker:
  New

Bug description:
  Description
  ===
  An exception is thrown when creating a container from Nova.

  Steps to reproduce
  ==
  1. create a container image in glance
  glance image-create --container-format=docker --disk-format=raw --name ubuntu
  2. create a container
  nova boot --flavor m1.small --image ubuntu ubuntucontainer

  Expected result
  ===
  Container should be created

  Actual result
  =
  API Error

  Environment
  ===
  1. Nova version
  commit b9d757bc0429159a235a397c51d510bd40e19709
  Merge: 44db7db 566bdf1
  Author: Jenkins 
  Date:   Wed Apr 13 03:21:25 2016 +

  Merge "Remove unused parameter from _get_requested_instance_group"

  
  2. Nova-docker version
  commit 034a4842fc1ebba5912e02cff8cd197ae81eb0c3
  Author: zhangguoqing 
  Date:   Mon May 23 12:17:07 2016 +

  add active_migrations attribute to DockerDriver
  
  1. For passing the nova unit tests about the active_migrations attribute.
  2. Fix test_get_dns_entries DNS IPs that changed from nova.
  3. Add conf path to netconf that changed from nova.
  
  Closes-Bug: #1584741
  Closes-Bug: #1582615
  
  Change-Id: Iaab7e695055f042b9060f07e31681c66197b8c79

  3. Glance version
  commit bded216e10b07735a09077f0d4f4901e963c83b5
  Author: OpenStack Proposal Bot 
  Date:   Tue Apr 12 23:08:25 2016 +

  Updated from global requirements
  
  Change-Id: I706c9ea19e8ab2c49ce748bba31ae03dd0ec6d74

  
  4. Compute Driver
  compute_driver=novadocker.virt.docker.DockerDriver

  Logs
  2016-07-05 15:52:02.835 DEBUG nova.api.openstack.wsgi 
[req-a956f02f-ef9f-4309-8c4d-fa3ab355bd5a admin admin] Calling method '>' 
from (pid=30686) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:699
  2016-07-05 15:52:03.451 ERROR nova.api.openstack.extensions 
[req-a956f02f-ef9f-4309-8c4d-fa3ab355bd5a admin admin] Unexpected exception in 
API method
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/images.py", line 87, in show
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions image = 
self._image_api.get(context, id)
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/api.py", line 93, in get
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions 
show_deleted=show_deleted)
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 283, in show
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions 
include_locations=include_locations)
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 513, in _translate_from_glance
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions 
include_locations=include_locations)
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 597, in _extract_attributes
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions output[attr] 
= getattr(image, attr) or 0
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/glanceclient/openstack/common/apiclient/base.py",
 line 490, in __getattr__
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions self.get()
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/glanceclient/openstack/common/apiclient/base.py",
 line 512, in get
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions 
{'x_request_id': self.manager.client.last_request_id})
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions AttributeError: 
'HTTPClient' object has no attribute 'last_request_id'
  2016-07-05 15:52:03.451 TRACE nova.api.openstack.extensions

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1595795] Re: [BGP][devstack] Install bgp failed because of Permission denied

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/333668
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=1d7155cc0cf602eb00d6882ff21ee27b9fa40cf9
Submitter: Jenkins
Branch:master

commit 1d7155cc0cf602eb00d6882ff21ee27b9fa40cf9
Author: Dongcan Ye 
Date:   Fri Jun 24 10:59:05 2016 +0800

Fix bug for Permission denied

Permission denied for creating directory '/etc/neutron', here we
use root privilege for creating the directory, then set the owner
as STACK_USER.

Change-Id: I2133d3f92dcec7e3187a6382ded233ac1f36fee7
Closes-Bug: #1595795


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595795

Title:
  [BGP][devstack] Install bgp failed because of Permission denied

Status in neutron:
  Fix Released

Bug description:
  Environment:
  OS: Ubuntu 14.04
  Code repo: master

  Install Neutron bgp in DevStack failed.
  http://paste.openstack.org/show/521784/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603146] [NEW] create network with subnet broken for non admins

2016-07-14 Thread Eric Peterson
Public bug reported:

The fix for https://bugs.launchpad.net/horizon/+bug/1398845 broke network
creation for normal users who also want to create a subnet.

The policy check is not the correct one to use, as there is no existing
network to check ownership against.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1603146

Title:
  create network with subnet broken for non admins

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The fix for https://bugs.launchpad.net/horizon/+bug/1398845 broke network
  creation for normal users who also want to create a subnet.

  The policy check is not the correct one to use, as there is no
  existing network to check ownership against.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603124] [NEW] [stable/liberty] LBaaS v2 haproxy: pool member status shown wrongly

2016-07-14 Thread Prashant Shetty
Public bug reported:

Setup:

1. Loadbalancer with one listener and pool.
2. 4 webservers as members of pool.
3. create health-monitor for the pool

Test:

1. Send 4 curl requests; requests are distributed equally among the members because
the ROUND_ROBIN lb_algorithm is selected.
2. Using "nova stop <>", shut down one VM and send 4 requests again.
3. The health monitor seems to detect the member as down and stops forwarding requests
to that particular webserver.

Problem:

1. CLI "neutron lbaas-member-list <>" says all members are admin_up
stack@runner:~/prash/nsbu_cqe_openstack/tools$ neutron lbaas-member-list 
test-lb-pool-http
+--+---+---++--++
| id   | address   | protocol_port | weight | 
subnet_id| admin_state_up |
+--+---+---++--++
| 002e68a0-03db-4f46-9c82-e35d395ada6b | 20.0.0.9  |80 |  1 | 
63cbeccd-6887-4dda-b4d2-b7503bce870a | True   |
| 0a97fecf-30c8-483d-8158-17523a726594 | 20.0.0.28 |80 |  1 | 
63cbeccd-6887-4dda-b4d2-b7503bce870a | True   |
| 348e0f42-9ab3-417e-a6e3-9857a988b6f4 | 20.0.0.29 |80 |  1 | 
63cbeccd-6887-4dda-b4d2-b7503bce870a | True   |
| 823d0f22-a442-4450-b7c8-0038e3f142b6 | 20.0.0.8  |80 |  1 | 
63cbeccd-6887-4dda-b4d2-b7503bce870a | True   |
+--+---+---++--++
stack@runner:~/prash/nsbu_cqe_openstack/tools$ 

2. CLI "neutron lbaas-loadbalancer-status <>" states all members are
"ACTIVE" and "ONLINE".

stack@runner:~/prash/nsbu_cqe_openstack/tools$ neutron 
lbaas-loadbalancer-status test-lb
{
"loadbalancer": {
"listeners": [
{
"pools": [
{
"name": "test-lb-pool-http", 
"provisioning_status": "ACTIVE", 
"healthmonitor": {
"type": "HTTP", 
"id": "96ec79bc-d212-431b-87da-2a053064675a", 
"provisioning_status": "ACTIVE"
}, 
"members": [
{
"provisioning_status": "ACTIVE", 
"protocol_port": 80, 
"id": "002e68a0-03db-4f46-9c82-e35d395ada6b", 
"operating_status": "ONLINE", 
"address": "20.0.0.9"
}, 
{
"provisioning_status": "ACTIVE", 
"protocol_port": 80, 
"id": "823d0f22-a442-4450-b7c8-0038e3f142b6", 
"operating_status": "ONLINE", 
"address": "20.0.0.8"
}, 
{
"provisioning_status": "ACTIVE", 
"protocol_port": 80, 
"id": "348e0f42-9ab3-417e-a6e3-9857a988b6f4", 
"operating_status": "ONLINE", 
"address": "20.0.0.29"
}, 
{
"provisioning_status": "ACTIVE", 
"protocol_port": 80, 
"id": "0a97fecf-30c8-483d-8158-17523a726594", 
"operating_status": "ONLINE", 
"address": "20.0.0.28"
}
], 
"id": "4a4b3d7e-f061-4ef6-aef4-1322aa4e6c6f", 
"operating_status": "ONLINE"
}
], 
"provisioning_status": "ACTIVE", 
"name": "test-lb-http", 
"operating_status": "ONLINE", 
"id": "dedabd6e-eb51-435f-9fda-e9b90eef4108"
}
], 
"provisioning_status": "ACTIVE", 
"name": "test-lb", 
"operating_status": "ONLINE", 
"id": "57814c66-731b-4dc7-abad-7d582c546873"
}
}
stack@runner:~/prash/nsbu_cqe_openstack/tools$


Nova output:

stack@runner:~/prash/nsbu_cqe_openstack/tools$ nova list
+--++-++-+---+
| ID   | Name   | Status  | Task State | 
Power State | Networks  |

[Yahoo-eng-team] [Bug 1603121] [NEW] db_sync doesn't work with sql_mode = 'TRADITIONAL'

2016-07-14 Thread Alexandru
Public bug reported:

Hi

The keystone-manage db_sync command fails with the following error :

2016-07-14 16:13:17.670 19170 ERROR keystone DBError:
(_mysql_exceptions.ProgrammingError) (1064, 'You have an error in your
SQL syntax; check the manual that corresponds to your MariaDB server
version for the right syntax to use near \'"keystone"\' at line 1')
[SQL: 'SHOW FULL TABLES FROM "keystone"']

OS: Debian 8.5
Keystone ver : 2:9.0.0-2~bpo8+1
Mysql: Server version: 5.5.44-MariaDB-log MariaDB Server

The problem seems to be related to:
cfg.StrOpt('mysql_sql_mode',
   default='TRADITIONAL', 

from /usr/lib/python2.7/dist-packages/oslo_db/options.py
It seems that on MariaDB if you set:
 set session sql_mode = 'TRADITIONAL';
the query :
show full tables from "keystone" 
fails

I've solved the problem by adding ANSI to the default sql mode:
cfg.StrOpt('mysql_sql_mode',
   default='TRADITIONAL,ANSI',

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603121

Title:
  db_sync doesn't work with sql_mode = 'TRADITIONAL'

Status in OpenStack Identity (keystone):
  New

Bug description:
  Hi

  The keystone-manage db_sync command fails with the following error :

  2016-07-14 16:13:17.670 19170 ERROR keystone DBError:
  (_mysql_exceptions.ProgrammingError) (1064, 'You have an error in your
  SQL syntax; check the manual that corresponds to your MariaDB server
  version for the right syntax to use near \'"keystone"\' at line 1')
  [SQL: 'SHOW FULL TABLES FROM "keystone"']

  OS: Debian 8.5
  Keystone ver : 2:9.0.0-2~bpo8+1
  Mysql: Server version: 5.5.44-MariaDB-log MariaDB Server

  The problem seems to be related to:
  cfg.StrOpt('mysql_sql_mode',
 default='TRADITIONAL', 

  from /usr/lib/python2.7/dist-packages/oslo_db/options.py
  It seems that on MariaDB if you set:
   set session sql_mode = 'TRADITIONAL';
  the query :
  show full tables from "keystone" 
  fails

  I've solved the problem by adding ANSI to the default sql mode:
  cfg.StrOpt('mysql_sql_mode',
 default='TRADITIONAL,ANSI',
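
  A less invasive workaround than patching options.py is to override the
  option in keystone.conf, since oslo.db registers mysql_sql_mode in the
  [database] group (a sketch, assuming a stock oslo.db setup):

      [database]
      # ANSI makes MariaDB accept the double-quoted identifier in
      # 'SHOW FULL TABLES FROM "keystone"'
      mysql_sql_mode = TRADITIONAL,ANSI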

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1603121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603103] [NEW] FreezeGun 0.3.5 fails keystone-coverage-db gate

2016-07-14 Thread Ron De Rose
Public bug reported:

When utilizing FreezeGun in unit tests, the patch fails at the keystone-
coverage-db gate.  Here is the code:

https://review.openstack.org/#/c/340074/
with freezegun.freeze_time(time) as frozen_datetime:
...
frozen_datetime.tick(
delta=datetime.timedelta(seconds=self.max_duration + 1))

The logs show that it fails at frozen_datetime.tick, saying that
NoneType doesn't have a method tick.  However, the tests run
successfully when run manually and pass all other gates.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603103

Title:
  FreezeGun 0.3.5 fails keystone-coverage-db gate

Status in OpenStack Identity (keystone):
  New

Bug description:
  When utilizing FreezeGun in unit tests, the patch fails at the
  keystone-coverage-db gate.  Here is the code:

  https://review.openstack.org/#/c/340074/
  with freezegun.freeze_time(time) as frozen_datetime:
  ...
  frozen_datetime.tick(
  delta=datetime.timedelta(seconds=self.max_duration + 1))

  The logs show that it fails at frozen_datetime.tick, saying that
  NoneType doesn't have a method tick.  However, the tests run
  successfully when run manually and pass all other gates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1603103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528349] Re: Nova and Glance contain a near-identical signature_utils module

2016-07-14 Thread Daniel Berrange
This isn't a bug - it's a feature request to switch to using a new
library from Nova. I'm fine with that as a suggestion, but it should be
filed as a blueprint - it can probably be a specless blueprint.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528349

Title:
  Nova and Glance contain a near-identical signature_utils module

Status in Glance:
  Confirmed
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  It appears that https://review.openstack.org/256069 took the
  signature_utils module from Glance and modified it in fairly
  superficial ways based on review feedback:

$ diff -u nova/nova/signature_utils.py 
glance/glance/common/signature_utils.py  | diffstat
signature_utils.py |  182 
-
1 file changed, 83 insertions(+), 99 deletions(-)

  The Oslo project was created to avoid this sort of short-sighted cut-
  and-pasting. This code should really be in a python library that both
  Glance and Nova could use directly.

  Perhaps the code could be moved to a new library in the Glance
  project, or a new library in the Oslo project, or into the
  cryptography library itself?

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1528349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537625] Re: invalid path for plugin skeleton in api_plugins.rst

2016-07-14 Thread John Garbutt
This doc should not be fixed, as it has now been removed.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1537625

Title:
  invalid path for plugin skeleton in api_plugins.rst

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In doc/source/api_plugins.rst an invalid path [1] is given for the plugin
  skeleton example, which does not exist.


  [1]
  https://github.com/openstack/nova/blob/master/doc/source/api_plugins.rst
  #basic-plugin-structure

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1537625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593342] Re: metadata agent does not cache results if cache is configured using oslo.cache options

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/330707
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d034532d376b6ca2309d20f59ccc060a762ff12c
Submitter: Jenkins
Branch:master

commit d034532d376b6ca2309d20f59ccc060a762ff12c
Author: Ihar Hrachyshka 
Date:   Thu Jun 16 18:40:08 2016 +0200

cache_utils: fixed cache misses for the new (oslo.cache) configuration

When the new (oslo.cache) way of configuring the cache is used, cache is
never hit, because self._cache.get() consistently raises exceptions:

TypeError: 'sha1() argument 1 must be string or buffer, not tuple'

It occurs because the key passed into the oslo.cache region does not
conform to oslo.cache requirements. The library enforces the key to be
compatible with sha1_mangle_key() function:


http://git.openstack.org/cgit/openstack/oslo.cache/tree/oslo_cache/core.py?id=8b8a718507b30a4a2fd36e6c14d1071bd6cca878#n140

With this patch, we transform the key to a string, to conform to the
requirements.

The bug sneaked into the tree unnoticed because of two reasons:

- there were no unit tests to validate the new way of cache
  configuration.
- the 'legacy' code path was configuring the cache in a slightly
  different way, omitting some oslo.cache code.

For the former, new unit tests were introduced that cover the cache on
par with the legacy mode.

For the latter, the legacy code path was modified to rely on the same
configuration path as for the new way.

Closes-Bug: #1593342
Change-Id: I2724aa21f66f0fb69147407bfcf3184585d7d5cd


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593342

Title:
  metadata agent does not cache results if cache is configured using
  oslo.cache options

Status in neutron:
  Fix Released

Bug description:
  When the new configuration options from oslo.cache ([cache]enabled,
  [cache]backend, [cache]expiration_time, etc.) are used to configure
  the cache instead of the legacy cache_url option, the metadata cache
  is never hit.
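
  For reference, a minimal sketch of the normalization the commit above
  describes (the helper name is illustrative, not the actual neutron code):

      def build_cache_key(*args):
          # oslo.cache's default key mangler (sha1_mangle_key) expects a
          # string, not the tuple of arguments the caching decorator
          # receives, so flatten the tuple into a single string first.
          return ','.join(str(a) for a in args)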

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558503] Re: Flavor m1.tiny could not be found Exception while creating instance

2016-07-14 Thread John Garbutt
So I think this is actually python-novaclient.

It checks to see if it's a valid id, and correctly gets a 404.

This seems like the correct/expected behaviour.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558503

Title:
  Flavor m1.tiny could not be found Exception while creating instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  While creating an instance using "nova boot --flavor  ",
  the instance is created successfully, but in the /var/log/nova/nova-api.log
  file we find the following error log:

  HTTP exception thrown: Flavor m1.tiny could not be found.
  "GET /v2/cf66e8c655474008a8c1fc088665df83/flavors/m1.tiny HTTP/1.1" status: 
404 len: 298 time: 0.0457311

  From the logs we can see:
  1. nova-api first tries to find the flavor using the flavor name and then throws the exception.
  2. Then nova-api tries to find the flavor using the flavor id.
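
  For illustration, the client-side lookup pattern that produces exactly this
  log looks like the sketch below (not the actual novaclient code):

      from novaclient import exceptions as nova_exc

      def find_flavor(client, name_or_id):
          try:
              # try it as an ID first; a name like 'm1.tiny' gets a 404 here,
              # which nova-api logs even though nothing is actually wrong
              return client.flavors.get(name_or_id)
          except nova_exc.NotFound:
              # then fall back to looking it up by name
              return client.flavors.find(name=name_or_id)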

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504725] Re: rabbitmq-server restart twice, log is crazy increasing until service restart

2016-07-14 Thread John Garbutt
** Changed in: nova
   Status: Confirmed => Invalid

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504725

Title:
  rabbitmq-server restart twice, log is crazy increasing until service
  restart

Status in neutron:
  New
Status in oslo.messaging:
  Won't Fix

Bug description:
  After I restart the rabbitmq-server for the second time, the service logs (such
as nova, neutron and so on) grow rapidly, with entries such as "TypeError:
'NoneType' object has no attribute '__getitem__'".
  It seems that the channel is set to None.

  trace log:

  2015-10-10 15:20:59.413 29515 TRACE root Traceback (most recent call last):
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 95, in 
inner_func
  2015-10-10 15:20:59.413 29515 TRACE root return infunc(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_executors/impl_eventlet.py", 
line 96, in _executor_thread
  2015-10-10 15:20:59.413 29515 TRACE root incoming = self.listener.poll()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
122, in poll
  2015-10-10 15:20:59.413 29515 TRACE root self.conn.consume(limit=1, 
timeout=timeout)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1202, in consume
  2015-10-10 15:20:59.413 29515 TRACE root six.next(it)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1100, in iterconsume
  2015-10-10 15:20:59.413 29515 TRACE root error_callback=_error_callback)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
868, in ensure
  2015-10-10 15:20:59.413 29515 TRACE root ret, channel = autoretry_method()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 458, in _ensured
  2015-10-10 15:20:59.413 29515 TRACE root return fun(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 545, in __call__
  2015-10-10 15:20:59.413 29515 TRACE root self.revive(create_channel())
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 251, in channel
  2015-10-10 15:20:59.413 29515 TRACE root chan = 
self.transport.create_channel(self.connection)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 91, in 
create_channel
  2015-10-10 15:20:59.413 29515 TRACE root return connection.channel()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 289, in channel
  2015-10-10 15:20:59.413 29515 TRACE root return self.channels[channel_id]
  2015-10-10 15:20:59.413 29515 TRACE root TypeError: 'NoneType' object has no 
attribute '__getitem__'
  2015-10-10 15:20:59.413 29515 TRACE root

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533867] Re: In cell mode and latest kilo code, nova get-vnc-console throw 500 error

2016-07-14 Thread John Garbutt
Given this is for cells, and cells is now largely frozen code, marking
as opinion.

** Changed in: nova
   Status: Confirmed => Opinion

** Tags added: cells

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533867

Title:
  In cell mode and latest kilo code, nova get-vnc-console throw 500
  error

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  We are using the Kilo version of nova (commit id
b8c4f1bce356838dd3dac3b59734cf47f72373e5).
  We set up 3 cells with their own rabbitmq and mysql.
  Running nova get-vnc-console vm_id returns a 500 error, and the compute side
complains:
  nova.api.openstack AttributeError: 'dict' object has no attribute 'uuid'
  After diving into it, the message compute received from AMQP was not
serialized to an Instance object but to a dict.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1533867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603038] [NEW] Execption on admin_token usage ValueError: Unrecognized

2016-07-14 Thread Attila Fazekas
Public bug reported:

1. iniset keystone.conf DEFAULT admin_token deprecated
2. reload keystone (systemctl restart httpd)
3. curl -g -i -X GET http://192.168.9.98/identity_v2_admin/v2.0/users -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: deprecated"


I know the admin_token is deprecated, but it should be handled without
throwing an extra exception.


2016-07-14 11:00:28.487 20453 WARNING keystone.middleware.core 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] The admin_token_auth 
middleware presents a security risk and should be removed from the 
[pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of 
your paste ini file.
2016-07-14 11:00:28.593 20453 DEBUG keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Authenticating user token 
process_request 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py:354
2016-07-14 11:00:28.593 20453 WARNING keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Invalid token contents.
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth Traceback (most 
recent call last):
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 399, in _do_fetch_token
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth return data, 
access.create(body=data, auth_token=token)
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 
"/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth return 
wrapped(*args, **kwargs)
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 
"/usr/lib/python2.7/site-packages/keystoneauth1/access/access.py", line 49, in 
create
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth raise 
ValueError('Unrecognized auth response')
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth ValueError: 
Unrecognized auth response
2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth 
2016-07-14 11:00:28.594 20453 INFO keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Invalid user token
2016-07-14 11:00:28.595 20453 DEBUG keystone.middleware.auth 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] RBAC: auth_context: {} 
fill_context /opt/stack/keystone/keystone/middleware/auth.py:219
2016-07-14 11:00:28.604 20453 INFO keystone.common.wsgi 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] GET 
http://192.168.9.98/identity_v2_admin/v2.0/users
2016-07-14 11:00:28.604 20453 WARNING oslo_log.versionutils 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] Deprecated: get_users of 
the v2 API is deprecated as of Mitaka in favor of a similar function in the v3 
API and may be removed in Q.
2016-07-14 11:00:28.622 20453 DEBUG oslo_db.sqlalchemy.engines 
[req-d1c79cbf-698f-4844-9efd-7be444040cf0 - - - - -] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode 
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:256

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603038

Title:
  Execption on admin_token usage ValueError: Unrecognized

Status in OpenStack Identity (keystone):
  New

Bug description:
  1. iniset keystone.conf DEFAULT admin_token deprecated
  2. reload keystone (systemctl restart httpd)
  3. curl -g -i -X GET http://192.168.9.98/identity_v2_admin/v2.0/users -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: deprecated"


  I know the admin_token is deprecated, but it should be handled without
  throwing an extra exception.


  2016-07-14 11:00:28.487 20453 WARNING keystone.middleware.core 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] The admin_token_auth 
middleware presents a security risk and should be removed from the 
[pipeline:api_v3], [pipeline:admin_api], and [pipeline:public_api] sections of 
your paste ini file.
  2016-07-14 11:00:28.593 20453 DEBUG keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Authenticating user token 
process_request 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py:354
  2016-07-14 11:00:28.593 20453 WARNING keystone.middleware.auth 
[req-f13bf34e-4b80-4c2b-8e47-646ce5665abf - - - - -] Invalid token contents.
  2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth Traceback (most 
recent call last):
  2016-07-14 11:00:28.593 20453 TRACE keystone.middleware.auth   File 

[Yahoo-eng-team] [Bug 1603034] [NEW] pci whitelist exception will kill the periodic update of the hypervisor statistics

2016-07-14 Thread Raghuveer Shenoy
Public bug reported:

An exception encountered in the PCI whitelist will cause the periodic
hypervisor update loop to terminate and not run again. Retries
should continue at the normal interval.

Scenario 1:

Update the nova.conf with the pci_whitelist as follows:
pci_passthrough_whitelist = [ {"devname": "hed1", "physical_network": 
"physnet1"},{"physical_network": "physnet1", "address": 
"*:04:00.0"},{"physical_network": "physnet2", "address": "*:04:00.1"}]

We get the following error in the nova compute log if hed1 is not
present. But compute still shows up and the periodic hypervisor update
stops working.

2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
[req-0e7e62d5-23c9-48f2-8ca4-b47b763c29df None None] Error updating resources 
for node padawan-cp1-comp0001-mgmt.
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager Traceback (most recent 
call last):
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/compute/manager.py",
 line 6472, in update_available_resource
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
rt.update_available_resource(context)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 531, in update_available_resource
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/oslo_concurrency/lockutils.py",
 line 271, in inner
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager return f(*args, 
**kwargs)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 564, in _update_available_resource
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager node_id=n_id)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/manager.py",
 line 68, in __init__
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager self.dev_filter = 
whitelist.Whitelist(CONF.pci_passthrough_whitelist)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/whitelist.py",
 line 78, in __init__
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager self.specs = 
self._parse_white_list_from_config(whitelist_spec)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/whitelist.py",
 line 59, in _parse_white_list_from_config
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager spec = 
devspec.PciDeviceSpec(ds)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/devspec.py",
 line 134, in __init__
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
self._init_dev_details()
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager   File 
"/opt/stack/venv/nova-20160607T195234Z/lib/python2.7/site-packages/nova/pci/devspec.py",
 line 155, in _init_dev_details
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager raise 
exception.PciDeviceNotFoundById(id=self.dev_name)
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager PciDeviceNotFoundById: 
PCI device hed1 not found
2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager

** Affects: nova
 Importance: Undecided
 Assignee: Raghuveer Shenoy (rshenoy)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Raghuveer Shenoy (rshenoy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603034

Title:
  pci whitelist exception will kill the periodic update of the
  hypervisor statistics

Status in OpenStack Compute (nova):
  New

Bug description:
  An exception encountered in the PCI whitelist will cause the periodic
  hypervisor update loop to terminate and not run again. Retries
  should continue at the normal interval.

  Scenario 1:

  Update the nova.conf with the pci_whitelist as follows:
  pci_passthrough_whitelist = [ {"devname": "hed1", "physical_network": 
"physnet1"},{"physical_network": "physnet1", "address": 
"*:04:00.0"},{"physical_network": "physnet2", "address": "*:04:00.1"}]

  We get the following error in the nova compute log if hed1 is not
  present. But compute still shows up and the periodic hypervisor update
  stops working.

  2016-07-13 09:22:42.146 28800 ERROR nova.compute.manager 
[req-0e7e62d5-23c9-48f2-8ca4-b47b763c29df None None] Error updating resources 
for node 

[Yahoo-eng-team] [Bug 1603020] [NEW] There is no help info of the new filter "changed-since" added to Neutron resources list API

2016-07-14 Thread xiewj
Public bug reported:

In Mitaka,
the add-port-timestamp.rst spec introduces a new filter, "changed-since", but
it is not mentioned in the help message of the neutron resource list commands.
However, this filter actually works.
I think we should add the new filter "changed-since" to the help message of
the Neutron resource list API,
so we can guide users to use it correctly.


The URL of the add-port-timestamp.rst spec is as follows:
https://github.com/openstack/neutron-specs/blob/master/specs/mitaka/add-port-timestamp.rst
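
For illustration, the kind of help text addition being asked for, expressed as
an argparse option on the client's list command (a sketch only, assuming
"parser" is the command's argparse parser; not the actual python-neutronclient
patch):

    parser.add_argument(
        '--changed-since', metavar='TIMESTAMP',
        help='List only resources changed since the given timestamp, '
             'e.g. 2016-07-14T03:46:37.')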


[root@devstack218 devstack]# neutron port-list --changed-since 
2016-07-14T03:46:37
+--++---++
| id   | name   | mac_address   | 
fixed_ips  |
+--++---++
| ea79eaef-d294-4527-b24c-5b9fe16a1f6c | port2_net1 | fa:16:3e:e7:38:a6 | 
{"subnet_id": "481cadf6-fa52-4739-80b2-331a3b90d7b6",  |
|  ||   | 
"ip_address": "198.51.100.7"}  |
|  ||   | 
{"subnet_id": "60f56f75-ce94-498f-b4ad-0383db2796a8",  |
|  ||   | 
"ip_address":  |
|  ||   | 
"2001:db8:80d2:c4d3:f816:3eff:fee7:38a6"}  |
+--++---++
[root@devstack218 devstack]# 

[root@devstack218 devstack]# neutron help port-list  
usage: neutron port-list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN]
 [--max-width ] [--noindent]
 [--quote {all,minimal,none,nonnumeric}]
 [--request-format {json}] [-D] [-F FIELD] [-P SIZE]
 [--sort-key FIELD] [--sort-dir {asc,desc}]

List ports that belong to a given tenant.

optional arguments:
  -h, --helpshow this help message and exit
  --request-format {json}
DEPRECATED! Only JSON request format is supported.
  -D, --show-detailsShow detailed information.
  -F FIELD, --field FIELD
Specify the field(s) to be returned by server. You can
repeat this option.
  -P SIZE, --page-size SIZE
Specify retrieve unit of each request, then split one
request to several requests.
  --sort-key FIELD  Sorts the list by the specified fields in the
specified directions. You can repeat this option, but
you must specify an equal number of sort_dir and
sort_key values. Extra sort_dir options are ignored.
Missing sort_dir options use the default asc value.
  --sort-dir {asc,desc}
Sorts the list in the specified direction. You can
repeat this option.

output formatters:
  output formatter options

  -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width 
Maximum display width, <1 to disable. You can also use
the CLIFF_MAX_TERM_WIDTH environment variable, but the
parameter takes precedence.

json formatter:
  --noindentwhether to disable indenting the JSON

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
when to include quotes, defaults to nonnumeric
[root@devstack218 devstack]#

** Affects: neutron
 Importance: Undecided
 Assignee: Yan Songming (songmingyan)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => xiewj (36429515-3)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603020

Title:
  There is no help info of the new filter "changed-since" added to
  Neutron resources list API

Status in neutron:
  New

Bug description:
  In Mitaka,
  the add-port-timestamp.rst spec introduces a new filter,
"changed-since", but it is not mentioned in the help message of the neutron
resource list commands.
  However, this filter actually works.
  I think we should add the new filter "changed-since" to the help message of
the Neutron resource list API,
  so we can guide users to use it correctly.

  
  The url add-port-timestamp.rst spec is as follows: 
  

[Yahoo-eng-team] [Bug 1603011] [NEW] Horizon falls back to Login screen while Accessing 'Users' or 'Groups'

2016-07-14 Thread Oleksandr Savatieiev
Public bug reported:

Steps:
1. Open Horizon, Login filling Domain: Default, User: admin, Pass: 
2. Open Summary (or whatever page in Admin section). Should work
3. [BUG] Try to open Identity/Groups section - falls back to Login with an
'Unauthorized' message.
4. Edit address and remove any redirection hops parameters (aka 'next...')
5. Login and navigate to Identity/Domains
6. Push 'Set Context' for Default domain
7. Navigate to Identity/Groups section - works now

Notes:
On step 3 it is impossible to log in even with correct credentials

Expected: Horizon either reports the missing context or uses the Default one
Actual: Horizon drops the session and fails to log in the user even with correct
credentials. A login page reload is required, with removal of the 'redirection'
parameter from the address

Env: MOS 8

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  Steps:
  1. Open Horizon, Login filling Domain: Default, User: admin, Pass: 
  2. Open Summary (or whatever page in Admin section). Should work
  3. [BUG] Try to open Identity/Groups section - falls to Login with 
'Unautherized' message.
  4. Edit address and remove any redirection hops parameters (aka 'next...')
  5. Login and navigate to Identity/Domains
  6. Push 'Set Context' for Default domain
  7. Navigate to Identity/Groups section - works now
  
  Notes:
  On step 3 it is impossible to login even with correct credentials
  
  Expected: Horizon either reports about missed context or uses Default one
  Actual: Horizon drops session and fails to login user even with correct 
credentials. Login page reload required with removal of 'redirection' parameter 
from and address
+ 
+ Env: MOS 8

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1603011

Title:
  Horizon falls back to Login screen while Accessing 'Users' or 'Groups'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps:
  1. Open Horizon, Login filling Domain: Default, User: admin, Pass: 
  2. Open Summary (or whatever page in Admin section). Should work
  3. [BUG] Try to open Identity/Groups section - falls back to Login with an
'Unauthorized' message.
  4. Edit address and remove any redirection hops parameters (aka 'next...')
  5. Login and navigate to Identity/Domains
  6. Push 'Set Context' for Default domain
  7. Navigate to Identity/Groups section - works now

  Notes:
  On step 3 it is impossible to log in even with correct credentials

  Expected: Horizon either reports the missing context or uses the Default one
  Actual: Horizon drops the session and fails to log in the user even with correct
credentials. A login page reload is required, with removal of the 'redirection'
parameter from the address

  Env: MOS 8

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584055] Re: Swift UI builds a breadcrumb from the URL regardless of existence

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/341869
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=454faafc44b66f940ef8f846ab0c59a3b57b770b
Submitter: Jenkins
Branch:master

commit 454faafc44b66f940ef8f846ab0c59a3b57b770b
Author: Richard Jones 
Date:   Wed Jul 13 17:14:21 2016 -0700

Fix Django route for swift ui with folder path

The path doesn't terminate in a "/" so that should be removed from the RE.

Change-Id: Ie600636c96d73382bd2092838111aa870c5b019b
Closes-Bug: 1584055


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1584055

Title:
  Swift UI builds a breadcrumb from the URL regardless of existence

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Angular Swift UI seems overly optimistic in its construction of
  the breadcrumb. To recreate:

  1) Create a container named "one" and a folder named "two". Notice the URL is 
"/project/containers/container/one/two"
  2) Upload any object, just as a reference point.
  3) Refresh the page. A '/' is added, and now you're in a folder containing no 
objects. Use the breadcrumb to go back to 'one' and then click on 'two'. You 
are now back in your folder.

  Alternatively, go to
  "/project/containers/container/one/two/three". This
  doesn't exist and renders as an empty folder with a constructed
  breadcrumb. Instead, it should redirect (probably to the base URL)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1584055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586268] Re: Unit test: self.assertNotEqual in unit.test_base.BaseTest.test_eq does not work in PY2

2016-07-14 Thread Ji.Wei
** No longer affects: swift

** No longer affects: python-swiftclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586268

Title:
  Unit test: self.assertNotEqual in  unit.test_base.BaseTest.test_eq
  does not work in PY2

Status in Ceilometer:
  In Progress
Status in daisycloud-core:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in keystonemiddleware:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in python-barbicanclient:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  In Progress
Status in python-manilaclient:
  New
Status in python-muranoclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-smaugclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in tempest:
  In Progress

Bug description:
  Version: master(20160527)

  In the test case cinderclient.tests.unit.test_base.BaseTest.test_eq,
self.assertNotEqual does not work.
  The class base.Resource defines the __eq__() special method, but does not define
__ne__(), so self.assertEqual works but self.assertNotEqual
does not work at all in this test case.

  steps:
  1 Clone code of python-cinderclient from master.
  2 Modify the case of unit test: cinderclient/tests/unit/test_base.py
    line50--line62.
  def test_eq(self):
  # Two resources with same ID: never equal if their info is not equal
  r1 = base.Resource(None, {'id': 1, 'name': 'hi'})
  r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
  self.assertNotEqual(r1, r2)

  # Two resources with same ID: equal if their info is equal
  r1 = base.Resource(None, {'id': 1, 'name': 'hello'})
  r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
  # self.assertEqual(r1, r2)
  self.assertNotEqual(r1, r2)

  # Two resoruces of different types: never equal
  r1 = base.Resource(None, {'id': 1})
  r2 = volumes.Volume(None, {'id': 1})
  self.assertNotEqual(r1, r2)

  # Two resources with no ID: equal if their info is equal
  r1 = base.Resource(None, {'name': 'joe', 'age': 12})
  r2 = base.Resource(None, {'name': 'joe', 'age': 12})
  # self.assertEqual(r1, r2)
  self.assertNotEqual(r1, r2)

     Modify self.assertEqual(r1, r2) to self.assertNotEqual(r1, r2).

  3 Run the unit test; it reports success.

  After that, I made a standalone test:

  class Resource(object):
  def __init__(self, person):
  self.person = person

  def __eq__(self, other):
  return self.person == other.person

  r1 = Resource("test")
  r2 = Resource("test")
  r3 = Resource("test_r3")
  r4 = Resource("test_r4")

  print r1 != r2
  print r1 == r2
  print r3 != r4
  print r3 == r4

  The result is :
  True
  True
  True
  False

  Whether or not r1 is precisely the same as r2, self.assertNotEqual(r1,
  r2) passes. So I think self.assertNotEqual does not work at all under
  Python 2 and should be modified.
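
  A minimal sketch of the usual fix, based on the reporter's Resource example
  above rather than the actual cinderclient patch: Python 2 does not derive
  __ne__ from __eq__, so the class has to define __ne__ explicitly.

  class Resource(object):
      def __init__(self, person):
          self.person = person

      def __eq__(self, other):
          if not isinstance(other, Resource):
              return NotImplemented
          return self.person == other.person

      def __ne__(self, other):
          # Python 2 needs an explicit __ne__; Python 3 derives it from __eq__.
          result = self.__eq__(other)
          if result is NotImplemented:
              return result
          return not result

  r1 = Resource("test")
  r2 = Resource("test")
  print(r1 != r2)  # now False on both Python 2 and Python 3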

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1586268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602974] [NEW] [stable/liberty] LBaaS v2 haproxy: need a way to find status of listener

2016-07-14 Thread Prashant Shetty
Public bug reported:

Currently we don't have an option to check the status of a listener. Below
is the output of a listener, with no status field.

root@runner:~# neutron lbaas-listener-show 8c0e0289-f85d-4539-8970-467a45a5c191
+---++
| Field | Value  |
+---++
| admin_state_up| True   |
| connection_limit  | -1 |
| default_pool_id   ||
| default_tls_container_ref ||
| description   ||
| id| 8c0e0289-f85d-4539-8970-467a45a5c191   |
| loadbalancers | {"id": "bda96c0a-0167-45ab-8772-ba92bc0f2d00"} |
| name  | test-lb-http   |
| protocol  | HTTP   |
| protocol_port | 80 |
| sni_container_refs||
| tenant_id | ce1d087209c64df4b7e8007dc35def22   |
+---++
root@runner:~#

The problem arises when we try to configure the listener and pool back to
back without any delay. Pool creation fails, saying the listener is not ready.

The workaround is to add a 3-second delay between listener and pool creation.
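
A more robust client-side workaround than a fixed sleep is to poll the load
balancer's provisioning_status until it leaves the PENDING_* states before
creating the pool. A rough Python sketch; get_provisioning_status is a
hypothetical callable supplied by the caller (for example, a wrapper that
parses 'neutron lbaas-loadbalancer-show' output), not an existing client API:

  import time

  def wait_for_lb_active(get_provisioning_status, lb_id, timeout=60, interval=1):
      # get_provisioning_status(lb_id) is assumed to return the load
      # balancer's provisioning_status string, e.g. PENDING_CREATE or ACTIVE.
      deadline = time.time() + timeout
      while time.time() < deadline:
          status = get_provisioning_status(lb_id)
          if status == 'ACTIVE':
              return
          if status == 'ERROR':
              raise RuntimeError('load balancer %s went to ERROR' % lb_id)
          time.sleep(interval)
      raise RuntimeError('timed out waiting for load balancer %s' % lb_id)

This waits on the one status field the API does expose (the load balancer's
provisioning_status) instead of guessing a delay.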

Logs:

root@runner:~# neutron lbaas-loadbalancer-create --name test-lb vn-subnet; 
neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb 
--protocol HTTP --protocol-port 80; neutron lbaas-pool-create --name 
test-lb-pool-http  --lb-algorithm ROUND_ROBIN --listener test-lb-http  
--protocol HTTP
Created a new loadbalancer:
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | True |
| description |  |
| id  | 3ed2ff4a-4d87-46da-8e5b-265364dd6861 |
| listeners   |  |
| name| test-lb  |
| operating_status| OFFLINE  |
| provider| haproxy  |
| provisioning_status | PENDING_CREATE   |
| tenant_id   | ce1d087209c64df4b7e8007dc35def22 |
| vip_address | 20.0.0.62|
| vip_port_id | 4c33365e-64b9-428f-bc0b-bce6c08c9b20 |
| vip_subnet_id   | 63cbeccd-6887-4dda-b4d2-b7503bce870a |
+-+--+
Created a new listener:
+---++
| Field | Value  |
+---++
| admin_state_up| True   |
| connection_limit  | -1 |
| default_pool_id   ||
| default_tls_container_ref ||
| description   ||
| id| 90260465-934a-44a4-a289-208e5af74cf5   |
| loadbalancers | {"id": "3ed2ff4a-4d87-46da-8e5b-265364dd6861"} |
| name  | test-lb-http   |
| protocol  | HTTP   |
| protocol_port | 80 |
| sni_container_refs||
| tenant_id | ce1d087209c64df4b7e8007dc35def22   |
+---++
Invalid state PENDING_UPDATE of loadbalancer resource 
3ed2ff4a-4d87-46da-8e5b-265364dd6861
root@runner:~#


Neutron:

: u'90260465-934a-44a4-a289-208e5af74cf5', u'protocol': u'HTTP', u'name': 
u'test-lb-pool-http', u'admin_state_up': True}} from (pid=7189) 
prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:657
2016-07-14 07:38:57.268 DEBUG neutron.db.quota.driver 
[req-f65cd995-dab1-4b43-96a0-dcbe5b93 admin 
ce1d087209c64df4b7e8007dc35def22] Resources 
subnet,network,subnetpool,listener,healthmonitor,router,l2-gateway-connection,port,loadbalancer
 have unlimited quota limit. It is not required to calculated headroom  from 

[Yahoo-eng-team] [Bug 1524153] Re: [api-ref] Add "update user of OS-KSCRUD extension" on identity v2 API

2016-07-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/341708
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=8a56b19734ba0ffb1dba9db200feac2c48c12b9e
Submitter: Jenkins
Branch: master

commit 8a56b19734ba0ffb1dba9db200feac2c48c12b9e
Author: Boris Bobrov 
Date:   Wed Jul 13 20:28:50 2016 +0300

Add OS-KSCRUD api-ref

Change-Id: Ibf837eb880ee1811bfc85464b213451ddaf94a0b
Closes-Bug: 1524153


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1524153

Title:
  [api-ref] Add "update user of OS-KSCRUD extension" on identity v2 API

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Keystone now supports "update user of OS-KSCRUD extension" on the
  identity v2 API, and Tempest is already testing it.
  However, the api-site does not contain a description of this API,
  so we need to document it for API users.

  The URL is "OS-KSCRUD/users/" and the method is PATCH on the
  identity v2 API.
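
  For illustration, a rough sketch of such a request using Python requests.
  The endpoint, token, user ID and request body fields below are assumptions
  (modelled on keystone's self-service password change), not taken from the
  merged api-ref:

  import requests

  KEYSTONE_V2 = 'http://keystone.example.com:5000/v2.0'  # hypothetical endpoint
  USER_ID = 'USER_ID'  # placeholder
  TOKEN = 'TOKEN'      # placeholder

  resp = requests.patch(
      '%s/OS-KSCRUD/users/%s' % (KEYSTONE_V2, USER_ID),
      headers={'X-Auth-Token': TOKEN},
      # Body field names are an assumption; check the published api-ref.
      json={'user': {'original_password': 'old-secret',
                     'password': 'new-secret'}},
  )
  print(resp.status_code)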

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1524153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596124] Re: Python3 do not use dict.iteritems dict.iterkeys dict.itervalues

2016-07-14 Thread Ji.Wei
** No longer affects: tracker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596124

Title:
  Python3 do not use dict.iteritems dict.iterkeys dict.itervalues

Status in Cinder:
  In Progress
Status in glance_store:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  Python 3 does not have dict.iteritems, dict.iterkeys or dict.itervalues;
  calling them raises AttributeError: 'dict' object has no attribute
  'iterkeys'.
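
  A minimal illustration of the portable replacement (six.iteritems is only
  worth it when lazy iteration on Python 2 matters):

  d = {'a': 1, 'b': 2}

  # Python 2 only -- gone in Python 3:
  #   for k, v in d.iteritems():
  #       ...

  # Works on both Python 2 and Python 3 (builds a list on Python 2):
  for k, v in d.items():
      print('%s=%s' % (k, v))

  # Lazy on both, at the cost of a six dependency:
  # import six
  # for k, v in six.iteritems(d):
  #     print('%s=%s' % (k, v))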

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1596124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp