[Yahoo-eng-team] [Bug 1435873] Re: Debian - Openstack IceHouse - Neutron Error 1054, "Unknown column 'routers.enable_snat'

2015-11-11 Thread Miguel Angel Ajo
Marked as invalid, please check comment #3.

Feel free to reopen if that does not work.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435873

Title:
  Debian - Openstack IceHouse - Neutron Error 1054, "Unknown column
  'routers.enable_snat'

Status in neutron:
  Invalid

Bug description:
  I'm French, so my English is not great.

  I am running Debian Wheezy 7 with OpenStack IceHouse.

  I followed these tutorials:

  • https://fosskb.wordpress.com/2014/06/02/openstack-icehouse-on-
  debian-wheezy-single-machine-setup/comment-page-1/#comment-642

  • http://docs.openstack.org/juno/install-guide/install/apt-debian
  /openstack-install-guide-apt-debian-juno.pdf

  When I reached the router configuration step and tried to create a
  router with this command: neutron router-create demo-router

  I got "Request Failed: internal server error while processing your
  request."

  The log in /var/log/neutron-server.log shows the following:

  TRACE neutron.api.v2.resource OperationalError: (OperationalError)
  (1054, "Unknown column 'routers.enable_snat' in 'field list'") 'SELECT
  count(*) AS count_1 \nFROM (SELECT routers.tenant_id AS
  routers_tenant_id, routers.id AS routers_id, routers.name AS
  routers_name, routers.status AS routers_status, routers.admin_state_up
  AS routers_admin_state_up, routers.gw_port_id AS routers_gw_port_id,
  routers.enable_snat AS routers_enable_snat \nFROM routers \nWHERE
  routers.tenant_id IN (%s)) AS anon_1'
  ('8348602c43c44c63b5f161a404afe1da',)

  
  Can anyone help me, please?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515209] [NEW] HA router can't associate an external network without gateway ip

2015-11-11 Thread Hong Hui Xiao
Public bug reported:

When associating an HA router with an external network that has no
gateway IP, I get the following error:

2015-11-11 03:23:44.599 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '8114b0c3-85e6-4b71-ab74-a0c1437882cd'
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 503, in 
_process_router_update
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 444, in 
_process_router_if_compatible
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._process_added_router(router)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 452, in 
_process_added_router
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent ri.process(self)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 387, in process
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/common/utils.py", line 366, in call
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent self.logger(e)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 197, in __exit__
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/common/utils.py", line 363, in call
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent return func(*args, 
**kwargs)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 694, in process
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self.process_external(agent)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 660, in 
process_external
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._process_external_gateway(ex_gw_port, agent.pd)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/router_info.py", line 569, in 
_process_external_gateway
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self.external_gateway_added(ex_gw_port, interface_name)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 356, in 
external_gateway_added
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._add_gateway_vip(ex_gw_port, interface_name)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 249, in 
_add_gateway_vip
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
self._add_default_gw_virtual_route(ex_gw_port, interface_name)
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/ha_router.py", line 205, in 
_add_default_gw_virtual_route
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent 
instance.virtual_routes.gateway_routes = default_gw_rts
2015-11-11 03:23:44.599 TRACE neutron.agent.l3.agent UnboundLocalError: local 
variable 'instance' referenced before assignment


For an HA router, the default gateway route should only be added when a
gateway IP is present.

https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L248
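
For reference, a standalone sketch of the Python failure mode behind the
traceback (a name bound only inside a for loop), plus a guarded variant.
This only illustrates the pattern; it is not the neutron code or the
actual fix:

    # Illustration only: 'instance' is assigned inside the loop, so when the
    # external network has no gateway IP the loop body never runs and the
    # final reference raises UnboundLocalError.
    def build_gateway_routes(gateway_ips, interface_name):
        routes = []
        for gw_ip in gateway_ips:
            instance = {'gateway_routes': []}   # bound only inside the loop
            routes.append((gw_ip, interface_name))
        instance['gateway_routes'] = routes     # UnboundLocalError if no gateway IPs
        return instance

    # Guarded variant: bind the target before the loop, so an external network
    # without a gateway IP simply yields an empty route list.
    def build_gateway_routes_fixed(gateway_ips, interface_name):
        instance = {'gateway_routes': []}
        instance['gateway_routes'] = [(gw_ip, interface_name)
                                      for gw_ip in gateway_ips]
        return instance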

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: l3-ha

** Description changed:

  When associate a HA router with an external network without gateway ip,
  I will get the following error:
  
- 2015-11-11 03:07:36.198 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '8114b0c3-85e6-4b71-ab74-a0c1437882cd'
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent Traceback (most recent 
call last):
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 503, in 
_process_router_update
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 446, in 
_process_router_if_compatible
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent 
self._process_updated_router(router)
- 2015-11-11 03:07:36.198 TRACE neutron.agent.l3.agent   File 

[Yahoo-eng-team] [Bug 1482436] Re: Support for compressed/archive file is broken with image-create

2015-11-11 Thread Kairat Kushaev
You can upload a compressed image to Glance, but it is not possible to boot an
instance from a zipped image because Nova does not support it.
Glance supports zip compression when transmitting an image from Glance to Nova,
but the data is decompressed/compressed by the WSGI application (not Glance
itself), and that WSGI app has no idea about the source image's encoding.
So I am marking this as Invalid.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482436

Title:
  Support for compressed/archive file is broken with image-create

Status in Glance:
  Invalid

Bug description:
  Discovered via Horizon bug:
  https://bugs.launchpad.net/horizon/+bug/1479966

  Creating an image from an archive file appears to be broken, as evidenced
  by the fact that an instance launched from it fails to boot (error: "No
  bootable device").

  Steps to reproduce:

  1. Have a tar.gz version of a qcow image available locally and/or
  remotely accessible via HTTP.

  2.1 Create an image using copy-from:
  
  $ glance image-create --name="test-remote-compressed" --is-public=true 
--disk-format=qcow2 --container-format=bare --is-public=True --copy-from 
http://example.com/cirros-0.3.0-i386-disk.img.tar.gz
  

  2.2 Create an image from a local file:
  
  $ glance image-create --name="test-remote-compressed2" --is-public=true 
--disk-format=qcow2 --container-format=bare --is-public=True < 
cirros-0.3.0-i386-disk.img.tar.gz
  

  3. Launch instances from created images.

  Expected:

  1. Instances launch and boot successfully.

  Actual:

  1. Console outputs the following:
  
  Booting from Hard Disk...
  Boot failed: not a bootable disk

  No bootable device.
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489439] Re: URL patterns are inconsistent and unreliable

2015-11-11 Thread Rob Cresswell
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489439

Title:
  URL patterns are inconsistent and unreliable

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Many details pages are displayed at `<resource>/<id>/detail`. This
  contrasts with normal navigation patterns, where we expect
  `<resource>/<id>/` to show the information about a specific object, not a
  404 page.

  The URLs should be changed across Horizon so that `<resource>/<id>/` shows
  the details pages.

  Similarly, there are many inconsistencies in the URLs; where possible,
  we should stick to the CRUD style: `<resource>/`,
  `<resource>/create`, `<resource>/<id>/update` (see the sketch below). This
  contrasts with some of the existing styles, like `networks/ports/addport`,
  which should be `networks/<network_id>/ports/create`.

  This is also a precursor to adding reliable breadcrumb navigation.
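
  For illustration, a hedged sketch of that CRUD-style layout expressed as
  Django URL patterns (the mechanism Horizon panels use); the placeholder
  views and the `networks` example below are assumptions, not existing
  Horizon code:

      from django.conf.urls import url
      from django.views.generic import TemplateView

      # Placeholder views purely for illustration.
      index = TemplateView.as_view(template_name='networks/index.html')
      create = TemplateView.as_view(template_name='networks/create.html')
      detail = TemplateView.as_view(template_name='networks/detail.html')
      update = TemplateView.as_view(template_name='networks/update.html')

      urlpatterns = [
          url(r'^$', index, name='index'),
          url(r'^create/$', create, name='create'),
          # Detail lives at networks/<network_id>/ rather than .../detail
          url(r'^(?P<network_id>[^/]+)/$', detail, name='detail'),
          url(r'^(?P<network_id>[^/]+)/update/$', update, name='update'),
      ]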

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515326] [NEW] nova.api.openstack.extensions AttributeError: '_TransactionFactory' object has no attribute '_writer_maker'

2015-11-11 Thread Yongli He
Public bug reported:

full log:
http://52.27.155.124/240218/5/
http://52.27.155.124/240218/5/screen-logs/n-api.log.gz


pip freeze:
http://52.27.155.124/240218/5/pip-freeze.txt.gz


FYI, there is a bug that looks the same but is not:
https://bugs.launchpad.net/oslo.db/+bug/1477080


The relevant exception, for your convenience:

opt/stack/nova/nova/api/openstack/wsgi.py:792
2015-11-11 22:02:42.644 ERROR nova.api.openstack.extensions 
[req-3e331369-71d7-448f-bb06-6ef34762bb73 tempest-InputScenarioUtils-640491222 
tempest-InputScenarioUtils-1686267846] Unexpected exception in API method
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/flavors.py", line 39, in index
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
limited_flavors = self._get_flavors(req)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/flavors.py", line 110, in 
_get_flavors
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
limit=limit, marker=marker)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/flavors.py", line 200, in 
get_all_flavors_sorted_list
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
marker=marker)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
171, in wrapper
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/flavor.py", line 269, in get_all
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
marker=marker)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/api.py", line 1442, in flavor_get_all
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
sort_dir=sort_dir, limit=limit, marker=marker)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 204, in wrapper
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 4698, in flavor_get_all
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions query = 
_flavor_get_query(context, read_deleted=read_deleted)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 4673, in _flavor_get_query
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
read_deleted=read_deleted).\
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 266, in model_query
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions session = 
get_session(use_slave=use_slave)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 172, in get_session
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
use_slave=use_slave, **kwargs)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 977, in get_session
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions return 
self._factory._writer_maker(**kwargs)
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
AttributeError: '_TransactionFactory' object has no attribute '_writer_maker'
2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515326

Title:
   nova.api.openstack.extensions AttributeError: '_TransactionFactory'
  object has no attribute '_writer_maker'

Status in OpenStack Compute (nova):
  New

Bug description:
  full log:
  http://52.27.155.124/240218/5/
  http://52.27.155.124/240218/5/screen-logs/n-api.log.gz

  
  pip freeze:
  http://52.27.155.124/240218/5/pip-freeze.txt.gz

  
  FYI, there is a bug that looks the same but is not:
  https://bugs.launchpad.net/oslo.db/+bug/1477080


  The relevant exception, for your convenience:

  

[Yahoo-eng-team] [Bug 1515345] [NEW] pep8-gate is broken

2015-11-11 Thread Manjeet Singh Bhatia
Public bug reported:

I checked some patches and found that the pep8 gate is broken, failing
with a "maximum recursion depth exceeded" error.

check 
http://logs.openstack.org/01/244201/1/check/gate-neutron-lbaas-pep8/e4bdb39/console.html#_2015-11-11_17_35_49_227

and

http://logs.openstack.org/80/244180/1/check/gate-neutron-lbaas-
pep8/4f13512/console.html#_2015-11-11_16_52_40_857

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515345

Title:
  pep8-gate is broken

Status in neutron:
  New

Bug description:
  I checked some patches and found that the pep8 gate is broken, failing
  with a "maximum recursion depth exceeded" error.

  check 
  
http://logs.openstack.org/01/244201/1/check/gate-neutron-lbaas-pep8/e4bdb39/console.html#_2015-11-11_17_35_49_227

  and

  http://logs.openstack.org/80/244180/1/check/gate-neutron-lbaas-
  pep8/4f13512/console.html#_2015-11-11_16_52_40_857

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515335] [NEW] tox 2.2.0+ breaks neutron due to :api setenv section

2015-11-11 Thread Ihar Hrachyshka
Public bug reported:

Example of failure in gate: http://logs.openstack.org/22/244122/2/gate
/gate-neutron-python27/414e071/console.html

I suspect it's this PR:
https://bitbucket.org/hpk42/tox/commits/a9f2579c2505ddfac7201dda089d6c47ae8acf81?at=default
but we need to check with tox devs.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Confirmed

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515335

Title:
  tox 2.2.0+ breaks neutron due to :api setenv section

Status in neutron:
  Confirmed

Bug description:
  Example of failure in gate: http://logs.openstack.org/22/244122/2/gate
  /gate-neutron-python27/414e071/console.html

  I suspect it's this PR:
  
https://bitbucket.org/hpk42/tox/commits/a9f2579c2505ddfac7201dda089d6c47ae8acf81?at=default
  but we need to check with tox devs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515302] [NEW] Group membership attribute is hard-coded when using 'user_enable_emulation'

2015-11-11 Thread Nathan Kinder
Public bug reported:

The 'group_member_attribute' option is used by Keystone when looking up
groups in LDAP to determine membership. But when 'user_enable_emulation'
is in use, the following code in keystone/common/ldap/core.py references
a hard-coded 'member' attribute instead of 'group_member_attribute'.

---
    def _get_enabled(self, object_id):
        dn = self._id_to_dn(object_id)
        query = '(member=%s)' % dn  # <--- Here
        with self.get_connection() as conn:
            try:
                enabled_value = conn.search_s(self.enabled_emulation_dn,
                                              ldap.SCOPE_BASE,
                                              query, ['cn'])
            except ldap.NO_SUCH_OBJECT:
                return False
            else:
                return bool(enabled_value)
---

As a result, when integrating Keystone with an LDAP back-end and using
the 'user_enable_emulation' feature with a group whose membership
attribute is 'uniquemember', users are listed as not enabled.
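
For illustration, a hedged sketch of the change the report implies: derive the
enabled-emulation query from a configurable membership attribute instead of the
literal 'member'. The helper and parameter names below are assumptions, not
Keystone's internal API:

    # Illustration only: build the LDAP filter from a configurable attribute.
    def build_enabled_query(dn, member_attribute='member'):
        """Return an LDAP filter matching enabled-emulation entries listing dn."""
        return '(%s=%s)' % (member_attribute, dn)

    # With group_member_attribute = uniquemember configured, the query becomes
    # '(uniquemember=cn=user1,ou=Users,dc=example,dc=com)'.
    print(build_enabled_query('cn=user1,ou=Users,dc=example,dc=com', 'uniquemember'))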

** Affects: keystone
 Importance: Undecided
 Assignee: Nathan Kinder (nkinder)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Nathan Kinder (nkinder)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1515302

Title:
  Group membership attribute is hard-coded when using
  'user_enable_emulation'

Status in OpenStack Identity (keystone):
  New

Bug description:
  The 'group_member_attribute' option is used by Keystone when looking up
  groups in LDAP to determine membership. But when
  'user_enable_emulation' is in use, the following code in
  keystone/common/ldap/core.py references a hard-coded 'member'
  attribute instead of 'group_member_attribute'.

  ---
      def _get_enabled(self, object_id):
          dn = self._id_to_dn(object_id)
          query = '(member=%s)' % dn  # <--- Here
          with self.get_connection() as conn:
              try:
                  enabled_value = conn.search_s(self.enabled_emulation_dn,
                                                ldap.SCOPE_BASE,
                                                query, ['cn'])
              except ldap.NO_SUCH_OBJECT:
                  return False
              else:
                  return bool(enabled_value)
  ---

  As a result, when integrating Keystone with an LDAP back-end and using
  the 'user_enable_emulation' feature with a group whose membership
  attribute is 'uniquemember', users are listed as not enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1515302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515305] [NEW] Reactivating a admins image returns a 500

2015-11-11 Thread Niall Bunting
Public bug reported:

When trying to reactivate an admin's image, a 500 is returned.

Console:
2015-11-11 15:25:23.210 ERROR glance.common.wsgi 
[req-14abeda8-f037-4ca2-bbd7-2b02ec25d5df 84405b7957be46d7bd1b59f77c0fbe60 
ee7437c495d9491093d40c73ac54be3f] Caught error: 'ImmutableImageProxy' object 
has no attribute 'reactivate'
2015-11-11 15:25:23.210 TRACE glance.common.wsgi Traceback (most recent call 
last):
2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/wsgi.py", line 886, in __call__
2015-11-11 15:25:23.210 TRACE glance.common.wsgi request, **action_args)
2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/wsgi.py", line 915, in dispatch
2015-11-11 15:25:23.210 TRACE glance.common.wsgi return method(*args, 
**kwargs)
2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/utils.py", line 425, in wrapped
2015-11-11 15:25:23.210 TRACE glance.common.wsgi return func(self, req, 
*args, **kwargs)
2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/api/v2/image_actions.py", line 65, in reactivate
2015-11-11 15:25:23.210 TRACE glance.common.wsgi image.reactivate()
2015-11-11 15:25:23.210 TRACE glance.common.wsgi AttributeError: 
'ImmutableImageProxy' object has no attribute 'reactivate'


How to reproduce:
glance image-reactivate 709d022f-7ff3-438b-b672-cd21a2a7d467

What's expected:
A 403 Forbidden response.
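
A minimal sketch of the behaviour the reporter expects, assuming Glance's
proxy-based authorization layer; the standalone class below is an
illustration, not the actual Glance fix:

    # Illustration only: an immutable proxy can answer reactivate()/deactivate()
    # with an explicit "forbidden" error instead of letting the missing
    # attribute surface as an AttributeError and a 500.
    class Forbidden(Exception):
        pass

    class ImmutableImageProxy(object):
        def __init__(self, image, context):
            self.image = image
            self.context = context

        def reactivate(self):
            raise Forbidden("You are not permitted to reactivate this image.")

        def deactivate(self):
            raise Forbidden("You are not permitted to deactivate this image.")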

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Niall Bunting (niall-bunting)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1515305

Title:
  Reactivating a admins image returns a 500

Status in Glance:
  New

Bug description:
  When trying to reactivate an admin's image, a 500 is returned.

  Console:
  2015-11-11 15:25:23.210 ERROR glance.common.wsgi 
[req-14abeda8-f037-4ca2-bbd7-2b02ec25d5df 84405b7957be46d7bd1b59f77c0fbe60 
ee7437c495d9491093d40c73ac54be3f] Caught error: 'ImmutableImageProxy' object 
has no attribute 'reactivate'
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi Traceback (most recent call 
last):
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/wsgi.py", line 886, in __call__
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi request, **action_args)
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/wsgi.py", line 915, in dispatch
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi return method(*args, 
**kwargs)
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/common/utils.py", line 425, in wrapped
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi return func(self, req, 
*args, **kwargs)
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi   File 
"/opt/stack/glance/glance/api/v2/image_actions.py", line 65, in reactivate
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi image.reactivate()
  2015-11-11 15:25:23.210 TRACE glance.common.wsgi AttributeError: 
'ImmutableImageProxy' object has no attribute 'reactivate'

  
  How to reproduce:
  glance image-reactivate 709d022f-7ff3-438b-b672-cd21a2a7d467

  What's expected:
  A 403 Forbidden response.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1515305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515239] [NEW] Block live migration fails when vm is being used

2015-11-11 Thread Henrique Truta
Public bug reported:

Block live migration unexpectedly fails when the VM to be migrated is
under memory load.

This error occurred to me with a Sahara cluster. The steps are:

1 - Create a cluster
2 - Migrating one VM of this idle cluster works fine
3 - Launch a wordcount job on the cluster
4 - Once the job has been running for a while (when RAM pages are being
dirtied faster), migrating this same VM fails.

My problem occurred with a specific job, but I think it may occur in any
memory-bound process.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515239

Title:
  Block live migration fails when vm is being used

Status in OpenStack Compute (nova):
  New

Bug description:
  Block live migration unexpectedly fails when the VM to be migrated is
  under memory load.

  This error occurred to me with a Sahara cluster. The steps are:

  1 - Create a cluster
  2 - Migrating one VM of this idle cluster works fine
  3 - Launch a wordcount job on the cluster
  4 - Once the job has been running for a while (when RAM pages are being
  dirtied faster), migrating this same VM fails.

  My problem occurred with a specific job, but I think it may occur in
  any memory-bound process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1515239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515243] Re: environment topology rendered incorrectly in ie 10 and 11

2015-11-11 Thread Doug Fish
I'm marking this as "Not a bug" for Horizon since the issue appears to be
specific to the Murano dashboard.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1515243

Title:
  environment topology rendered incorrectly in ie 10 and 11

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Murano:
  Confirmed

Bug description:
  The SVG environment topology renders incorrectly in IE 10, 11 and
  possibly 12 under Windows 7. There are empty areas or gray artifacts
  instead of the arrows between components. The components behave
  normally, but the arrows do not. This is clearly visible in the
  attached screenshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1515243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515243] Re: environment topology rendered incorrectly in ie 10 and 11

2015-11-11 Thread Doug Fish
Based on discussion in IRC, it seems that the stack UI in Horizon that
this is based on is working correctly, but the Murano dashboard has the
trouble noted in the screenshot.

** Also affects: murano
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1515243

Title:
  environment topology rendered incorrectly in ie 10 and 11

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Murano:
  Confirmed

Bug description:
  The SVG environment topology renders incorrectly in IE 10, 11 and
  possibly 12 under Windows 7. There are empty areas or gray artifacts
  instead of the arrows between components. The components behave
  normally, but the arrows do not. This is clearly visible in the
  attached screenshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1515243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515311] [NEW] Instance scheduling based on Neutron properties

2015-11-11 Thread Andreas Scheuring
Public bug reported:

Add support to allow the nova scheduler to place instances based on
available neutron properties.

Use case:
- Scheduling based on physical network: a certain network (e.g. your fast 100Gbit
network) is only available to a subset of the nodes (e.g. per rack).
- Scheduling based on QoS attributes: schedule instances based on available bandwidth

Proposal:
Integration might be challenging, as nova would also need to be enhanced with a
new filter (a rough sketch follows below). Ihar talked to nova folks (Nikola
Dipanov, Jay Pipes, Sylvain) but they did not seem interested in fulfilling this
need right now. However, they have a vague idea of a scheduler hook to influence
scheduling decisions.
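
For illustration, a rough sketch of what such a filter could look like, using
the BaseHostFilter interface nova exposed at the time; the scheduler hint and
the per-host attribute used here are assumptions, not an actual implementation:

    from nova.scheduler import filters

    class PhysnetAffinityFilter(filters.BaseHostFilter):
        """Hypothetical filter: only pass hosts wired to the requested physnet."""

        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            wanted = hints.get('physical_network')   # assumed hint name
            if not wanted:
                return True
            # Assumed attribute: physnets reachable from this host, reported
            # by whatever mechanism the proposal ends up choosing.
            available = getattr(host_state, 'physical_networks', [])
            return wanted in available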

Alternative to the nova scheduler hook:
Today the scheduling decision is made on resource data reported from nova-compute
to the nova scheduler via the message bus. What would be required is similar
reporting from the neutron agent to the nova scheduler. However, such a private
path might work against the decoupling of nova and neutron, introduce new message
topics, and so on.


Note: This bug is intended to track all information from the Neutron side. It is
not meant as a requirement against nova.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515311

Title:
  Instance scheduling based on Neutron properties

Status in neutron:
  New

Bug description:
  Add support to allow the nova scheduler to place instances based on
  available neutron properties.

  Use case:
  - Scheduling based on physical network: a certain network (e.g. your fast
  100Gbit network) is only available to a subset of the nodes (e.g. per rack).
  - Scheduling based on QoS attributes: schedule instances based on available
  bandwidth

  Proposal:
  Integration might be challenging, as nova would also need to be enhanced with
  a new filter. Ihar talked to nova folks (Nikola Dipanov, Jay Pipes, Sylvain)
  but they did not seem interested in fulfilling this need right now. However,
  they have a vague idea of a scheduler hook to influence scheduling decisions.

  Alternative to the nova scheduler hook:
  Today the scheduling decision is made on resource data reported from
  nova-compute to the nova scheduler via the message bus. What would be required
  is similar reporting from the neutron agent to the nova scheduler. However,
  such a private path might work against the decoupling of nova and neutron,
  introduce new message topics, and so on.


  Note: This bug is intended to track all information from the Neutron side.
  It is not meant as a requirement against nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515232] [NEW] LBaaS v2 Radware driver fails to provision when no private key passphrase supplied for TLS certificate

2015-11-11 Thread Evgeny Fedoruk
Public bug reported:

When no passphrase exists for the TLS certificate associated with a
listener, the Radware provider fails.

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515232

Title:
  LBaaS v2 Radware driver fails to provision when no private key
  passphrase supplied for TLS certificate

Status in neutron:
  New

Bug description:
  When no passphrase exists for the TLS certificate associated with a
  listener, the Radware provider fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513879] Re: NeutronClientException: 404 Not Found

2015-11-11 Thread Sean Dague
The existing understanding is that this is how python-neutronclient
functions if there is no L3 agent. However, that seems really wrong,
because it means you have to know the topology of the services on the
neutron side in order to use python-neutronclient correctly.

That somewhat defeats the purpose of having a library to access your
service. This really should be fixed in python-neutronclient.
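
A hedged sketch of the kind of defensive handling being discussed, on the
caller side; it assumes the neutronclient exceptions module and is not the
actual nova or python-neutronclient fix:

    from neutronclient.common import exceptions as neutron_client_exc

    def safe_get_floating_ips(client, **kwargs):
        """Treat a 404 (e.g. no L3 extension/agent) as 'no floating IPs'."""
        try:
            return client.list_floatingips(**kwargs)['floatingips']
        except neutron_client_exc.NotFound:
            return []
        except neutron_client_exc.NeutronClientException as e:
            # Some deployments surface the missing extension as a bare 404.
            if getattr(e, 'status_code', None) == 404:
                return []
            raise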

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513879

Title:
  NeutronClientException: 404 Not Found

Status in OpenStack Compute (nova):
  In Progress
Status in python-neutronclient:
  New
Status in tripleo:
  Triaged

Bug description:
  TripleO isn't currently working with trunk nova: the undercloud is
  failing to build overcloud instances, and nova-compute is showing this
  exception.


  Nov 05 13:10:45 instack.localdomain nova-compute[21338]: 2015-11-05
  13:10:45.163 21338 ERROR nova.virt.ironic.driver [req-7df4cae6-f00a-
  41a2-91e0-db1e6f130059 a800cb834fbd4a70915e2272dce924ac
  102a2b78e079410f9afd8b8b46278c19 - - -] Error preparing deploy for
  instance 9ae5b605-58e3-40ee-b944-56cbf5806e51 on baremetal node
  f5c30846-4ada-444e-85d9-6e3be2a74782.

  
  Nov 05 13:10:45 instack.localdomain nova-compute[21338]: 2015-11-05 
13:10:45.434 21338 DEBUG nova.virt.ironic.driver 
[req-7df4cae6-f00a-41a2-91e0-db1e6f130059 a800cb834fbd4a70915e2272dce924ac 
102a2b78e079410f9afd8b8b46278c19 - - -] unplug: 
instance_uuid=9ae5b605-58e3-40ee-b944-56cbf5806e51 vif=[] _unplug_vifs 
/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:1093
   Instance failed to spawn
   Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
2165, in _build_resources
   yield resources
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
2012, in _build_and_run_instance
   block_device_info=block_device_info)
 File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 
791, in spawn
   flavor=flavor)
 File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 197, 
in __exit__
   six.reraise(self.type_, self.value, self.tb)
 File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 
782, in spawn
   self._plug_vifs(node, instance, network_info)
 File "/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 
1058, in _plug_vifs
   network_info_str = str(network_info)
 File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 515, 
in __str__
   return self._sync_wrapper(fn, *args, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 498, 
in _sync_wrapper
   self.wait()
 File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 530, 
in wait
   self[:] = self._gt.wait()
 File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, 
in wait
   return self._exit_event.wait()
 File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in 
wait
   current.throw(*self._exc)
 File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, 
in main
   result = function(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/utils.py", line 1178, in 
context_wrapper
   return func(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
1574, in _allocate_network_async
   six.reraise(*exc_info)
 File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 
1557, in _allocate_network_async
   dhcp_options=dhcp_options)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 733, in allocate_for_instance
   update_cells=True)
 File "/usr/lib/python2.7/site-packages/nova/network/base_api.py", line 
244, in get_instance_nw_info
   result = self._get_instance_nw_info(context, instance, **kwargs)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 930, in _get_instance_nw_info
   preexisting_port_ids)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 1708, in _build_network_info_model
   current_neutron_port)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 1560, in _nw_info_get_ips
   client, fixed_ip['ip_address'], port['id'])
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 1491, in _get_floating_ips_by_fixed_and_port
   port_id=port)
 File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", 
line 1475, in _safe_get_floating_ips
   for k, v in six.iteritems(kwargs)]))
 File 

[Yahoo-eng-team] [Bug 1515243] [NEW] environment topology rendered incorrectly in ie 10 and 11

2015-11-11 Thread Artem Akulshin
Public bug reported:

The SVG environment topology renders incorrectly in IE 10, 11 and
possibly 12 under Windows 7. There are empty areas or gray artifacts
instead of the arrows between components. The components behave normally,
but the arrows do not. This is clearly visible in the attached screenshots.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ie topology-view

** Attachment added: "ie_horizon_bugs.png"
   
https://bugs.launchpad.net/bugs/1515243/+attachment/4516844/+files/ie_horizon_bugs.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1515243

Title:
  environment topology rendered incorrectly in ie 10 and 11

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The SVG environment topology renders incorrectly in IE 10, 11 and
  possibly 12 under Windows 7. There are empty areas or gray artifacts
  instead of the arrows between components. The components behave
  normally, but the arrows do not. This is clearly visible in the
  attached screenshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1515243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515274] [NEW] SR-IOV agent configuration file does not document extensions = option

2015-11-11 Thread Ihar Hrachyshka
Public bug reported:

The agent supports the L2 agent extension manager, so the option should
be documented there.

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: liberty-backport-potential

** Tags added: liberty-backport-potential

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515274

Title:
  SR-IOV agent configuration file does not document extensions = option

Status in neutron:
  In Progress

Bug description:
  The agent supports the L2 agent extension manager, so the option should
  be documented there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515280] [NEW] db_create in impl_vsctl.py throws run-time exception

2015-11-11 Thread David Shaughnessy
Public bug reported:

Summary:
Calling ovsdb.db_create() will always cause an exception.
Further info:
This is due to its arguments being concatenated by a function that prepends
'col_values:' to each field=value pair.
This causes an exception, as there is no column called 'col_values' in the
table.
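
For reference, a standalone illustration of the concatenation problem described
above (this is not the actual impl_vsctl code): prefixing the parameter name
rather than the column name yields arguments that ovs-vsctl cannot interpret.

    # Illustration only.
    def broken_args(**col_values):
        # Mistakenly prefixes the *parameter name* to every column assignment.
        return ['col_values:%s=%s' % (k, v) for k, v in col_values.items()]

    def fixed_args(**col_values):
        return ['%s=%s' % (k, v) for k, v in col_values.items()]

    print(broken_args(type='linux-htb'))   # ['col_values:type=linux-htb'] -> rejected
    print(fixed_args(type='linux-htb'))    # ['type=linux-htb']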

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515280

Title:
  db_create in impl_vsctl.py throws run-time exception

Status in neutron:
  New

Bug description:
  Summary:
  Calling ovsdb.db_create() will always cause an exception.
  Further info:
  This is due to its arguments being concatenated by a function that prepends
  'col_values:' to each field=value pair.
  This causes an exception, as there is no column called 'col_values' in the
  table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515454] [NEW] In LBaaS, DB seems to be updated, even though the actual operation may fail due to driver error

2015-11-11 Thread Reedip
Public bug reported:

High Level Description:
While working on LBaaS v2, I found a somewhat strange behavior (described below).

Pre-conditions: Enable the LBaaS v2 extension
Step-by-step reproduction:
a) Verify all the members in Pool
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-list testpool
+--------------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| id                                   | address      | protocol_port | weight | subnet_id                            | admin_state_up |
+--------------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| 2644b225-53df-4cdf-9ab3-dea5da1d402c | 172.24.4.120 |            90 |      1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True           |
+--------------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
b) Create a new member; it fails due to a driver error

reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-create --subnet public-subnet --address 172.24.4.121 --protocol-port 90 testpool
An error happened in the driver

c) List the members in the specified pool
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-list testpool
+--------------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| id                                   | address      | protocol_port | weight | subnet_id                            | admin_state_up |
+--------------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| 2644b225-53df-4cdf-9ab3-dea5da1d402c | 172.24.4.120 |            90 |      1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True           |
| 39d1017e-92ca-40fd-b02d-739189a4b8df | 172.24.4.121 |            90 |      1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True           |
+--------------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$

Expected Output: If the driver error occurs, the new member should not be
added.
Actual Output: The new member whose creation failed with a driver error was
nevertheless added to the system, which is incorrect behavior.

Version: Ubuntu 14.04, git commit for Neutron Client:
3d736107f97c27a35cff2d7ed6c041521be5ab03
git commit for neutron-lbaas:
321da8f6263d46bf059163bcf7fd005cf68601bd

Environment: Devstack installation of an All-In-One single node, with FWaaS,
LBaaSv2 and Octavia enabled.
Perceived Severity: High (this is negative behaviour, because an inoperable
member is created and exists in the DB)

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

** Description changed:

  High Level Description:
  While working on lbaas V2 , found a somewhat strange behavior.( mention below)
  
- Pre-conditions: Enable LBaaS v2 extension 
+ Pre-conditions: Enable LBaaS v2 extension
  Step - By -Step reproduction:
  a) Verify all the members in Pool
  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-list testpool
  
+--+--+---++--++
  | id   | address  | protocol_port | 
weight | subnet_id| admin_state_up |
  
+--+--+---++--++
  | 2644b225-53df-4cdf-9ab3-dea5da1d402c | 172.24.4.120 |90 |  
1 | af8b5dfb-732b-4ecd-87f5-10cd4cb0d917 | True   |
  
+--+--+---++--++
  b) Create a new member , it fails due to driver error
  
  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-create --subnet public-subnet --address 172.24.4.121 
--protocol-port 90 testpool
  An error happened in the driver
  
  c) List the members in the specified pool
  
reedip@reedip-VirtualBox:/opt/stack/python-neutronclient/neutronclient/tests/unit/lb/v2$
 neutron lbaas-member-list testpool
  
+--+--+---++--++
  | id   | address  

[Yahoo-eng-team] [Bug 1515490] [NEW] It must be less than cores per socket of host,when specify the VM ’s single numa_node vcpus is odd number(cpu_policy=dedicated)

2015-11-11 Thread jinquanni(ZTE)
Public bug reported:


1. version
kilo 2015.1.0

2. Relevant log files:
nova-scheduler.log
2015-11-12 12:56:50.472 27408 INFO nova.filters 
[req-70882f90-d48e-4ca0-8b09-b15a82df792f 9c67877ee37b47e989148a776862c7b8 
40fc54dc632c4a02b44bf31d7ff15c82 - - -] Filter NUMATopologyFilter returned 0 
hosts for instance 890d0215-5033-44ac-bb8c-3b90ce0db6a4

3. Reproduce steps:

3.1 Environment

I have one compute node with 32 CPUs.

Its NUMA topology is as follows:
2 sockets * 8 cores * 2 threads = 32

This means the host has 2 NUMA nodes, with 16 CPUs per NUMA node.

3.2 Create a flavor with 30 vCPUs:

[root@nail-SBCJ-5-3-13 nova(keystone_admin)]# nova flavor-show 30cpu
+----------------------------+-------+
| Property                   | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 1     |
| extra_specs                | {}    |
| id                         | 7     |
| name                       | 30cpu |
| os-flavor-access:is_public | True  |
| ram                        | 512   |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 30    |
+----------------------------+-------+
3.3 Set the NUMA properties as follows (numa_node1: 15 vCPUs, numa_node2: 15
vCPUs), then create a VM using the flavor:
nova flavor-key 30cpu  set hw:numa_nodes=2 hw:numa_cpus.0="0-14" 
hw:numa_mem.0=256 hw:numa_cpus.1="15-29" hw:numa_mem.1=256 
hw:cpu_policy=dedicated
Expected result: creation succeeds
Actual result: creation fails
Error log: see "2. Relevant log files" above

3.4 Set the NUMA properties as follows again (numa_node1: 14 vCPUs, numa_node2:
16 vCPUs), then create a VM using the flavor:
nova flavor-key 30cpu  set hw:numa_nodes=2 hw:numa_cpus.0="0-13" 
hw:numa_mem.0=256 hw:numa_cpus.1="14-29" hw:numa_mem.1=256 
hw:cpu_policy=dedicated
Expected result: creation succeeds
Actual result: creation succeeds

3.5 Create another flavor with 14 vCPUs (numa_node1: 7 vCPUs, numa_node2: 7
vCPUs), set the NUMA properties as follows, then create a VM using the flavor:
nova flavor-key 14cpu  set hw:numa_nodes=2 hw:numa_cpus.0="0-6" 
hw:numa_mem.0=256 hw:numa_cpus.1="7-13" hw:numa_mem.1=256 
hw:cpu_policy=dedicated
Expected result: creation succeeds
Actual result: creation succeeds
 
 OR (numa_node1: 6 vCPUs, numa_node2: 8 vCPUs)
 
nova flavor-key 14cpu  set hw:numa_nodes=2 hw:numa_cpus.0="0-5" 
hw:numa_mem.0=256 hw:numa_cpus.1="6-13" hw:numa_mem.1=256 
hw:cpu_policy=dedicated
Expected result: creation succeeds
Actual result: creation succeeds
 
4.
The above results show that:
When the CPU pinning policy is 'dedicated' and a single NUMA node of the VM is
given an odd number of vCPUs, that number must be less than the number of cores
per socket on the host.
With an even number of vCPUs per NUMA node this restriction does not appear.
I don't think this is reasonable.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: numa

** Summary changed:

- It must be less than cores per socket of host,when specify the VM ’s single 
numa_node vcpus is odd number
+ It must be less than cores per socket of host,when specify the VM ’s single 
numa_node vcpus is odd number(cpu_policy=dedicated)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515490

Title:
  It must be less than cores per socket of host,when specify the VM ’s
  single numa_node vcpus is odd number(cpu_policy=dedicated)

Status in OpenStack Compute (nova):
  New

Bug description:
  
  1. version
  kilo 2015.1.0

  2. Relevant log files:
  nova-scheduler.log
  2015-11-12 12:56:50.472 27408 INFO nova.filters 
[req-70882f90-d48e-4ca0-8b09-b15a82df792f 9c67877ee37b47e989148a776862c7b8 
40fc54dc632c4a02b44bf31d7ff15c82 - - -] Filter NUMATopologyFilter returned 0 
hosts for instance 890d0215-5033-44ac-bb8c-3b90ce0db6a4

  3. Reproduce steps:

  3.1 Environment

  I have one compute node with 32 CPUs.

  Its NUMA topology is as follows:
  2 sockets * 8 cores * 2 threads = 32

  This means the host has 2 NUMA nodes, with 16 CPUs per NUMA node.

  3.2 Create a flavor with 30 vCPUs:

  [root@nail-SBCJ-5-3-13 nova(keystone_admin)]# nova flavor-show 30cpu
  +----------------------------+-------+
  | Property                   | Value |
  +----------------------------+-------+
  | OS-FLV-DISABLED:disabled   | False |
  | OS-FLV-EXT-DATA:ephemeral  | 0     |
  | disk                       | 1     |
  | extra_specs                | {}    |
  | id                         | 7     |
  | name                       | 30cpu |
  | os-flavor-access:is_public | True  |
  | ram                        | 512   |
  | rxtx_factor                | 1.0   |
  | swap                       |       |
  | vcpus                      | 30    |
  +----------------------------+-------+
  3.3 Set the numa properties as follows(numa_node1:15 vcpu numa_node2:15 
vcpu),then,create vm use 

[Yahoo-eng-team] [Bug 1515457] [NEW] when delete an instance created from the volume.It will be unsuccessful because the connection_info has no volume_id

2015-11-11 Thread jingtao liang
Public bug reported:

Description:
When deleting an instance created from a volume, the driver fetches the volume
metadata like this:
volume_id = connection_info['data']['volume_id']
But connection_info has no volume_id, so this raises an error (see the sketch
after the reproduce steps).

Version: 2014.1

Relevant log :
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2587, in 
do_terminate_instance
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] self._delete_instance(context, 
instance, bdms, quotas)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 103, in inner
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] rv = f(*args, **kwargs)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2556, in 
_delete_instance
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] quotas.rollback()
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] six.reraise(self.type_, self.value, 
self.tb)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2528, in 
_delete_instance
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] self._shutdown_instance(context, 
db_inst, bdms)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2463, in 
_shutdown_instance
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] requested_networks)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] six.reraise(self.type_, self.value, 
self.tb)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2453, in 
_shutdown_instance
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] block_device_info)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1009, in 
destroy
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] destroy_disks)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1098, in 
cleanup
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] volume_meta = 
self._get_volume_metadata(context, connection_info)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3642, in 
_get_volume_metadata
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] raise 
exception.InvalidBDMVolume(id=volume_id)
2015-11-12 17:03:52.676 6137 TRACE nova.compute.manager [instance: 
4d6213cb-4761-49fb-a993-37833f5a6add] UnboundLocalError: local variable 
'volume_id' referenced before assignment

Reproduce steps:
1. Create a volume named 'a' from an image
2. Create an instance from the volume 'a'
3. Delete the instance
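
For reference, a standalone sketch of the failure mode in _get_volume_metadata
and a defensive variant; this illustrates the pattern from the traceback, it is
not the actual nova code or fix:

    class InvalidBDMVolume(Exception):
        pass

    # Failing pattern (illustration): volume_id is only bound when the lookup
    # succeeds, so the error path references an unbound local name.
    def get_volume_metadata(connection_info):
        if 'data' in connection_info and 'volume_id' in connection_info['data']:
            volume_id = connection_info['data']['volume_id']
            return {'volume_id': volume_id}
        raise InvalidBDMVolume(volume_id)  # UnboundLocalError when key is missing

    # Defensive variant: bind volume_id up front so the error path can report it.
    def get_volume_metadata_fixed(connection_info):
        volume_id = connection_info.get('data', {}).get('volume_id')
        if volume_id is None:
            raise InvalidBDMVolume("connection_info carries no volume_id")
        return {'volume_id': volume_id}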

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515457

Title:
  when delete an instance created from the volume.It will be
  unsuccessful because the connection_info has no volume_id

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  When deleting an instance created from a volume, the driver fetches the
  volume metadata like this:
 

[Yahoo-eng-team] [Bug 1515476] [NEW] Lack of task status updates at the beginning of the action 'confirm_resize'

2015-11-11 Thread javeme
Public bug reported:

Almost all of the async operations update the task state, but
confirm_resize does not.

That causes some trouble: our timer in the browser refreshes instances
based on the task state until it becomes None. If the task state is None
(every task is completed), we assume the instance status is stable and
there is no need to refresh it. But confirm_resize does not update the
task state at its start, so the task state stays None and we cannot tell
when the operation has completed unless we poll all instances in every
state.

Moreover, it should at least be consistent with revert_resize, and the
state RESIZE_CONFIRMING has been defined but never used.
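
For illustration, a minimal sketch of the kind of change being requested;
the helper names and call site are assumptions, not the actual nova code,
and RESIZE_CONFIRMING stands in for the already-defined task state
mentioned above.

# Sketch only: set a task state at the start of confirm_resize, mirroring
# what revert_resize already does. Names and call site are assumptions.
RESIZE_CONFIRMING = 'resize_confirming'


def confirm_resize(instance, cast_to_compute):
    # Record that confirmation is in progress so clients polling task_state
    # can tell the instance is not yet stable.
    instance.task_state = RESIZE_CONFIRMING
    instance.save(expected_task_state=[None])

    # Hand off to the compute service; the task state is cleared again when
    # confirmation finishes.
    cast_to_compute(instance)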

** Affects: nova
 Importance: Undecided
 Assignee: javeme (javaloveme)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515476

Title:
  Lack of task status updates at the beginning of the action
  'confirm_resize'

Status in OpenStack Compute (nova):
  New

Bug description:
  Almost all of the async operations update the task state, but
  confirm_resize does not.

  That causes some trouble: our timer in the browser refreshes instances
  based on the task state until it becomes None. If the task state is
  None (every task is completed), we assume the instance status is stable
  and there is no need to refresh it. But confirm_resize does not update
  the task state at its start, so the task state stays None and we cannot
  tell when the operation has completed unless we poll all instances in
  every state.

  Moreover, it should at least be consistent with revert_resize, and the
  state RESIZE_CONFIRMING has been defined but never used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1515476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515506] [NEW] There is no facility to name LBaaS v2 Members and Health Monitors

2015-11-11 Thread Reedip
Public bug reported:

High Level Requirement:
Currently there is no facility to name LBaaS v2 Members and Health Monitors.
Although optional, a NAME field lets users identify specific objects (in this
case Health Monitors and Members), so that any task related to these objects
can be done easily, instead of retrieving their IDs every time.

The following issue is raised to allow a new parameter 'name' to be
added to the LBaaS tables Health Monitors and Members, just like the
other LBaaS tables (listener, loadbalancer, pool) have.
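
As a rough illustration of the requested change, here is a hedged sketch of
what an optional 'name' column on the member table could look like; the
class, table, and column names are assumptions, not the actual
neutron-lbaas models.

# Sketch only -- table and class names are assumptions, not the actual
# neutron-lbaas code.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MemberV2(Base):
    __tablename__ = 'lbaas_members'

    id = sa.Column(sa.String(36), primary_key=True)
    # Optional attribute requested by this RFE, matching what listener,
    # loadbalancer and pool already expose.
    name = sa.Column(sa.String(255), nullable=True)
    address = sa.Column(sa.String(64), nullable=False)
    protocol_port = sa.Column(sa.Integer, nullable=False)

# For existing deployments an alembic migration would add the column, e.g.:
#   op.add_column('lbaas_members',
#                 sa.Column('name', sa.String(255), nullable=True))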

Pre-Conditions:
LBaaS v2 is enabled in the system.

Version:
Git ID: 321da8f6263d46bf059163bcf7fd005cf68601bd

Environment:
Ubuntu 14.04, with Devstack All In One, FWaaS, LBaaSv2 and Octavia enabled.

Perceived Severity: Medium

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New

** Affects: python-neutronclient
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New


** Tags: lbaas rfe

** Project changed: python-neutronclient => neutron

** Summary changed:

- There is no facility to name LBaaS v2 members and Health Monitors
+ There is no facility to name LBaaS v2 Members and Health Monitors

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

** Changed in: python-neutronclient
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515506

Title:
  There is no facility to name LBaaS v2 Members and Health Monitors

Status in neutron:
  New
Status in python-neutronclient:
  New

Bug description:
  High Level Requirement:
  Currently there is no facility to name LBaaS v2 Members and Health Monitors.
  Although optional, a NAME field lets users identify specific objects (in
  this case Health Monitors and Members), so that any task related to these
  objects can be done easily, instead of retrieving their IDs every time.

  The following issue is raised to allow a new parameter 'name' to be
  added to the LBaaS tables Health Monitors and Members, just like the
  other LBaaS tables (listener, loadbalancer, pool) have.

  Pre-Conditions:
  LBaaS v2 is enabled in the system.

  Version:
  Git ID: 321da8f6263d46bf059163bcf7fd005cf68601bd

  Environment:
  Ubuntu 14.04, with Devstack All In One, FWaaS, LBaaSv2 and Octavia enabled.

  Perceived Severity: Medium

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485779] [NEW] [neutron-lbaas]Delete member with non existing member id throws incorrect error message.

2015-11-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

neutron lbaas-member-delete with a non-existing member ID throws an
incorrect error message.

For example:

neutron lbaas-member-delete 852bfa31-6522-4ccf-b48c-768cd2ab5212
test_pool

Throws the following error message.

Multiple member matches found for name '852bfa31-6522-4ccf-b48c-
768cd2ab5212', use an ID to be more specific.

Example:

$ neutron lbaas-member-list pool1

+--------------------------------------+----------+---------------+--------+--------------------------------------+----------------+
| id                                   | address  | protocol_port | weight | subnet_id                            | admin_state_up |
+--------------------------------------+----------+---------------+--------+--------------------------------------+----------------+
| 64e4d9f4-c2c5-4d58-b696-21cb7cff21ad | 10.3.3.5 |            80 |      1 | e822a77b-5060-4407-a766-930d6fd8b644 | True           |
| a1a9c7a6-f9a5-4c12-9013-f0990a5f2d54 | 10.3.3.3 |            80 |      1 | e822a77b-5060-4407-a766-930d6fd8b644 | True           |
| d9d060ee-8af3-4d98-9bb9-49bb81bc4c37 | 10.2.2.3 |            80 |      1 | f6398da5-9234-4ed9-a0ca-29cbd33d44b9 | True           |
+--------------------------------------+----------+---------------+--------+--------------------------------------+----------------+

$ neutron lbaas-member-delete non-existing-uuid pool1
Multiple member matches found for name 'non-existing-uuid', use an ID to be 
more specific.
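
A minimal sketch of how such a misleading message can arise in a name-or-ID
lookup; this is an illustration under assumptions, not the actual
python-neutronclient code.

# Sketch only. One plausible (unconfirmed) explanation: the ID lookup finds
# nothing, the client falls back to a name filter, and because members have
# no 'name' attribute the filter is ignored and every member is returned,
# which then looks like "multiple matches".
def find_member_id(list_members, pool, name_or_id):
    by_id = [m for m in list_members(pool) if m['id'] == name_or_id]
    if len(by_id) == 1:
        return by_id[0]['id']

    # Fallback lookup by name; an unsupported 'name' filter may simply be
    # ignored, so all members of the pool come back.
    by_name = list_members(pool, name=name_or_id)
    if len(by_name) > 1:
        raise LookupError("Multiple member matches found for name '%s', "
                          "use an ID to be more specific." % name_or_id)
    if not by_name:
        # This is the message a non-existent UUID should produce instead.
        raise LookupError("Unable to find member with name or id '%s'"
                          % name_or_id)
    return by_name[0]['id']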

** Affects: neutron
 Importance: Medium
 Assignee: Reedip (reedip-banerjee)
 Status: Confirmed

-- 
[neutron-lbaas]Delete member with non existing member id throws incorrect error 
message.
https://bugs.launchpad.net/bugs/1485779
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515506] [NEW] There is no facility to name LBaaS v2 members and Health Monitors

2015-11-11 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

High Level Requirement:
Currently there is no facility to name LBaaS v2 Members and Health Monitors.
Although optional, a NAME field lets users identify specific objects (in this
case Health Monitors and Members), so that any task related to these objects
can be done easily, instead of retrieving their IDs every time.

The following issue is raised to allow a new parameter 'name' to be
added to the LBaaS tables Health Monitors and Members, just like the
other LBaaS tables (listener, loadbalancer, pool) have.

Pre-Conditions:
LBaaS v2 is enabled in the system.

Version:
Git ID: 321da8f6263d46bf059163bcf7fd005cf68601bd

Environment:
Ubuntu 14.04, with Devstack All In One, FWaaS, LBaaSv2 and Octavia enabled.

Perceived Severity: Medium

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas
-- 
There is no facility to name LBaaS v2 members and Health Monitors
https://bugs.launchpad.net/bugs/1515506
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515326] Re: nova.api.openstack.extensions AttributeError: '_TransactionFactory' object has no attribute '_writer_maker'

2015-11-11 Thread Davanum Srinivas (DIMS)
We've been seeing this in our regular CI as well:
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22AttributeError:%20'_TransactionFactory'%20object%20has%20no%20attribute%20'_writer_maker'%5C%22=864000s

** Also affects: oslo.db
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515326

Title:
   nova.api.openstack.extensions AttributeError: '_TransactionFactory'
  object has no attribute '_writer_maker'

Status in OpenStack Compute (nova):
  New
Status in oslo.db:
  New

Bug description:
  full log:
  http://52.27.155.124/240218/5/
  http://52.27.155.124/240218/5/screen-logs/n-api.log.gz

  
  pip freeze:
  http://52.27.155.124/240218/5/pip-freeze.txt.gz

  
  FYI: a bug that looks similar but is not the same:
  https://bugs.launchpad.net/oslo.db/+bug/1477080

  
  Related exception, for your convenience:

  opt/stack/nova/nova/api/openstack/wsgi.py:792
  2015-11-11 22:02:42.644 ERROR nova.api.openstack.extensions 
[req-3e331369-71d7-448f-bb06-6ef34762bb73 tempest-InputScenarioUtils-640491222 
tempest-InputScenarioUtils-1686267846] Unexpected exception in API method
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/flavors.py", line 39, in index
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
limited_flavors = self._get_flavors(req)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/flavors.py", line 110, in 
_get_flavors
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
limit=limit, marker=marker)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/flavors.py", line 200, in 
get_all_flavors_sorted_list
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
marker=marker)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
171, in wrapper
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions result 
= fn(cls, context, *args, **kwargs)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/flavor.py", line 269, in get_all
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
marker=marker)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/api.py", line 1442, in flavor_get_all
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
sort_dir=sort_dir, limit=limit, marker=marker)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 204, in wrapper
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 4698, in flavor_get_all
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions query = 
_flavor_get_query(context, read_deleted=read_deleted)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 4673, in _flavor_get_query
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
read_deleted=read_deleted).\
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 266, in model_query
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions session 
= get_session(use_slave=use_slave)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 172, in get_session
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
use_slave=use_slave, **kwargs)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 977, in get_session
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions return 
self._factory._writer_maker(**kwargs)
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions 
AttributeError: '_TransactionFactory' object has no attribute '_writer_maker'
  2015-11-11 22:02:42.644 36750 ERROR nova.api.openstack.extensions

To manage notifications 

[Yahoo-eng-team] [Bug 1515360] [NEW] Add more verbose to Tempest Test Errors that causes "SSHTimeout" seen in CVR and DVR

2015-11-11 Thread Swaminathan Vasudevan
Public bug reported:

Today "SSHTimeout" Errors are seen both in CVR ( Centralized Virtual Routers) 
and DVR ( Distributed Virtual Routers).
The frequency of occurence is more on DVR than the CVR.

But the problem here, is the error statement that is returned and the data that 
is dumped.
SSHTimeout may have occured due to several reasons, since in all our tempest 
test we are trying to ssh to the VM using the public IP ( FloatingIP) 
1. VM did not come up
2. VM does not have a private IP address
3. Security rules in the VM was not applied properly
4. Setting up of Floating IP
5. DNAT rules in the Router Namespace.
6. Scheduling.
7. Namespace Errors etc.,


We need a way to identify through the tempest test exactly were and what went 
wrong.
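
A hedged sketch of the kind of diagnostics a scenario test could dump when
the SSH attempt times out; the callables passed in are assumptions about
what the test already has available, not exact tempest APIs.

# Sketch only: on an SSH timeout, log enough state to distinguish the
# failure modes listed above before re-raising the error.
import logging

LOG = logging.getLogger(__name__)


def ssh_or_dump_diagnostics(connect_ssh, get_server, get_console_log,
                            server_id, floating_ip):
    try:
        return connect_ssh(floating_ip)
    except Exception:
        server = get_server(server_id)
        LOG.error("SSH to %s timed out", floating_ip)
        LOG.error("Server status: %s, addresses: %s",
                  server.get('status'), server.get('addresses'))
        LOG.error("Guest console log:\n%s", get_console_log(server_id))
        raise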

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515360

Title:
  Add more verbose to Tempest Test Errors that causes "SSHTimeout" seen
  in CVR and DVR

Status in neutron:
  New

Bug description:
  Today "SSHTimeout" Errors are seen both in CVR ( Centralized Virtual Routers) 
and DVR ( Distributed Virtual Routers).
  The frequency of occurence is more on DVR than the CVR.

  But the problem here, is the error statement that is returned and the data 
that is dumped.
  SSHTimeout may have occured due to several reasons, since in all our tempest 
test we are trying to ssh to the VM using the public IP ( FloatingIP) 
  1. VM did not come up
  2. VM does not have a private IP address
  3. Security rules in the VM was not applied properly
  4. Setting up of Floating IP
  5. DNAT rules in the Router Namespace.
  6. Scheduling.
  7. Namespace Errors etc.,

  
  We need a way to identify through the tempest test exactly were and what went 
wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515445] [NEW] / and /versions have same reply with another response

2015-11-11 Thread Atsushi SAKAI
Public bug reported:

If this is the wrong place to ask, I would appreciate being pointed to
the right place (mailing list, etc.).

GET / and GET /versions return the same reply body but with different
response codes: one is 300, the other is 200.
Is there a reason for this?
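
A small sketch to reproduce the observation (the endpoint URL is an
assumption for a typical devstack install):

# Sketch only: compare GET / and GET /versions against a glance endpoint.
import requests

BASE = "http://localhost:9292"  # assumed devstack glance API endpoint

for path in ("/", "/versions"):
    resp = requests.get(BASE + path)
    # The report says both bodies carry the same version document, while
    # "/" answers 300 Multiple Choices and "/versions" answers 200 OK.
    print(path, resp.status_code, resp.json())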

P.S.
I am checking the / and /versions APIs for API-Ref and found this issue.

Ref
https://review.openstack.org/#/c/244063/

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1515445

Title:
  / and /versions have same reply with another response

Status in Glance:
  New

Bug description:
  If this is the wrong place to ask, I would appreciate being pointed to
  the right place (mailing list, etc.).

  GET / and GET /versions return the same reply body but with different
  response codes: one is 300, the other is 200.
  Is there a reason for this?

  P.S.
  I am checking the / and /versions APIs for API-Ref and found this issue.

  Ref
  https://review.openstack.org/#/c/244063/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1515445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451943] Re: project list cache keeps growing

2015-11-11 Thread Lin Hua Cheng
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451943

Title:
  project list cache keeps growing

Status in django-openstack-auth:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The project list is cached in a dict per process. When running
  multi-process, a project switch or logout does not remove the project
  list from the cache.
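
  A minimal sketch of the growth pattern described above and the eviction
  step that is missing; names here are illustrative assumptions, not the
  actual django_openstack_auth code.

# Sketch only -- not the actual django_openstack_auth code.
_PROJECT_CACHE = {}


def get_projects(token_id, fetch_projects):
    # Memoized per token; in a long-lived multi-process deployment this
    # dict is never trimmed, which is the growth reported in this bug.
    if token_id not in _PROJECT_CACHE:
        _PROJECT_CACHE[token_id] = fetch_projects(token_id)
    return _PROJECT_CACHE[token_id]


def on_logout_or_project_switch(token_id):
    # The missing step: drop the cached entry when the session ends or the
    # user switches projects.
    _PROJECT_CACHE.pop(token_id, None)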

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1451943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515405] [NEW] 'Enum' field tests provide little value

2015-11-11 Thread Stephen Finucane
Public bug reported:

There are tests for a number of 'Enum'-type fields found in the below
file:

https://github.com/openstack/nova/blob/5beca6f332044904156b80a4c395a43a000f4413/nova/tests/unit/objects/test_fields.py#L328

However, it makes little sense to test the implementations of the Enum
field when we can (and do) validate the base class. The only reason for
retaining the tests appears to be to maintain versioning, but
'test_objects.py' takes care of this for us. Therefore, we should delete
the tests and thus reduce both the LOC count and the amount of new code
folks need to write when adding new fields.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  There are tests for a number of 'Enum'-type fields found in the below
  file:
  
  
https://github.com/openstack/nova/blob/5beca6f332044904156b80a4c395a43a000f4413/nova/tests/unit/objects/test_fields.py#L328
  
  However, it makes little sense to test the implementations of the Enum
  field when we can (and do) validate the base class. The only reason for
  retaining the tests appears to be to maintain versioning, but
  'test_objects.py' takes care of this for us. Therefore, we should delete
- the tests and avoid thus both reducing LOC count and avoiding the need
- for folks to add these in future versions.
+ the tests and thus reduce both the LOC count and the amount of new code
+ folks need to write when adding new fields.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515405

Title:
  'Enum' field tests provide little value

Status in OpenStack Compute (nova):
  New

Bug description:
  There are tests for a number of 'Enum'-type fields found in the below
  file:

  
https://github.com/openstack/nova/blob/5beca6f332044904156b80a4c395a43a000f4413/nova/tests/unit/objects/test_fields.py#L328

  However, it makes little sense to test the implementations of the Enum
  field when we can (and do) validate the base class. The only reason
  for retaining the tests appears to be to maintain versioning, but
  'test_objects.py' takes care of this for us. Therefore, we should
  delete the tests and thus reduce both the LOC count and the amount of
  new code folks need to write when adding new fields.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1515405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475570] Re: Horizon fails getting container lists

2015-11-11 Thread Matthias Runge
** Changed in: horizon
   Status: New => Confirmed

** Also affects: swift
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475570

Title:
  Horizon fails getting container lists

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Object Storage (swift):
  New

Bug description:
  Env: Centos7, Kilo install

  When a user logs in to Horizon, he is able to see the object store and
  can do standard operations, but after a while he gets the error message
  "Error getting container lists" when accessing the object store via
  Horizon. Other access (such as Glance) is not affected.
  The error in the log file seems to be related to authentication:

  


  [Thu Jul 16 13:51:25.567348 2015] [:error] [pid 8912] Deleted Object: 
"CentOS-7-x86_64-Minimal-1503-01.iso"
  [Thu Jul 16 13:53:26.124737 2015] [:error] [pid 8912] REQ: curl -i 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json=1001
 -X GET -H "X-Auth-Token: 00ef836291544eff88ecc32af3d96a28"
  [Thu Jul 16 13:53:26.124811 2015] [:error] [pid 8912] RESP STATUS: 401 
Unauthorized
  [Thu Jul 16 13:53:26.124882 2015] [:error] [pid 8912] RESP HEADERS: [('date', 
'Thu, 16 Jul 2015 14:51:25 GMT'), ('content-length', '131'), ('content-type', 
'text/html; charset=UTF-8'), ('www-authenticate', 'Swift 
realm="AUTH_845391df95974ed7ac02755b493afb05", Keystone 
uri=\\'http://keystone:5000/v2.0\\''), ('x-trans-id', 
'tx6cd52d254a0c43a8b6cda-0055a7c4ed')]
  [Thu Jul 16 13:53:26.124929 2015] [:error] [pid 8912] RESP BODY: 
UnauthorizedThis server could not verify that you are 
authorized to access the document you requested.
  [Thu Jul 16 13:53:26.125459 2015] [:error] [pid 8912] Account GET failed: 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json=1001
 401 Unauthorized  [first 60 chars of response] 
UnauthorizedThis server could not verify t
  [Thu Jul 16 13:53:26.125472 2015] [:error] [pid 8912] Traceback (most recent 
call last):
  [Thu Jul 16 13:53:26.125475 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1261, in _retry
  [Thu Jul 16 13:53:26.125478 2015] [:error] [pid 8912] rv = func(self.url, 
self.token, *args, **kwargs)
  [Thu Jul 16 13:53:26.125481 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 474, in 
get_account
  [Thu Jul 16 13:53:26.125483 2015] [:error] [pid 8912] end_marker, 
http_conn)
  [Thu Jul 16 13:53:26.125485 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 509, in 
get_account
  [Thu Jul 16 13:53:26.125501 2015] [:error] [pid 8912] 
http_response_content=body)
  [Thu Jul 16 13:53:26.125504 2015] [:error] [pid 8912] ClientException: 
Account GET failed: 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json=1001
 401 Unauthorized  [first 60 chars of response] 
UnauthorizedThis server could not verify t
  [Thu Jul 16 13:53:26.125740 2015] [:error] [pid 8912] Recoverable error: 
Account GET failed: 
http://swift:8080/v1/AUTH_845391df95974ed7ac02755b493afb05?format=json=1001
 401 Unauthorized  [first 60 chars of response] 
UnauthorizedThis server could not verify t
  [Thu Jul 16 13:53:26.125971 2015] [:error] [pid 8912] No tenant specified
  [Thu Jul 16 13:53:26.125979 2015] [:error] [pid 8912] Traceback (most recent 
call last):
  [Thu Jul 16 13:53:26.125982 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1253, in _retry
  [Thu Jul 16 13:53:26.125985 2015] [:error] [pid 8912] self.url, 
self.token = self.get_auth()
  [Thu Jul 16 13:53:26.125987 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1227, in get_auth
  [Thu Jul 16 13:53:26.125989 2015] [:error] [pid 8912] 
insecure=self.insecure)
  [Thu Jul 16 13:53:26.125992 2015] [:error] [pid 8912]   File 
"/usr/lib/python2.7/site-packages/swiftclient/client.py", line 413, in get_auth
  [Thu Jul 16 13:53:26.125994 2015] [:error] [pid 8912] raise 
ClientException('No tenant specified')
  [Thu Jul 16 13:53:26.125996 2015] [:error] [pid 8912] ClientException: No 
tenant specified
  [Thu Jul 16 13:53:26.126167 2015] [:error] [pid 8912] Recoverable error: No 
tenant specified

  


  
  Glance, which talks to swift, is working fine.
  Swift from the command line is also working fine, though I need to force
  v2 auth on the command line (swift -V 2 .).

  Any hint?
  Thanks,
Giuseppe

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475570/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2015-11-11 Thread Jaxon Wang
** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: sahara
 Assignee: (unassigned) => Jaxon Wang (jxwang92)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  In Progress
Status in neutron:
  Fix Committed
Status in Sahara:
  New
Status in senlin:
  Fix Committed

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when the py27 run precedes the py34 run.
  It can be solved by erasing the .testrepository directory and running
  "tox -e py34" first of all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp