[Yahoo-eng-team] [Bug 1381311] Re: Dashboard internationalization

2014-10-15 Thread Akihiro Motoki
The installation section of Horizon developer docs also covers this:
http://docs.openstack.org/developer/horizon/topics/install.html#installation

** Changed in: horizon
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381311

Title:
  Dashboard internationalization

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I change the language to zh-cn, the dashboard still shows in
  English.
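
  For illustration, a hedged sketch of the settings usually checked for
  this symptom (local_settings.py; values are placeholders, and
  translations also require compiled message catalogs, e.g.
  django-admin compilemessages):

  # Hedged sketch, not the reporter's configuration.
  USE_I18N = True          # must be True for Django to apply translations
  LANGUAGES = (
      ('en', 'English'),
      ('zh-cn', 'Simplified Chinese'),  # the locale reported in this bug
  )
  LANGUAGE_CODE = 'en'     # fallback when no translation is available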

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381379] [NEW] Using postgresql and creating a security group rule with protocol value as integer getting DBAPIError exception

2014-10-15 Thread Ashish Kumar Gupta
Public bug reported:

Using PostgreSQL, creating a security group rule with the protocol value
as an integer fails with a DBAPIError exception wrapped from "operator
does not exist".

Running the Jenkins job check-tempest-dsvm-ironic-pxe_ssh-postgres-nv fails.

Code:
curl -i -X POST http://$Server_ip:9696/v2.0/security-group-rules.json \
  -H "User-Agent: python-neutronclient" -H "X-Auth-Token: $TOKENID" \
  -d '{"security_group_rule": {"ethertype": "IPv4", "direction": "ingress",
       "protocol": 17, "security_group_id": "$Security_goup_id"}}'


Error in the log:
2014-10-15 06:24:22.756 23647 DEBUG neutron.policy 
[req-4e3855ad-ef66-4a63-b69d-7351d4a1a4b3 None] Enforcing rules: 
['create_security_group_rule'] _build_match_rule 
/opt/stack/new/neutron/neutron/policy.py:221
2014-10-15 06:24:22.774 23647 ERROR oslo.db.sqlalchemy.exc_filters 
[req-4e3855ad-ef66-4a63-b69d-7351d4a1a4b3 ] DBAPIError exception wrapped from 
(ProgrammingError) operator does not exist: character varying = integer
LINE 3: ...on IN ('ingress') AND securitygrouprules.protocol IN (17) AN...
 ^
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts.
 'SELECT securitygrouprules.tenant_id AS securitygrouprules_tenant_id, 
securitygrouprules.id AS securitygrouprules_id, 
securitygrouprules.security_group_id AS securitygrouprules_security_group_id, 
securitygrouprules.remote_group_id AS securitygrouprules_remote_group_id, 
securitygrouprules.direction AS securitygrouprules_direction, 
securitygrouprules.ethertype AS securitygrouprules_ethertype, 
securitygrouprules.protocol AS securitygrouprules_protocol, 
securitygrouprules.port_range_min AS securitygrouprules_port_range_min, 
securitygrouprules.port_range_max AS securitygrouprules_port_range_max, 
securitygrouprules.remote_ip_prefix AS securitygrouprules_remote_ip_prefix 
\nFROM securitygrouprules \nWHERE securitygrouprules.tenant_id = 
%(tenant_id_1)s AND securitygrouprules.tenant_id IN (%(tenant_id_2)s) AND 
securitygrouprules.direction IN (%(direction_1)s) AND 
securitygrouprules.protocol IN (%(protocol_1)s) AND 
securitygrouprules.ethertype IN (%(ethertype_1)s) AND securitygrouprules.securi
 ty_group_id IN (%(security_group_id_1)s)' {'direction_1': u'ingress', 
'tenant_id_2': u'a0ec4b20678a472ebbab28526cb53fef', 'ethertype_1': 'IPv4', 
'protocol_1': 17, 'tenant_id_1': u'a0ec4b20678a472ebbab28526cb53fef', 
'security_group_id_1': u'e9936f7a-00dd-4afe-9871-f1ab21fe7ea4'}
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py,
 line 59, in _handle_dbapi_exception
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters e, 
statement, parameters, cursor, context)
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1024, in 
_handle_dbapi_exception
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters exc_info
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 196, in 
raise_from_cause
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
reraise(type(exception), exception, tb=exc_tb)
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 867, in 
_execute_context
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters context)
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 324, in 
do_execute
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
ProgrammingError: (ProgrammingError) operator does not exist: character varying 
= integer
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters LINE 3: 
...on IN ('ingress') AND securitygrouprules.protocol IN (17) AN...
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters  
^
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters HINT:  No 
operator matches the given name and argument type(s). You might need to add 
explicit type casts.
2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters  'SELECT 
securitygrouprules.tenant_id AS securitygrouprules_tenant_id, 
securitygrouprules.id AS securitygrouprules_id, 
securitygrouprules.security_group_id AS securitygrouprules_security_group_id, 
securitygrouprules.remote_group_id AS securitygrouprules_remote_group_id, 
securitygrouprules.direction AS securitygrouprules_direction, 
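
The log above is cut off; for illustration, a minimal standalone sketch of
the type mismatch it reports (assuming a varchar column like
securitygrouprules.protocol; SQLite is used here only because it coerces
such comparisons where PostgreSQL refuses):

from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Rule(Base):
    __tablename__ = 'rules'
    id = Column(String(36), primary_key=True)
    protocol = Column(String(40))  # varchar, like securitygrouprules.protocol

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

# On PostgreSQL this raises "operator does not exist: character varying = integer":
#     session.query(Rule).filter(Rule.protocol.in_([17])).first()
# Normalizing the value to the column type works on both backends:
session.query(Rule).filter(Rule.protocol.in_([str(17)])).first()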

[Yahoo-eng-team] [Bug 1380689] Re: Deprecated options in etc/glance-api.conf

2014-10-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/128373
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=96e28428655aa7122ed74b045ff1bda1984255b1
Submitter: Jenkins
Branch: proposed/juno

commit 96e28428655aa7122ed74b045ff1bda1984255b1
Author: Nikhil Komawar nikhilskoma...@gmail.com
Date:   Tue Oct 14 13:09:48 2014 -0400

Fix options and their groups - etc/glance-api.conf

As per the docs at [0] , some of the options should have been moved
around in the etc/glance-api.conf. This patch changes the conf file to:

1. indicate new default values
2. change the group of some of the configs in order to adhere to
   new groups as expected by the deployer.
3. deprecated configs have been removed or replaced with new ones.

[0] 
http://docs.openstack.org/trunk/config-reference/content/glance-conf-changes-master.html

Fixes bug: 1380689

Change-Id: I5b5ab96b050b502007e6660a7a613e252404d4e8


** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1380689

Title:
  Deprecated options in etc/glance-api.conf

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  According to
  http://docs.openstack.org/trunk/config-reference/content/glance-conf-changes-master.html,
  a couple of options moved from [DEFAULT] to [glance_store], but in
  etc/glance-api.conf the options are still listed under [DEFAULT], where
  they are now deprecated.
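
  For illustration, a hedged sketch of the expected layout after the move
  (option names come from the glance_store library; values are
  placeholders):

  [glance_store]
  stores = file,http
  default_store = file
  filesystem_store_datadir = /var/lib/glance/images/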

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1380689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361613] Re: auth fragments deprecated - authentication.rst doc needs updating.

2014-10-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/128359
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=9b176a278116849c8f7b7f4d9a987f37ec52779c
Submitter: Jenkins
Branch: proposed/juno

commit 9b176a278116849c8f7b7f4d9a987f37ec52779c
Author: Andy McCrae andy.mcc...@gmail.com
Date:   Sat Oct 11 20:56:36 2014 +0100

Adjust authentication.rst doc to reference identity_uri

The auth_port, auth_host, and auth_protocol variables were
deprecated in favour of a single identity_uri variable.

* Adjust authentication.rst doc to reference identity_uri

Change-Id: I48de53f21b8d767b276858ed274066015d765f0e
Closes-Bug: #1361613


** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361613

Title:
  auth fragments deprecated - authentication.rst doc needs updating.

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The auth_port, auth_protocol and auth_host variables are deprecated in
  favour of identity_uri:
  2014-08-26 11:13:43.764 8009 WARNING keystonemiddleware.auth_token [-] 
Configuring admin URI using auth fragments. This is deprecated, use 
'identity_uri' instead.

  https://bugs.launchpad.net/glance/+bug/1380032 took care of the sample confs.
  authentication.rst doc now needs to be updated to reflect this.
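
  A hedged sketch of the change the doc needs to describe (host, port and
  the section name are the conventional keystonemiddleware ones; values
  are placeholders):

  # deprecated auth fragments
  [keystone_authtoken]
  auth_protocol = http
  auth_host = 127.0.0.1
  auth_port = 35357

  # preferred replacement
  [keystone_authtoken]
  identity_uri = http://127.0.0.1:35357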

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1361613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381413] [NEW] Switch Region dropdown doesn't work

2014-10-15 Thread Timur Sufiev
Public bug reported:

When Horizon is set up to work with multiple regions (by editing
AVAILABLE_REGIONS in settings.py), a region selector drop-down appears in
the top right corner. But it doesn't work now.

Suppose I log in to Region1; if I then try to switch to Region2, it
redirects me to the login view of django-openstack-auth:
https://github.com/openstack/horizon/blob/2014.2.rc1/horizon/templates/horizon/common/_region_selector.html#L11

There I am immediately redirected to settings.LOGIN_REDIRECT_URL because I
am already authenticated at Region1, so I cannot view Region2 resources if
I switch via the top right dropdown. Selecting a region at the login page
works, though.
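
For reference, a hedged sketch of the multi-region setup that exposes the
selector (settings.py; the endpoint URLs are placeholders, not the
reporter's deployment):

AVAILABLE_REGIONS = [
    ('http://region1.example.com:5000/v2.0', 'Region1'),
    ('http://region2.example.com:5000/v2.0', 'Region2'),
]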

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381413

Title:
  Switch Region dropdown doesn't work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When Horizon is set up to work with multiple regions (by editing
  AVAILABLE_REGIONS in settings.py), a region selector drop-down appears
  in the top right corner. But it doesn't work now.

  Suppose I log in to Region1; if I then try to switch to Region2, it
  redirects me to the login view of django-openstack-auth:
  https://github.com/openstack/horizon/blob/2014.2.rc1/horizon/templates/horizon/common/_region_selector.html#L11

  There I am immediately redirected to settings.LOGIN_REDIRECT_URL because
  I am already authenticated at Region1, so I cannot view Region2
  resources if I switch via the top right dropdown. Selecting a region at
  the login page works, though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381412] [NEW] l3-ha sometimes results in non-functional state

2014-10-15 Thread IWAMOTO Toshihiro
Public bug reported:

Create an OpenStack system with 2 nodes with devstack and issue the
following:

$ neutron router-create routerha --ha True
$ neutron net-create priv2
$ neutron subnet-create --ip_version 4 --name priv2-subnet priv2 10.10.0.0/20
$ neutron router-interface-add routerha priv2-subnet
$ neutron router-gateway-set routerha public

Sometimes an IP address is set on the backup router, which means the
router doesn't work.

There seems to be some subtle timing related bug with keepalived.  I'm
having trouble obtaining a useful log, as this problem is not always
reproducible.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381412

Title:
  l3-ha sometimes results in non-functional state

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Create an OpenStack system with 2 nodes with devstack and issue the
  following:

  $ neutron router-create routerha --ha True
  $ neutron net-create priv2
  $ neutron subnet-create --ip_version 4 --name priv2-subnet priv2 10.10.0.0/20
  $ neutron router-interface-add routerha priv2-subnet
  $ neutron router-gateway-set routerha public

  Sometimes an IP address is set on the backup router, which means the
  router doesn't work.

  There seems to be some subtle timing related bug with keepalived.  I'm
  having trouble obtaining a useful log, as this problem is not always
  reproducible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381414] [NEW] Unit test failure AssertionError: Expected to be called once. Called 2 times. in test_get_port_vnic_info_3

2014-10-15 Thread Hans Lindgren
Public bug reported:

This looks to be due to tests test_get_port_vnic_info_2 and 3 sharing
some code and is easily reproduced by running these two tests alone with
no concurrency.

./run_tests.sh --concurrency 1 test_get_port_vnic_info_2
test_get_port_vnic_info_3

The above always results in:

Traceback (most recent call last):
  File /home/hans/nova/nova/tests/network/test_neutronv2.py, line 2615, in 
test_get_port_vnic_info_3
self._test_get_port_vnic_info()
  File /home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py, line 
1201, in patched
return func(*args, **keywargs)
  File /home/hans/nova/nova/tests/network/test_neutronv2.py, line 2607, in 
_test_get_port_vnic_info
fields=['binding:vnic_type', 'network_id'])
  File /home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py, line 
845, in assert_called_once_with
raise AssertionError(msg)
AssertionError: Expected to be called once. Called 2 times.
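
For illustration, a minimal standalone sketch of this failure mode (names
are illustrative, not nova's):

import mock

api = mock.Mock()

def _helper():
    # shared helper, as with _test_get_port_vnic_info
    api.list_ports(fields=['binding:vnic_type', 'network_id'])
    api.list_ports.assert_called_once_with(
        fields=['binding:vnic_type', 'network_id'])

_helper()                    # passes: exactly one call recorded
api.list_ports.reset_mock()  # without this, the next assert sees 2 calls
_helper()                    # passes only because the mock was reset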

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381414

Title:
  Unit test failure AssertionError: Expected to be called once. Called
  2 times. in test_get_port_vnic_info_3

Status in OpenStack Compute (Nova):
  New

Bug description:
  This looks to be due to tests test_get_port_vnic_info_2 and 3 sharing
  some code and is easily reproduced by running these two tests alone
  with no concurrency.

  ./run_tests.sh --concurrency 1 test_get_port_vnic_info_2
  test_get_port_vnic_info_3

  The above always results in:

  Traceback (most recent call last):
File /home/hans/nova/nova/tests/network/test_neutronv2.py, line 2615, in 
test_get_port_vnic_info_3
  self._test_get_port_vnic_info()
File /home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py, 
line 1201, in patched
  return func(*args, **keywargs)
File /home/hans/nova/nova/tests/network/test_neutronv2.py, line 2607, in 
_test_get_port_vnic_info
  fields=['binding:vnic_type', 'network_id'])
File /home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py, 
line 845, in assert_called_once_with
  raise AssertionError(msg)
  AssertionError: Expected to be called once. Called 2 times.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381419] [NEW] glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker failed in periodic stable job run

2014-10-15 Thread Ihar Hrachyshka
Public bug reported:

The traceback is as follows:

ft1.1942: 
glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker_StringException:
 Traceback (most recent call last):
  File glance/tests/unit/v2/test_images_resource.py, line 378, in 
test_index_with_marker
self.assertTrue(UUID2 in actual)
  File /usr/lib64/python2.6/unittest.py, line 324, in failUnless
if not expr: raise self.failureException, msg
AssertionError

** Affects: glance
 Importance: Critical
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1381419

Title:
  
glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker
  failed in periodic stable job run

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The traceback is as follows:

  ft1.1942: 
glance.tests.unit.v2.test_images_resource.TestImagesController.test_index_with_marker_StringException:
 Traceback (most recent call last):
File glance/tests/unit/v2/test_images_resource.py, line 378, in 
test_index_with_marker
  self.assertTrue(UUID2 in actual)
File /usr/lib64/python2.6/unittest.py, line 324, in failUnless
  if not expr: raise self.failureException, msg
  AssertionError

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1381419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328540] Re: Clicks on ajax-modal buttons won't work as expected until portion of code from horizon.modals.js is loaded

2014-10-15 Thread Timur Sufiev
** Changed in: horizon
   Status: In Progress => Opinion

** Changed in: horizon
 Assignee: Timur Sufiev (tsufiev-x) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1328540

Title:
  Clicks on ajax-modal buttons won't work as expected until portion of
  code from horizon.modals.js is loaded

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  It was reported on the Safari browser: if some parts of the page are
  already loaded (including some buttons with the ajax-modal class), then
  clicking an ajax-modal button before the js-handler for .ajax-modal is
  loaded will cause a non-ajax request, which will probably fail if no
  special template for non-ajax requests was provided.

  One solution is to provide a non-ajax template for each .ajax-modal link
  for that case - but it seems a bit ugly to me (no fun in showing a
  non-modal form where a modal one should be).

  The solution I've adopted in Murano for that problem is to temporarily
  disable all clicks until the page is finally loaded (with a
  transparent div covering all the body contents, which is hidden once
  page is loaded). I'd like to propose the same approach for Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1328540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381425] [NEW] nova cells, force-delete VM throws error even if VM gets deleted

2014-10-15 Thread Rajesh Tailor
Public bug reported:

In a nova cells environment, when trying to force-delete an instance that
is in 'active' state, the instance gets deleted successfully, but the
nova-cells service in the compute cell (n-cell-child) throws the following
error:
InvalidRequestError: Object '<Instance at 0x7fc5fcf581d0>' is already
attached to session '75' (this is '79')

Reproduction steps:
1) Create instance.
2) Wait until instance becomes 'active'.
3) Try to force-delete the instance.
$ nova force-delete instance_id

Found this error in nova-cells service in compute cell (n-cell-child
service):

2014-10-15 01:59:36.742 ERROR nova.cells.messaging 
[req-7c1615ad-491d-4af8-88d7-ff83563ef429 admin admin] Error processing message 
locally: Object 'Ins
tance at 0x7fc5fcf581d0' is already attached to session '75' (this is '79')
2014-10-15 01:59:36.742 TRACE nova.cells.messaging Traceback (most recent call 
last):
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 199, in _process_locally
2014-10-15 01:59:36.742 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 1293, in 
_process_message_locally
2014-10-15 01:59:36.742 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 698, in run_compute_api_method
2014-10-15 01:59:36.742 TRACE nova.cells.messaging return fn(message.ctxt, 
*args, **method_info['method_kwargs'])
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 219, in wrapped
2014-10-15 01:59:36.742 TRACE nova.cells.messaging return func(self, 
context, target, *args, **kwargs)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 209, in inner
2014-10-15 01:59:36.742 TRACE nova.cells.messaging return function(self, 
context, instance, *args, **kwargs)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 190, in inner
2014-10-15 01:59:36.742 TRACE nova.cells.messaging return f(self, context, 
instance, *args, **kw)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 1836, in force_delete
2014-10-15 01:59:36.742 TRACE nova.cells.messaging 
self._delete_instance(context, instance)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 1790, in _delete_instance2014-10-15 
01:59:36.742 TRACE nova.cells.messaging task_state=task_states.DELETING)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 1622, in _delete
2014-10-15 01:59:36.742 TRACE nova.cells.messaging quotas.rollback()
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py, line 82, in 
__exit__
2014-10-15 01:59:36.742 TRACE nova.cells.messaging six.reraise(self.type_, 
self.value, self.tb)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 1550, in _delete
2014-10-15 01:59:36.742 TRACE nova.cells.messaging instance.save()
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/db/sqlalchemy/models.py, line 52, in save
2014-10-15 01:59:36.742 TRACE nova.cells.messaging super(NovaBase, 
self).save(session=session)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/models.py, line 47, 
in save
2014-10-15 01:59:36.742 TRACE nova.cells.messaging session.add(self)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1399, in add
2014-10-15 01:59:36.742 TRACE nova.cells.messaging 
self._save_or_update_state(state)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1411, in 
_save_or_update_state
2014-10-15 01:59:36.742 TRACE nova.cells.messaging 
self._save_or_update_impl(state)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1667, in 
_save_or_update_impl
2014-10-15 01:59:36.742 TRACE nova.cells.messaging self._update_impl(state)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1661, in 
_update_impl
2014-10-15 01:59:36.742 TRACE nova.cells.messaging self._attach(state)
2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1749, in 
_attach
2014-10-15 01:59:36.742 TRACE nova.cells.messaging state.session_id, 
self.hash_key))
2014-10-15 
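
For illustration, a minimal standalone reproduction of the SQLAlchemy
error reported above (model and session names are illustrative, not
nova's):

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.exc import InvalidRequestError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Obj(Base):
    __tablename__ = 'obj'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
s1, s2 = Session(engine), Session(engine)

o = Obj(id=1)
s1.add(o)       # the object is now attached to session s1
try:
    s2.add(o)   # adding it to a second session raises the error above
except InvalidRequestError as e:
    print(e)    # ... is already attached to session '1' (this is '2')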

[Yahoo-eng-team] [Bug 1381468] [NEW] Type conflict in nova/nova/scheduler/filters/trusted_filter.py using attestation_port default value

2014-10-15 Thread Bartosz Fic
Public bug reported:

When the trusted filter in the nova scheduler is running with the default
value of attestation_port:

cfg.StrOpt('attestation_port', default='8443', help='Attestation server
port'),

the _do_request() method in the AttestationService class has this line:

action_url = "https://%s:%d%s/%s" % (self.host, self.port, self.api_url,
action_url)

It is easy to see that the default type of attestation_port is string,
but action_url formats self.port as an integer (%d). This leads to a
conflict.
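
A minimal reproduction of the conflict and the obvious fixes (switching
the option to cfg.IntOpt, or casting at the call site; names below are
placeholders):

host, port, api_url, action = 'example.com', '8443', '/v1', 'verify'

try:
    url = "https://%s:%d%s/%s" % (host, port, api_url, action)
except TypeError as e:
    print(e)  # %d format: a number is required, not str

url = "https://%s:%d%s/%s" % (host, int(port), api_url, action)  # works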

** Affects: nova
 Importance: Undecided
 Assignee: Bartosz Fic (bartosz-fic)
 Status: In Progress


** Tags: low-hanging-fruit nova

** Changed in: nova
 Assignee: (unassigned) => Bartosz Fic (bartosz-fic)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381468

Title:
  Type conflict in nova/nova/scheduler/filters/trusted_filter.py using
  attestation_port default value

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When the trusted filter in the nova scheduler is running with the
  default value of attestation_port:

  cfg.StrOpt('attestation_port', default='8443', help='Attestation
  server port'),

  the _do_request() method in the AttestationService class has this line:

  action_url = "https://%s:%d%s/%s" % (self.host, self.port,
  self.api_url, action_url)

  It is easy to see that the default type of attestation_port is string,
  but action_url formats self.port as an integer (%d). This leads to a
  conflict.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381472] [NEW] Error on compress: Invalid block tag: 'blocktrans', expected 'empty' or 'endfor'

2014-10-15 Thread Doug Fish
Public bug reported:

While investigating https://bugs.launchpad.net/horizon/+bug/1379761
Gloria Gu discovered this problem:

[stack@gloria-stack:/home/stack/horizon]↥ master+ ± tools/with_venv.sh python 
manage.py compress --force
Invalid template 
/home/stack/horizon/horizon/templates/horizon/common/_formset_table_row.html: 
Invalid block tag: 'blocktrans', expected 'empty' or 'endfor'
Found 'compress' tags in:
/home/stack/horizon/openstack_dashboard/templates/_stylesheets.html
/home/stack/horizon/horizon/templates/horizon/_scripts.html
/home/stack/horizon/horizon/templates/horizon/_conf.html
Compressing... No handlers could be found for logger scss.expression
done
Compressed 4 block(s) from 3 template(s).

** Affects: horizon
 Importance: Medium
 Assignee: Doug Fish (drfish)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

** Changed in: horizon
   Status: New => Triaged

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: horizon
 Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381472

Title:
  Error on compress:  Invalid block tag: 'blocktrans', expected 'empty'
  or 'endfor'

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  While investigating https://bugs.launchpad.net/horizon/+bug/1379761
  Gloria Gu discovered this problem:

  [stack@gloria-stack:/home/stack/horizon]↥ master+ ± tools/with_venv.sh python 
manage.py compress --force
  Invalid template 
/home/stack/horizon/horizon/templates/horizon/common/_formset_table_row.html: 
Invalid block tag: 'blocktrans', expected 'empty' or 'endfor'
  Found 'compress' tags in:
  /home/stack/horizon/openstack_dashboard/templates/_stylesheets.html
  /home/stack/horizon/horizon/templates/horizon/_scripts.html
  /home/stack/horizon/horizon/templates/horizon/_conf.html
  Compressing... No handlers could be found for logger scss.expression
  done
  Compressed 4 block(s) from 3 template(s).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381504] [NEW] payload in notifier messages for object.create.end and object.delete.end very different.

2014-10-15 Thread Vadim Rovachev
Public bug reported:

The payload in the notifier message for object.create.end contains all
information for the object, but the payload in the notifier message for
object.delete.end contains only the object id.
Example(rpc backend: rabbit, queue:notifications.info, object: subnet):
http://paste.openstack.org/show/121233/

Example(rpc backend: rabbit, queue:notifications.info, object: network):
http://paste.openstack.org/show/121232/


Expected payload on notifier message subnet.delete.end:

payload: {"subnet": {"name": "qwe-sub", "enable_dhcp": true,
"network_id": "41de06fa-c565-4393-b17e-662cf2ee6c6e", "tenant_id":
"fdf483b8a1074d6dbff849d8aab49415", "dns_nameservers": [], "gateway_ip":
"100.1.1.1", "ipv6_ra_mode": null, "allocation_pools": [{"start":
"100.1.1.2", "end": "100.1.1.254"}], "host_routes": [], "ip_version": 4,
"ipv6_address_mode": null, "cidr": "100.1.1.0/24", "id":
"54a1b017-418d-4169-b86a-e6c62cc58e19"}}

Actual payload on notifier message subnet.delete.end:

payload: {"subnet_id": "54a1b017-418d-4169-b86a-e6c62cc58e19"}

The payloads in the object.create.end and object.delete.end notify
messages should be equal.

** Affects: neutron
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381504

Title:
  payload in notifier messages for object.create.end and
  object.delete.end very different.

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The payload in the notifier message for object.create.end contains all
  information for the object, but the payload in the notifier message for
  object.delete.end contains only the object id.

  Example(rpc backend: rabbit, queue:notifications.info, object: subnet):
  http://paste.openstack.org/show/121233/

  Example(rpc backend: rabbit, queue:notifications.info, object: network):
  http://paste.openstack.org/show/121232/

  
  Expected payload on notifier message subnet.delete.end:

  payload: {"subnet": {"name": "qwe-sub", "enable_dhcp": true,
  "network_id": "41de06fa-c565-4393-b17e-662cf2ee6c6e", "tenant_id":
  "fdf483b8a1074d6dbff849d8aab49415", "dns_nameservers": [],
  "gateway_ip": "100.1.1.1", "ipv6_ra_mode": null, "allocation_pools":
  [{"start": "100.1.1.2", "end": "100.1.1.254"}], "host_routes": [],
  "ip_version": 4, "ipv6_address_mode": null, "cidr": "100.1.1.0/24",
  "id": "54a1b017-418d-4169-b86a-e6c62cc58e19"}}

  Actual payload on notifier message subnet.delete.end:

  payload: {"subnet_id": "54a1b017-418d-4169-b86a-e6c62cc58e19"}

  The payloads in the object.create.end and object.delete.end notify
  messages should be equal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379761] Re: Asset compression does not happen unless debug mode is enabled

2014-10-15 Thread Launchpad Bug Tracker
This bug was fixed in the package horizon - 1:2014.2~rc2-0ubuntu2

---
horizon (1:2014.2~rc2-0ubuntu2) utopic; urgency=medium

  * Resolve issues with missing static assets and failing compression
(LP: #1379761):
- d/openstack-dashboard*.postinst: Collect and compress static assets
  during installation.
- d/rules: Drop explicit link to bootstrap scss resources
- d/p/ubuntu_settings.patch: Switch back to using offline compression.
 -- James Page james.p...@ubuntu.com   Wed, 15 Oct 2014 10:10:08 +0100

** Changed in: horizon (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1379761

Title:
  Asset compression does not happen unless debug mode is enabled

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in “horizon” package in Ubuntu:
  Fix Released
Status in “python-django-pyscss” package in Ubuntu:
  Invalid

Bug description:
  Juno/rc1 of OpenStack on utopic; the dashboard is unthemed and the
  compressed assets are missing unless DEBUG = True in local settings,
  at which point things look much better.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.10
  Package: openstack-dashboard 1:2014.2~rc1-0ubuntu3 [modified: 
usr/share/openstack-dashboard/openstack_dashboard/enabled/_40_router.py]
  ProcVersionSignature: User Name 3.16.0-20.27-generic 3.16.3
  Uname: Linux 3.16.0-20-generic x86_64
  ApportVersion: 2.14.7-0ubuntu5
  Architecture: amd64
  Date: Fri Oct 10 11:26:21 2014
  Ec2AMI: ami-00af
  Ec2AMIManifest: FIXME
  Ec2AvailabilityZone: nova
  Ec2InstanceType: m1.small
  Ec2Kernel: aki-0002
  Ec2Ramdisk: ari-0002
  PackageArchitecture: all
  SourcePackage: horizon
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.apache2.conf.available.openstack.dashboard.conf: 
2014-10-10T11:25:49.335633
  mtime.conffile..etc.openstack.dashboard.local.settings.py: 
2014-10-10T11:25:49.307619

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1379761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381531] [NEW] Add graphic distinction between external and internal networks at 'Network Topology' page

2014-10-15 Thread Ekaterina Chernova
Public bug reported:

For admin users it would be good to distinguish different kinds of
networks on the Topology page. A network could have any name, so to make
sure a network is external the user needs to go to the Admin page (Network
Topology is under Project), pick Networks, and click the network name to
see the details. Only from that page can the user find out the type of
network.

So I suggest adding some kind of mark to distinguish external and
internal networks. It could be an image of a globe or just the letters
'ext'.

Also, the different colors of networks bring confusion since there is no
special meaning behind them. Maybe we should get rid of the different
colors in the network topology?

** Affects: horizon
 Importance: Wishlist
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381531

Title:
  Add graphic distinction between external and internal networks at
  'Network Topology' page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For admin users it would be good to distinguish different kinds of
  networks on the Topology page. A network could have any name, so to
  make sure a network is external the user needs to go to the Admin page
  (Network Topology is under Project), pick Networks, and click the
  network name to see the details. Only from that page can the user find
  out the type of network.

  So I suggest adding some kind of mark to distinguish external and
  internal networks. It could be an image of a globe or just the letters
  'ext'.

  Also, the different colors of networks bring confusion since there is
  no special meaning behind them. Maybe we should get rid of the
  different colors in the network topology?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381536] [NEW] ResourceClosedError occurs when create_port from DHCP agent and create_port from neutron API run in parallel

2014-10-15 Thread Ian Ong
Public bug reported:

When the DHCP agent creates a port through neutron and another create-port
request is received at the same time, a ResourceClosedError occurs in
sqlalchemy.

This may be related to bug #1282922 https://bugs.launchpad.net/bugs/1282922
That bug is about the NEC plugin, and it mentions that other plugins may be
affected. This error occurred in the ML2 plugin for both create and delete
ports.
Tested using 2014.3 Icehouse

==
2014-10-15 21:58:59.837 26167 INFO neutron.wsgi [-] (26167) accepted 
('172.16.2.86', 47007)

2014-10-15 21:58:59.870 26167 INFO neutron.wsgi [req-424a01ca-f52b-
43a6-8844-d0d3590feb8d None] 172.16.2.86 - - [15/Oct/2014 21:58:59] GET
/v2.0/networks.json?fields=id&name=testnw2 HTTP/1.1 200 251 0.031936

2014-10-15 21:58:59.872 26167 INFO neutron.wsgi [req-424a01ca-f52b-
43a6-8844-d0d3590feb8d None] (26167) accepted ('172.16.2.86', 47008)

2014-10-15 21:58:59.950 26167 INFO neutron.wsgi [req-7ee742ef-6370-46b3
-8f8b-f46ae5d262bc None] 172.16.2.86 - - [15/Oct/2014 21:58:59] POST
/v2.0/subnets.json HTTP/1.1 201 572 0.076879

2014-10-15 21:59:00.074 26167 INFO neutron.wsgi [req-a6ef6c65-811f-
40d8-9443-b9590809994a None] (26167) accepted ('172.16.2.86', 47010)

2014-10-15 21:59:00.088 26167 INFO urllib3.connectionpool [-] Starting new 
HTTPS connection (1): 10.68.42.86
2014-10-15 21:59:00.111 26167 INFO neutron.wsgi 
[req-22a84d34-f454-423d-bb7b-b4c7e2e6e08c None] 172.16.2.86 - - [15/Oct/2014 
21:59:00] GET /v2.0/networks.json?fields=id&name=testnw2 HTTP/1.1 200 251 
0.033298

2014-10-15 21:59:00.113 26167 INFO neutron.wsgi [req-22a84d34-f454-423d-
bb7b-b4c7e2e6e08c None] (26167) accepted ('172.16.2.86', 47012)

2014-10-15 21:59:51.165 26167 ERROR neutron.api.v2.resource [-] create failed
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 87, in 
resource
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 448, in create
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py, line 632, in 
create_port
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource result = 
super(Ml2Plugin, self).create_port(context, port)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 1371, 
in create_port
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource ips = 
self._allocate_ips_for_port(context, network, port)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 678, 
in _allocate_ips_for_port
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource result = 
NeutronDbPluginV2._generate_ip(context, subnets)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 359, 
in _generate_ip
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource return 
NeutronDbPluginV2._try_generate_ip(context, subnets)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py, line 376, 
in _try_generate_ip
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource range = 
range_qry.filter_by(subnet_id=subnet['id']).first()
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2282, in 
first
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource ret = 
list(self[0:1])
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2149, in 
__getitem__
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource return list(res)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py, line 65, in 
instances
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource fetch = 
cursor.fetchall()
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 752, in 
fetchall
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource self.cursor, 
self.context)
2014-10-15 21:59:51.165 26167 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 1027, in 
_handle_dbapi_exception
2014-10-15 21:59:51.165 

[Yahoo-eng-team] [Bug 1357055] Re: Race to delete shared subnet in Tempest neutron full jobs

2014-10-15 Thread Alex Xu
Salvatore and I reached the same conclusion: this isn't a neutron or nova
bug; it should be a tempest bug. I removed nova from the affected projects.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357055

Title:
  Race to delete shared subnet in Tempest neutron full jobs

Status in Tempest:
  New

Bug description:
  This seems to show up in several different tests, basically anything
  using neutron.  I noticed it here:

  http://logs.openstack.org/89/112889/1/gate/check-tempest-dsvm-neutron-
  full/21fcf50/console.html#_2014-08-14_17_03_10_330

  That's on a stable/icehouse change, but logstash shows this on master
  mostly.

  I see this in the neutron server logs:

  http://logs.openstack.org/89/112889/1/gate/check-tempest-dsvm-neutron-
  full/21fcf50/logs/screen-q-svc.txt.gz#_2014-08-14_16_45_02_101

  This query shows 82 hits in 10 days:

  message:"delete failed \(client error\)\: Unable to complete operation
  on subnet" AND message:"One or more ports have an IP allocation from
  this subnet" AND tags:"screen-q-svc.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZGVsZXRlIGZhaWxlZCBcXChjbGllbnQgZXJyb3JcXClcXDogVW5hYmxlIHRvIGNvbXBsZXRlIG9wZXJhdGlvbiBvbiBzdWJuZXRcIiBBTkQgbWVzc2FnZTpcIk9uZSBvciBtb3JlIHBvcnRzIGhhdmUgYW4gSVAgYWxsb2NhdGlvbiBmcm9tIHRoaXMgc3VibmV0XCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1zdmMudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wNy0zMVQxOTo0Mzo0NSswMDowMCIsInRvIjoiMjAxNC0wOC0xNFQxOTo0Mzo0NSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4MDQ1NDY1OTU2fQ==

  Logstash doesn't show this in the gate queue but it does show up in
  the uncategorized bugs list which is in the gate queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1357055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381562] [NEW] Add functional tests for metadata agent

2014-10-15 Thread Oleg Bondarev
Public bug reported:

As per discussion on
https://review.openstack.org/#/c/121782/8/neutron/tests/unit/test_metadata_agent.py:

Tests could do something like sending an HTTP request to a proxy, while
mocking the API (and then potentially RPC, if rpc is merged into the
metadata agent) response, then asserting that the agent forwarded the
correct HTTP request to Nova.
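
For illustration, a hedged standalone sketch of that pattern (a stand-in
proxy function and a placeholder upstream URL, not the agent's real
interfaces):

import mock
import requests

def proxy(path, headers):
    # stand-in for the metadata agent: forward the request to Nova
    return requests.request('GET', 'http://nova-md' + path, headers=headers)

with mock.patch.object(requests, 'request') as upstream:
    upstream.return_value = mock.Mock(status_code=200)
    proxy('/latest/meta-data/', {'X-Instance-ID': 'uuid-1'})
    upstream.assert_called_once_with(
        'GET', 'http://nova-md/latest/meta-data/',
        headers={'X-Instance-ID': 'uuid-1'})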

** Affects: neutron
 Importance: Wishlist
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381562

Title:
  Add functional tests for metadata agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As per discussion on
  
https://review.openstack.org/#/c/121782/8/neutron/tests/unit/test_metadata_agent.py:

  Tests could do something like sending an HTTP request to a proxy,
  while mocking the API (and then potentially RPC, if rpc is merged into
  the metadata agent) response, then asserting that the agent forwarded
  the correct HTTP request to Nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381568] [NEW] Cannot start the VM console when VM is launched at Compute node

2014-10-15 Thread Danny Choi
Public bug reported:

OpenStack version: Juno

localadmin@qa4:/etc/nova$ nova-manage version
2015.1


I used devstack to deploy multi-node OpenStack, with Controller + nova-compute 
+ Network on one physical node (qa4),
and Compute on a separate physical node (qa5).

When I launch a VM which spun up on the Compute node (qa5), I cannot
launch the VM console, in both CLI and Horizon.

localadmin@qa4:/etc/nova$ nova list
+--------------------------------------+------+--------+------------+-------------+-------------------+
| ID                                   | Name | Status | Task State | Power State | Networks          |
+--------------------------------------+------+--------+------------+-------------+-------------------+
| 86c3519d-cd22-4cc9-bac7-6444749ff41b | vm1  | ACTIVE | -          | Running     | private=10.0.0.6  |
| 1642393c-edce-4af2-837c-ca61d464688b | vm5  | ACTIVE | -          | Running     | private=10.0.0.10 |
+--------------------------------------+------+--------+------------+-------------+-------------------+

localadmin@qa4:/etc/nova$ nova get-vnc-console vm5 novnc
ERROR (BadRequest): Unavailable console type novnc. (HTTP 400) (Request-ID: 
req-f5944644-46ae-4051-b0cb-7562f1308893)
localadmin@qa4:/etc/nova$ nova get-vnc-console vm5 xvpvnc
ERROR (BadRequest): Unavailable console type xvpvnc. (HTTP 400) (Request-ID: 
req-3416985d-0b76-4a31-9e48-cc597e8698eb)

This does not happen if the VM resides on the Controller (qa4).

localadmin@qa4:/etc/nova$ nova get-vnc-console vm1 novnc
+-------+--------------------------------------------------------------------------------------+
| Type  | Url                                                                                  |
+-------+--------------------------------------------------------------------------------------+
| novnc | http://172.29.172.161:6080/vnc_auto.html?token=0abd9c75-4a81-4bd2-a802-785dab61c82a |
+-------+--------------------------------------------------------------------------------------+
localadmin@qa4:/etc/nova$ nova get-vnc-console vm1 xvpvnc
+--------+-------------------------------------------------------------------------------+
| Type   | Url                                                                           |
+--------+-------------------------------------------------------------------------------+
| xvpvnc | http://172.29.172.161:6081/console?token=c2cb837f-e4ca-4347-8661-32e55f8adf06 |
+--------+-------------------------------------------------------------------------------+
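
This symptom is often caused by missing VNC proxy settings in the compute
node's nova.conf; a hedged sketch, not a confirmed diagnosis for this
report (the proxy URLs reuse the controller address from the working
output above; the listen/proxyclient addresses are placeholders):

[DEFAULT]
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = <compute-node-ip>
novncproxy_base_url = http://172.29.172.161:6080/vnc_auto.html
xvpvncproxy_base_url = http://172.29.172.161:6081/console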

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381568

Title:
  Cannot start the VM console when VM is launched at Compute node

Status in OpenStack Compute (Nova):
  New

Bug description:
  OpenStack version: Juno

  localadmin@qa4:/etc/nova$ nova-manage version
  2015.1

  
  I used devstack to deploy multi-node OpenStack, with Controller + 
nova-compute + Network on one physical node (qa4),
  and Compute on a separate physical node (qa5).

  When I launch a VM which spun up on the Compute node (qa5), I cannot
  launch the VM console, in both CLI and Horizon.

  localadmin@qa4:/etc/nova$ nova list
  +--------------------------------------+------+--------+------------+-------------+-------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks          |
  +--------------------------------------+------+--------+------------+-------------+-------------------+
  | 86c3519d-cd22-4cc9-bac7-6444749ff41b | vm1  | ACTIVE | -          | Running     | private=10.0.0.6  |
  | 1642393c-edce-4af2-837c-ca61d464688b | vm5  | ACTIVE | -          | Running     | private=10.0.0.10 |
  +--------------------------------------+------+--------+------------+-------------+-------------------+

  localadmin@qa4:/etc/nova$ nova get-vnc-console vm5 novnc
  ERROR (BadRequest): Unavailable console type novnc. (HTTP 400) (Request-ID: 
req-f5944644-46ae-4051-b0cb-7562f1308893)
  localadmin@qa4:/etc/nova$ nova get-vnc-console vm5 xvpvnc
  ERROR (BadRequest): Unavailable console type xvpvnc. (HTTP 400) (Request-ID: 
req-3416985d-0b76-4a31-9e48-cc597e8698eb)

  This does not happen if the VM resides on the Controller (qa4).

  localadmin@qa4:/etc/nova$ nova get-vnc-console vm1 novnc
  
+---+-+
  | Type  | Url 
|
  
+---+-+
  | novnc | 
http://172.29.172.161:6080/vnc_auto.html?token=0abd9c75-4a81-4bd2-a802-785dab61c82a
 |
  
+---+-+
  localadmin@qa4:/etc/nova$ nova 

[Yahoo-eng-team] [Bug 1381575] [NEW] In a Scale setup nova list returns only the 1000 odd vm's at one shot and not the whole list

2014-10-15 Thread sahana alva
Public bug reported:

In the scale tests it is usually seen that the nova list returns only
the 1000 odd vm's and not the whole list even though the instances would
have been provisioned above 2000 +
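
The cap comes from the API's osapi_max_limit option (1000 by default), so
fetching everything requires marker-based paging; a hedged sketch using
python-novaclient (assumes an authenticated client `nova`):

def list_all_servers(nova):
    servers, marker = [], None
    while True:
        page = nova.servers.list(limit=1000, marker=marker)
        if not page:
            return servers
        servers.extend(page)
        marker = page[-1].id   # resume after the last server seen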

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  In the scale tests it is usually seen that the nova list returns only
- the 1000 odd vm's and not trhe whole list even though the instances
- would have been provisioed above 2000 +
+ the 1000 odd vm's and not the whole list even though the instances would
+ have been provisioned above 2000 +

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381575

Title:
  In a Scale setup nova list returns only the 1000 odd vm's at one shot
  and not the whole list

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the scale tests it is usually seen that the nova list returns only
  the 1000 odd vm's and not the whole list even though the instances
  would have been provisioned above 2000 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381574] [NEW] Remove use of center tag in template

2014-10-15 Thread Chad Roberts
Public bug reported:

In
openstack_dashboard/dashboards/project/routers/templates/routers/extensions/routerrules/grid.html
the center tag is used.  As discussed in
https://review.openstack.org/#/c/126141/10 the center tag should be
stripped and the formatting should be handled via CSS.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381574

Title:
  Remove use of center tag in template

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In
  
openstack_dashboard/dashboards/project/routers/templates/routers/extensions/routerrules/grid.html
  the center tag is used.  As discussed in
  https://review.openstack.org/#/c/126141/10 the center tag should be
  stripped and the formatting should be handled via CSS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381598] [NEW] boot from image created with nova image-create from a volume backed instance is rejected

2014-10-15 Thread Balazs Gibizer
Public bug reported:

It is not possible to boot the image that was created with nova image-
create from a volume backed instance.

Steps to reproduce:
stack@stack:~/devstack$ nova boot --flavor 100 --block-device source=image,id=70b5a8e8-846f-40dc-a52d-558d37dfc7f1,dest=volume,bootindex=0,size=1 volume-backed
+--------------------------------------+-------------------------------------------------+
| Property                             | Value                                           |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                          |
| OS-EXT-AZ:availability_zone          | nova                                            |
| OS-EXT-SRV-ATTR:host                 | -                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-0017                                   |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-STS:task_state                | scheduling                                      |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | -                                               |
| OS-SRV-USG:terminated_at             | -                                               |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| adminPass                            | wvUa22QCTaoR                                    |
| config_drive                         |                                                 |
| created                              | 2014-10-15T15:07:39Z                            |
| flavor                               | install-test (100)                              |
| hostId                               |                                                 |
| id                                   | 9ad985f6-5e76-4545-9702-0b8a6058ef57            |
| image                                | Attempt to boot from volume - no image supplied |
| key_name                             | -                                               |
| metadata                             | {}                                              |
| name                                 | volume-backed                                   |
| os-extended-volumes:volumes_attached | []                                              |
| progress                             | 0                                               |
| security_groups                      | default                                         |
| status                               | BUILD                                           |
| tenant_id                            | 89dda4659c7e403392e9bcfc14ca6c80                |
| updated                              | 2014-10-15T15:07:39Z                            |
| user_id                              | 4c9283c1cbc54d688e2dda83fbc4aa11                |
+--------------------------------------+-------------------------------------------------+


stack@stack:~/devstack$ nova show 9ad985f6-5e76-4545-9702-0b8a6058ef57
+--------------------------------------+------------------------+
| Property                             | Value                  |
+--------------------------------------+------------------------+
| OS-DCF:diskConfig                    | MANUAL                 |
| OS-EXT-AZ:availability_zone          | nova                   |
| OS-EXT-SRV-ATTR:host                 | stack                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | stack                  |
| OS-EXT-SRV-ATTR:instance_name        | instance-0017          |
| OS-EXT-STS:power_state               | 1                      |
| OS-EXT-STS:task_state                | -                      |
| OS-EXT-STS:vm_state                  | active                 |
| OS-SRV-USG:launched_at               | 2014-10-15T15:07:47.00 |
| OS-SRV-USG:terminated_at             | -                      |
| accessIPv4                           |                        |
| accessIPv6                           |                        |
| config_drive                         |

[Yahoo-eng-team] [Bug 1381617] [NEW] status check for FloatingIP in network scenario showed instability in DVR

2014-10-15 Thread Armando Migliaccio
Public bug reported:

Since this Tempest change:

https://review.openstack.org/#/c/102700/

We verify that FloatingIP changes status correctly on association and
dis-association.

This has shown instability of the dvr tempest job:


http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaXMgYXQgc3RhdHVzOiBBQ1RJVkUuIGZhaWxlZCAgdG8gcmVhY2ggc3RhdHVzOiBET1dOXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTMzODg5NTYxMTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381617

Title:
  status check for FloatingIP in network scenario showed instability in
  DVR

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Since this Tempest change:

  https://review.openstack.org/#/c/102700/

  We verify that FloatingIP changes status correctly on association and
  dis-association.

  This has shown instability of the dvr tempest job:

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaXMgYXQgc3RhdHVzOiBBQ1RJVkUuIGZhaWxlZCAgdG8gcmVhY2ggc3RhdHVzOiBET1dOXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTMzODg5NTYxMTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381633] [NEW] add help tooltip to column headings

2014-10-15 Thread Cindy Lu
Public bug reported:

Based on Liz's suggestion
(https://bugs.launchpad.net/horizon/+bug/1267362):

If a clear, concise term cannot be created, I think the idea of adding
a help "?" icon with hover-over text next to the column header would be
a fine solution. I don't know that we do this anywhere yet, but it would
be something that could be added and used throughout Horizon where
needed. I think simply adding the "?" icon would not clutter things too
much.

===

Add a help tooltip if the user specifies one when creating a Column.
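
A minimal sketch of what this could look like at the table-definition level
(the help_text kwarg is the proposed addition, not an existing Column
option):

from django.utils.translation import ugettext_lazy as _

from horizon import tables

class InstancesTable(tables.DataTable):
    name = tables.Column("name", verbose_name=_("Name"))
    # Proposed: render a "?" icon in the column header whose hover-over
    # text comes from help_text.
    state = tables.Column("state", verbose_name=_("Power State"),
                          help_text=_("The current power state of the instance"))

    class Meta:
        name = "instances"
        verbose_name = _("Instances")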

** Affects: horizon
 Importance: Wishlist
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

** Changed in: horizon
   Importance: Undecided => Wishlist

** Description changed:

  Based on Liz's suggestion
  (https://bugs.launchpad.net/horizon/+bug/1267362):
  
  If a clear, concise term cannot be created, I think the idea of adding
  a help "?" icon with hover-over text next to the column header would be
  a fine solution. I don't know that we do this anywhere yet, but it would
  be something that could be added and used throughout Horizon where
  needed. I think simply adding the "?" icon would not clutter things too
  much.
+ 
+ ===
+ 
+ Add a help tooltip if the user specifies one when creating a Column.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381633

Title:
  add help tooltip to column headings

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Based on Liz's suggestion
  (https://bugs.launchpad.net/horizon/+bug/1267362):

  If a clear, concise term cannot be created, I think the idea of
  adding a help "?" icon with hover-over text next to the column header
  would be a fine solution. I don't know that we do this anywhere yet,
  but it would be something that could be added and used throughout
  Horizon where needed. I think simply adding the "?" icon would not
  clutter things too much.

  ===

  Add a help tooltip if the user specifies one when creating a Column.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381641] [NEW] add help tooltips to menu navigation

2014-10-15 Thread Cindy Lu
Public bug reported:

if specified

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381641

Title:
  add help tooltips to menu navigation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  if specified

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381647] [NEW] nova [scheduler] tries to schedule an instance on an out-of-service compute host

2014-10-15 Thread Behzad
Public bug reported:

I have an openstack setup (icehouse) with 1 controller + 2 compute
nodes, where one of the compute nodes was taken out of service. Now when
running our tests, I observe that a new instance is trying to launch on
the down compute node.

How can I debug this further?
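
One way to confirm what the scheduler should be seeing is to list the
compute services and their state. A hedged sketch with python-novaclient
(icehouse-era module path; credentials and endpoint are placeholders):

from novaclient.v1_1 import client

nova = client.Client('admin', 'password', 'admin',
                     'http://HH24-5.ctocllab.cisco.com:5000/v2.0')
for svc in nova.services.list(binary='nova-compute'):
    # A host whose state is 'down' should be skipped by ComputeFilter.
    print(svc.host, svc.status, svc.state, svc.updated_at)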

Here are the relevant details:

[root@HH24-5 ~]# nova-manage service list
Binary            Host                       Zone      Status    State  Updated_At
nova-consoleauth  HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:18
nova-scheduler    HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:19
nova-conductor    HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:20
nova-cert         HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:20
nova-compute      HH24-4.ctocllab.cisco.com  nova      enabled   :-)    2014-10-15 16:07:18
nova-compute      HH24-8.ctocllab.cisco.com  nova      enabled   XXX    2014-10-15 00:33:58   <- compute node which is down.
[root@HH24-5 ~]# 


[root@HH24-5 ~(keystone_admin)]# nova list
+--------------------------------------+-------------+--------+------------+-------------+----------+
| ID                                   | Name        | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+----------+
| 455c8a6d-f729-4021-9678-5197ba2441e8 | TestClient2 | BUILD  | scheduling | NOSTATE     |          |
+--------------------------------------+-------------+--------+------------+-------------+----------+
[root@HH24-5 ~(keystone_admin)]# 


[root@HH24-5 nova]# 
[root@HH24-5 nova]# grep 455c8a6d-f729-4021-9678-5197ba2441e8 *
nova-api.log:2014-10-15 12:35:01.518 21348 INFO nova.osapi_compute.wsgi.server [req-d1bd3939-04d8-4be9-b257-aba354c7c58 630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0914750
nova-api.log:2014-10-15 12:35:04.339 21349 INFO nova.osapi_compute.wsgi.server [req-2ef598cd-25e3-46d9-b352-a5a50dfebd41 e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0895979
nova-api.log:2014-10-15 12:35:09.612 21349 INFO nova.osapi_compute.wsgi.server [req-5664a870-d814-4677-9d8b-d59feff95a4b e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0730219
nova-api.log:2014-10-15 12:35:17.427 21348 INFO nova.osapi_compute.wsgi.server [req-9fc5c363-249c-4f1b-9ef7-ffcd8b31f432 e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0753710
nova-api.log:2014-10-15 12:35:27.858 21351 INFO nova.osapi_compute.wsgi.server [req-29e919ad-2453-4c1e-900d-0d0cb67fb22d e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0827179
nova-api.log:2014-10-15 12:35:40.854 21355 INFO nova.osapi_compute.wsgi.server [req-e643f98f-e10d-4533-9f41-64722864ddec e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0998490
nova-api.log:2014-10-15 12:35:56.119 21348 INFO nova.osapi_compute.wsgi.server [req-a65c7e8b-0d87-49f8-a4c4-75f52c9cc208 e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0726490
nova-api.log:2014-10-15 12:36:13.971 21353 INFO nova.osapi_compute.wsgi.server [req-3eb0e599-14aa-44b8-9346-0a3732dd5cef e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0960729
nova-api.log:2014-10-15 12:36:34.257 21353 INFO nova.osapi_compute.wsgi.server [req-7f298caf-49b7-45ed-922c-d9a6a5b8e8cf e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0775731
nova-api.log:2014-10-15 12:36:57.019 21353 INFO 

[Yahoo-eng-team] [Bug 1377981] Re: [OSSA 2014-036] Missing fix for ssh_execute (Exceptions thrown may contain passwords) (CVE-2014-7230, CVE-2014-7231)

2014-10-15 Thread Tristan Cacqueray
** Summary changed:

- Missing fix for ssh_execute (Exceptions thrown may contain passwords) (CVE-2014-7230, CVE-2014-7231)
+ [OSSA 2014-036] Missing fix for ssh_execute (Exceptions thrown may contain passwords) (CVE-2014-7230, CVE-2014-7231)

** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377981

Title:
  [OSSA 2014-036] Missing fix for ssh_execute (Exceptions thrown may
  contain passwords) (CVE-2014-7230, CVE-2014-7231)

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in The Oslo library incubator:
  Fix Released
Status in oslo-incubator icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Former bugs:
https://bugs.launchpad.net/ossa/+bug/1343604
https://bugs.launchpad.net/ossa/+bug/1345233

  The ssh_execute method is still affected in the Cinder and Nova Icehouse releases.
  It is prone to a password leak if:
  - passwords are used on the command line
  - execution fails
  - the calling code catches and logs the exception

  The missing fix from oslo-incubator to be merged is:
  6a60f84258c2be3391541dbe02e30b8e836f6c22
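
  The shape of that fix, sketched loosely (mask_password is the oslo
  scrubbing helper; the wrapper below is illustrative, not the actual
  commit):

  import logging

  from oslo.utils import strutils  # in-tree copies lived in openstack/common

  LOG = logging.getLogger(__name__)

  def run_ssh(ssh_execute, ssh, cmd):
      # Scrub password=... arguments before the failing command line can
      # reach the logs through the exception handler.
      try:
          return ssh_execute(ssh, cmd)
      except Exception:
          LOG.error('Command failed: %s', strutils.mask_password(cmd))
          raise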

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1377981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381679] [NEW] VLAN, VXLAN and GRE are hardcoded

2014-10-15 Thread Doug Fish
Public bug reported:

VLAN, VXLAN and GRE on the create page under project->network->network
are in English.

When this was first brought to my attention I rejected it, suggesting that
this sort of thing should never be translated.  However, on closer
inspection, I see that in openstack_dashboard/locale/zh_TW/LC_MESSAGES/django.po
VLAN is translated:
msgid "VLAN"
msgstr "虛擬區域網路"

We should enable this sort of translation.
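
The fix amounts to wrapping the display names in Django's translation
machinery so they land in the .po catalogs. A minimal sketch of the
pattern (the choices list is illustrative, not the exact Horizon code):

from django.utils.translation import ugettext_lazy as _

# Marked labels become msgids, so each locale team can decide whether
# e.g. VLAN stays as-is or gets a localized name.
PROVIDER_TYPE_CHOICES = [
    ('vlan', _('VLAN')),
    ('vxlan', _('VXLAN')),
    ('gre', _('GRE')),
]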

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: In Progress


** Tags: i18n

** Tags added: i18n

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381679

Title:
  VLAN, VXLAN and GRE are hardcoded

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  VLAN, VXLAN and GRE on the create page under project->network->network
  are in English.

  When this was first brought to my attention I rejected it, suggesting
  that this sort of thing should never be translated.  However, on closer
  inspection, I see that in openstack_dashboard/locale/zh_TW/LC_MESSAGES/django.po
  VLAN is translated:
  msgid "VLAN"
  msgstr "虛擬區域網路"

  We should enable this sort of translation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381575] Re: In a scale setup nova list returns only the 1000-odd VMs at one shot and not the whole list

2014-10-15 Thread Abhishek Chanda
Added novaclient since this might be a client issue

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381575

Title:
  In a scale setup nova list returns only the 1000-odd VMs at one shot
  and not the whole list

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
  In scale tests it is usually seen that nova list returns only around
  1000 VMs and not the whole list, even though more than 2000 instances
  have been provisioned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381575] Re: In a scale setup nova list returns only the 1000-odd VMs at one shot and not the whole list

2014-10-15 Thread Joe Gordon
This is actually by design: we have pagination. See
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/common.py#n39
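
For anyone hitting the cap, the v2 API supports paging past the
server-side osapi_max_limit (1000 by default) with marker/limit. A hedged
sketch with python-novaclient, assuming an authenticated `nova` client
and that the installed client accepts these kwargs:

def list_all_servers(nova):
    # Each request returns at most osapi_max_limit items; the marker is
    # the id of the last server from the previous page.
    servers, marker = [], None
    while True:
        page = nova.servers.list(limit=1000, marker=marker)
        if not page:
            break
        servers.extend(page)
        marker = page[-1].id
    return servers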

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: python-novaclient
   Status: New => Invalid

** No longer affects: python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381575

Title:
  In a scale setup nova list returns only the 1000-odd VMs at one shot
  and not the whole list

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In scale tests it is usually seen that nova list returns only around
  1000 VMs and not the whole list, even though more than 2000 instances
  have been provisioned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381647] Re: nova [scheduler] tries to schedule an instance on an out-of-service compute host

2014-10-15 Thread Joe Gordon
This sounds like a question and not a bug. Please use
https://ask.openstack.org/en/questions/

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381647

Title:
  nova [scheduler] tries to schedule an instance on an out-of-service
  compute host

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have an openstack setup (icehouse) with 1 controller + 2 compute
  nodes, where one of the compute nodes was taken out of service. Now
  when running our tests, I observe that a new instance is trying to
  launch on the down compute node.

  How can I debug this further?

  Here are the relevant details:

  [root@HH24-5 ~]# nova-manage service list
  Binary            Host                       Zone      Status    State  Updated_At
  nova-consoleauth  HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:18
  nova-scheduler    HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:19
  nova-conductor    HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:20
  nova-cert         HH24-5.ctocllab.cisco.com  internal  enabled   :-)    2014-10-15 16:07:20
  nova-compute      HH24-4.ctocllab.cisco.com  nova      enabled   :-)    2014-10-15 16:07:18
  nova-compute      HH24-8.ctocllab.cisco.com  nova      enabled   XXX    2014-10-15 00:33:58   <- compute node which is down.
  [root@HH24-5 ~]# 

  
  [root@HH24-5 ~(keystone_admin)]# nova list
  +--------------------------------------+-------------+--------+------------+-------------+----------+
  | ID                                   | Name        | Status | Task State | Power State | Networks |
  +--------------------------------------+-------------+--------+------------+-------------+----------+
  | 455c8a6d-f729-4021-9678-5197ba2441e8 | TestClient2 | BUILD  | scheduling | NOSTATE     |          |
  +--------------------------------------+-------------+--------+------------+-------------+----------+
  [root@HH24-5 ~(keystone_admin)]# 

  
  [root@HH24-5 nova]# 
  [root@HH24-5 nova]# grep 455c8a6d-f729-4021-9678-5197ba2441e8 *
  nova-api.log:2014-10-15 12:35:01.518 21348 INFO nova.osapi_compute.wsgi.server [req-d1bd3939-04d8-4be9-b257-aba354c7c58 630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0914750
  nova-api.log:2014-10-15 12:35:04.339 21349 INFO nova.osapi_compute.wsgi.server [req-2ef598cd-25e3-46d9-b352-a5a50dfebd41 e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0895979
  nova-api.log:2014-10-15 12:35:09.612 21349 INFO nova.osapi_compute.wsgi.server [req-5664a870-d814-4677-9d8b-d59feff95a4b e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0730219
  nova-api.log:2014-10-15 12:35:17.427 21348 INFO nova.osapi_compute.wsgi.server [req-9fc5c363-249c-4f1b-9ef7-ffcd8b31f432 e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0753710
  nova-api.log:2014-10-15 12:35:27.858 21351 INFO nova.osapi_compute.wsgi.server [req-29e919ad-2453-4c1e-900d-0d0cb67fb22d e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0827179
  nova-api.log:2014-10-15 12:35:40.854 21355 INFO nova.osapi_compute.wsgi.server [req-e643f98f-e10d-4533-9f41-64722864ddec e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0998490
  nova-api.log:2014-10-15 12:35:56.119 21348 INFO nova.osapi_compute.wsgi.server [req-a65c7e8b-0d87-49f8-a4c4-75f52c9cc208 e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f90599025] 172.22.191.138 "GET /v2/47c58dc66f7648128377db8f90599025/servers/455c8a6d-f729-4021-9678-5197ba2441e8 HTTP/1.1" status: 200 len: 1624 time: 0.0726490
  nova-api.log:2014-10-15 12:36:13.971 21353 INFO nova.osapi_compute.wsgi.server [req-3eb0e599-14aa-44b8-9346-0a3732dd5cef e630627f2d4f46099c23cd5b5ebc504b 47c58dc66f7648128377db8f9059

[Yahoo-eng-team] [Bug 1381749] [NEW] Cannot create a usable image in glance for vmdk images

2014-10-15 Thread Doug Fish
Public bug reported:

This is because the image needs to have two options: --property
vmware_disktype=sparse --property vmware_adaptertype=ide
Horizon does not provide the capability to set those options.

However, it can be created using the glance CLI:
glance image-create --name x --is-public=True --container-format=bare
--disk-format=vmdk --property vmware_disktype=sparse --property
vmware_adaptertype=ide --image-location
http://172.19.11.252/SCP_VM_images/fedora-amd64.vmdk
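
The same create should also work from Python. A hedged sketch with
python-glanceclient (endpoint and token are placeholders; copy_from
stands in for the remote image URL):

from glanceclient import Client

glance = Client('1', endpoint='http://controller:9292', token='...')
image = glance.images.create(
    name='x', is_public=True,
    container_format='bare', disk_format='vmdk',
    copy_from='http://172.19.11.252/SCP_VM_images/fedora-amd64.vmdk',
    # The two properties the report says a usable vmdk image needs:
    properties={'vmware_disktype': 'sparse',
                'vmware_adaptertype': 'ide'})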

The UI provided by https://blueprints.launchpad.net/horizon/+spec
/manage-image-custom-properties helps, but the scenario is still
perceived to be complex. The expected behavior is a UI that lets the
user choose VMware properties (e.g. vmware_disktype=sparse,
vmware_adaptertype=ide) when creating an image in vmdk format.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1381749

Title:
  Cannot create a usable image in glance for vmdk images

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is because the image needs to have two options: --property
  vmware_disktype=sparse --property vmware_adaptertype=ide
  Horizon does not provide the capability to set those options.

  However, it can be created using the glance CLI:
  glance image-create --name x --is-public=True --container-format=bare
  --disk-format=vmdk --property vmware_disktype=sparse --property
  vmware_adaptertype=ide --image-location
  http://172.19.11.252/SCP_VM_images/fedora-amd64.vmdk

  The UI provided by https://blueprints.launchpad.net/horizon/+spec
  /manage-image-custom-properties helps, but the scenario is still
  perceived to be complex. The expected behavior is a UI that lets the
  user choose VMware properties (e.g. vmware_disktype=sparse,
  vmware_adaptertype=ide) when creating an image in vmdk format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1381749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303533] Re: nova.tests.integrated.v3.test_suspend_server.SuspendServerSamplesJsonTest.test_post_resume fails sporadically

2014-10-15 Thread Matt Riedemann
This doesn't appear to be a problem anymore.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303533

Title:
  
nova.tests.integrated.v3.test_suspend_server.SuspendServerSamplesJsonTest.test_post_resume
  fails sporadically

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hit a failure here:

  http://logs.openstack.org/88/73788/11/check/gate-nova-python27/56ed0eb/console.html

  The patch is unrelated to what this is testing, it's also showing up
  in the check queue for other changes:

  message:"FAIL\: nova.tests.integrated.v3.test_suspend_server.SuspendServerSamplesJsonTest.test_post_resume"
  AND tags:"console" AND (build_name:"gate-nova-python26" OR
  build_name:"gate-nova-python27")

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTFxcOiBub3ZhLnRlc3RzLmludGVncmF0ZWQudjMudGVzdF9zdXNwZW5kX3NlcnZlci5TdXNwZW5kU2VydmVyU2FtcGxlc0pzb25UZXN0LnRlc3RfcG9zdF9yZXN1bWVcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjZcIiBPUiBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI3XCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTQtMDMtMjRUMDE6MzE6NDQrMDA6MDAiLCJ0byI6IjIwMTQtMDQtMDdUMDE6MzE6NDQrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTM5NjgzNDQ2NDI5M30=

  There are 12 hits in the last 2 weeks going back to 3/25.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381768] [NEW] AttributeError: 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID' with python-ldap 2.4

2014-10-15 Thread Matt Fischer
Public bug reported:

When using the LDAP backend with keystone Juno RC2, the following error
occurs:

AttributeError: 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID'

It looks like that attribute was removed in python-ldap 2.4, which breaks
Ubuntu Trusty and Utopic, and probably RHEL7.


More details on this change in the library are here:

https://mail.python.org/pipermail//python-ldap/2012q1/003105.html
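
For reference, python-ldap 2.4 moved paging to
ldap.controls.SimplePagedResultsControl with a different constructor. A
hedged sketch of a version guard (page_size is illustrative):

import ldap
from ldap.controls import SimplePagedResultsControl

page_size = 100  # illustrative

if hasattr(ldap, 'LDAP_CONTROL_PAGE_OID'):
    # python-ldap < 2.4: (controlType, criticality, (size, cookie))
    lc = SimplePagedResultsControl(ldap.LDAP_CONTROL_PAGE_OID,
                                   True, (page_size, ''))
else:
    # python-ldap >= 2.4: keyword-style constructor
    lc = SimplePagedResultsControl(criticality=True,
                                   size=page_size, cookie='')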

** Affects: keystone
 Importance: Undecided
 Assignee: Nathan Kinder (nkinder)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1381768

Title:
  AttributeError: 'module' object has no attribute
  'LDAP_CONTROL_PAGE_OID' with python-ldap 2.4

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When using the LDAP backend with keystone Juno RC2, the following error
  occurs:

  AttributeError: 'module' object has no attribute
  'LDAP_CONTROL_PAGE_OID'

  It looks like that attribute was removed in python-ldap 2.4, which
  breaks Ubuntu Trusty and Utopic, and probably RHEL7.

  
  More details on this change in the library are here:

  https://mail.python.org/pipermail//python-ldap/2012q1/003105.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1381768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381809] [NEW] Domain aware policy should restrict certain operations to cloud admin

2014-10-15 Thread Nathan Kinder
Public bug reported:

The domain aware policy that is provided as a part of keystone
(policy.v3cloudsample.json) attempts to define a few layers of
administrative roles:

  cloud admin - responsible for overall cloud management
  domain admin - responsible for management within a domain
  project admin/owner - responsible for management of a project

There are some APIs that should be restricted to the cloud admin, but
they are currently allowed to any user with the admin role that is
defined at any scope, such as the administrator of a project.  Some
examples are the region and federation APIs:

-
"identity:get_region": "",
"identity:list_regions": "",
"identity:create_region": "rule:admin_or_cloud_admin",
"identity:update_region": "rule:admin_or_cloud_admin",
"identity:delete_region": "rule:admin_or_cloud_admin",

"identity:create_identity_provider": "rule:admin_required",
"identity:list_identity_providers": "rule:admin_required",
"identity:get_identity_providers": "rule:admin_required",
"identity:update_identity_provider": "rule:admin_required",
"identity:delete_identity_provider": "rule:admin_required",

"identity:create_protocol": "rule:admin_required",
"identity:update_protocol": "rule:admin_required",
"identity:get_protocol": "rule:admin_required",
"identity:list_protocols": "rule:admin_required",
"identity:delete_protocol": "rule:admin_required",

"identity:create_mapping": "rule:admin_required",
"identity:get_mapping": "rule:admin_required",
"identity:list_mappings": "rule:admin_required",
"identity:delete_mapping": "rule:admin_required",
"identity:update_mapping": "rule:admin_required",
---

** Affects: keystone
 Importance: Undecided
 Assignee: Nathan Kinder (nkinder)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Nathan Kinder (nkinder)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1381809

Title:
  Domain aware policy should restrict certain operations to cloud admin

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The domain aware policy that is provided as a part of keystone
  (policy.v3cloudsample.json) attempts to define a few layers of
  administrative roles:

cloud admin - responsible for overall cloud management
domain admin - responsible for management within a domain
project admin/owner - responsible for management of a project

  There are some APIs that should be restricted to the cloud admin, but
  they are currently allowed to any user with the admin role that is
  defined at any scope, such as the administrator of a project.  Some
  examples are the region and federation APIs:

  -
  "identity:get_region": "",
  "identity:list_regions": "",
  "identity:create_region": "rule:admin_or_cloud_admin",
  "identity:update_region": "rule:admin_or_cloud_admin",
  "identity:delete_region": "rule:admin_or_cloud_admin",

  "identity:create_identity_provider": "rule:admin_required",
  "identity:list_identity_providers": "rule:admin_required",
  "identity:get_identity_providers": "rule:admin_required",
  "identity:update_identity_provider": "rule:admin_required",
  "identity:delete_identity_provider": "rule:admin_required",

  "identity:create_protocol": "rule:admin_required",
  "identity:update_protocol": "rule:admin_required",
  "identity:get_protocol": "rule:admin_required",
  "identity:list_protocols": "rule:admin_required",
  "identity:delete_protocol": "rule:admin_required",

  "identity:create_mapping": "rule:admin_required",
  "identity:get_mapping": "rule:admin_required",
  "identity:list_mappings": "rule:admin_required",
  "identity:delete_mapping": "rule:admin_required",
  "identity:update_mapping": "rule:admin_required",
  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1381809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381843] [NEW] keystone isn't compatible with python-ldap 2.4.* when paging is enabled

2014-10-15 Thread Yaguang Tang
Public bug reported:

Ubuntu 14.04, Icehouse

ERROR keystone.common.wsgi [-] 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID'
TRACE keystone.common.wsgi Traceback (most recent call last):
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 207, in __call__
TRACE keystone.common.wsgi     result = method(context, **params)
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", line 112, in get_users
TRACE keystone.common.wsgi     user_list = self.identity_api.list_users()
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 47, in wrapper
TRACE keystone.common.wsgi     return f(self, *args, **kwargs)
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 189, in wrapper
TRACE keystone.common.wsgi     return f(self, *args, **kwargs)
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 328, in list_users
TRACE keystone.common.wsgi     ref_list = driver.list_users(hints or driver_hints.Hints())
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 81, in list_users
TRACE keystone.common.wsgi     return self.user.get_all_filtered()
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 245, in get_all_filtered
TRACE keystone.common.wsgi     return [identity.filter_user(user) for user in self.get_all()]
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 786, in get_all
TRACE keystone.common.wsgi     return super(EnabledEmuMixIn, self).get_all(ldap_filter)
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 418, in get_all
TRACE keystone.common.wsgi     for x in self._ldap_get_all(ldap_filter)]
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 394, in _ldap_get_all
TRACE keystone.common.wsgi     self.attribute_mapping.values())
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 594, in search_s
TRACE keystone.common.wsgi     res = self.paged_search_s(dn, scope, query, attrlist)
TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 618, in paged_search_s
TRACE keystone.common.wsgi     controlType=ldap.LDAP_CONTROL_PAGE_OID,
TRACE keystone.common.wsgi AttributeError: 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID'

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1381843

Title:
  keystone isn't compatible with python-ldap 2.4.* when paging is enabled

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Ubuntu 14.04, Icehouse

  ERROR keystone.common.wsgi [-] 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID'
  TRACE keystone.common.wsgi Traceback (most recent call last):
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 207, in __call__
  TRACE keystone.common.wsgi     result = method(context, **params)
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", line 112, in get_users
  TRACE keystone.common.wsgi     user_list = self.identity_api.list_users()
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 47, in wrapper
  TRACE keystone.common.wsgi     return f(self, *args, **kwargs)
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 189, in wrapper
  TRACE keystone.common.wsgi     return f(self, *args, **kwargs)
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 328, in list_users
  TRACE keystone.common.wsgi     ref_list = driver.list_users(hints or driver_hints.Hints())
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 81, in list_users
  TRACE keystone.common.wsgi     return self.user.get_all_filtered()
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py", line 245, in get_all_filtered
  TRACE keystone.common.wsgi     return [identity.filter_user(user) for user in self.get_all()]
  TRACE keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/ldap/core.py", line 786, in get_all
  TRACE keystone.common.wsgi     return super(EnabledEmuMixIn, self).get_all(ldap_filter)
  TRACE keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1381870] [NEW] Do not use obsolete modules from oslo-incubator

2014-10-15 Thread Zhi Yan Liu
Public bug reported:

The following obsolete oslo-incubator modules need to be removed from
glance; the project should move to their graduated library replacements
instead.

openstack/glance: gettextutils
openstack/glance: test

For details, please refer to
http://lists.openstack.org/pipermail/openstack-dev/2014-October/048303.html
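
For gettextutils the migration typically looks like the following
(a hedged sketch; oslo.i18n is the graduated home of gettextutils):

# before (oslo-incubator copy):
#   from glance.openstack.common import gettextutils
#   _ = gettextutils._
# after (graduated library):
from oslo import i18n

_translators = i18n.TranslatorFactory(domain='glance')
_ = _translators.primary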

** Affects: glance
 Importance: High
 Assignee: Zhi Yan Liu (lzy-dev)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1381870

Title:
  Do not use obsolete modules from oslo-incubator

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  The following obsolete oslo-incubator modules need to be removed from
  glance; the project should move to their graduated library replacements
  instead.

  openstack/glance: gettextutils
  openstack/glance: test

  For details, please refer to
  http://lists.openstack.org/pipermail/openstack-dev/2014-October/048303.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1381870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp