[Yahoo-eng-team] [Bug 1342016] [NEW] race window in volume attach and spawn with volumes

2014-07-15 Thread jichenjc
Public bug reported:

There is a race window between attaching a volume and spawning an
instance with volumes; we should reserve the volumes when spawning.
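The idea of the fix can be sketched as a toy model (the names below are hypothetical, not Nova's or Cinder's actual API): atomically reserving the volume before attach work starts closes the window, because the losing caller gets an error instead of a double attach.

```python
import threading

class Volume:
    """Toy volume model. Reserving atomically marks the volume as in use
    before any attach work starts (hypothetical names, not the real API)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.status = "available"

    def reserve(self):
        # Atomic available -> attaching transition; a concurrent caller
        # that lost the race gets an error instead of a double attach.
        with self._lock:
            if self.status != "available":
                raise RuntimeError("volume is not available")
            self.status = "attaching"

vol = Volume()
vol.reserve()       # the spawn path reserves the volume first
try:
    vol.reserve()   # a concurrent attach request now fails cleanly
except RuntimeError as exc:
    print("second reserve rejected:", exc)
```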

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342016

Title:
  race window in volume attach and spawn with volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a race window between attaching a volume and spawning an
  instance with volumes; we should reserve the volumes when spawning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342036] [NEW] Captured image property instance_type_rxtx_factor displayed incorrectly

2014-07-15 Thread Ma Wen Cheng
Public bug reported:

Snapshotting an instance succeeds, but the captured image property
'instance_type_rxtx_factor' shows 1.00E+0; it should be 1.0, which is
the correct format.


[root@localhost ~]# nova image-create cirros1 snapshot_cirros1
[root@localhost ~]# glance image-show snapshot_cirros1
+---+--+
| Property  | Value|
+---+--+
| Property 'base_image_ref' | 7ce49b51-624a-4493-b22c-442a899091d1 |
| Property 'image_type' | snapshot |
| Property 'instance_type_ephemeral_gb' | 0|
| Property 'instance_type_flavorid' | 1|
| Property 'instance_type_id'   | 2|
| Property 'instance_type_memory_mb'| 512  |
| Property 'instance_type_name' | m1.tiny  |
| Property 'instance_type_root_gb'  | 1|
| Property 'instance_type_rxtx_factor'  | 1.00E+0  |
| Property 'instance_type_swap' | 0|
| Property 'instance_type_vcpus'| 1|
| Property 'instance_uuid'  | cc390afd-7ddd-4861-bbe0-8b5b094e7263 |
| Property 'user_id'| 6673d831361c4208819fd73b2698a81c |
| container_format  | bare |
| created_at| 2014-07-15T08:51:29.989611   |
| deleted   | False|
| disk_format   | qcow2|
| id| bb02b8aa-2ce4-4ebc-aaf6-d0274d196792 |
| is_public | False|
| min_disk  | 1|
| min_ram   | 0|
| name  | snapshot_cirros1 |
| owner | d4d503cc1b1944e9838190d61fc70083 |
| protected | False|
| size  | 0|
| status| queued   |
| updated_at| 2014-07-15T08:51:29.989620   |
+---+--+

** Affects: nova
 Importance: Undecided
 Assignee: Ma Wen Cheng (mars914)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ma Wen Cheng (mars914)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342036

Title:
  Captured image property instance_type_rxtx_factor displayed incorrectly

Status in OpenStack Compute (Nova):
  New

Bug description:
  Snapshotting an instance succeeds, but the captured image property
  'instance_type_rxtx_factor' shows 1.00E+0; it should be 1.0, which is
  the correct format.

  
  [root@localhost ~]# nova image-create cirros1 snapshot_cirros1
  [root@localhost ~]# glance image-show snapshot_cirros1
  
  +---+--+
  | Property  | Value|
  +---+--+
  | Property 'base_image_ref' | 7ce49b51-624a-4493-b22c-442a899091d1 |
  | Property 'image_type' | snapshot |
  | Property 'instance_type_ephemeral_gb' | 0|
  | Property 'instance_type_flavorid' | 1|
  | Property 'instance_type_id'   | 2|
  | Property 'instance_type_memory_mb'| 512  |
  | Property 'instance_type_name' | m1.tiny  |
  | Property 'instance_type_root_gb'  | 1|
  | Property 'instance_type_rxtx_factor'  | 1.00E+0  |
  | Property 'instance_type_swap' | 0|
  | Property 'instance_type_vcpus'| 1|
  | Property 'instance_uuid'  | cc390afd-7ddd-4861-bbe0-8b5b094e7263 |
  | Property 'user_id'| 6673d831361c4208819fd73b2698a81c |
  | container_format

[Yahoo-eng-team] [Bug 1342055] [NEW] Suspending and restoring a rescued instance restores it to ACTIVE rather than RESCUED

2014-07-15 Thread Matthew Booth
Public bug reported:

If you suspend a rescued instance, resume returns it to the ACTIVE state
rather than the RESCUED state.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342055

Title:
  Suspending and restoring a rescued instance restores it to ACTIVE
  rather than RESCUED

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you suspend a rescued instance, resume returns it to the ACTIVE
  state rather than the RESCUED state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1101404] Re: nova syslog logging to /dev/log race condition in python 2.6

2014-07-15 Thread Dmitry Mescheryakov
** No longer affects: mos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1101404

Title:
  nova syslog logging to /dev/log race condition in python 2.6

Status in OpenStack Identity (Keystone):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Confirmed

Bug description:
  
  running nova-api-ec2
  running rsyslog

  service rsyslog restart ; service nova-api-ec2 restart

  nova-api-ec2 consumes up to 100% of the available CPU (or at least a
  full core) and is not responsive.  /var/log/nova/nova-api-ec2.log
  states the socket is already in use.

  strace the process

  sendto(3, "<142>2013-01-18 20:00:22 24882 INFO nova.service [-] Caught
  SIGTERM, exiting\0", 77, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint
  is not connected)

  service nova-api-ec2 restart fails as upstart already thinks the
  process has been terminated.

  The only way to recover is to pkill -9 nova-api-ec2 and then restart
  it with 'service nova-api-ec2 restart'.

  The same behavior has been seen in all nova-api services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1101404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193531] Re: Fragile test: glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized

2014-07-15 Thread Zhi Yan Liu
We haven't seen this test failure in Jenkins for a long time, so I'm
going to close this bug; if you catch it again, please feel free to
reopen it. Thanks.

** Changed in: glance
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1193531

Title:
  Fragile test:
  
glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  From http://logs.openstack.org/34030/2/check/gate-glance-
  python26/4449/console.html.gz

  
  2013-06-21 19:27:26.230 | ERROR: glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized
  2013-06-21 19:27:26.230 | --
  2013-06-21 19:27:26.230 | _StringException: Traceback (most recent call last):
  2013-06-21 19:27:26.230 |   File "/home/jenkins/workspace/gate-glance-python26/glance/tests/unit/v1/test_api.py", line 548, in test_add_copy_from_image_authorized
  2013-06-21 19:27:26.230 |     res = req.get_response(self.api)
  2013-06-21 19:27:26.230 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py", line 1296, in send
  2013-06-21 19:27:26.230 |     application, catch_exc_info=False)
  2013-06-21 19:27:26.231 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py", line 1260, in call_application
  2013-06-21 19:27:26.231 |     app_iter = application(self.environ, start_response)
  2013-06-21 19:27:26.231 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
  2013-06-21 19:27:26.231 |     resp = self.call_func(req, *args, **self.kwargs)
  2013-06-21 19:27:26.231 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
  2013-06-21 19:27:26.231 |     return self.func(req, *args, **kwargs)
  2013-06-21 19:27:26.231 |   File "/home/jenkins/workspace/gate-glance-python26/glance/common/wsgi.py", line 367, in __call__
  2013-06-21 19:27:26.231 |     response = req.get_response(self.application)
  2013-06-21 19:27:26.231 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py", line 1296, in send
  2013-06-21 19:27:26.231 |     application, catch_exc_info=False)
  2013-06-21 19:27:26.231 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py", line 1260, in call_application
  2013-06-21 19:27:26.231 |     app_iter = application(self.environ, start_response)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
  2013-06-21 19:27:26.232 |     return resp(environ, start_response)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/routes/middleware.py", line 131, in __call__
  2013-06-21 19:27:26.232 |     response = self.app(environ, start_response)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
  2013-06-21 19:27:26.232 |     return resp(environ, start_response)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
  2013-06-21 19:27:26.232 |     resp = self.call_func(req, *args, **self.kwargs)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
  2013-06-21 19:27:26.232 |     return self.func(req, *args, **kwargs)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/glance/common/wsgi.py", line 591, in __call__
  2013-06-21 19:27:26.232 |     request, **action_args)
  2013-06-21 19:27:26.232 |   File "/home/jenkins/workspace/gate-glance-python26/glance/common/wsgi.py", line 608, in dispatch
  2013-06-21 19:27:26.232 |     return method(*args, **kwargs)
  2013-06-21 19:27:26.233 |   File "/home/jenkins/workspace/gate-glance-python26/glance/common/utils.py", line 407, in wrapped
  2013-06-21 19:27:26.233 |     return func(self, req, *args, **kwargs)
  2013-06-21 19:27:26.233 |   File "/home/jenkins/workspace/gate-glance-python26/glance/api/v1/images.py", line 593, in create
  2013-06-21 19:27:26.233 |     location_uri = image_meta.get('location')
  2013-06-21 19:27:26.233 | AttributeError: 'NoneType' object has no attribute 'get'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1193531/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1342079] [NEW] There are hard-coded '/static/' url prefix in CSS (SCSS) and JS files

2014-07-15 Thread Timur Sufiev
Public bug reported:

Among the examples are:
https://github.com/openstack/horizon/blob/2014.2.b1/horizon/static/horizon/js/horizon.tables.js#L70
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/dashboard/scss/_accordion_nav.scss#L17

This approach has the problem that if someone decides to serve static
files from a URL different from '/static/...', they will surely hit
some 404 errors.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1342079

Title:
  There are hard-coded '/static/' url prefix in CSS (SCSS) and JS files

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Among the examples are:
  
https://github.com/openstack/horizon/blob/2014.2.b1/horizon/static/horizon/js/horizon.tables.js#L70
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/dashboard/scss/_accordion_nav.scss#L17

  This approach has the problem that if someone decides to serve static
  files from a URL different from '/static/...', they will surely hit
  some 404 errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1342079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342080] [NEW] glance api is tracebacking with error: [Errno 32] Broken pipe

2014-07-15 Thread Joe Gordon
Public bug reported:

127.0.0.1 - - [15/Jul/2014 10:55:39] code 400, message Bad request syntax ('0')
127.0.0.1 - - [15/Jul/2014 10:55:39] "0" 400 -
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/greenpool.py", line 80, in _spawn_n_impl
    func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 584, in process_request
    proto.__init__(socket, address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 649, in __init__
    self.handle()
  File "/usr/lib/python2.7/BaseHTTPServer.py", line 342, in handle
    self.handle_one_request()
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 247, in handle_one_request
    if not self.parse_request():
  File "/usr/lib/python2.7/BaseHTTPServer.py", line 286, in parse_request
    self.send_error(400, "Bad request syntax (%r)" % requestline)
  File "/usr/lib/python2.7/BaseHTTPServer.py", line 368, in send_error
    self.send_response(code, message)
  File "/usr/lib/python2.7/BaseHTTPServer.py", line 395, in send_response
    self.send_header('Server', self.version_string())
  File "/usr/lib/python2.7/BaseHTTPServer.py", line 401, in send_header
    self.wfile.write("%s: %s\r\n" % (keyword, value))
  File "/usr/lib/python2.7/socket.py", line 324, in write
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 307, in sendall
    tail = self.send(data, flags)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 293, in send
    total_sent += fd.send(data[total_sent:], flags)
error: [Errno 32] Broken pipe


http://logs.openstack.org/62/100162/3/check/check-tempest-dsvm-full/77badd4/logs/screen-g-api.txt.gz?level=INFO#_2014-07-15_10_55_39_729


Seen all over the gate. Seeing stack traces like this.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1342080

Title:
  glance api is tracebacking with error: [Errno 32] Broken pipe

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  127.0.0.1 - - [15/Jul/2014 10:55:39] code 400, message Bad request syntax ('0')
  127.0.0.1 - - [15/Jul/2014 10:55:39] "0" 400 -
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/eventlet/greenpool.py", line 80, in _spawn_n_impl
      func(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 584, in process_request
      proto.__init__(socket, address, self)
    File "/usr/lib/python2.7/SocketServer.py", line 649, in __init__
      self.handle()
    File "/usr/lib/python2.7/BaseHTTPServer.py", line 342, in handle
      self.handle_one_request()
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 247, in handle_one_request
      if not self.parse_request():
    File "/usr/lib/python2.7/BaseHTTPServer.py", line 286, in parse_request
      self.send_error(400, "Bad request syntax (%r)" % requestline)
    File "/usr/lib/python2.7/BaseHTTPServer.py", line 368, in send_error
      self.send_response(code, message)
    File "/usr/lib/python2.7/BaseHTTPServer.py", line 395, in send_response
      self.send_header('Server', self.version_string())
    File "/usr/lib/python2.7/BaseHTTPServer.py", line 401, in send_header
      self.wfile.write("%s: %s\r\n" % (keyword, value))
    File "/usr/lib/python2.7/socket.py", line 324, in write
      self.flush()
    File "/usr/lib/python2.7/socket.py", line 303, in flush
      self._sock.sendall(view[write_offset:write_offset+buffer_size])
    File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 307, in sendall
      tail = self.send(data, flags)
    File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 293, in send
      total_sent += fd.send(data[total_sent:], flags)
  error: [Errno 32] Broken pipe

  
  
http://logs.openstack.org/62/100162/3/check/check-tempest-dsvm-full/77badd4/logs/screen-g-api.txt.gz?level=INFO#_2014-07-15_10_55_39_729

  
  Seen all over the gate. Seeing stack traces like this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1342080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251105] Re: test-requirements.txt seems to need correcting

2014-07-15 Thread Zhi Yan Liu
Currently, glance is using psutil>=1.1.1,<2.0.0 and testtools>=0.9.32;
those work for us.

** Changed in: glance
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1251105

Title:
  test-requirements.txt seems to need correcting

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  from  glance-2013.2/test-requirements.txt

  testtools>=0.9.32
  psutil>=0.6.1,<1.0

  This is the SECOND time I have found the stipulated testtools version to be
  wrong (in an OpenStack package's source code). With testtools-0.9.32 installed
  the test suite simply goes belly up; it reports a missing module on import.
  It works fine with <=dev-python/testtools-0.9.24-r1, which would translate
  into testtools<=0.9.24. I wonder, is the installed testtools-0.9.32 a
  different package to the one you use?

  Second: our versions of psutil from 0.6.1 up all seem to do fine.

  Hmm. glance-2013.1.4/ has no test-requirements.txt. Tests of TestApi,
  TestSSL and more yield ERROR and FAIL; however, being 2013.1.x, I'm
  guessing this is simply passé, a memory of the past.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1251105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342086] [NEW] DBDeadlock in delete_instance

2014-07-15 Thread Joe Gordon
Public bug reported:

2014-07-15 11:11:59.993 ERROR nova.api.openstack [req-cc826bd5-7d9a-491d-bdde-c64a70a0a630 TelemetryNotificationAPITestXML-21795205 TelemetryNotificationAPITestXML-1445201375] Caught error: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE instances SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE instances.deleted = %s AND instances.uuid = %s AND instances.host IS NULL' (datetime.datetime(2014, 7, 15, 11, 11, 59, 979427), 0, 'bbb916b2-9854-4c5e-9687-515352f3e4df')
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack Traceback (most recent call last):
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/__init__.py", line 125, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return req.get_response(self.application)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     application, catch_exc_info=False)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in call_application
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return resp(environ, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 661, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return self._app(env, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return resp(environ, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return resp(environ, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     response = self.app(environ, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return resp(environ, start_response)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 906, in __call__
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     content_type, body, accept)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 972, in _process_stack
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     action_result = self.dispatch(meth, request, action_args)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 1056, in dispatch
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return method(req=request, **action_args)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 1201, in delete
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     self._delete(req.environ['nova.context'], req, id)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 1030, in _delete
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     self.compute_api.delete(context, instance)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 192, in wrapped
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return func(self, context, target, *args, **kwargs)
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 182, in inner
2014-07-15 11:11:59.993 20784 TRACE nova.api.openstack     return function(self, context, instance, *args, **kwargs)
2014-07-15 11:11:59.993

[Yahoo-eng-team] [Bug 1342108] [NEW] Missing options in ml2_conf.ini template

2014-07-15 Thread Robert van Leeuwen
Public bug reported:

To get the ml2 config with openvswitch to work I needed to set the
following:

[ovs]
bridge_mappings = default:br-eth1 (for example)

These values are not in the documentation or the example.

(Running on SL6 with RDO )
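A minimal sketch of the missing section (the physical-network name `default` and the bridge `br-eth1` are just the reporter's example values; adjust them to your deployment):

```ini
[ovs]
# Map each provider/physical network name to the OVS bridge on this host.
bridge_mappings = default:br-eth1
```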

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342108

Title:
  Missing options in ml2_conf.ini template

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  To get the ml2 config with openvswitch to work I needed to set the
  following:

  [ovs]
  bridge_mappings = default:br-eth1 (for example)

  These values are not in the documentation or the example.

  (Running on SL6 with RDO )

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342134] [NEW] Rename Host in trove launch dialog

2014-07-15 Thread Andrew Bramley
Public bug reported:

In the Database details page / Users tab there is a column called
'Allowed Hosts' - this is the MySQL setting.

In the Launch Databases dialog this setting is just called 'Host' which
can be confused with the compute Host which is displayed elsewhere in
the UI.

Rename 'Host' in the Launch Dialog to 'Allowed Hosts'

** Affects: horizon
 Importance: Undecided
 Assignee: Andrew Bramley (andrlw)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Andrew Bramley (andrlw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1342134

Title:
  Rename Host in trove launch dialog

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Database details page / Users tab there is a column called
  'Allowed Hosts' - this is the MySQL setting.

  In the Launch Databases dialog this setting is just called 'Host'
  which can be confused with the compute Host which is displayed
  elsewhere in the UI.

  Rename 'Host' in the Launch Dialog to 'Allowed Hosts'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1342134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341777] Re: HTTPConnectionPool is full warning should be a debug message

2014-07-15 Thread Matt Riedemann
The n-cpu and c-vol issues are from this change to python-glanceclient
on 7/11 to use requests:

https://github.com/openstack/python-
glanceclient/commit/dbb242b776908ca50ed8557ebfe7cfcd879366c8

That lines up with what we're seeing in the g-api logs with swiftclient
and bug 1295812, and the times line up with logstash for n-cpu/c-vol
logs and the fact they both use python-glanceclient.
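One plausible remedy (my assumption, not a fix confirmed in this thread) is for glanceclient to mount a `requests` adapter with a larger connection pool, so keep-alive connections are reused instead of being discarded with the "pool is full" warning:

```python
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# urllib3's default pool holds 10 connections per host; returning an 11th
# triggers the "HttpConnectionPool is full, discarding connection" warning.
# A larger pool_maxsize lets concurrent requests reuse connections instead.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=100)
session.mount("http://", adapter)
session.mount("https://", adapter)
```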

** Summary changed:

- HTTPConnectionPool is full warning should be a debug message
+ glanceclient is not handling http connection pools properly with 
python-requests

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1341777

Title:
  glanceclient is not handling http connection pools properly with
  python-requests

Status in Python client library for Glance:
  New

Bug description:
  This:

  message:"HttpConnectionPool is full, discarding connection: 127.0.0.1"
  AND tags:"screen-g-api.txt"

  shows up nearly 420K times in 7 days in Jenkins runs, on 100%
  successful jobs, so it's not a very good warning if it's expected to
  happen this much.  It should be debug level if it's this pervasive,
  unless it's masking something else, but the logstash trends don't
  suggest that.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSHR0cENvbm5lY3Rpb25Qb29sIGlzIGZ1bGwsIGRpc2NhcmRpbmcgY29ubmVjdGlvbjogMTI3LjAuMC4xXCIgQU5EIHRhZ3M6XCJzY3JlZW4tZy1hcGkudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDUzNzE0NTU4NTZ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1341777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2014-07-15 Thread Dmitry Mescheryakov
** No longer affects: fuel

** Changed in: mos
   Status: Confirmed => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer icehouse series:
  Fix Released
Status in Cinder:
  New
Status in Orchestration API (Heat):
  Confirmed
Status in Mirantis OpenStack:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  There is currently no method built into Nova to keep connections from
  various components into RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the connection
  can/does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP
  keepalives.

  2. Rabbit has heartbeat functionality.  If the client requests
  heartbeats on connection, rabbit server will regularly send messages
  to each connections with the expectation of a response.

  3. Other?
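Mitigation 1 can be sketched as follows (a sketch only: the TCP_KEEP* option names are Linux-specific, and in practice kombu/amqp would have to apply this to its own transport socket):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    """Turn on TCP keepalives so a stateful firewall between the client and
    RabbitMQ sees periodic traffic on otherwise idle connections."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The fine-grained knobs exist on Linux; other platforms may lack them.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
```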

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342143] [NEW] ml2: tunnel_id_ranges and vni_ranges values are not validated

2014-07-15 Thread Elena Ezhova
Public bug reported:

Values of the ranges of GRE tunnel IDs and VXLAN VNI IDs are not validated and can be set in the following ways:
vni_ranges = -11:20 (tun_min < 0)
vni_ranges = 11:2 (tun_min > tun_max)

Some checks need to be added to the _parse_tunnel_ranges method [1]

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L52
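The missing checks could look roughly like this (an illustrative sketch, not the actual neutron patch; `parse_tunnel_range` is a hypothetical helper name):

```python
def parse_tunnel_range(entry):
    """Parse a 'min:max' tunnel/VNI range and validate it.

    Sketch of the checks that could be added to ml2's
    _parse_tunnel_ranges; the function name here is illustrative.
    """
    try:
        tun_min, tun_max = (int(v) for v in entry.split(":"))
    except ValueError:
        raise ValueError("range %r must look like <min>:<max>" % entry)
    if tun_min < 0:
        raise ValueError("range %r: minimum must be >= 0" % entry)
    if tun_min > tun_max:
        raise ValueError("range %r: minimum exceeds maximum" % entry)
    return tun_min, tun_max

print(parse_tunnel_range("11:20"))  # → (11, 20)
```

With these checks, the bad examples above ("-11:20" and "11:2") are rejected at config-parsing time instead of silently accepted.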

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New


** Tags: ml2

** Tags added: ml2

** Summary changed:

- tunnel_id_ranges and vni_ranges values are not validated
+ ml2: tunnel_id_ranges and vni_ranges values are not validated

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342143

Title:
  ml2: tunnel_id_ranges and vni_ranges values are not validated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Values of ranges of GRE tunnel IDs and VXLAN VNI IDs are not validated and 
can be set in the following ways:
  vni_ranges = -11:20 (tun_min < 0)
  vni_ranges = 11:2 (tun_min > tun_max)

  Some checks need to be added to the _parse_tunnel_ranges method [1]

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L52

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1342143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342142] [NEW] Neutron metering agent doesn't work with more than one network node

2014-07-15 Thread Bellantuono Daniel
Public bug reported:

Hi Guys,
With more than one L3 agent node, the neutron metering agent returns this
error:

2014-07-15 12:20:56.005 12584 ERROR neutron.services.metering.agents.metering_agent [req-121072ee-794b-4272-b8a9-b1a7ada7efe0 None] Driver neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver:get_traffic_counters runtime error
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent Traceback (most recent call last):
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py", line 177, in _invoke_driver
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     return getattr(self.metering_driver, func_name)(context, meterings)
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     return method(*args, **kwargs)
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/services/metering/drivers/iptables/iptables_driver.py", line 272, in get_traffic_counters
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     chain, wrap=False, zero=True)
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", line 627, in get_traffic_counters
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     root_helper=self.root_helper))
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 76, in execute
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     raise RuntimeError(m)
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent RuntimeError:
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-c1b46e53-08ac-458f-bc80-3d05ac3d97a3', 'iptables', '-t', 'filter', '-L', 'neutron-meter-l-9263ed54-f97', '-n', '-v', '-x', '-Z']
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent Exit code: 1
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent Stdout: ''
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent Stderr: 'Cannot open network namespace: No such file or directory\n'
2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent


This happens because each network node expects to have all of the network
namespaces, but that is not the case: the router namespaces are divided across
multiple nodes.
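A defensive sketch of what the agent could do: only collect counters for
routers whose qrouter namespace actually exists on the local node. The helper
names below are hypothetical, not Neutron's actual code:

```python
import subprocess

NS_PREFIX = "qrouter-"

def local_namespaces():
    """Return the set of network namespace names present on this host."""
    out = subprocess.check_output(["ip", "netns", "list"]).decode()
    return {line.split()[0] for line in out.splitlines() if line.strip()}

def routers_hosted_here(routers, namespaces=None):
    """Keep only routers whose qrouter-<id> namespace exists locally.

    `namespaces` can be injected for testing; by default the local
    namespace list is queried via `ip netns list`.
    """
    ns = local_namespaces() if namespaces is None else namespaces
    return [r for r in routers if NS_PREFIX + r["id"] in ns]
```

Filtering like this would let each metering agent meter only the routers it
actually hosts instead of failing on missing namespaces.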

Do you have any idea to fix this bug?

Daniel

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342142

Title:
  Neutron metering agent doesn't work with more than one network node

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi Guys,
  With more than one L3 agent node, the neutron metering agent returns this
  error:

  2014-07-15 12:20:56.005 12584 ERROR neutron.services.metering.agents.metering_agent [req-121072ee-794b-4272-b8a9-b1a7ada7efe0 None] Driver neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver:get_traffic_counters runtime error
  2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent Traceback (most recent call last):
  2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py", line 177, in _invoke_driver
  2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     return getattr(self.metering_driver, func_name)(context, meterings)
  2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
  2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent     return method(*args, **kwargs)
  2014-07-15 12:20:56.005 12584 TRACE neutron.services.metering.agents.metering_agent   File "/usr/lib/python2.7/dist-packages/neutron/services/metering/drivers/iptables/iptables_driver.py", line 272, in get_traffic_counters
  2014-07-15 12:20:56.005 12584 TRACE 

[Yahoo-eng-team] [Bug 1282858] Re: InstanceInfoCacheNotFound while cleanup running deleted instances

2014-07-15 Thread Hans Lindgren
That is correct, thanks.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1282858

Title:
  InstanceInfoCacheNotFound while cleanup running deleted instances

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  reproduce steps:
  1. create an instance
  2. stop nova-compute and wait for it to show as XXX in `nova-manage service list`
  3. delete the instance
  You should also set these two config options in nova.conf:
  running_deleted_instance_poll_interval=60
  running_deleted_instance_action = reap

  2014-02-21 10:57:14.915 DEBUG nova.network.api [req-60f769f1-0a53-4f0b-817f-a04dee2ab1af None None] Updating cache with info: [] from (pid=13440) update_instance_cache_with_nw_info /opt/stack/nova/nova/network/api.py:70
  2014-02-21 10:57:14.920 ERROR nova.network.api [req-60f769f1-0a53-4f0b-817f-a04dee2ab1af None None] [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66] Failed storing info cache
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66] Traceback (most recent call last):
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/network/api.py", line 81, in update_instance_cache_with_nw_info
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]     ic.save(update_cells=update_cells)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/objects/base.py", line 151, in wrapper
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]     return fn(self, ctxt, *args, **kwargs)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/objects/instance_info_cache.py", line 91, in save
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]     {'network_info': nw_info_json})
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/db/api.py", line 864, in instance_info_cache_update
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]     return IMPL.instance_info_cache_update(context, instance_uuid, values)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 128, in wrapper
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]     return f(*args, **kwargs)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 2308, in instance_info_cache_update
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]     instance_uuid=instance_uuid)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66] InstanceInfoCacheNotFound: Info cache for instance d150ab27-3a6a-4003-ac42-51a7c56ece66 could not be found.
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66]
  2014-02-21 10:57:16.724 INFO nova.virt.libvirt.driver [-] [instance: d150ab27-3a6a-4003-ac42-51a7c56ece66] Instance destroyed successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1282858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342166] [NEW] Lock wait timeout updating floatingips

2014-07-15 Thread Eugene Nikanorov
Public bug reported:

Traceback:

ERROR neutron.api.v2.resource [req-64dccdfe-083d-437b-8d9c-f6709d77de6c None] update failed
TRACE neutron.api.v2.resource Traceback (most recent call last):
TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
TRACE neutron.api.v2.resource     result = method(request=request, **args)
TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 529, in update
TRACE neutron.api.v2.resource     obj = obj_updater(request.context, id, **kwargs)
TRACE neutron.api.v2.resource   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 792, in update_floatingip
TRACE neutron.api.v2.resource     context.elevated(), fip_port_id))
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 447, in __exit__
TRACE neutron.api.v2.resource     self.rollback()
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in __exit__
TRACE neutron.api.v2.resource     compat.reraise(exc_type, exc_value, exc_tb)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 444, in __exit__
TRACE neutron.api.v2.resource     self.commit()
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 354, in commit
TRACE neutron.api.v2.resource     self._prepare_impl()
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 334, in _prepare_impl
TRACE neutron.api.v2.resource     self.session.flush()
TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 453, in _wrap
TRACE neutron.api.v2.resource     return f(self, *args, **kwargs)
TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 737, in flush
TRACE neutron.api.v2.resource     return super(Session, self).flush(*args, **kwargs)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, in flush
TRACE neutron.api.v2.resource     self._flush(objects)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, in _flush
TRACE neutron.api.v2.resource     transaction.rollback(_capture_exception=True)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in __exit__
TRACE neutron.api.v2.resource     compat.reraise(exc_type, exc_value, exc_tb)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1900, in _flush
TRACE neutron.api.v2.resource     flush_context.execute()
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute
TRACE neutron.api.v2.resource     rec.execute(self)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 525, in execute
TRACE neutron.api.v2.resource     uow
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 59, in save_obj
TRACE neutron.api.v2.resource     mapper, table, update)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 495, in _emit_update_statements
TRACE neutron.api.v2.resource     execute(statement, params)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in execute
TRACE neutron.api.v2.resource     params)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in _execute_clauseelement
TRACE neutron.api.v2.resource     compiled_sql, distilled_params
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in _execute_context
TRACE neutron.api.v2.resource     context)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in _handle_dbapi_exception
TRACE neutron.api.v2.resource     exc_info
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in raise_from_cause
TRACE neutron.api.v2.resource     reraise(type(exception), exception, tb=exc_tb)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
TRACE neutron.api.v2.resource     context)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 324, in do_execute
TRACE neutron.api.v2.resource     cursor.execute(statement, parameters)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
TRACE neutron.api.v2.resource     self.errorhandler(self, exc, 

[Yahoo-eng-team] [Bug 1188202] Re: add_user_to_group should return 409 if conflict

2014-07-15 Thread Dolph Mathews
** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1188202

Title:
  add_user_to_group should return 409 if conflict

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  If you add a user to the same group twice, it just returns 204; I think
  409 is better.
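  The proposed semantics, as a minimal sketch with status codes returned as
  plain integers (illustrative only, not Keystone's actual controller code):

```python
# Hypothetical in-memory membership store illustrating the requested
# behavior: a second identical add returns 409 instead of 204.
def add_user_to_group(groups, group_id, user_id):
    members = groups.setdefault(group_id, set())
    if user_id in members:
        return 409  # Conflict: the membership already exists
    members.add(user_id)
    return 204  # No Content: the membership was created
```

  The first add of a given (group, user) pair succeeds with 204; repeating
  it signals the conflict explicitly.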

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1188202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258575] Re: Unused index on token table

2014-07-15 Thread Dolph Mathews
** Changed in: keystone
 Assignee: Dolph Mathews (dolph) => (unassigned)

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1258575

Title:
  Unused index on token table

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The 'valid' index on the token table in the SQL driver doesn't appear
  to be used. The 'expires' index is used, and the 'expires + valid'
  index is used, but we never query on 'valid' alone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1258575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1079154] Re: limit users not working

2014-07-15 Thread Dolph Mathews
** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1079154

Title:
  limit users not working

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  from keystoneclient.v2_0 import client

  keystone = client.Client(authenticate here)

  keystone.users.list(limit=1)

  is not working; it seems not to be implemented. Help is available:

  list(self, tenant_id=None, limit=None, marker=None) method of
  keystoneclient.v2_0.users.UserManager instance

  The corresponding function

  keystone.tenants.list(limit=1)

  works without problems.

  -

  This bug relates to keystone, not to the keystone client. The client
  sends a valid request, but the server doesn't process the limit for users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1079154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341458] Re: Bad LOG format

2014-07-15 Thread Oleg Bondarev
LOG.debug("DHCP Agent not found on host %s", host) is a common way to write
log messages in neutron; why do you think it should be ".. % host)"?
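The reason the argument-passing style is the convention: the logging module
interpolates its arguments lazily, only when the record will actually be
emitted. A small self-contained demonstration (not Neutron code):

```python
import logging

LOG = logging.getLogger("neutron.demo")
LOG.setLevel(logging.INFO)  # DEBUG is disabled, as in a production config

class Expensive(object):
    """Records whether the logging machinery ever formatted it."""
    rendered = False
    def __str__(self):
        Expensive.rendered = True
        return "host-1"

# Lazy style: the argument is only interpolated if DEBUG is enabled,
# so this line does essentially no formatting work here.
LOG.debug("DHCP Agent not found on host %s", Expensive())
assert Expensive.rendered is False

# Eager style: "%" builds the full string before logging checks the level.
LOG.debug("DHCP Agent not found on host %s" % Expensive())
assert Expensive.rendered is True
```

With the `%` form, the string is always built, even when debug logging is
off; with the comma form, the cost is skipped entirely.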

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341458

Title:
  Bad LOG format

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  we have some log statements like the following:
  LOG.debug("DHCP Agent not found on host %s", host)

  It should be LOG.debug("DHCP Agent not found on host %s" % host)

  I go through the neutron code and find the same problem is in the following 
files:
  agentschedulers_db.py
  mechanism_fslsdn.py
  cisco_csr_mock.py
  fake.py
  database_stubs.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1049363] Re: Fragile Test: glance.tests.functional.test_bin_glance:TestBinGlance.test_killed_image_not_in_index

2014-07-15 Thread Zhi Yan Liu
This test case is no longer in the current glance codebase; feel free to
file a new report if you hit a similar issue again, thanks.

** Changed in: glance
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1049363

Title:
  Fragile Test:
  
glance.tests.functional.test_bin_glance:TestBinGlance.test_killed_image_not_in_index

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Seen here http://logs.openstack.org/12760/3/gate/gate-glance-
  python27/1827/console.html:

  20:33:51  ======================================================================
  20:33:51  ERROR: We test conditions that produced LP Bug #768969, where an image
  20:33:51  ----------------------------------------------------------------------
  20:33:51  Traceback (most recent call last):
  20:33:51    File "/home/jenkins/workspace/gate-glance-python27/glance/tests/functional/test_bin_glance.py", line 574, in test_killed_image_not_in_index
  20:33:51      exitcode, out, err = execute(cmd)
  20:33:51    File "/home/jenkins/workspace/gate-glance-python27/glance/tests/utils.py", line 255, in execute
  20:33:51      raise RuntimeError(msg)
  20:33:51  RuntimeError: Command "bin/glance --port=39745 index" did not succeed. Returned an exit code of 1.
  20:33:51  
  20:33:51  STDOUT: Failed to show index. Got error:
  20:33:51  [Errno 111] Connection refused

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1049363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342220] [NEW] pkg_resources.DistributionNotFound: virtualenv=1.9.1 on bare-centos6-hpcloud-b3-901545 slave

2014-07-15 Thread Fawad Khaliq
Public bug reported:

Jenkins slaves bare-centos6-hpcloud-b3-901545 and bare-centos6-hpcloud-b4-901362
are buggy, as they do not have the correct packages installed to run builds.
This can be seen at [1][2].

[1]https://jenkins06.openstack.org/job/gate-python-keystoneclient-python26/353/
[2]https://jenkins04.openstack.org/job/gate-neutron-python26/2280/

2014-07-15 15:53:56.626 | Started by user anonymous
2014-07-15 15:53:56.628 | Building remotely on bare-centos6-hpcloud-b3-901545 
in workspace /home/jenkins/workspace/gate-neutron-python26
2014-07-15 15:54:30.412 | [gate-neutron-python26] $ /bin/bash 
/tmp/hudson7323049886741892388.sh
2014-07-15 15:54:31.684 | [gate-neutron-python26] $ /bin/bash -xe 
/tmp/hudson4801467944983281998.sh
2014-07-15 15:54:31.721 | + /usr/local/jenkins/slave_scripts/gerrit-git-prep.sh 
https://review.openstack.org git://git.openstack.org
2014-07-15 15:54:31.721 | Triggered by: https://review.openstack.org/105542
2014-07-15 15:54:31.722 | + [[ ! -e .git ]]
2014-07-15 15:54:31.722 | + ls -a
2014-07-15 15:54:31.723 | .
2014-07-15 15:54:31.723 | ..
2014-07-15 15:54:31.723 | + rm -fr '.[^.]*' '*'
2014-07-15 15:54:31.724 | + '[' -d /opt/git/openstack/neutron/.git ']'
2014-07-15 15:54:31.724 | + git clone file:///opt/git/openstack/neutron .
2014-07-15 15:54:31.724 | Initialized empty Git repository in 
/home/jenkins/workspace/gate-neutron-python26/.git/
2014-07-15 15:54:51.015 | + git remote set-url origin 
git://git.openstack.org/openstack/neutron
2014-07-15 15:54:51.017 | + git remote update
2014-07-15 15:54:51.019 | Fetching origin
2014-07-15 15:54:52.524 | From git://git.openstack.org/openstack/neutron
2014-07-15 15:54:52.526 |  * [new branch]      stable/havana   -> origin/stable/havana
2014-07-15 15:54:52.526 |  * [new branch]      stable/icehouse -> origin/stable/icehouse
2014-07-15 15:54:52.526 | + git reset --hard
2014-07-15 15:54:52.648 | HEAD is now at 2d4b75b Merge "Add -s option for neutron metering rules"
2014-07-15 15:54:52.649 | + git clean -x -f -d -q
2014-07-15 15:54:52.655 | + echo 
refs/zuul/master/Z4f38c6ec4f09451f856a3d642a5f7388
2014-07-15 15:54:52.655 | + grep -q '^refs/tags/'
2014-07-15 15:54:52.657 | + '[' -z '' ']'
2014-07-15 15:54:52.657 | + git fetch 
http://zm02.openstack.org/p/openstack/neutron 
refs/zuul/master/Z4f38c6ec4f09451f856a3d642a5f7388
2014-07-15 15:54:55.589 | From http://zm02.openstack.org/p/openstack/neutron
2014-07-15 15:54:55.589 |  * branch            refs/zuul/master/Z4f38c6ec4f09451f856a3d642a5f7388 -> FETCH_HEAD
2014-07-15 15:54:55.590 | + git checkout FETCH_HEAD
2014-07-15 15:54:55.616 | Note: checking out 'FETCH_HEAD'.
2014-07-15 15:54:55.616 |
2014-07-15 15:54:55.616 | You are in 'detached HEAD' state. You can look 
around, make experimental
2014-07-15 15:54:55.616 | changes and commit them, and you can discard any 
commits you make in this
2014-07-15 15:54:55.616 | state without impacting any branches by performing 
another checkout.
2014-07-15 15:54:55.616 |
2014-07-15 15:54:55.616 | If you want to create a new branch to retain commits 
you create, you may
2014-07-15 15:54:55.616 | do so (now or later) by using -b with the checkout 
command again. Example:
2014-07-15 15:54:55.617 |
2014-07-15 15:54:55.617 |   git checkout -b new_branch_name
2014-07-15 15:54:55.617 |
2014-07-15 15:54:55.617 | HEAD is now at fc5f2a9... Merge commit 
'refs/changes/42/105542/6' of 
ssh://review.openstack.org:29418/openstack/neutron into HEAD
2014-07-15 15:54:55.617 | + git reset --hard FETCH_HEAD
2014-07-15 15:54:55.632 | HEAD is now at fc5f2a9 Merge commit 
'refs/changes/42/105542/6' of 
ssh://review.openstack.org:29418/openstack/neutron into HEAD
2014-07-15 15:54:55.633 | + git clean -x -f -d -q
2014-07-15 15:54:55.639 | + '[' -f .gitmodules ']'
2014-07-15 15:54:56.013 | [gate-neutron-python26] $ /bin/bash -xe 
/tmp/hudson691292143843068196.sh
2014-07-15 15:54:56.053 | + /usr/local/jenkins/slave_scripts/run-unittests.sh 
26 openstack neutron
2014-07-15 15:54:56.053 | + version=26
2014-07-15 15:54:56.053 | + org=openstack
2014-07-15 15:54:56.053 | + project=neutron
2014-07-15 15:54:56.053 | + source /usr/local/jenkins/slave_scripts/functions.sh
2014-07-15 15:54:56.054 | + check_variable_version_org_project 26 openstack 
neutron /usr/local/jenkins/slave_scripts/run-unittests.sh
2014-07-15 15:54:56.054 | + version=26
2014-07-15 15:54:56.054 | + org=openstack
2014-07-15 15:54:56.054 | + project=neutron
2014-07-15 15:54:56.054 | + 
filename=/usr/local/jenkins/slave_scripts/run-unittests.sh
2014-07-15 15:54:56.054 | + [[ -z 26 ]]
2014-07-15 15:54:56.054 | + [[ -z openstack ]]
2014-07-15 15:54:56.054 | + [[ -z neutron ]]
2014-07-15 15:54:56.054 | + venv=py26
2014-07-15 15:54:56.054 | + export NOSE_WITH_XUNIT=1
2014-07-15 15:54:56.054 | + NOSE_WITH_XUNIT=1
2014-07-15 15:54:56.055 | + export NOSE_WITH_HTML_OUTPUT=1
2014-07-15 15:54:56.055 | + NOSE_WITH_HTML_OUTPUT=1
2014-07-15 15:54:56.055 | + export NOSE_HTML_OUT_FILE=nose_results.html
2014-07-15 15:54:56.055 | + 

[Yahoo-eng-team] [Bug 1334233] Re: compute_manager network allocation retries not handled properly

2014-07-15 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334233

Title:
  compute_manager network allocation retries not handled properly

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  In manager.py, ComputeManager has a method _allocate_network_async
  that uses the CONF parameter network_allocate_retries. While this
  method retries, the logic used is not correct, as listed below:

  retry_time *= 2
  if retry_time > 30:
      retry_time = 30

  This bug is filed to correct it as follows:

  if retry_time > 30:
      retry_time = 30
  else:
      retry_time *= 2

  This will avoid recalculating the retry timeout once it has already
  reached the 30-second cap.
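  The proposed behavior amounts to a capped exponential backoff, which can
  be sketched as follows (illustrative, not the actual Nova patch):

```python
# Capped exponential backoff: the delay doubles on each retry but
# stops growing once it reaches the cap.
def backoff_times(base=1, cap=30, attempts=8):
    times, retry_time = [], base
    for _ in range(attempts):
        times.append(min(retry_time, cap))
        if retry_time < cap:
            retry_time *= 2
    return times

print(backoff_times())  # [1, 2, 4, 8, 16, 30, 30, 30]
```

  Once the delay hits the cap, no further doubling is computed, which is
  exactly what the report asks for.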

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342235] [NEW] Launching an instance from image, insufficient flavors not filtered

2014-07-15 Thread Tzach Shefi
Public bug reported:

Description of problem: While selecting an image from Glance images, you
can click on the Launch button to boot an instance. This opens up an
instance boot page, but on this page flavors with insufficient disk size are
not automatically grayed out. However, if you reach this same location
from the instances page and choose an image, assuming the image has a min
disk setting, insufficient flavors are grayed out automatically.

Might be a case for another bug: on the instance boot page, image selection
should be given a higher priority in the flow, so that the flavor list is
updated automatically, rather than selecting a flavor and then choosing an
image only to find that the disk is too small for the selected image.

Version-Release number of selected component (if applicable):
rhel 6.5
python-django-horizon-2014.1.1-2.el6ost.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Upload an image, make sure to set min disk setting
2. On the images page, click the Launch button of the image created above
3. Notice insufficient flavors are not grayed out.
4. If you start the procedure from the instances page rather than the images page, flavors are grayed out correctly.

Actual results:
All instance flavors are available.

Expected results:
Insufficient flavors should be grayed out automatically, as they are when
launching an instance from the instances page, where selecting an image
causes too-small flavors to become disabled.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1342235

Title:
  Launching an instance from image, insufficient flavors not filtered

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem: While selecting an image from Glance images,
  you can click on the Launch button to boot an instance. This opens up an
  instance boot page, but on this page flavors with insufficient disk size
  are not automatically grayed out. However, if you reach this same
  location from the instances page and choose an image, assuming the image
  has a min disk setting, insufficient flavors are grayed out automatically.

  Might be a case for another bug: on the instance boot page, image
  selection should be given a higher priority in the flow, so that the
  flavor list is updated automatically, rather than selecting a flavor and
  then choosing an image only to find that the disk is too small for the
  selected image.

  Version-Release number of selected component (if applicable):
  rhel 6.5
  python-django-horizon-2014.1.1-2.el6ost.noarch

  How reproducible:
  Every time

  Steps to Reproduce:
  1. Upload an image, make sure to set min disk setting
  2. On the images page, click the Launch button of the image created above
  3. Notice insufficient flavors are not grayed out.
  4. If you start the procedure from the instances page rather than the images page, flavors are grayed out correctly.

  Actual results:
  All instance flavors are available.

  Expected results:
  Insufficient flavors should be grayed out automatically, as they are when
  launching an instance from the instances page, where selecting an image
  causes too-small flavors to become disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1342235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342266] [NEW] IPv4 address sorting is nonsensical

2014-07-15 Thread Nicolas Simonds
Public bug reported:

Horizon uses the default IP address sorter provided with jquery-
tablesorter, which was apparently written by throwing tennis balls at a
keyboard from across the room and saving the results.  It gets things
preposterously wrong.

To Reproduce:

1.  Create a bunch of VMs on a large-ish network, say a /23 or /21
2.  Sort the Instances DataTable by IP address

Expected Behavior:

 Sensible sorting by IP address.

Actual Behaviour:

Hilarity ensues.  Screenshot attached.
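
A sensible ordering falls out of comparing the packed numeric form of each
address instead of the raw strings. A minimal Python sketch of the idea
(illustrative only, not the jquery-tablesorter fix itself; the addresses
are made-up examples):

```python
import socket

addrs = ["10.0.1.2", "10.0.0.10", "192.168.0.1", "10.0.0.9"]

# socket.inet_aton packs a dotted quad into 4 big-endian bytes, so a
# plain byte-wise comparison of the packed values matches numeric order.
sorted_addrs = sorted(addrs, key=socket.inet_aton)
# sorted_addrs == ["10.0.0.9", "10.0.0.10", "10.0.1.2", "192.168.0.1"]
```

The equivalent fix on the JavaScript side would be a custom tablesorter
parser that maps each octet to a zero-padded fixed width before comparing.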

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot of nonsensical IP sort"
   
https://bugs.launchpad.net/bugs/1342266/+attachment/4153327/+files/Screen%20Shot%202014-07-10%20at%202.26.36%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1342266

Title:
  IPv4 address sorting is nonsensical

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon uses the default IP address sorter provided with jquery-
  tablesorter, which was apparently written by throwing tennis balls at
  a keyboard from across the room and saving the results.  It gets
  things preposterously wrong.

  To Reproduce:

  1.  Create a bunch of VMs on a large-ish network, say a /23 or /21
  2.  Sort the Instances DataTable by IP address

  Expected Behavior:

   Sensible sorting by IP address.

  Actual Behaviour:

  Hilarity ensues.  Screenshot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1342266/+subscriptions



[Yahoo-eng-team] [Bug 1342274] [NEW] auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Brant Knudson
Public bug reported:


The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.
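
For most services the migration is a small change; for example, where the
auth filter is wired up in a paste deploy configuration (the file name and
section name vary per project), the factory path simply moves from
keystoneclient to keystonemiddleware:

```
[filter:authtoken]
# before: paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
```

The configuration options accepted by the middleware should stay the same;
only the import/factory path changes.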

** Affects: ceilometer
 Importance: Undecided
 Status: In Progress

** Affects: glance
 Importance: Undecided
 Status: In Progress

** Affects: heat
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

** Affects: neutron
 Importance: Undecided
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Morgan Fainberg
** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Morgan Fainberg
** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Morgan Fainberg
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1279446] Re: Glance image create should handle invalid location more gracefully.

2014-07-15 Thread Raildo Mascena de Sousa Filho
The bug is already solved.

I tried it that way and got an HTTP 400 error, as you may see:

stack@raildo:~/devstack$ glance image-create --name test --location 
'swift://example.com/container/obj'
400 Bad Request
Location is missing user:password information.
(HTTP 400)


** Changed in: glance
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1279446

Title:
  Glance image create should handle invalid location more gracefully.

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  On trying to create an image with an invalid location URI, the
  following error message is returned:
  HTTPInternalServerError (HTTP 500)

  This is not very informative; it should ideally be a 400 Bad Request.

    File "/opt/stack/glance/glance/store/__init__.py", line 273, in get_size_from_backend
      return store.get_size(loc)
    File "/opt/stack/glance/glance/store/swift.py", line 355, in get_size
      connection = self.get_connection(location)
    File "/opt/stack/glance/glance/store/swift.py", line 612, in get_connection
      raise exception.BadStoreUri(message=reason)
  BadStoreUri: Location is missing user:password information.
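
The eventual fix amounts to catching the store error at the API boundary
and translating it into a 400 instead of letting it escape as a 500. A
simplified, self-contained sketch of that idea (not glance's actual code;
the function names here are illustrative):

```python
class BadStoreUri(Exception):
    """Stand-in for glance's BadStoreUri store exception."""


def get_size_from_backend(location):
    # Toy stand-in for the store lookup: a swift URI without embedded
    # user:password credentials is rejected, mirroring the traceback above.
    if location.startswith("swift://") and "@" not in location:
        raise BadStoreUri("Location is missing user:password information.")
    return 0


def image_size_or_400(location):
    """Translate the backend error into a status the client can act on."""
    try:
        return 200, get_size_from_backend(location)
    except BadStoreUri as e:
        # Surface a 400 Bad Request rather than a 500 Internal Server Error.
        return 400, str(e)
```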

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1279446/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Morgan Fainberg
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342293] [NEW] [data processing] Make the job executions page more useful

2014-07-15 Thread Chad Roberts
Public bug reported:

This came up at OS Summit in Atlanta, ideally to be done for Juno.

On the data processing -> Job Executions panel, the uuid for the job
execution is listed as the name.  That isn't particularly meaningful to
a horizon user.  Come up with a more meaningful and unique name that can
be used in lieu of the uuid.

Adding the start/end date and time of the job execution would also be
a good addition.

These changes should be made to be consistent with both the job
executions listing page and the job execution details page.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1342293

Title:
  [data processing] Make the job executions page more useful

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This came up at OS Summit in Atlanta, ideally to be done for Juno.

  On the data processing -> Job Executions panel, the uuid for the job
  execution is listed as the name.  That isn't particularly meaningful
  to a horizon user.  Come up with a more meaningful and unique name
  that can be used in lieu of the uuid.

  Adding the start/end date and time of the job execution would also
  be a good addition.

  These changes should be made to be consistent with both the job
  executions listing page and the job execution details page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1342293/+subscriptions



[Yahoo-eng-team] [Bug 1342309] [NEW] run_tests on stable/icehouse is failing with AttributeError: can't set attribute

2014-07-15 Thread Doug Fish
Public bug reported:

On a fresh clone of Horizon stable/icehouse on an Ubuntu 12.04 machine
there are 870 errors when using run_tests.sh.  The tests seem to fail
similarly to this:

======================================================================
ERROR: test_reject_random_string (openstack_dashboard.test.tests.utils.UtilsFilterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/helpers.py", line 124, in setUp
    test_utils.load_test_data(self)
  File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/utils.py", line 43, in load_test_data
    data_func(load_onto)
  File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/exceptions.py", line 60, in data
    TEST.exceptions.nova_unauthorized = create_stubbed_exception(nova_unauth)
  File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/exceptions.py", line 44, in create_stubbed_exception
    return cls(status_code, msg)
  File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/exceptions.py", line 31, in fake_init_exception
    self.code = code
AttributeError: can't set attribute
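
The likely mechanism (a guess, based on the matching novaclient bug
1340596 below) is that novaclient 2.18 made `code` a read-only property on
its exception classes, so the test helper's `self.code = code` assignment
now fails. The behavior is easy to reproduce in isolation:

```python
class FakeClientException(Exception):
    """Toy exception whose `code` is a read-only property, guessing at
    the shape of the novaclient 2.18 exception classes."""

    def __init__(self, code):
        self._code = code

    @property
    def code(self):
        # No setter is defined, so assignment to `code` raises.
        return self._code


exc = FakeClientException(401)
try:
    exc.code = 500  # same pattern as `self.code = code` in the traceback
    raised = False
except AttributeError:
    raised = True   # e.g. "can't set attribute"
```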

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1342309

Title:
  run_tests on stable/icehouse is failing with AttributeError: can't set
  attribute

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On a fresh clone of Horizon stable/icehouse on an Ubuntu 12.04 machine
  there are 870 errors when using run_tests.sh.  The tests seem to fail
  similarly to this:

  ======================================================================
  ERROR: test_reject_random_string (openstack_dashboard.test.tests.utils.UtilsFilterTests)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/helpers.py", line 124, in setUp
      test_utils.load_test_data(self)
    File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/utils.py", line 43, in load_test_data
      data_func(load_onto)
    File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/exceptions.py", line 60, in data
      TEST.exceptions.nova_unauthorized = create_stubbed_exception(nova_unauth)
    File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/exceptions.py", line 44, in create_stubbed_exception
      return cls(status_code, msg)
    File "/home/drf/stuff/icehouse/horizon/openstack_dashboard/test/test_data/exceptions.py", line 31, in fake_init_exception
      self.code = code
  AttributeError: can't set attribute

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1342309/+subscriptions



[Yahoo-eng-team] [Bug 1340596] Re: Tests fail due to novaclient 2.18 update

2014-07-15 Thread Michael Still
** Changed in: python-novaclient
   Status: In Progress => Fix Committed

** Changed in: python-novaclient
 Milestone: None => 2.18.1

** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340596

Title:
  Tests fail due to novaclient 2.18 update

Status in Orchestration API (Heat):
  Invalid
Status in heat havana series:
  Confirmed
Status in heat icehouse series:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in Python client library for Nova:
  Fix Released

Bug description:
  tests currently fail on stable branches:
  2014-07-11 07:14:28.737 | ======================================================================
  2014-07-11 07:14:28.738 | ERROR: test_index (openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
  2014-07-11 07:14:28.774 | ----------------------------------------------------------------------
  2014-07-11 07:14:28.775 | Traceback (most recent call last):
  2014-07-11 07:14:28.775 |   File "/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py", line 124, in setUp
  2014-07-11 07:14:28.775 |     test_utils.load_test_data(self)
  2014-07-11 07:14:28.775 |   File "/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py", line 43, in load_test_data
  2014-07-11 07:14:28.775 |     data_func(load_onto)
  2014-07-11 07:14:28.775 |   File "/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py", line 60, in data
  2014-07-11 07:14:28.776 |     TEST.exceptions.nova_unauthorized = create_stubbed_exception(nova_unauth)
  2014-07-11 07:14:28.776 |   File "/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py", line 44, in create_stubbed_exception
  2014-07-11 07:14:28.776 |     return cls(status_code, msg)
  2014-07-11 07:14:28.776 |   File "/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py", line 31, in fake_init_exception
  2014-07-11 07:14:28.776 |     self.code = code
  2014-07-11 07:14:28.776 | AttributeError: can't set attribute
  2014-07-11 07:14:28.777 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1340596/+subscriptions



[Yahoo-eng-team] [Bug 1342348] [NEW] Hyper-V agent IDE/SCSI related refactoring necessary

2014-07-15 Thread Claudiu Belu
Public bug reported:

Hyper-V Server 2012 R2 introduces a new feature for virtual machines
named "generation 2", consisting mainly of new firmware and better
support for synthetic devices.

Generation 2 VMs don't support IDE devices, which means that local boot,
ephemeral disks and DVD Drives must be attached as SCSI, instead of IDE
for generation 1 VMs.

Since the Virtual Hard Disks and Virtual CD/DVD Disks can be attached to
IDE controllers or SCSI controllers (generation 2 only), some constants,
variables and methods have been improperly named, having the IDE prefix.

e.g.: _IDE_DISK_RES_SUB_TYPE will have to be renamed to
_HARD_DISK_RES_SUB_TYPE

** Affects: nova
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New


** Tags: hyper-v

** Changed in: nova
  Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342348

Title:
  Hyper-V agent IDE/SCSI related refactoring necessary

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hyper-V Server 2012 R2 introduces a new feature for virtual machines
  named "generation 2", consisting mainly of new firmware and better
  support for synthetic devices.

  Generation 2 VMs don't support IDE devices, which means that local
  boot, ephemeral disks and DVD Drives must be attached as SCSI, instead
  of IDE for generation 1 VMs.

  Since the Virtual Hard Disks and Virtual CD/DVD Disks can be attached
  to IDE controllers or SCSI controllers (generation 2 only), some
  constants, variables and methods have been improperly named, having
  the IDE prefix.

  e.g.: _IDE_DISK_RES_SUB_TYPE will have to be renamed to
  _HARD_DISK_RES_SUB_TYPE

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342348/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Sergey Lukjanov
** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: sahara
   Importance: Undecided => Medium

** Changed in: sahara
  Assignee: (unassigned) => Sergey Lukjanov (slukjanov)

** Changed in: sahara
 Milestone: None => juno-2

** Changed in: sahara
    Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Triaged

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Dolph Mathews
Added keystoneclient so we can have it emit a deprecation warning on
startup.

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
  Assignee: (unassigned) => Dolph Mathews (dolph)

** Changed in: python-keystoneclient
   Importance: Undecided => Medium

** Changed in: python-keystoneclient
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  In Progress
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Triaged

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342380] [NEW] intermittent libvirtError Device or resource busy

2014-07-15 Thread melanie witt
Public bug reported:

Example error from nova-compute log:

libvirtError: Failed to terminate process 5962 with SIGKILL: Device or
resource busy

I saw this fail a check build job. It seems to fail the job only if the
error doesn't occur during _shutdown_instance (in which case it's
ignored by tempest, I guess).

logstash query: message:("libvirtError: Failed to terminate process" NOT
"in _shutdown_instance") AND tags:"screen-n-cpu.txt"

14 hits in 7 days, check and gate, all failures.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOihcImxpYnZpcnRFcnJvcjogRmFpbGVkIHRvIHRlcm1pbmF0ZSBwcm9jZXNzXCIgTk9UIFwiaW4gX3NodXRkb3duX2luc3RhbmNlXCIpIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI5MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA1NDYxMjE5MzI2fQ==

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342380

Title:
  intermittent libvirtError Device or resource busy

Status in OpenStack Compute (Nova):
  New

Bug description:
  Example error from nova-compute log:

  libvirtError: Failed to terminate process 5962 with SIGKILL: Device or
  resource busy

  I saw this fail a check build job. It seems to fail the job only if
  the error doesn't occur during _shutdown_instance (in which case it's
  ignored by tempest, I guess).

  logstash query: message:("libvirtError: Failed to terminate process"
  NOT "in _shutdown_instance") AND tags:"screen-n-cpu.txt"

  14 hits in 7 days, check and gate, all failures.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOihcImxpYnZpcnRFcnJvcjogRmFpbGVkIHRvIHRlcm1pbmF0ZSBwcm9jZXNzXCIgTk9UIFwiaW4gX3NodXRkb3duX2luc3RhbmNlXCIpIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI5MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA1NDYxMjE5MzI2fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342380/+subscriptions



[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-15 Thread Devananda van der Veen
** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: ironic
  Assignee: (unassigned) => Devananda van der Veen (devananda)

** Changed in: ironic
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  In Progress
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Triaged

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions



[Yahoo-eng-team] [Bug 1342464] [NEW] PEP 8 H305 test unpassable on both Python 2.6 & 2.7

2014-07-15 Thread Corey Wright
Public bug reported:

https://review.openstack.org/105950 introduced (or, more precisely,
removed from flake8's ignore list) H305, which enforces grouping of
Python module imports by type: stdlib, third-party, or project-specific.
The problem is that some modules are third-party in Python 2.6, but
became part of the stdlib in Python 2.7.

For example, under Python 2.6:

./nova/cmd/manage.py:58:1: H305  imports not grouped correctly (argparse: 
third-party, os: stdlib)
./nova/tests/test_utils.py:19:1: H305  imports not grouped correctly (hashlib: 
stdlib, importlib: third-party)
./nova/tests/test_utils.py:20:1: H305  imports not grouped correctly 
(importlib: third-party, os: stdlib)

argparse and importlib are not part of Python 2.6's stdlib (and are
therefore third-party), but were added to Python 2.7's stdlib.

This wasn't detected by the gate because, though Nova is tested against
Python 2.6 by gate-nova-python26, the PEP 8 tests are executed with
gate-nova-pep8, which appears to be using Python 2.7.

My proposed solution is to add "# noqa" to the aforementioned lines
(yes, those appear to be the only three occurrences), though that makes
them invisible to flake8 not only for Python 2.6 but unfortunately for
Python 2.7 too.

I'll try to generate a patch in the next 24 hours unless someone beats
me to it.
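
As a sketch of what the proposed change would look like (assuming a
trailing "# noqa" comment is acceptable to reviewers; the exact placement
follows the three flagged lines, and the surrounding code here is only a
self-contained demo):

```python
import os
import sys

import argparse  # noqa -- stdlib in Python 2.7, third-party under 2.6

# Minimal use of the imports so the example is self-contained.
parser = argparse.ArgumentParser(prog=os.path.basename(sys.argv[0]))
args = parser.parse_args([])  # parse an empty argv for the demo
```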

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342464

Title:
  PEP 8 H305 test unpassable on both Python 2.6 & 2.7

Status in OpenStack Compute (Nova):
  New

Bug description:
  https://review.openstack.org/105950 introduced (or, more precisely,
  removed from flake8's ignore list) H305, which enforces grouping of
  Python module imports by type: stdlib, third-party, or project-
  specific.  The problem is that some modules are third-party in Python
  2.6, but became part of the stdlib in Python 2.7.

  For example, under Python 2.6:

  ./nova/cmd/manage.py:58:1: H305  imports not grouped correctly (argparse: 
third-party, os: stdlib)
  ./nova/tests/test_utils.py:19:1: H305  imports not grouped correctly 
(hashlib: stdlib, importlib: third-party)
  ./nova/tests/test_utils.py:20:1: H305  imports not grouped correctly 
(importlib: third-party, os: stdlib)

  argparse and importlib are not part of Python 2.6's stdlib (and are
  therefore third-party), but were added to Python 2.7's stdlib.

  This wasn't detected by the gate because, though Nova is tested against
  Python 2.6 by gate-nova-python26, the PEP 8 tests are executed with
  gate-nova-pep8, which appears to be using Python 2.7.

  My proposed solution is to add "# noqa" to the aforementioned lines
  (yes, those appear to be the only three occurrences), though that makes
  them invisible to flake8 not only for Python 2.6 but unfortunately for
  Python 2.7 too.

  I'll try to generate a patch in the next 24 hours unless someone beats
  me to it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342464/+subscriptions



[Yahoo-eng-team] [Bug 1342498] [NEW] test_live_migration_* tests fail with fakelibvirt

2014-07-15 Thread Corey Wright
Public bug reported:

With the recent merging of https://review.openstack.org/73428,
test_live_migration_(changes_listen_addresses|raises_exception) requires
fakelibvirt to have the migrateToURI2() method (if libvirt, at least
v0.9.2, is not installed) so that it can be mocked out by those tests,
otherwise the following mox errors occur during unit testing.

UnknownMethodCallError: Method called is not a member of the object:
migrateToURI2

Patch attached (which I may submit properly in the next 24 hours if
someone doesn't beat me to it), with the signature taken from upstream
libvirt and the mocking stolen from fakelibvirt's migrateToURI().
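
A sketch of the shape of that patch (modeled on fakelibvirt's existing
migrateToURI and the six-argument signature of libvirt's
virDomainMigrateToURI2; the classes here are cut-down stand-ins, not the
real fakelibvirt code):

```python
class FakeLibvirtError(Exception):
    """Stand-in for fakelibvirt's libvirtError in this sketch."""


class Domain(object):
    """Cut-down stand-in for fakelibvirt's Domain class."""

    def migrateToURI(self, desturi, flags, dname, bandwidth):
        # Existing behavior: always fail, so tests mock this method out.
        raise FakeLibvirtError("Migration always fails for fake libvirt!")

    def migrateToURI2(self, dconnuri=None, miguri=None, dxml=None,
                      flags=0, dname=None, bandwidth=0):
        # The missing method: same always-fail behavior, with the
        # signature of libvirt's virDomainMigrateToURI2 so mox can
        # stub it out in the live-migration tests.
        raise FakeLibvirtError("Migration always fails for fake libvirt!")
```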

full error log:

$ ./run_tests.sh nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_live_migration_*
Running `tools/with_venv.sh python -m nova.openstack.common.lockutils python setup.py testr --testr-args='--subunit --concurrency 0  nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_live_migration_*'`
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase
test_live_migration_changes_listen_addresses  FAIL
test_live_migration_uses_migrateToURI_without_dest_listen_addrs   OK  0.27
test_live_migration_raises_exception  FAIL
test_live_migration_fails_without_migratable_flag_or_0_addr   OK  3.32
test_live_migration_uses_migrateToURI_without_migratable_flag OK  0.43

Slowest 5 tests took 8.80 secs:
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase
test_live_migration_changes_listen_addresses  2.10
test_live_migration_fails_without_migratable_flag_or_0_addr   3.32
test_live_migration_raises_exception  2.68
test_live_migration_uses_migrateToURI_without_dest_listen_addrs   0.27
test_live_migration_uses_migrateToURI_without_migratable_flag 0.43

==
FAIL: 
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_live_migration_changes_listen_addresses
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
INFO [migrate.versioning.api] 215 - 216... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 216 - 217... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 217 - 218... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 218 - 219... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 219 - 220... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 220 - 221... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 221 - 222... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 222 - 223... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 223 - 224... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 224 - 225... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 225 - 226... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 226 - 227... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 227 - 228... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 228 - 229... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 229 - 230... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 230 - 231... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 231 - 232... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 232 - 233... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 233 - 234... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 234 - 235... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 235 - 236... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 236 - 237... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 237 - 238... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 238 - 239... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 239 - 240... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 240 - 241... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 241 - 242... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 242 - 243... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 243 - 244... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 244 - 245... 
INFO [migrate.versioning.api] done
INFO [nova.network.driver] Loading network driver 'nova.network.linux_net'
INFO [nova.virt.driver] Loading compute driver 'nova.virt.fake.FakeDriver'
}}}

Traceback (most recent call last):
  File "/home/dev/Desktop/nova/.venv/local/lib/python2.7/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File 

[Yahoo-eng-team] [Bug 1342507] [NEW] healing migration doesn't work for ryu CI

2014-07-15 Thread YAMAMOTO Takashi
Public bug reported:

ryu CI started failing after the healing migration change
(https://review.openstack.org/#/c/96438/)

http://180.37.183.32/ryuci/38/96438/41/check/check-tempest-dsvm-ryuplugin/e457d80/logs/devstacklog.txt.gz

2014-07-15 11:27:55.722 | Traceback (most recent call last):
2014-07-15 11:27:55.722 |   File "/usr/local/bin/neutron-db-manage", line 10, in <module>
2014-07-15 11:27:55.722 |     sys.exit(main())
2014-07-15 11:27:55.722 |   File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 171, in main
2014-07-15 11:27:55.722 |     CONF.command.func(config, CONF.command.name)
2014-07-15 11:27:55.722 |   File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 85, in do_upgrade_downgrade
2014-07-15 11:27:55.722 |     do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
2014-07-15 11:27:55.722 |   File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 63, in do_alembic_command
2014-07-15 11:27:55.723 |     getattr(alembic_command, cmd)(config, *args, **kwargs)
2014-07-15 11:27:55.723 |   File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in upgrade
2014-07-15 11:27:55.733 |     script.run_env()
2014-07-15 11:27:55.734 |   File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 199, in run_env
2014-07-15 11:27:55.736 |     util.load_python_file(self.dir, 'env.py')
2014-07-15 11:27:55.737 |   File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 205, in load_python_file
2014-07-15 11:27:55.737 |     module = load_module_py(module_id, path)
2014-07-15 11:27:55.738 |   File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 58, in load_module_py
2014-07-15 11:27:55.738 |     mod = imp.load_source(module_id, path, fp)
2014-07-15 11:27:55.738 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 106, in <module>
2014-07-15 11:27:55.738 |     run_migrations_online()
2014-07-15 11:27:55.738 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 90, in run_migrations_online
2014-07-15 11:27:55.738 |     options=build_options())
2014-07-15 11:27:55.738 |   File "<string>", line 7, in run_migrations
2014-07-15 11:27:55.739 |   File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 681, in run_migrations
2014-07-15 11:27:55.740 |     self.get_context().run_migrations(**kw)
2014-07-15 11:27:55.740 |   File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 225, in run_migrations
2014-07-15 11:27:55.741 |     change(**kw)
2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py", line 32, in upgrade
2014-07-15 11:27:55.741 |     heal_script.heal()
2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py", line 78, in heal
2014-07-15 11:27:55.741 |     execute_alembic_command(el)
2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py", line 93, in execute_alembic_command
2014-07-15 11:27:55.741 |     parse_modify_command(command)
2014-07-15 11:27:55.741 |   File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py", line 126, in parse_modify_command
2014-07-15 11:27:55.741 |     op.alter_column(table, column, **kwargs)
2014-07-15 11:27:55.741 |   File "<string>", line 7, in alter_column
2014-07-15 11:27:55.742 |   File "<string>", line 1, in <lambda>
2014-07-15 11:27:55.742 |   File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 322, in go
2014-07-15 11:27:55.742 |     return fn(*arg, **kw)
2014-07-15 11:27:55.742 |   File "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 300, in alter_column
2014-07-15 11:27:55.743 |     existing_autoincrement=existing_autoincrement
2014-07-15 11:27:55.743 |   File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py", line 42, in alter_column
2014-07-15 11:27:55.743 |     else existing_autoincrement
2014-07-15 11:27:55.743 |   File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in _exec
2014-07-15 11:27:55.744 |     conn.execute(construct, *multiparams, **params)
2014-07-15 11:27:55.744 |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 717, in execute
2014-07-15 11:27:55.745 |     return meth(self, multiparams, params)
2014-07-15 11:27:55.745 |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 67, in _execute_on_connection
2014-07-15 11:27:55.745 |     return connection._execute_ddl(self, multiparams, params)
2014-07-15 11:27:55.745 |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 765, in _execute_ddl
2014-07-15 11:27:55.746 |     compiled = ddl.compile(dialect=dialect)
2014-07-15 11:27:55.746 |   File "<string>", line 1, in <lambda>
2014-07-15 11:27:55.746 |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py",