[Yahoo-eng-team] [Bug 1303591] [NEW] InvalidAggregateAction exception is not handled in v3

2014-04-07 Thread Haiwei Xu
Public bug reported:

When an aggregate whose 'host' attribute is not empty is deleted, an
InvalidAggregateAction exception is raised, but this exception
is not handled.

$ nova --os-compute-api-version 3 aggregate-delete agg5
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<class 'nova.exception.InvalidAggregateAction'> (HTTP 500) (Request-ID: 
req-9a8500ae-379a-4121-b217-7e7ea6188ad0)

2014-04-07 23:56:16.347 ERROR nova.api.openstack.extensions 
[req-f7c09203-a681-496c-a84e-18fb3d2e3659 admin demo] Unexpected exception in 
API method
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/extensions.py, line 472, in wrapped
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/aggregates.py, line 
155, in delete
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions 
self.api.delete_aggregate(context, id)
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions payload)
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions 
six.reraise(self.type_, self.value, self.tb)
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/exception.py, line 71, in wrapped
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions return f(self, 
context, *args, **kw)
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/compute/api.py, line 3363, in delete_aggregate
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions reason='not 
empty')
2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions 
InvalidAggregateAction: Cannot perform action 'delete' on aggregate 3. Reason: 
not empty.
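
For illustration only, a minimal sketch of the kind of handling the v3 aggregates
controller's delete() could add; the exact 4xx mapping and the way the context is
pulled from the request are assumptions, not taken from this report:

    import webob.exc

    from nova import exception

    def delete(self, req, id):
        context = req.environ['nova.context']  # assumed context lookup
        try:
            self.api.delete_aggregate(context, id)
        except exception.AggregateNotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        except exception.InvalidAggregateAction as e:
            # the aggregate still has hosts: surface a 4xx instead of an unhandled 500
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())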

** Affects: nova
 Importance: Undecided
 Assignee: Haiwei Xu (xu-haiwei)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303591

Title:
  InvalidAggregateAction exception is not handled in v3

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When an aggregate whose 'host' attribute is not empty is deleted, an
  InvalidAggregateAction exception is raised, but this exception is not handled.

  $ nova --os-compute-api-version 3 aggregate-delete agg5
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  <class 'nova.exception.InvalidAggregateAction'> (HTTP 500) (Request-ID: 
req-9a8500ae-379a-4121-b217-7e7ea6188ad0)

  2014-04-07 23:56:16.347 ERROR nova.api.openstack.extensions 
[req-f7c09203-a681-496c-a84e-18fb3d2e3659 admin demo] Unexpected exception in 
API method
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/extensions.py, line 472, in wrapped
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/aggregates.py, line 
155, in delete
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions 
self.api.delete_aggregate(context, id)
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions payload)
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions 
six.reraise(self.type_, self.value, self.tb)
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/exception.py, line 71, in wrapped
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions return 
f(self, context, *args, **kw)
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/compute/api.py, line 3363, in delete_aggregate
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions reason='not 
empty')
  2014-04-07 23:56:16.347 TRACE nova.api.openstack.extensions 
InvalidAggregateAction: Cannot perform 

[Yahoo-eng-team] [Bug 1289397] Re: vmware: nova instance delete - show status error

2014-04-07 Thread satyadev svn
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289397

Title:
  vmware:  nova  instance delete - show status error

Status in OpenStack Compute (Nova):
  New

Bug description:
  ssatya@devstack:~$ nova boot --image 1e95fe6b-cec6-4420-97d1-1e7bc8c81c49 --flavor 1 testdummay
  +--------------------------------------+------------------------------------------------------------+
  | Property                             | Value                                                        |
  +--------------------------------------+------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                       |
  | OS-EXT-AZ:availability_zone          | nova                                                         |
  | OS-EXT-STS:power_state               | 0                                                            |
  | OS-EXT-STS:task_state                | networking                                                   |
  | OS-EXT-STS:vm_state                  | building                                                     |
  | OS-SRV-USG:launched_at               | -                                                            |
  | OS-SRV-USG:terminated_at             | -                                                            |
  | accessIPv4                           |                                                              |
  | accessIPv6                           |                                                              |
  | adminPass                            | fK8SPGtHLUds                                                 |
  | config_drive                         |                                                              |
  | created                              | 2014-03-07T14:33:49Z                                         |
  | flavor                               | m1.tiny (1)                                                  |
  | hostId                               | 2c1ae30aa2a235d9c0c8b04aae3f4199cd98356e44a03b5c8f878adb     |
  | id                                   | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e                         |
  | image                                | debian-2.6.32-i686 (1e95fe6b-cec6-4420-97d1-1e7bc8c81c49)    |
  | key_name                             | -                                                            |
  | metadata                             | {}                                                           |
  | name                                 | testdummay                                                   |
  | os-extended-volumes:volumes_attached | []                                                           |
  | progress                             | 0                                                            |
  | security_groups                      | default                                                      |
  | status                               | BUILD                                                        |
  | tenant_id                            | 209ab7e4f3744675924212805db3ad74                             |
  | updated                              | 2014-03-07T14:33:50Z                                         |
  | user_id                              | f3756a4910054883b84ee15acc15fbd1                             |
  +--------------------------------------+------------------------------------------------------------+
  ssatya@devstack:~$ nova list
  +--------------------------------------+------------+--------+------------+-------------+------------------+
  | ID                                   | Name       | Status | Task State | Power State | Networks         |
  +--------------------------------------+------------+--------+------------+-------------+------------------+
  | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e | testdummay | BUILD  | spawning   | NOSTATE     |                  |
  | d1e982c4-85c2-422d-b046-1643bd81e674 | testvm1    | ERROR  | deleting   | Shutdown    | private=10.0.0.2 |
  +--------------------------------------+------------+--------+------------+-------------+------------------+
  ssatya@devstack:~$ nova list
  +--------------------------------------+------------+--------+------------+-------------+------------------+
  | ID                                   | Name       | Status | Task State | Power State | Networks         |
  +--------------------------------------+------------+--------+------------+-------------+------------------+
  | eae503d9-c6f7-4e3e-9adc-0b8b6803c90e | testdummay | ACTIVE | -          | Running     | private=10.0.0.3 |
  | d1e982c4-85c2-422d-b046-1643bd81e674 | testvm1    | ERROR  | deleting   | Shutdown    | private=10.0.0.2 |
  +--------------------------------------+------------+--------+------------+-------------+------------------+
  

[Yahoo-eng-team] [Bug 1303179] Re: Missing comma in nsx router mappings migration

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85629
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ffe5cf1627b36fc32928d4cc50af1fb1336d0297
Submitter: Jenkins
Branch:milestone-proposed

commit ffe5cf1627b36fc32928d4cc50af1fb1336d0297
Author: Henry Gessau ges...@cisco.com
Date:   Sat Apr 5 18:10:52 2014 -0400

Add missing comma in nsx router mappings migration

Change-Id: I85bcc9b7fe636f34dbdf6f8c3172352c8e586e2a
Closes-bug: #1303179
Related-bug: #1207402
(cherry picked from commit acae91475775a8c85598b1bfdc4910e5fe81ced9)


** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303179

Title:
  Missing comma in nsx router mappings migration

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Found during review https://review.openstack.org/40296
  There is a comma missing in the migration_for_plugins list
  
https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic_migrations/versions/4ca36cfc898c_nsx_router_mappings.py#L30
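
  For context, the failure mode here is Python's implicit concatenation of adjacent
  string literals; a sketch with made-up entries (the real module paths are in the
  file linked above):

      # With the comma missing, the two adjacent literals silently merge into a
      # single bogus entry, so the migration never matches one of the plugins.
      migration_for_plugins = [
          'neutron.plugins.fake.PluginA'   # <-- missing comma
          'neutron.plugins.fake.PluginB',
      ]
      # migration_for_plugins == ['neutron.plugins.fake.PluginAneutron.plugins.fake.PluginB']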

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303605] [NEW] test_rollback_on_router_delete fails

2014-04-07 Thread Irena Berezovsky
Public bug reported:

gate-neutron-python26 fails for test_rollback_on_router_delete with the following 
error:
2014-04-07 03:53:51,643ERROR [neutron.plugins.bigswitch.servermanager] 
ServerProxy: POST failure for servers: ('localhost', 9000) Response: {'status': 
'This server is broken, please try another'}
2014-04-07 03:53:51,643ERROR [neutron.plugins.bigswitch.servermanager] 
ServerProxy: Error details: status=500, reason='Internal Server Error', 
ret={'status': 'This server is broken, please try another'}, data={'status': 
'This server is broken, please try another'}
}}}

Traceback (most recent call last):
  File neutron/tests/unit/bigswitch/test_router_db.py, line 536, in 
test_rollback_on_router_delete
expected_code=exc.HTTPInternalServerError.code)
  File neutron/tests/unit/test_db_plugin.py, line 450, in _delete
self.assertEqual(res.status_int, expected_code)
  File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
self.assertThat(observed, matcher, message)
  File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
raise mismatch_error
MismatchError: 204 != 500

full log is here:  
http://logs.openstack.org/29/82729/3/check/gate-neutron-python26/a1065eb/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303605

Title:
  test_rollback_on_router_delete fails

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  gate-neutron-python26 fails for test_rollback_on_router_delete with the 
following error:
  2014-04-07 03:53:51,643ERROR [neutron.plugins.bigswitch.servermanager] 
ServerProxy: POST failure for servers: ('localhost', 9000) Response: {'status': 
'This server is broken, please try another'}
  2014-04-07 03:53:51,643ERROR [neutron.plugins.bigswitch.servermanager] 
ServerProxy: Error details: status=500, reason='Internal Server Error', 
ret={'status': 'This server is broken, please try another'}, data={'status': 
'This server is broken, please try another'}
  }}}

  Traceback (most recent call last):
File neutron/tests/unit/bigswitch/test_router_db.py, line 536, in 
test_rollback_on_router_delete
  expected_code=exc.HTTPInternalServerError.code)
File neutron/tests/unit/test_db_plugin.py, line 450, in _delete
  self.assertEqual(res.status_int, expected_code)
File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
  raise mismatch_error
  MismatchError: 204 != 500

  full log is here:  
  
http://logs.openstack.org/29/82729/3/check/gate-neutron-python26/a1065eb/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303615] [NEW] Instance's Task state never change from scheduling after sending SIGHUP to nova-compute

2014-04-07 Thread Mitsuru Kanabuchi
Public bug reported:

[Issue]

I tried to reload nova.conf by sending SIGHUP to nova-compute.
In my understanding, nova-compute can reload nova.conf on receiving SIGHUP when 
it has been started as a daemon.

Reloading the config succeeds.
However, booting a new instance doesn't work correctly after sending SIGHUP:
the Task State never changes from "scheduling".

$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 9aa89186-28fa-44ee-8e97-84bd0de23f91 | vm   | BUILD  | scheduling | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+

[How to reproduce]

nova's commit id: 33fc957a5aeb9d310cbff3ac22c7a3c97a794f72

1) Start nova-compute as daemon.

$ sudo cat /etc/init/nova-compute.conf
description nova-compute
author openstack

start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]

exec su -s /bin/sh -c "exec /usr/local/bin/nova-compute --config-file
/etc/nova/nova.conf --log-file /home/devstack/log/nova-compute.log
> /dev/null 2>&1" devstack

$ sudo service nova-compute start
nova-compute start/running, process 10521

2) Send SIGHUP to nova-compute's PID.

$ ps aux|grep nova-compute
root 10521  0.0  0.0   4052  1548 ?Ss   15:42   0:00 su -s /bin/sh 
-c exec /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf 
--log-file /home/devstack/log/nova-compute.log > /dev/null 2>&1 devstack
devstack 10523 19.5  1.5 220744 32796 ?Ssl  15:42   0:00 
/usr/bin/python /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf 
--log-file /home/devstack/log/nova-compute.log
$ sudo kill -SIGHUP 10523

3) Verify nova-compute.log and check reload success.

$ cat /home/devstack/log/nova-compute.log
:
2014-04-07 15:46:18.791 INFO nova.openstack.common.service [-] Caught SIGHUP, 
exiting
2014-04-07 15:46:18.811 DEBUG nova.openstack.common.service [-] Full set of 
CONF: from (pid=10523) _wait_for_exit_or_signal 
/opt/stack/nova/nova/openstack/common/service.py:167
:

4) Boot new instance and check Task State of new instance repeatedly.

$ nova boot --flavor 1 --image dee24998-10f7-42a3-8cd7-d46d185281ca vm
+--------------------------------------+--------+
| Property                             | Value  |
+--------------------------------------+--------+
| OS-DCF:diskConfig                    | MANUAL |
| OS-EXT-AZ:availability_zone          | nova   |
:

5) Task State would never change from scheduling.

$ nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 9aa89186-28fa-44ee-8e97-84bd0de23f91 | vm   | BUILD  | scheduling | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+
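
For background, the SIGHUP reload behaviour being exercised here amounts to the
service registering a signal handler that re-reads its configuration files; a
minimal standalone sketch, not the actual nova/oslo code (CONF stands in for an
oslo.config ConfigOpts instance, and reload_config_files() is assumed available):

    import signal

    from oslo.config import cfg

    CONF = cfg.CONF

    def _handle_sighup(signum, frame):
        # Re-read nova.conf (and any other registered config files) in place.
        CONF.reload_config_files()

    signal.signal(signal.SIGHUP, _handle_sighup)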

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303615

Title:
  Instance's Task state never change from scheduling after sending
  SIGHUP to nova-compute

Status in OpenStack Compute (Nova):
  New

Bug description:
  [Issue]

  I tried to reload nova.conf by sending SIGHUP to nova-compute.
  In my understanding, nova-compute can reload nova.conf on receiving SIGHUP 
when it has been started as a daemon.

  Reloading the config succeeds.
  However, booting a new instance doesn't work correctly after sending SIGHUP:
  the Task State never changes from "scheduling".

  $ nova list
  +--------------------------------------+------+--------+------------+-------------+----------+
  | ID                                   | Name | Status | Task State | Power State | Networks |
  +--------------------------------------+------+--------+------------+-------------+----------+
  | 9aa89186-28fa-44ee-8e97-84bd0de23f91 | vm   | BUILD  | scheduling | NOSTATE     |          |
  +--------------------------------------+------+--------+------------+-------------+----------+

  [How to reproduce]

  nova's commit id: 33fc957a5aeb9d310cbff3ac22c7a3c97a794f72

  1) Start nova-compute as daemon.

  $ sudo cat /etc/init/nova-compute.conf
  description nova-compute
  author openstack

  start on (local-filesystems and net-device-up IFACE!=lo)
  stop 

[Yahoo-eng-team] [Bug 1303642] [NEW] Conflict: An object with that identifier already exists ni Tempest test_servers_negative.py

2014-04-07 Thread Thierry Carrez
Public bug reported:

Tempest's test_servers_negative.py sporadically fails in unshelve_server with:
Conflict: An object with that identifier already exists
Cannot 'unshelve' while instance is in vm_state stopped
TimeoutException: Request timed out
failed to reach SHELVED_OFFLOADED status and task state None within the 
required time

2014-04-07 01:44:29.354 | Traceback (most recent call last):
2014-04-07 01:44:29.354 |   File 
tempest/services/compute/v3/json/servers_client.py, line 387, in 
unshelve_server
2014-04-07 01:44:29.354 | return self.action(server_id, 'unshelve', None, 
**kwargs)
2014-04-07 01:44:29.355 |   File 
tempest/services/compute/v3/json/servers_client.py, line 203, in action
2014-04-07 01:44:29.355 | post_body)
2014-04-07 01:44:29.355 |   File tempest/common/rest_client.py, line 201, in 
post
2014-04-07 01:44:29.355 | return self.request('POST', url, headers, body)
2014-04-07 01:44:29.355 |   File tempest/common/rest_client.py, line 443, in 
request
2014-04-07 01:44:29.355 | resp, resp_body)
2014-04-07 01:44:29.355 |   File tempest/common/rest_client.py, line 497, in 
_error_checker
2014-04-07 01:44:29.355 | raise exceptions.Conflict(resp_body)
2014-04-07 01:44:29.355 | Conflict: An object with that identifier already 
exists
2014-04-07 01:44:29.355 | Details: {u'message': u"Cannot 'unshelve' while 
instance is in vm_state stopped", u'code': 409}
2014-04-07 01:44:29.355 | }}}
2014-04-07 01:44:29.356 |
2014-04-07 01:44:29.356 | Traceback (most recent call last):
2014-04-07 01:44:29.356 |   File 
tempest/api/compute/v3/servers/test_servers_negative.py, line 411, in 
test_shelve_shelved_server
2014-04-07 01:44:29.356 | extra_timeout=offload_time)
2014-04-07 01:44:29.356 |   File 
tempest/services/compute/v3/json/servers_client.py, line 167, in 
wait_for_server_status
2014-04-07 01:44:29.356 | raise_on_error=raise_on_error)
2014-04-07 01:44:29.356 |   File tempest/common/waiters.py, line 89, in 
wait_for_server_status
2014-04-07 01:44:29.356 | raise exceptions.TimeoutException(message)
2014-04-07 01:44:29.356 | TimeoutException: Request timed out
2014-04-07 01:44:29.357 | Details: Server 4ff6dc10-eac8-41d2-a645-3a0e0ba07c8a 
failed to reach SHELVED_OFFLOADED status and task state None within the 
required time (196 s). Current status: SHUTOFF. Current task state: None.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- Tempest's test_servers_negative.py fails in unshelve_server with:
+ Tempest's test_servers_negative.py sporadically fails in unshelve_server with:
  Conflict: An object with that identifier already exists
  Cannot 'unshelve' while instance is in vm_state stopped
  TimeoutException: Request timed out
  failed to reach SHELVED_OFFLOADED status and task state None within the 
required time
- 
  
  2014-04-07 01:44:29.354 | Traceback (most recent call last):
  2014-04-07 01:44:29.354 |   File 
tempest/services/compute/v3/json/servers_client.py, line 387, in 
unshelve_server
  2014-04-07 01:44:29.354 | return self.action(server_id, 'unshelve', None, 
**kwargs)
  2014-04-07 01:44:29.355 |   File 
tempest/services/compute/v3/json/servers_client.py, line 203, in action
  2014-04-07 01:44:29.355 | post_body)
  2014-04-07 01:44:29.355 |   File tempest/common/rest_client.py, line 201, 
in post
  2014-04-07 01:44:29.355 | return self.request('POST', url, headers, body)
  2014-04-07 01:44:29.355 |   File tempest/common/rest_client.py, line 443, 
in request
  2014-04-07 01:44:29.355 | resp, resp_body)
  2014-04-07 01:44:29.355 |   File tempest/common/rest_client.py, line 497, 
in _error_checker
  2014-04-07 01:44:29.355 | raise exceptions.Conflict(resp_body)
  2014-04-07 01:44:29.355 | Conflict: An object with that identifier already 
exists
  2014-04-07 01:44:29.355 | Details: {u'message': u"Cannot 'unshelve' while 
instance is in vm_state stopped", u'code': 409}
  2014-04-07 01:44:29.355 | }}}
- 2014-04-07 01:44:29.356 | 
+ 2014-04-07 01:44:29.356 |
  2014-04-07 01:44:29.356 | Traceback (most recent call last):
  2014-04-07 01:44:29.356 |   File 
tempest/api/compute/v3/servers/test_servers_negative.py, line 411, in 
test_shelve_shelved_server
  2014-04-07 01:44:29.356 | extra_timeout=offload_time)
  2014-04-07 01:44:29.356 |   File 
tempest/services/compute/v3/json/servers_client.py, line 167, in 
wait_for_server_status
  2014-04-07 01:44:29.356 | raise_on_error=raise_on_error)
  2014-04-07 01:44:29.356 |   File tempest/common/waiters.py, line 89, in 
wait_for_server_status
  2014-04-07 01:44:29.356 | raise exceptions.TimeoutException(message)
  2014-04-07 01:44:29.356 | TimeoutException: Request timed out
  2014-04-07 01:44:29.357 | Details: Server 
4ff6dc10-eac8-41d2-a645-3a0e0ba07c8a failed to reach SHELVED_OFFLOADED status 
and task state None within the required time (196 s). Current status: 
SHUTOFF. Current task state: None.

-- 
You received this bug notification because you 

[Yahoo-eng-team] [Bug 1303644] [NEW] Horizon errors when creating a valid Heat stack

2014-04-07 Thread Ami Jeain
Public bug reported:

I have done the following:
- Went to the project tab, and Orchestration -> Stacks
- Clicked on Launch Stack and chose a template from file (valid template), using 
the following content:
heat_template_version: 2013-05-23
description: 
  A single stack with a keypair.

parameters:
  key_name:
type: string
default: heat_key
  key_save:
type: string
default: false

resources:
  KeyPair:
type: OS::Nova::KeyPair
properties:
  name: { get_param: key_name }
  save_private_key: { get_param: key_save }

outputs:
  PublicKey:
value: { get_attr: [KeyPair, public_key] }
  PrivateKey:
value: { get_attr: [KeyPair, private_key] }

- gave the stack a simple name and password and hit 'Launch'

=> Horizon errors out with "something went wrong" and the following
error in /var/log/horizon/horizon.log:

2014-04-07 07:55:23,816 7116 ERROR horizon.tables.base Error while rendering 
table rows.
Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1559, in 
get_rows
row = self._meta.row_class(self, datum)
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 476, in 
__init__
self.load_cells()
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 502, in 
load_cells
cell = table._meta.cell_class(datum, column, self)
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 597, in 
__init__
self.data = self.get_data(datum, column, row)
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 635, in 
get_data
data = column.get_data(datum)
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 350, in 
get_data
data = filter_func(data)
  File /usr/lib/python2.7/site-packages/django/utils/timesince.py, line 32, 
in timesince
d = datetime.datetime(d.year, d.month, d.day)
AttributeError: 'str' object has no attribute 'year'
2014-04-07 07:55:23,816 7116 ERROR django.request Internal Server Error: 
/dashboard/project/stacks/
Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
140, in get_response
response = response.render()
  File /usr/lib/python2.7/site-packages/django/template/response.py, line 
105, in render
self.content = self.rendered_content
  File /usr/lib/python2.7/site-packages/django/template/response.py, line 82, 
in rendered_content
content = template.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 140, in 
render
return self._render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 134, in 
_render
return self.nodelist.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, line 
124, in render
return compiled_parent._render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 134, in 
_render
return self.nodelist.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, line 
63, in render
result = block.nodelist.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, line 
63, in render
result = block.nodelist.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 881, in 
render
output = self.filter_expression.resolve(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 578, in 
resolve
obj = self.var.resolve(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 728, in 
resolve
value = self._resolve_lookup(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 779, in 
_resolve_lookup
current = current()
  File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1142, in 
render
return table_template.render(context)
  File /usr/lib/python2.7/site-packages/django/template/base.py, line 140, in 
render
return 

[Yahoo-eng-team] [Bug 1302490] Re: Requirements fail to be synced in milestone-proposed

2014-04-07 Thread Thierry Carrez
** Changed in: openstack-ci
   Status: In Progress => Fix Released

** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302490

Title:
  Requirements fail to be synced in milestone-proposed

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Core Infrastructure:
  Fix Released
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  With our current process around openstack/requirements, no
  requirements sync is ever pushed to milestone-proposed branches. None
  is proposed until the openstack/requirements MP branch is created, and
  when it is, the propose-requirements job fails with:

  git review -t openstack/requirements milestone-proposed
  + OUTPUT='Had trouble running git log --color=auto --decorate --oneline 
milestone-proposed --not remotes/gerrit/milestone-proposed
  fatal: ambiguous argument '\''milestone-proposed'\'': unknown revision or 
path not in the working tree.
  Use '\''--'\'' to separate paths from revisions'

  See https://jenkins.openstack.org/job/propose-requirements-
  updates/153/console as an example (while it lasts)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1302490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303644] Re: Horizon errors when creating a valid Heat stack

2014-04-07 Thread Julie Pichon
*** This bug is a duplicate of bug 1286959 ***
https://bugs.launchpad.net/bugs/1286959

Thank you for the report. This is a duplicate of bug 1286959 and was
fixed in Icehouse RC1. (Please reopen this bug if you're seeing this on
a more recent version.)

** This bug has been marked a duplicate of bug 1286959
   stack.updated_time is None
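
For illustration, the underlying failure is that Heat hands back updated_time as a
string (or None) while Django's timesince filter expects a datetime; a hedged
sketch of a guard, with a hypothetical helper name, not the actual Horizon fix:

    import six
    from django.utils.dateparse import parse_datetime
    from django.utils.timesince import timesince

    def safe_timesince(value):
        # Normalise ISO 8601 strings (or None) before handing them to timesince,
        # which otherwise fails with "'str' object has no attribute 'year'".
        if isinstance(value, six.string_types):
            value = parse_datetime(value)
        return timesince(value) if value else ''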

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1303644

Title:
  Horizon errors when creating a valid Heat stack

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have done the following:
  - Went to the project tab, and Orchestration -> Stacks
  - Clicked on Launch Stack and chose a template from file (valid template), using 
the following content:
  heat_template_version: 2013-05-23
  description: 
A single stack with a keypair.

  parameters:
key_name:
  type: string
  default: heat_key
key_save:
  type: string
  default: false

  resources:
KeyPair:
  type: OS::Nova::KeyPair
  properties:
name: { get_param: key_name }
save_private_key: { get_param: key_save }

  outputs:
PublicKey:
  value: { get_attr: [KeyPair, public_key] }
PrivateKey:
  value: { get_attr: [KeyPair, private_key] }

  - gave the stack a simple name and password and hit 'Launch'

  => Horizon errors out with "something went wrong" and the following
  error in /var/log/horizon/horizon.log:

  2014-04-07 07:55:23,816 7116 ERROR horizon.tables.base Error while rendering 
table rows.
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1559, 
in get_rows
  row = self._meta.row_class(self, datum)
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 476, 
in __init__
  self.load_cells()
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 502, 
in load_cells
  cell = table._meta.cell_class(datum, column, self)
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 597, 
in __init__
  self.data = self.get_data(datum, column, row)
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 635, 
in get_data
  data = column.get_data(datum)
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 350, 
in get_data
  data = filter_func(data)
File /usr/lib/python2.7/site-packages/django/utils/timesince.py, line 32, 
in timesince
  d = datetime.datetime(d.year, d.month, d.day)
  AttributeError: 'str' object has no attribute 'year'
  2014-04-07 07:55:23,816 7116 ERROR django.request Internal Server Error: 
/dashboard/project/stacks/
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
140, in get_response
  response = response.render()
File /usr/lib/python2.7/site-packages/django/template/response.py, line 
105, in render
  self.content = self.rendered_content
File /usr/lib/python2.7/site-packages/django/template/response.py, line 
82, in rendered_content
  content = template.render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 140, 
in render
  return self._render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 134, 
in _render
  return self.nodelist.render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, 
in render
  bit = self.render_node(node, context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, 
in render_node
  return node.render(context)
File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, 
line 124, in render
  return compiled_parent._render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 134, 
in _render
  return self.nodelist.render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, 
in render
  bit = self.render_node(node, context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, 
in render_node
  return node.render(context)
File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, 
line 63, in render
  result = block.nodelist.render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, 
in render
  bit = self.render_node(node, context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, 
in render_node
  return node.render(context)
File /usr/lib/python2.7/site-packages/django/template/loader_tags.py, 
line 63, in render
  result = block.nodelist.render(context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 830, 
in render
  bit = self.render_node(node, context)
File /usr/lib/python2.7/site-packages/django/template/base.py, line 844, 
in 

[Yahoo-eng-team] [Bug 1303663] [NEW] Default security group wrong from Grizzly to IceHouse

2014-04-07 Thread Federico Iezzi
Public bug reported:

Hi Guys,

There is a bug that prevents creation of a true default security group:
the default rules don't permit all traffic in and out.

Below is how I fixed it.

if s.get('name') == 'default':
for ethertype in ext_sg.sg_supported_ethertypes:
# Allow All incoming Connections
ingress_rule = SecurityGroupRule(
id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
security_group=security_group_db,
direction='ingress',
ethertype=ethertype,
remote_ip_prefix='0.0.0.0/0')
context.session.add(ingress_rule)
# Allow All outcoming Connections
egress_rule = SecurityGroupRule(
id=uuidutils.generate_uuid(),
tenant_id=tenant_id,
security_group=security_group_db,
direction='egress',
ethertype=ethertype,
remote_ip_prefix='0.0.0.0/0')
context.session.add(egress_rule)

https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L120

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303663

Title:
  Default security group wrong from Grizzly to IceHouse

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi Guys,

  There is a bug that prevents creation of a true default security group:
  the default rules don't permit all traffic in and out.

  Below is how I fixed it.

  if s.get('name') == 'default':
  for ethertype in ext_sg.sg_supported_ethertypes:
  # Allow All incoming Connections
  ingress_rule = SecurityGroupRule(
  id=uuidutils.generate_uuid(),
  tenant_id=tenant_id,
  security_group=security_group_db,
  direction='ingress',
  ethertype=ethertype,
  remote_ip_prefix='0.0.0.0/0')
  context.session.add(ingress_rule)
  # Allow All outcoming Connections
  egress_rule = SecurityGroupRule(
  id=uuidutils.generate_uuid(),
  tenant_id=tenant_id,
  security_group=security_group_db,
  direction='egress',
  ethertype=ethertype,
  remote_ip_prefix='0.0.0.0/0')
  context.session.add(egress_rule)

  
https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L120

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300290] Re: Import translations for Icehouse release

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85655
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a9cf547d20065b2bd49e101364b46cffed4b6aee
Submitter: Jenkins
Branch:milestone-proposed

commit a9cf547d20065b2bd49e101364b46cffed4b6aee
Author: Akihiro Motoki mot...@da.jp.nec.com
Date:   Mon Apr 7 16:04:04 2014 +0900

Import translations from Transifex for Icehouse

* Import ~100% completed translations
  We have three languages: German, Serbian and Hindi in Icehouse :-)
* Update language list in openstack_dashboard settings.py
* Update English POT files

This commit also updates compiled PO files (.mo). There is
a discussion compiled PO files should be included in the repo
or not, but it is better to be unchanged in this release.

Update Transifex resource name in .tx/config for Icehouse.

Closes-Bug: #1300290
Change-Id: I0c378e885efc4ecdafdd5d6b027a514a5af5bb2f


** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1300290

Title:
  Import translations for Icehouse release

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Needs to import translations for Icehouse release.

  
  [Mar 31] At the current plan, importing translations will be scheduled for next 
Monday (UTC). Most translations are completed, but we had some string 
updates at the last moment of RC1, and translators need to catch up with them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1300290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303682] [NEW] 'qg' interface associates with 'br-int' instead of 'br-ext' when multiple external networks are created

2014-04-07 Thread Vinod Kumar
Public bug reported:

This is with respect to Change Id260a239: L3 Agent can handle many
external networks. (https://review.openstack.org/#/c/59359/)

After this fix was introduced L3 Agent could handle the multiple
external networks but the 'qg' interfaces were getting created under br-
int instead of br-ext.

** Affects: neutron
 Importance: Undecided
 Assignee: Vinod Kumar (vinod-kumar5)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Vinod Kumar (vinod-kumar5)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303682

Title:
  'qg' interface associates with 'br-int' instead of 'br-ext' when
  multiple external networks are created

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  This is with respect to Change Id260a239: L3 Agent can handle many
  external networks. (https://review.openstack.org/#/c/59359/)

  After this fix was introduced L3 Agent could handle the multiple
  external networks but the 'qg' interfaces were getting created under
  br-int instead of br-ext.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303690] [NEW] nova live-migration slow when using volumes

2014-04-07 Thread Jacek Nykis
Public bug reported:

I have block live migration configured in my environment (no shared storage) 
and it is very fast for instances which don't use volumes. An instance with 
2.5G disk image takes ~40 seconds to migrate to different host.
When I migrate instances which do use ceph backed volumes they take much longer 
and it depends on the volume size. For example migration of an instance with 1G 
volume takes around 1 minute, 10G ~8 minutes and with 50G I had to wait nearly 
50 minutes for the process to complete. It completes without errors every time, 
it is just very slow.

I was looking at the network traffic during migration and it looks a bit
strange. Let's say I am migrating an instance with a 50G volume from
compute node A to compute node B, and ceph is running on hosts X, Y and Z.

I initiate live migration and as expected there is lots of traffic going from 
host A to B, this lasts less than 1 minute (disk image transfer). Then traffic 
from A to B goes down to ~200Mbit/s and stays at this level until migration is 
completed.
After the initial traffic burst between hosts A and B, host B starts sending data to 
the ceph nodes X, Y and Z. I can see between 40 and 80Mbit/s going from host B 
to each of the ceph nodes. This continues for ~50 minutes, then migration 
completes and network traffic idles.

Every time I tried, the migration eventually completed fine, but for instances
with, let's say, a 200G volume it could take nearly 4 hours to complete.

I am using havana on precise.

Compute nodes:
ii  nova-common  1:2013.2.2-0ubuntu1~cloud0
ii  nova-compute 1:2013.2.2-0ubuntu1~cloud0
ii  nova-compute-kvm 1:2013.2.2-0ubuntu1~cloud0

Ceph:
ii  ceph 0.67.4-0ubuntu2.2~cloud0
ii  ceph-common  0.67.4-0ubuntu2.2~cloud0
ii  libcephfs1   0.67.4-0ubuntu2.2~cloud0

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: canonical-is

** Tags added: canonical-is

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303690

Title:
  nova live-migration slow when using volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have block live migration configured in my environment (no shared storage) 
and it is very fast for instances which don't use volumes. An instance with 
2.5G disk image takes ~40 seconds to migrate to different host.
  When I migrate instances which do use ceph backed volumes they take much 
longer and it depends on the volume size. For example migration of an instance 
with 1G volume takes around 1 minute, 10G ~8 minutes and with 50G I had to wait 
nearly 50 minutes for the process to complete. It completes without errors 
every time, it is just very slow.

  I was looking at the network traffic during migration and it looks a
  bit strange. Let's say I am migrating an instance with a 50G volume from
  compute node A to compute node B, and ceph is running on hosts X, Y and Z.

  I initiate live migration and as expected there is lots of traffic going from 
host A to B, this lasts less than 1 minute (disk image transfer). Then traffic 
from A to B goes down to ~200Mbit/s and stays at this level until migration is 
completed.
  After the initial traffic burst between hosts A and B, host B starts sending data 
to the ceph nodes X, Y and Z. I can see between 40 and 80Mbit/s going from 
host B to each of the ceph nodes. This continues for ~50 minutes, then 
migration completes and network traffic idles.

  Every time I tried, the migration eventually completed fine, but for
  instances with, let's say, a 200G volume it could take nearly 4 hours to
  complete.

  I am using havana on precise.

  Compute nodes:
  ii  nova-common  1:2013.2.2-0ubuntu1~cloud0
  ii  nova-compute 1:2013.2.2-0ubuntu1~cloud0
  ii  nova-compute-kvm 1:2013.2.2-0ubuntu1~cloud0

  Ceph:
  ii  ceph 0.67.4-0ubuntu2.2~cloud0
  ii  ceph-common  0.67.4-0ubuntu2.2~cloud0
  ii  libcephfs1   0.67.4-0ubuntu2.2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1155063] Re: dhcp-agent uses an interface which is down

2014-04-07 Thread Salvatore Orlando
I am setting this bug as won't fix as we did not hear anything about it for
over a year.

** Changed in: neutron
   Status: Incomplete => Won't Fix

** Changed in: neutron
 Assignee: Gary Kotton (garyk) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1155063

Title:
  dhcp-agent uses an interface which is down

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  I am using folsom from the ubuntu cloud archive.
  Quantum started dnsmasq correctly, but on an interface which is down.
  Modifying init_l3 in agent/linux/interface.py by adding a 
device.link.set_up() call remediated the problem.
  I don't know who would be responsible for bringing the interface up in the first 
place though.

  Here is the relevant snippet from my current code:

  def init_l3(self, device_name, ip_cidrs, namespace=None):
      """Set the L3 settings for the interface using data from the port.

      ip_cidrs: list of 'X.X.X.X/YY' strings
      """
      LOG.debug("init_l3")
      device = ip_lib.IPDevice(device_name, self.conf.root_helper,
                               namespace=namespace)
      LOG.debug("link setup")
      device.link.set_up()

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1155063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301449] Re: ODL ML2 driver doesn't notify active/inactive ports

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85511
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=01380b7c67ba2ef1f923d0cf4265dd2d20e31093
Submitter: Jenkins
Branch:milestone-proposed

commit 01380b7c67ba2ef1f923d0cf4265dd2d20e31093
Author: Robert Kukura kuk...@noironetworks.com
Date:   Thu Apr 3 17:01:00 2014 -0400

ML2: ODL driver sets port status

The OpenDaylight mechanism driver does not depend on an L2 agent to
plug the port. Now that nova waits for notification that the port
status is ACTIVE, the ML2 driver API is extended so that the mechanism
driver that binds a port can optionally set the port status, and the
OpenDaylight mechanism driver uses this to set the port status to
ACTIVE.

Closes-Bug: 1301449
Change-Id: I171c405f36b4f2354d9585e8ae3dfa50ddaa9a7e
(cherry picked from commit a9e3074aa9f442c2fff1ba058ac8ed585c6caa24)


** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301449

Title:
  ODL ML2 driver doesn't notify active/inactive ports

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The nova-event-callback blueprint [1] implemented the notifications of
  active/down ports to Nova. Before effectively starting an instance,
  Nova compute waits for a VIF plugged notification from Neutron.

  I'm running ODL ML2 driver in a devstack using the master branch and I
  notice that the ODL driver doesn't notify back the Nova API. Hence
  with the default settings, the VM creation always fails.

  As a workaround, set the following parameters in your nova.conf:
  vif_plugging_timeout = 10
  vif_plugging_is_fatal = False

  With this configuration, I'm able to boot and connect to the instances
  but the Neutron ports are always reported as DOWN [2].

  [1] https://blueprints.launchpad.net/neutron/+spec/nova-event-callback
  [2] http://paste.openstack.org/show/74861/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236821] Re: migration-list can't return correct information

2014-04-07 Thread Haiwei Xu
*** This bug is a duplicate of bug 1260249 ***
https://bugs.launchpad.net/bugs/1260249

** Changed in: nova
   Status: In Progress => Invalid

** This bug has been marked a duplicate of bug 1260249
   migration-list: 'unicode' object has no attribute 'iteritems'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236821

Title:
  migration-list can't return correct information

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  when I use 'nova migration-list' to show the migration information, I got the 
following error.
  $ nova migration-list
  ERROR: 'unicode' object has no attribute 'iteritems'

  This is because the server returns the migration information like 
{u'migrations': {u'objects':[]}}, while {u'migrations': []} is expected.
  I will give the patch soon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1236821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244034] Re: v2/os-migrations failed

2014-04-07 Thread Haiwei Xu
*** This bug is a duplicate of bug 1260249 ***
https://bugs.launchpad.net/bugs/1260249

** This bug is no longer a duplicate of bug 1236821
   migration-list can't return correct information
** This bug has been marked a duplicate of bug 1260249
   migration-list: 'unicode' object has no attribute 'iteritems'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244034

Title:
  v2/os-migrations failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  The code commit-id: 8f62dd1f054475c0644c83031b86c8e622253ddc

  $ nova --debug migration-list
  ...
  RESP: [200] CaseInsensitiveDict({'date': 'Thu, 24 Oct 2013 03:25:56 GMT', 
'content-length': '31', 'content-type': 'application/json', 
'x-compute-request-id': 'req-dacff374-8230-4be1-9ddc-1f6eb1a3ed3f'})
  RESP BODY: {"migrations": {"objects": []}}

  DEBUG (shell:724) 'unicode' object has no attribute 'iteritems'
  Traceback (most recent call last):
File /opt/stack/python-novaclient/novaclient/shell.py, line 721, in main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File /opt/stack/python-novaclient/novaclient/shell.py, line 657, in main
  args.func(self.cs, args)
File /opt/stack/python-novaclient/novaclient/v1_1/contrib/migrations.py, 
line 71, in do_migration_list
  args.cell_name))
File /opt/stack/python-novaclient/novaclient/v1_1/contrib/migrations.py, 
line 53, in list
  return self._list(/os-migrations%s % query_string, migrations)
File /opt/stack/python-novaclient/novaclient/base.py, line 78, in _list
  for res in data if res]
File /opt/stack/python-novaclient/novaclient/base.py, line 420, in 
__init__
  self._add_details(info)
File /opt/stack/python-novaclient/novaclient/base.py, line 443, in 
_add_details
  for (k, v) in six.iteritems(info):
File /usr/local/lib/python2.7/dist-packages/six.py, line 439, in iteritems
  return iter(getattr(d, _iteritems)(**kw))
  AttributeError: 'unicode' object has no attribute 'iteritems'
  ERROR: 'unicode' object has no attribute 'iteritems'
  ...

  In doc/api_samples/os-migrations/migrations-get.json, I found the correct 
response json should be like:
  { "migrations": [] }. Actually an extra nested dict named 'objects' was injected 
into the response body.
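
  For clarity, this is roughly what novaclient's base._list() ends up doing with the
  nested body, and why it raises the error above (a simplified sketch, with the value
  taken from the RESP BODY shown earlier):

      data = {u'objects': []}      # value of resp_body['migrations']
      for res in data:             # iterating a dict yields its keys
          print(res)               # -> u'objects', a unicode string
          # Resource.__init__ then calls six.iteritems(u'objects') and fails with:
          # AttributeError: 'unicode' object has no attribute 'iteritems'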

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303714] [NEW] force_config_drive=True don't go to db

2014-04-07 Thread Nikolay Starodubtsev
Public bug reported:

If we use --config-drive=True via the client, the config_drive column in
the db will be filled. On the other hand, if we use force_config_drive=True in
nova.conf, the column will be empty. Also, in both cases the config drive
will be attached to the vm, and I can't find any problems with live
migration or evacuation. In my deployment I use shared storage.
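
For reference, the two ways of requesting a config drive referred to above, as a
sketch (the flavor and image values are placeholders):

    # Per instance, via the client: the instance's config_drive column gets filled.
    $ nova boot --config-drive=true --flavor 1 --image <image-id> vm

    # Globally, in nova.conf on the compute node: the drive is still attached,
    # but the column stays empty.
    [DEFAULT]
    force_config_drive=True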

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303714

Title:
  force_config_drive=True don't go to db

Status in OpenStack Compute (Nova):
  New

Bug description:
  If we use --config-drive=True via the client, the config_drive column in
  the db will be filled. On the other hand, if we use
  force_config_drive=True in nova.conf, the column will be empty. Also,
  in both cases the config drive will be attached to the vm, and I can't find
  any problems with live migration or evacuation. In my deployment I use
  shared storage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276251] Re: Minimum disk details should be updated when image is selected considering the image size.

2014-04-07 Thread Akihiro Motoki
As Ana commented, we cannot know the minimum disk requirement on the form in 
OpenStack Dashboard.
To know the min_disk requirement we would need to analyze the image to be created. 
Even if we could analyze it, there is no guarantee the estimated value would be 
correct. I think it is not something Horizon should care about.


** Changed in: horizon
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1276251

Title:
  Minimum disk details should be updated when image is selected
  considering the image size.

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  Steps to Reproduce the problem
  1. Login on Horizon.
  2. Select create an Image.
  3. Input the required parameters, such as image source and type.
  4. After selecting the image, the Minimum Disk should be updated with the 
image size instead of considering no minimum.

  Expected behavior
  Disk value cannot be zero by default. It must consider the default Image 
source size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1276251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245698] Re: Unit test failed or show errors for NVP advanced plugin test cases

2014-04-07 Thread Salvatore Orlando
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245698

Title:
  Unit test failed or show errors for NVP advanced plugin test cases

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  Random unit test failures are observed in NVP advanced plugin test
  cases.

  One cause observed from the failure log is unpredictable greenthread
  scheduling. This will be fixed by increasing
  the max task status probing time from 1 second to 10 seconds.

  Another one being observed, which showed Exception error messages, is
  a bug in the task manager code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236704] Re: neutron API list calls taking lot of time

2014-04-07 Thread Salvatore Orlando
An issue concerning slowness of list operations has been recently
identified and fixed: https://bugs.launchpad.net/neutron/+bug/1302467

The fix for the above issue will be part of the upcoming Icehouse release and
backported to Havana.
The issue reported in https://bugs.launchpad.net/neutron/+bug/1302611, less
important than the previous one, will instead land in the first Icehouse
stable release, and will likely be backported to Havana too.

The policy checks are still performed after the DB objects are retrieved, but
they are now much faster.
We are working on further improvements, but it is not yet clear whether they
will be backportable.

This bug is now being marked as invalid, as we have new trackers for this
issue.
Please reopen it if needed.

** Changed in: neutron
   Status: Confirmed = Invalid

** Changed in: neutron
   Importance: High = Undecided

** Changed in: neutron
   Status: Invalid = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236704

Title:
  neutron API list calls taking a lot of time

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in neutron grizzly series:
  New
Status in neutron havana series:
  New

Bug description:
  Neutron API calls are taking a lot of time compared to the nova or keystone
  service APIs.
  In our deployment this is significant enough that we had to increase
  neutron_url_timeout in nova.conf to 120s, which is required for nova list to
  succeed.

  In our analysis we found that DB access was quick enough, but considerable
  time was spent in the following code:

  https://github.com/openstack/neutron/blob/master/neutron/api/v2/base.py#L236

  Here is the code for reference:

      if do_authz:
          # FIXME(salvatore-orlando): obj_getter might return references to
          # other resources. Must check authZ on them too.
          # Omit items from list that should not be visible
          obj_list = [obj for obj in obj_list
                      if policy.check(request.context,
                                      self._plugin_handlers[self.SHOW],
                                      obj,
                                      plugin=self._plugin)]

  There is a clear comment from Salvatore to fix the above code.

  # FIXME(salvatore-orlando): obj_getter might return references to
  # other resources. Must check authZ on them too.
  # Omit items from list that should not be visible

  We need to fix this or otherwise improve the neutron API response time for
  list calls.
  Commenting out the above code in my devstack setup improved port list time
  to 6 seconds from 18 seconds for about 500 ports.
  This issue is reproduced in Grizzly and I am sure it is an issue for Havana
  too.
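
  To make the cost concrete, here is a small self-contained illustration of
  why a per-object policy check dominates the list path for a few hundred
  ports (this is not neutron code; the 20 ms per-check latency is made up):

  import time

  def policy_check(context, action, target):
      # stand-in for neutron.policy.check(); assume ~20 ms per evaluation
      time.sleep(0.02)
      return True

  ports = [{'id': i, 'tenant_id': 'demo'} for i in range(500)]
  start = time.time()
  visible = [p for p in ports if policy_check(None, 'get_port', p)]
  print('%d ports filtered in %.1f seconds'
        % (len(visible), time.time() - start))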

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1236704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303759] [NEW] neutron net-create is failing and apiSrv is throwing an exception

2014-04-07 Thread Numan Siddique
Public bug reported:

When I run ./stack.sh with localrc configured to run as a controller node, 
stack.sh fails with the below error
 neutron net-create --tenant-id a5ceeadae4c44781bfee71554f283362 private
2014-04-07 11:33:25.224 | ++ grep ' id '
2014-04-07 11:33:25.226 | ++ get_field 2
2014-04-07 11:33:25.228 | ++ read data
2014-04-07 11:33:26.119 | Request Failed: internal server error while 
processing your request.
2014-04-07 11:33:26.136 | + NET_ID=
2014-04-07 11:33:26.138 | + die_if_not_set 397 NET_ID 'Failure creating NET_ID 
for  a5ceeadae4c44781bfee71554f283362'
2014-04-07 11:33:26.140 | + local exitcode=0
2014-04-07 11:33:26.142 | [Call Trace]
2014-04-07 11:33:26.144 | ./stack.sh:1188:create_neutron_initial_network

screen-apiSrv.log has the below exception at the beginning

ubuntu@oc-comp2:~/devstack$ python 
/usr/local/lib/python2.7/dist-packages/vnc_cf 
^Mg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/api_server.conf 
--r ^Mabbit_password contrail123  echo $! 
/opt/stack/status/contrail/apiSrv.pid; fg  ^M|| echo apiSrv failed to start 
| tee /opt/stack/status/contrail/apiSrv.failur ^Me
[1] 28773
bash: /opt/stack/status/contrail/apiSrv.pid: No such file or directory
python 
/usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py 
--conf_file /etc/contrail/api_server.conf --rabbit_password contrail123
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: Failed to import package 
sandesh
ERROR:oc-comp2:ApiServer:Config:0:Failed to import package sandesh
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: Failed to import package 
sandesh
ERROR:oc-comp2:ApiServer:Config:0:Failed to import package sandesh
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: SANDESH: Logging: LEVEL: 
[SYS_INFO] - [SYS_DEBUG]
INFO:oc-comp2:ApiServer:Config:0:SANDESH: Logging: LEVEL: [SYS_INFO] - 
[SYS_DEBUG]
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: SANDESH: Logging: FILE: 
[stdout] - [/var/log/contrail/api.log]
INFO:oc-comp2:ApiServer:Config:0:SANDESH: Logging: FILE: [stdout] - 
[/var/log/contrail/api.log]
ERROR:stevedore.extension:Could not load 'xxx': No option 'admin_token' in 
section: 'KEYSTONE'
ERROR:stevedore.extension:No option 'admin_token' in section: 'KEYSTONE'
Traceback (most recent call last):
  File /opt/stack/stevedore/stevedore/extension.py, line 162, in _load_plugins
verify_requirements,
  File /opt/stack/stevedore/stevedore/extension.py, line 180, in 
_load_one_plugin
obj = plugin(*invoke_args, **invoke_kwds)
  File /usr/local/lib/python2.7/dist-packages/vnc_openstack/__init__.py, line 
41, in __init__
self._admin_token = conf_sections.get('KEYSTONE', 'admin_token')
  File /usr/lib/python2.7/ConfigParser.py, line 618, in get
raise NoOptionError(option, section)
NoOptionError: No option 'admin_token' in section: 'KEYSTONE'
Bottle v0.11.6 server starting up (using GeventServer())...
Listening on http://0.0.0.0:8084/
Hit Ctrl-C to quit.

.

Possible solution
---

Adding the lines below to
/usr/local/lib/python2.7/dist-packages/vnc_openstack/__init__.py, around line
~26, seems to solve the problem:

try:
    self._admin_token = conf_sections.get('KEYSTONE', 'admin_token')
except Exception:
    self._admin_token = None

I am not sure if this is the right solution, but this should be
addressed.

Thanks

** Affects: opencontrail
 Importance: Undecided
 Status: New

** Project changed: neutron = opencontrail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303759

Title:
  neutron net-create is failing and apiSrv is throwing an exception

Status in OpenContrail:
  New

Bug description:
  When I run ./stack.sh with localrc configured to run as a controller node, 
stack.sh fails with the below error
   neutron net-create --tenant-id a5ceeadae4c44781bfee71554f283362 private
  2014-04-07 11:33:25.224 | ++ grep ' id '
  2014-04-07 11:33:25.226 | ++ get_field 2
  2014-04-07 11:33:25.228 | ++ read data
  2014-04-07 11:33:26.119 | Request Failed: internal server error while 
processing your request.
  2014-04-07 11:33:26.136 | + NET_ID=
  2014-04-07 11:33:26.138 | + die_if_not_set 397 NET_ID 'Failure creating 
NET_ID for  a5ceeadae4c44781bfee71554f283362'
  2014-04-07 11:33:26.140 | + local exitcode=0
  2014-04-07 11:33:26.142 | [Call Trace]
  2014-04-07 11:33:26.144 | ./stack.sh:1188:create_neutron_initial_network

  screen-apiSrv.log has the below exception at the beginning

  ubuntu@oc-comp2:~/devstack$ python 
/usr/local/lib/python2.7/dist-packages/vnc_cf 
^Mg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/api_server.conf 
--r ^Mabbit_password contrail123  echo $! 
/opt/stack/status/contrail/apiSrv.pid; fg  ^M|| echo apiSrv failed to start 
| tee /opt/stack/status/contrail/apiSrv.failur ^Me
  [1] 28773
  bash: /opt/stack/status/contrail/apiSrv.pid: No such file or directory
  python 

[Yahoo-eng-team] [Bug 1303781] [NEW] V3 API multinic extension missing expected_errors decorators

2014-04-07 Thread Christopher Yeoh
Public bug reported:

The V3 API multinic extension is missing the expected errors decorator.
This means that unexpected exceptions from Nova internals are not
properly caught.
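
For illustration, here is a simplified, self-contained sketch of what such a
decorator does: declared HTTP errors pass through, everything else becomes a
clean 500 instead of a leaked internal exception. This only mimics the idea;
it is not the actual Nova implementation, and the controller method name is
illustrative.

import functools

import webob.exc

def expected_errors(errors):
    """Let only the declared HTTP error codes escape the API method."""
    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except webob.exc.WSGIHTTPException as exc:
                if exc.code in errors:
                    raise
                raise webob.exc.HTTPInternalServerError()
            except Exception:
                raise webob.exc.HTTPInternalServerError()
        return wrapped
    return decorator

class MultinicSketch(object):
    @expected_errors((404,))
    def add_fixed_ip(self, req, id, body):
        # an unexpected internal error now surfaces as a 500, not a raw trace
        raise RuntimeError('boom')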

** Affects: nova
 Importance: Undecided
 Assignee: Christopher Yeoh (cyeoh-0)
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303781

Title:
  V3 API multinic extension missing expected_errors decorators

Status in OpenStack Compute (Nova):
  New

Bug description:
  The V3 API multinic extension is missing the expected errors
  decorator. This means that unexpected exceptions from Nova internals
  are not properly caught.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303781/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1032633] Re: Keystone's token table grows unconditionally when using SQL backend.

2014-04-07 Thread Dolph Mathews
Since it's not already mentioned in this bug, the long term solution
here is to simply not persist tokens at all:

  https://blueprints.launchpad.net/keystone/+spec/ephemeral-pki-tokens

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1032633

Title:
  Keystone's token table grows unconditionally when using SQL backend.

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Manuals:
  New
Status in “keystone” package in Ubuntu:
  Fix Released

Bug description:
  Keystone's `token` table grows unconditionally with expired tokens
  when using the SQL backend.

  Keystone should provide a backend-agnostic method to find and delete
  these tokens. This could be run via a periodic task or supplied as a
  script to run as a cron job.

  An example SQL statement (if you're using a SQL backend) to work around
  this problem:

  sql> DELETE FROM token WHERE expires <= NOW();

  It may be ideal to allow a date smear to allow older tokens to persist
  if needed.
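
  A minimal sketch of such a cron-able cleanup with a one-day smear, assuming
  a SQL backend reachable through SQLAlchemy (the connection URL, table layout
  and retention window are placeholders, this is not Keystone code):

  import datetime

  import sqlalchemy

  engine = sqlalchemy.create_engine(
      'mysql://keystone:secret@localhost/keystone')
  cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=1)
  with engine.begin() as conn:
      result = conn.execute(
          sqlalchemy.text('DELETE FROM token WHERE expires <= :cutoff'),
          cutoff=cutoff)
      print('purged %d expired tokens' % result.rowcount)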

  Choosing the `memcache` backend may work around this issue, but SQL is
  the package default.

  System Information:

  $ dpkg-query --show keystone
  keystone2012.1+stable~20120608-aff45d6-0ubuntu1

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1032633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302490] Re: Requirements fail to be synced in milestone-proposed

2014-04-07 Thread Thierry Carrez
** No longer affects: openstack-ci

** Changed in: ceilometer
   Status: Fix Committed = Fix Released

** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302490

Title:
  Requirements fail to be synced in milestone-proposed

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  With our current process around openstack/requirements, no
  requirements sync is ever pushed to milestone-proposed branches. None
  is proposed until the openstack/requirements MP branch is created, and
  when it is, the propose-requirements job fails with:

  git review -t openstack/requirements milestone-proposed
  + OUTPUT='Had trouble running git log --color=auto --decorate --oneline 
milestone-proposed --not remotes/gerrit/milestone-proposed
  fatal: ambiguous argument '\''milestone-proposed'\'': unknown revision or 
path not in the working tree.
  Use '\''--'\'' to separate paths from revisions'

  See https://jenkins.openstack.org/job/propose-requirements-
  updates/153/console as an example (while it lasts)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1302490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303811] [NEW] nova-api won't start because /usr/lib/python2.7/dist-packages/keys is missing

2014-04-07 Thread Sebastian Herzberg
Public bug reported:

I'm currently setting up a Testlab with the Icehouse RC1 on Ubuntu
14.04. I mostly followed the standard OpenStack Installation Manual with
some minor modifications.  Installation runs through fine but nova-api
won't start because some folder is missing.

When I create the folder it starts and works fine.

#
2014-04-07 12:58:04.762 3401 TRACE nova OSError: [Errno 13] Permission denied: 
'/usr/lib/python2.7/dist-packages/keys'
2014-04-07 12:58:04.762 3401 TRACE nova
2014-04-07 12:58:04.824 3414 INFO nova.openstack.common.service [-] Parent 
process has died unexpectedly, exiting
2014-04-07 12:58:04.824 3414 INFO nova.wsgi [-] Stopping WSGI server.
2014-04-07 12:58:04.825 3414 INFO nova.wsgi [-] WSGI server has stopped.
#

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api

** Attachment added: Error Log
   
https://bugs.launchpad.net/bugs/1303811/+attachment/4071912/+files/nova-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303811

Title:
  nova-api won't start because /usr/lib/python2.7/dist-packages/keys is
  missing

Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm currently setting up a Testlab with the Icehouse RC1 on Ubuntu
  14.04. I mostly followed the standard OpenStack Installation Manual
  with some minor modifications.  Installation runs through fine but
  nova-api won't start because some folder is missing.

  When I create the folder it starts and works fine.

  #
  2014-04-07 12:58:04.762 3401 TRACE nova OSError: [Errno 13] Permission 
denied: '/usr/lib/python2.7/dist-packages/keys'
  2014-04-07 12:58:04.762 3401 TRACE nova
  2014-04-07 12:58:04.824 3414 INFO nova.openstack.common.service [-] Parent 
process has died unexpectedly, exiting
  2014-04-07 12:58:04.824 3414 INFO nova.wsgi [-] Stopping WSGI server.
  2014-04-07 12:58:04.825 3414 INFO nova.wsgi [-] WSGI server has stopped.
  #

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303820] [NEW] Update Cisco VPN device driver

2014-04-07 Thread Paul Michali
Public bug reported:

Based on recent updates to the Cisco CSR REST APIs,  update the Cisco
device driver for VPN to sync up with those changes.

This includes

- Support for various IKE and IPSec encryption modes.
- Support for disable of anti-replay-window-size
- Cleanup UTs based on verified changes
- Enhance UT coverage for new REST API capabilities

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303820

Title:
  Update Cisco VPN device driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Based on recent updates to the Cisco CSR REST APIs,  update the Cisco
  device driver for VPN to sync up with those changes.

  This includes

  - Support for various IKE and IPSec encryption modes.
  - Support for disable of anti-replay-window-size
  - Cleanup UTs based on verified changes
  - Enhance UT coverage for new REST API capabilities

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303827] [NEW] nova boot ends in error state when using nova-network VlanManager

2014-04-07 Thread Christoph Bachhuber-Haller
Public bug reported:

Steps to reproduce: 
- Setup a devstack from scratch using nova-network
- delete the default network
  # nova-manage network delete 10.0.0.0/24
- change nova.conf to use VlanManager: 
network_manager = nova.network.manager.VlanManager
- restart nova-network
- create a new network with a vlan id: 
nova-manage network create --label=network --fixed_range_v4 10.0.1.0/24 --vlan 
42

- boot a vm on the cirros image: 
 nova --debug boot --flavor 1 --image 0b969819-2d85-4f7f-af76-125c5bb5789f test

Expected behavior: The new VM goes to Active state
Actual behavior: The new VM goes to Error state, also nova-network log has this 
exception: 
a7-abaf-78db50a4b62c] network allocations from (pid=13676) 
allocate_for_instance /opt/stack/nova/nova/network/manager.py:494
2014-04-07 15:32:02.137 ERROR nova.network 
[req-87a65a9e-9196-4203-9de2-f6911d2aef4b admin demo] No db access allowed in 
nova-network:   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in 
main
result = function(*args, **kwargs)
  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
128, in lambda
yield lambda: self._dispatch_and_reply(incoming)
  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
133, in _dispatch_and_reply
incoming.message))
  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
176, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
  File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
122, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)
  File /opt/stack/nova/nova/network/floating_ips.py, line 119, in 
allocate_for_instance
**kwargs)
  File /opt/stack/nova/nova/network/manager.py, line 497, in 
allocate_for_instance
requested_networks=requested_networks)
  File /opt/stack/nova/nova/network/manager.py, line 1837, in 
_get_networks_for_instance
networks = self.db.project_get_networks(context, project_id)
  File /opt/stack/nova/nova/db/api.py, line 1370, in project_get_networks
return IMPL.project_get_networks(context, project_id, associate)
  File /opt/stack/nova/nova/cmd/network.py, line 47, in __call__
stacktrace = .join(traceback.format_stack())

I think the exception was introduced by this patch that disables direct
database access from nova-network:
https://review.openstack.org/#/c/79716/

However, VlanManager still relies on database access for the given scenario, 
and there are 3 other places in manager.py that rely on direct db access: 
devuser@ubuntu:/opt/stack/nova$ grep self.db  nova/network/manager.py -n
1389:vifs = self.db.virtual_interface_get_by_instance(context,
1446:vif = self.db.virtual_interface_get_by_address(context,
1837:networks = self.db.project_get_networks(context, project_id)
1914:not self.db.network_in_use_on_host(context, network['id'],

Therefore, I cannot currently use conductor with nova-network
VlanManager, which is a regression from Havana.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303827

Title:
  nova boot ends in error state when using nova-network VlanManager

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce: 
  - Setup a devstack from scratch using nova-network
  - delete the default network
# nova-manage network delete 10.0.0.0/24
  - change nova.conf to use VlanManager: 
  network_manager = nova.network.manager.VlanManager
  - restart nova-network
  - create a new network with a vlan id: 
  nova-manage network create --label=network --fixed_range_v4 10.0.1.0/24 
--vlan 42

  - boot a vm on the cirros image: 
   nova --debug boot --flavor 1 --image 0b969819-2d85-4f7f-af76-125c5bb5789f 
test

  Expected behavior: The new VM goes to Active state
  Actual behavior: The new VM goes to Error state, also nova-network log has 
this exception: 
  a7-abaf-78db50a4b62c] network allocations from (pid=13676) 
allocate_for_instance /opt/stack/nova/nova/network/manager.py:494
  2014-04-07 15:32:02.137 ERROR nova.network 
[req-87a65a9e-9196-4203-9de2-f6911d2aef4b admin demo] No db access allowed in 
nova-network:   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in 
main
  result = function(*args, **kwargs)
File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
128, in lambda
  yield lambda: self._dispatch_and_reply(incoming)
File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
133, in _dispatch_and_reply
  incoming.message))
File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
176, in _dispatch
  return 

[Yahoo-eng-team] [Bug 1303830] [NEW] Cisco VPN device driver support ipsec site conn updates

2014-04-07 Thread Paul Michali
Public bug reported:

Provide support in the Cisco VPN device driver for updates to the IPSec
site to site connection configuration. Currently, one must manually
delete and then recreate the connection to change the configuration.

Changeable items include MTU, admin state, PSK, peer address/id, and
peer CIDRs.

Once the Cisco CSR REST API supports admin state change for IPSec site-
to-site connections (tunnel admin up/down), enhance the device driver to
change the admin state of the tunnel, rather than deleting the tunnel
and maintaining state.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: Confirmed

** Changed in: neutron
   Status: New = Confirmed

** Changed in: neutron
 Assignee: (unassigned) = Paul Michali (pcm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303830

Title:
  Cisco VPN device driver support ipsec site conn updates

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Provide support in the Cisco VPN device driver for updates to the
  IPSec site to site connection configuration. Currently, one must
  manually delete and then recreate the connection to change the
  configuration.

  Changeable items include MTU, admin state, PSK, peer address/id, and
  peer CIDRs.

  Once the Cisco CSR REST API supports admin state change for IPSec
  site-to-site connections (tunnel admin up/down), enhance the device
  driver to change the admin state of the tunnel, rather than deleting
  the tunnel and maintaining state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303834] [NEW] using LXC as a libvirt type fails with libvirt error

2014-04-07 Thread Ashok kumaran B
Public bug reported:

With RC1, using lxc throws the below error:

2014-04-07 07:04:30.349 30728 ERROR nova.openstack.common.threadgroup [-] this 
function is not supported by the connection driver: virConnectBaselineCPU
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
x.wait()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, line 480, 
in run_service
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
service.start()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 163, in start
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1010, in 
init_host
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
self.driver.init_host(host=self.host)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 648, in 
init_host
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
self._do_quality_warnings()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 636, in 
_do_quality_warnings
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup caps 
= self.get_host_capabilities()
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2841, in 
get_host_capabilities
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 179, in doit
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 139, in proxy_call
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup rv = 
execute(f,*args,**kwargs)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in tworker
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup rv = 
meth(*args,**kwargs)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in baselineCPU
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup if 
ret is None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup 
libvirtError: this function is not supported by the connection driver: 
virConnectBaselineCPU
2014-04-07 07:04:30.349 30728 TRACE nova.openstack.common.threadgroup
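
For context, the failing call is the host capability probe at the bottom of
the traceback. A minimal sketch of how such an optional probe can be guarded
(the call and flag names come from the traceback, the fallback behaviour is an
assumption and this is not the actual Nova fix):

import libvirt

def baseline_cpu_or_none(conn, cpu_xml_list):
    try:
        return conn.baselineCPU(
            cpu_xml_list,
            libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
    except libvirt.libvirtError as ex:
        # drivers such as LXC do not implement virConnectBaselineCPU
        if ex.get_error_code() == libvirt.VIR_ERR_NO_SUPPORT:
            return None
        raise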

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!

[Yahoo-eng-team] [Bug 1301626] Re: Same Keypairs accessible in multiple projects assigned to same user

2014-04-07 Thread Sharath Rao
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301626

Title:
  Same Keypairs accessible in multiple projects assigned to same user

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Compute (Nova):
  New

Bug description:
  I have two projects assigned to the same user in the Horizon Dashboard.
  Each project has a separate ID, a separate set of VMs, and different
  floating IPs and security rules assigned.

  But the keypairs are shared: both projects see the same set of keypairs.

  This causes an issue, since it implies that I can access VMs belonging to
  different projects using the same key pair.
  Also, I cannot add a new keypair for a particular project, as it becomes
  visible in both projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303865] [NEW] mandatory fields are not enforced in launch stack

2014-04-07 Thread Ami Jeain
Public bug reported:

- go to the Create Stack screen, enter the following valid Heat template:
heat_template_version: 2013-05-23
description: 
  A single stack with a keypair.

parameters:
  key_name:
type: string
default: heat_key3
  key_save:
type: string
default: false

resources:
  KeyPair:
type: OS::Nova::KeyPair
properties:
  name: { get_param: key_name }
  save_private_key: { get_param: key_save }

outputs:
  PublicKey:
value: { get_attr: [KeyPair, public_key] }
  PrivateKey:
value: { get_attr: [KeyPair, private_key] }

 - delete the value of one of the fields (key_name and/or key_save)
= you will get a message saying Error: Stack creation failed.

In horizon.log you will get:
2014-04-07 14:49:23,055 7116 DEBUG heatclient.common.http 
HTTP/1.1 400 Bad Request
date: Mon, 07 Apr 2014 14:49:23 GMT
content-length: 301
content-type: application/json; charset=UTF-8

{explanation: The server could not comply with the request since it
is either malformed or otherwise incorrect., code: 400, error:
{message: Property error : KeyPair: save_private_key \\ is not a
valid boolean, traceback: null, type: StackValidationFailed},
title: Bad Request}

If either or both of the two fields are mandatory, this should be enforced,
both with a message and with an asterisk right next to the field.
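
A rough sketch of the kind of client-side check being asked for (names are
illustrative, this is not Horizon code): flag form fields submitted with no
usable value instead of waiting for Heat to return a 400.

def missing_required_parameters(user_values):
    """Return names of template parameters submitted with no usable value."""
    return [name for name, value in user_values.items()
            if value in (None, '')]

# e.g. the reporter cleared key_save in the form before submitting:
print(missing_required_parameters({'key_name': 'heat_key3', 'key_save': ''}))
# -> ['key_save']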

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1303865

Title:
  mandatory fields are not enforced in launch stack

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  - go to the Create Stack screen, enter the following valid Heat template:
  heat_template_version: 2013-05-23
  description: 
A single stack with a keypair.

  parameters:
key_name:
  type: string
  default: heat_key3
key_save:
  type: string
  default: false

  resources:
KeyPair:
  type: OS::Nova::KeyPair
  properties:
name: { get_param: key_name }
save_private_key: { get_param: key_save }

  outputs:
PublicKey:
  value: { get_attr: [KeyPair, public_key] }
PrivateKey:
  value: { get_attr: [KeyPair, private_key] }

   - delete the value of one of the fields (key_name and/or key_save)
  = you will get a message saying Error: Stack creation failed.

  In horizon.log you will get:
  2014-04-07 14:49:23,055 7116 DEBUG heatclient.common.http 
  HTTP/1.1 400 Bad Request
  date: Mon, 07 Apr 2014 14:49:23 GMT
  content-length: 301
  content-type: application/json; charset=UTF-8

  {explanation: The server could not comply with the request since it
  is either malformed or otherwise incorrect., code: 400, error:
  {message: Property error : KeyPair: save_private_key \\ is not a
  valid boolean, traceback: null, type: StackValidationFailed},
  title: Bad Request}

  If either or both of the two fields are mandatory, this should be enforced,
  both with a message and with an asterisk right next to the field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1303865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1032633] Re: Keystone's token table grows unconditionally when using SQL backend.

2014-04-07 Thread Matt Kassawara
Addressed by the following patch:

https://review.openstack.org/#/c/79105/

** Changed in: openstack-manuals
   Status: New = Fix Released

** Changed in: openstack-manuals
Milestone: None = icehouse

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1032633

Title:
  Keystone's token table grows unconditionally when using SQL backend.

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Manuals:
  Fix Released
Status in “keystone” package in Ubuntu:
  Fix Released

Bug description:
  Keystone's `token` table grows unconditionally with expired tokens
  when using the SQL backend.

  Keystone should provide a backend-agnostic method to find and delete
  these tokens. This could be run via a periodic task or supplied as a
  script to run as a cron job.

  An example SQL statement (if you're using a SQL backend) to work around
  this problem:

  sql> DELETE FROM token WHERE expires <= NOW();

  It may be ideal to allow a date smear to allow older tokens to persist
  if needed.

  Choosing the `memcache` backend may work around this issue, but SQL is
  the package default.

  System Information:

  $ dpkg-query --show keystone
  keystone2012.1+stable~20120608-aff45d6-0ubuntu1

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1032633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301453] Re: Libvirt LXC and Xen boot fails with VIR_DOMAIN_START_PAUSED

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85708
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c0feec3f21757efe74f0228b5e05c9edff3bffac
Submitter: Jenkins
Branch:milestone-proposed

commit c0feec3f21757efe74f0228b5e05c9edff3bffac
Author: Vladik Romanovsky vladik.romanov...@enovance.com
Date:   Wed Apr 2 15:00:19 2014 -0400

libvirt: pause mode is not supported by all drivers

Only KVM/Qemu drivers support the VIR_DOMAIN_START_PAUSED flag
Booting guests on other drivers with the above flag will make it fail.

Closes-Bug: 1301453
Change-Id: Ia98e018b686c4ec3c15fd1f6bcc78188f330fcef
(cherry picked from commit bfb28fcf9031af4c695177663702ce05edbbfa4d)
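
A tiny sketch of the idea described in the commit message (illustrative only,
not the cherry-picked patch itself): the paused-start flag is only passed for
virt types known to support it.

import libvirt

def domain_create_flags(virt_type, launch_paused):
    flags = 0
    if launch_paused and virt_type in ('kvm', 'qemu'):
        flags |= libvirt.VIR_DOMAIN_START_PAUSED
    return flags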


** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301453

Title:
  Libvirt LXC and Xen boot fails with VIR_DOMAIN_START_PAUSED

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Libvirt's lxc driver does not support the VIR_DOMAIN_START_PAUSED flag
  and thus fails to create the domain. This causes the boot to fail.

  Stacktrace:
  https://gist.github.com/ramielrowe/9936359

  Steps to reproduce:
  1) Create devstack with following localrc options
  VIRT_DRIVER='libvirt'
  LIBVIRT_TYPE='lxc'
  2) Boot instance
  3) Observe failed boot and exception in compute log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302490] Re: Requirements fail to be synced in milestone-proposed

2014-04-07 Thread Thierry Carrez
** Changed in: keystone
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302490

Title:
  Requirements fail to be synced in milestone-proposed

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  With our current process around openstack/requirements, no
  requirements sync is ever pushed to milestone-proposed branches. None
  is proposed until the openstack/requirements MP branch is created, and
  when it is, the propose-requirements job fails with:

  git review -t openstack/requirements milestone-proposed
  + OUTPUT='Had trouble running git log --color=auto --decorate --oneline 
milestone-proposed --not remotes/gerrit/milestone-proposed
  fatal: ambiguous argument '\''milestone-proposed'\'': unknown revision or 
path not in the working tree.
  Use '\''--'\'' to separate paths from revisions'

  See https://jenkins.openstack.org/job/propose-requirements-
  updates/153/console as an example (while it lasts)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1302490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299349] Re: upstream-translation-update Jenkins job failing

2014-04-07 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1299349

Title:
  upstream-translation-update Jenkins job failing

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Manuals:
  Fix Committed

Bug description:
  Various upstream-translation-update jobs apparently succeed, but looking
  into the log file, errors get ignored - and there are errors uploading the
  files to the translation site, like:

  Error uploading file: There is a syntax error in your file.
  Line 1936: duplicate message definition...
  Line 7: ...this is the location of the first definition
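
  For reference, a small stand-alone helper of the sort one could use to spot
  the duplicate msgid entries the upload error complains about (this is an
  assumption about the .po file format, not part of the Jenkins job):

  import collections
  import re
  import sys

  def duplicate_msgids(path):
      # count non-empty msgid lines; duplicates break the upload
      counts = collections.Counter()
      for line in open(path):
          match = re.match(r'msgid\s+"(.+)"\s*$', line)
          if match:
              counts[match.group(1)] += 1
      return [msgid for msgid, n in counts.items() if n > 1]

  if __name__ == '__main__':
      print(duplicate_msgids(sys.argv[1]))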

  
  For keystone, see:
  
http://logs.openstack.org/78/7882359da114079e8411bd3f97c5628f2cd1c098/post/keystone-upstream-translation-update/27cbb22/

  This has been fixed on ironic with:
  https://review.openstack.org/#/c/83935
  See also: 
http://lists.openstack.org/pipermail/openstack-i18n/2014-March/000479.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1299349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301775] Re: Make operations in VM web page(not first page) encounter strange happenings

2014-04-07 Thread David Lyle
Ok, after playing with this more, the behavior is confusing but correct.
The correct instance gets paused and you are returned to the page
(determined by marker) that you were on.  If you are already on the last
page, no pagination footer is shown; this is correct as currently
defined.

** Changed in: horizon
   Importance: High = Undecided

** Changed in: horizon
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301775

Title:
  Make operations in VM web page(not first page) encounter strange
  happenings

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I have 30 VMs displayed across two pages.

  When I made operations on page 1, everything was OK.
  But when I made an operation on page 2, like starting a VM, Horizon
  redirected the web page to page 1 and did not carry out the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302384] Re: Requires refresh browser on deleting the Instance

2014-04-07 Thread David Lyle
When deleting an instance, while the instance is still in the deleting
state, it will appear in the table.  Once completely deleted the page
will update and remove the instance.  Until the instance is fully
deleted, it will appear in the table.

** Changed in: horizon
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1302384

Title:
  Requires refresh browser on deleting the Instance

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Release -- Icehouse

  Builds Used

  root@os-controller:~/horizon-pkg# dpkg -l | grep -i django
  ii  openstack-dashboard  
1:2014.1+git201403311756~precise-0ubuntu1 django web interface to 
Openstack
  ii  python-django-horizon
1:2014.1+git201403311756~precise-0ubuntu1 Django module providing 
web based interaction with OpenStack
  ii  python-django-openstack  
1:2014.1+git201403311756~precise-0ubuntu1 dummy transitonal package

  Problem Description :

  On deletion, the instance still appears on the Instances page; its
  entry gets cleared only on clicking the browser refresh button.

  Steps

  1. On the Horizon Instances page, select the instance for deletion.
  2. Click the Terminate Instance button. It gives the message Success:
  Scheduled termination of Instance.
  3. Observe that the deleted instance entry is still appearing.
  4. Only on clicking the browser refresh button is the deleted instance entry
  removed.

  This behavior creates confusion: it seems like the instance is not deleted,
  and if we try to terminate the same instance again it gives the error
  message Error: you are not allowed to terminate the instance: vm-a1. In
  this case, even when we refresh the browser, the entry of the deleted
  instance remains present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1302384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300718] Re: We should design a pluggable help API for each panel. This generic feature would then allow the documentation team to plug-in their own explanation of the panel;

2014-04-07 Thread David Lyle
This is beyond the scope of a bug fix, please file a new blueprint.

https://blueprints.launchpad.net/horizon/+spec/context-specific-help
exists but the scope is modals.

** Changed in: horizon
   Status: New = Invalid

** Changed in: horizon
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1300718

Title:
  We should design a pluggable help API for each panel. This generic
  feature would then allow the documentation team to plug-in their own
  explanation of the panel;

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  We should design a pluggable help API for each panel. This generic feature
  would then allow the documentation team to plug in their own explanation of
  the panel.
  For example: if we are at the Instances tab, then the help button should
  show a description of the options available in the Instances panel, giving
  more information about each option (i.e. every panel should have a
  customized help page for that particular panel).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1300718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301384] Re: Note that XML support *may* be removed, not *will* be

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85709
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=bd7ae423edf6aea95185ae19b17d2930edb4dbde
Submitter: Jenkins
Branch:milestone-proposed

commit bd7ae423edf6aea95185ae19b17d2930edb4dbde
Author: Russell Bryant rbry...@redhat.com
Date:   Wed Apr 2 09:33:34 2014 -0400

Note that XML support *may* be removed.

To be more accurate, note that XML support *may* be removed as early
as the Juno release.  I think there is still more discussion needed
around concrete usage data before the removal date is finalized.

Related thread:
http://lists.openstack.org/pipermail/openstack-dev/2014-April/031608.html

Closes-bug: #1301384
Change-Id: I0415b50ec0b81bb56f5c0fa13bc6d01f8bec7865
(cherry picked from commit 5d39189df6ecb559c3f8b7e2fa3beff25da9f452)


** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301384

Title:
  Note that XML support *may* be removed, not *will* be

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In Icehouse we marked the v2 API XML support as deprecated.  The log
  message says it *will* be removed, but should be updated to be more
  accurate and say *may* be removed, pending finalizing the discussion
  around it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303913] [NEW] Console logs for unittest failures are > 100MB

2014-04-07 Thread Clark Boylan
Public bug reported:

When unittests fail for nova and neutron the resulting console logs are
quite large.

Nova:
http://logs.openstack.org/56/83256/14/check/gate-nova-python26/294f78f/ 142MB
http://logs.openstack.org/56/83256/14/check/gate-nova-python27/195cbd3/ 142MB

Neutron:
http://logs.openstack.org/92/85492/5/check/gate-neutron-python27/fa325bf/ 122MB
http://logs.openstack.org/92/85492/5/check/gate-neutron-python26/76c0527/ 100MB

This is problematic because it makes it very hard to debug what actually
happened. We should continue to preserve complete logging in the subunit
log (we do need the verbose information), but we don't need to fill the
console log with noisy redundant data.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303913

Title:
  Console logs for unittest failures are > 100MB

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  When unittests fail for nova and neutron the resulting console logs
  are quite large.

  Nova:
  http://logs.openstack.org/56/83256/14/check/gate-nova-python26/294f78f/ 142MB
  http://logs.openstack.org/56/83256/14/check/gate-nova-python27/195cbd3/ 142MB

  Neutron:
  http://logs.openstack.org/92/85492/5/check/gate-neutron-python27/fa325bf/ 
122MB
  http://logs.openstack.org/92/85492/5/check/gate-neutron-python26/76c0527/ 
100MB

  This is problematic because it makes it very hard to debug what
  actually happened. We should continue to preserve complete logging in
  the subunit log (we do need the verbose information), but we don't
  need to fill the console log with noisy redundant data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303933] [NEW] openvswitch plugin does not support rpc_workers

2014-04-07 Thread Choe, Cheng-Dae
Public bug reported:

The rpc_workers option enables multiple RPC worker processes.
It requires the plugin to implement the start_rpc_listener method, which is
used to start the RPC workers.

The ml2 plugin implements this method, so RPC workers work with it.

The openvswitch plugin, however, does not implement this method, so the
rpc_workers option is silently discarded.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303933

Title:
  openvswitch plugin does not support rpc_workers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The rpc_workers option enables multiple RPC worker processes.
  It requires the plugin to implement the start_rpc_listener method, which is
  used to start the RPC workers.

  The ml2 plugin implements this method, so RPC workers work with it.

  The openvswitch plugin, however, does not implement this method, so the
  rpc_workers option is silently discarded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295381] Re: VMware: resize operates on orig VM and not clone

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85740
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b6e14a58b37811312f344402e3003d439c0ccc71
Submitter: Jenkins
Branch:milestone-proposed

commit b6e14a58b37811312f344402e3003d439c0ccc71
Author: Sidharth Surana ssur...@vmware.com
Date:   Fri Mar 21 17:03:40 2014 -0700

VMware: Fixes the instance resize problem

The fix includes separating out methods for
associating/disassociating a vsphere vm from the
openstack instance. Modifying the resize workflow
to use the above mentioned methods.

Closes-Bug: #1295381

Change-Id: I92acdd5cd00f739d504738413d3b63a2e17f2866
(cherry picked from commit 91ddf85abb8a516cfa2da346b393aa7234660f6c)


** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295381

Title:
  VMware: resize operates on orig VM and not clone

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The resize operation when using the VCenter driver ends up resizing
  the original VM and not the newly cloned VM.

  To recreate:
  1) create a new VM from horizon using default debian image.  I use a flavor 
of nano.
  2) wait for it to complete and go active
  3) click on resize and choose a flavor larger than what you used originally.  
i then usually choose a flavor of small.
  4) wait for horizon to prompt you to confirm or revert the migration.
  5) Switch over to vSphere Web Client.  Notice two VMs for your newly created 
instance.  One with a UUID name and the other with a UUID-orig name.  -orig 
indicating the original.
  6) Notice the original has been resized (cpu and mem are increased, disk is
  not, but that's a separate bug) and not the new clone.  This is problem #1.
  7) Now hit confirm in horizon.  It works, but the logs contain a warning:
  The attempted operation cannot be performed in the current state (Powered
  on).  I suspect it is attempting to destroy the orig VM, but the orig was
  the VM that was resized and powered on, so it fails.  This is problem #2.
  This results in a leaked VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1295381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302791] Re: PLUMgrid CI Testing

2014-04-07 Thread Fawad Khaliq
** Changed in: neutron
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302791

Title:
  PLUMgrid CI Testing

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This is just a dummy bug for a dummy changeset to test PLUMgrid CI for
  stablization. It will be removed when work is done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302106] Re: LDAP non-URL safe characters cause auth failure

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85460
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=5b5331fa02de38207cd81922d5794192ebb4b77a
Submitter: Jenkins
Branch:milestone-proposed

commit 5b5331fa02de38207cd81922d5794192ebb4b77a
Author: Brant Knudson bknud...@us.ibm.com
Date:   Fri Apr 4 10:50:07 2014 -0500

Fix invalid LDAP filter for user ID with comma

The Keystone server would respond with a 500 error when configured
to use the LDAP identity backend and a request is made to get a
token for a user that has an ID with a comma. The response is like:

 Authorization Failed: An unexpected error prevented the server from
 fulfilling your request. {'desc': 'Bad search filter'} (HTTP 500)

This is because the user DN wasn't properly escaped in the filter
for the query to get the groups that the user is a member of.

Closes-Bug: #1302106

Change-Id: Ib4886e66af0e979fcf23a84bcd51b07034547cb9
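
For illustration, the general rule behind the fix is that any value
interpolated into an LDAP search filter has to be escaped. A short sketch
using python-ldap (assumed available; the filter shape is illustrative, this
is not the Keystone patch itself):

from ldap.filter import escape_filter_chars

def group_membership_filter(user_dn):
    # a DN such as cn=Doe\, John,... contains a backslash that, unescaped,
    # produces the "Bad search filter" error seen above
    return '(member=%s)' % escape_filter_chars(user_dn)

print(group_membership_filter('cn=Doe\\, John,ou=Users,dc=example,dc=com'))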


** Changed in: keystone
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1302106

Title:
  LDAP non-URL safe characters cause auth failure

Status in OpenStack Identity (Keystone):
  Fix Released

Bug description:
  An Openstack user attempting to integrate Keystone with AD has
  reported that when his user contains a comma (full name CN='Doe,
  John'), a 'Bad search filter' error is thrown. If the full name CN is
  instead 'John Doe', authorization succeeds.

  dpkg -l |grep keystone
  ii  keystone 1:2013.2.2-0ubuntu1~cloud0   
   OpenStack identity service - Daemons
  ii  python-keystone  1:2013.2.2-0ubuntu1~cloud0   
   OpenStack identity service - Python library
  ii  python-keystoneclient1:0.3.2-0ubuntu1~cloud0  
   Client library for OpenStack Identity API

  Relevant error message:
  Authorization Failed: An unexpected error prevented the server from 
fulfilling your request. {'desc': 'Bad search filter'} (HTTP 500)

  Relevant stack trace:
  2014-03-31 15:44:27.459 3018 ERROR keystone.common.wsgi [-] {'desc': 'Bad 
search filter'}
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 238, in 
__call__
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 94, in 
authenticate
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi context, auth)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 272, in 
_authenticate_local
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi user_id, 
tenant_id)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/token/controllers.py, line 369, in 
_get_project_roles_and_ref
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi user_id, 
tenant_id)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 475, in 
get_roles_for_user_and_project
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi user_id, 
tenant_id)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/assignment/core.py, line 160, in 
get_roles_for_user_and_project
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi group_role_list = 
_get_group_project_roles(user_id, project_ref)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/assignment/core.py, line 111, in 
_get_group_project_roles
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi group_refs = 
self.identity_api.list_groups_for_user(user_id)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 177, in 
wrapper
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 425, in 
list_groups_for_user
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi group_list = 
driver.list_groups_for_user(user_id)
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/dist-packages/keystone/identity/backends/ldap.py, line 
154, in list_groups_for_user
  2014-03-31 15:44:27.459 3018 TRACE keystone.common.wsgi 

[Yahoo-eng-team] [Bug 1302976] Re: Install the Image Service in OpenStack Installation Guide for Ubuntu 12.04 (LTS)  - icehouse - Configuration error

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85730
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=6e2c24ade0b1d77af5b6079b8026992091a1ca6d
Submitter: Jenkins
Branch:master

commit 6e2c24ade0b1d77af5b6079b8026992091a1ca6d
Author: Matt Kassawara mkassaw...@gmail.com
Date:   Mon Apr 7 08:28:44 2014 -0600

Corrected 'rpc_backend' in Glance installation section

I corrected the 'rpc_backend' configuration key in
/etc/glance-api.conf to use 'rabbit' and 'qpid' values from Oslo.
I also removed extraneous AMQP configuration from
/etc/glance-registry.conf.

Change-Id: Ice9848de7fdee0df82bf35082237371e1d6ed19d
Closes-Bug: #1302976


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1302976

Title:
  Install the Image Service in OpenStack Installation Guide for Ubuntu
  12.04 (LTS)  - icehouse - Configuration error

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Manuals:
  Fix Released
Status in Ubuntu:
  New

Bug description:
  Hello,
  The rpc_backend field set as glance.rpc.impl_kombu is not recognized. The 
process crashes with a CRITICAL error (see below)
  Using glance.openstack.common.rpc.impl_kombu instead seems to resolve the 
problem.

  
  CRITICAL glance [-] DriverLoadFailure: Failed to load transport driver 
glance.rpc.impl_kombu: No 'oslo.messaging.drivers' driver found, looking for 
'glance.rpc.impl_kombu'
  TRACE glance Traceback (most recent call last):
  TRACE glance   File /usr/bin/glance-api, line 10, in module
  TRACE glance sys.exit(main())
  TRACE glance   File /usr/lib/python2.7/dist-packages/glance/cmd/api.py, 
line 63, in main
  TRACE glance server.start(config.load_paste_app('glance-api'), 
default_port=9292)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/common/config.py, line 210, in 
load_paste_app
  TRACE glance app = deploy.loadapp(config:%s % conf_file, name=app_name)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
  TRACE glance return loadobj(APP, uri, name=name, **kw)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
  TRACE glance return context.create()
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in create
  TRACE glance return self.object_type.invoke(self)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 203, in invoke
  TRACE glance app = context.app_context.create()
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in create
  TRACE glance return self.object_type.invoke(self)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 144, in invoke
  TRACE glance **context.local_conf)
  TRACE glance   File /usr/lib/python2.7/dist-packages/paste/deploy/util.py, 
line 55, in fix_call
  TRACE glance val = callable(*args, **kw)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/api/__init__.py, line 27, in 
root_app_factory
  TRACE glance return paste.urlmap.urlmap_factory(loader, global_conf, 
**local_conf)
  TRACE glance   File /usr/lib/python2.7/dist-packages/paste/urlmap.py, line 
28, in urlmap_factory
  TRACE glance app = loader.get_app(app_name, global_conf=global_conf)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
  TRACE glance name=name, global_conf=global_conf).create()
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in create
  TRACE glance return self.object_type.invoke(self)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 146, in invoke
  TRACE glance return fix_call(context.object, context.global_conf, 
**context.local_conf)
  TRACE glance   File /usr/lib/python2.7/dist-packages/paste/deploy/util.py, 
line 55, in fix_call
  TRACE glance val = callable(*args, **kw)
  TRACE glance   File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, 
line 472, in factory
  TRACE glance return cls(APIMapper())
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/api/v2/router.py, line 58, in __init__
  TRACE glance images_resource = 
images.create_resource(custom_image_properties)
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/api/v2/images.py, line 809, in 
create_resource
  TRACE glance controller = ImagesController()
  TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/api/v2/images.py, line 49, in __init__
  TRACE glance self.notifier = notifier or glance.notifier.Notifier()
  TRACE glance   File 

[Yahoo-eng-team] [Bug 1303968] [NEW] Empty ini files not outputting correctly

2014-04-07 Thread Joshua Harlow
Public bug reported:

It appears that when values are set on an empty configuration file, those
values do not actually end up in the output. This appears to be a feature/bug
of iniparse and configparser in how they handle the DEFAULT section: if the
DEFAULT section is not present in the initial document, subsequent
modifications to the DEFAULT section's keys and values will not cause a new
DEFAULT section to be created (and then output). Since nova.conf.sample
recently disappeared and the handling of this now returns an empty string
(without a DEFAULT section), we need to add logic that creates a DEFAULT
section if it is not present in the initial document, to ensure that the
keys/values we add/remove/set are actually adjusted and output correctly.
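
A minimal sketch of the workaround described above, assuming iniparse is used
to parse and later serialize the document (names are illustrative):

    import io

    import iniparse

    def load_with_default_section(contents):
        # iniparse/configparser will not emit a DEFAULT section that was
        # absent from the original document, so add the header up front.
        if '[DEFAULT]' not in contents:
            contents = '[DEFAULT]\n' + contents
        parser = iniparse.ConfigParser()
        parser.readfp(io.StringIO(contents))
        return parser

    parser = load_with_default_section(u'')     # e.g. an empty nova.conf
    parser.set('DEFAULT', 'verbose', 'True')    # now survives serialization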

** Affects: anvil
 Importance: Critical
 Assignee: Joshua Harlow (harlowja)
 Status: New

** Changed in: anvil
   Importance: Undecided => Critical

** Changed in: anvil
 Assignee: (unassigned) => Joshua Harlow (harlowja)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1303968

Title:
  Empty ini files not outputting correctly

Status in ANVIL for forging OpenStack.:
  New

Bug description:
  It appears that when values are set on an empty configuration file, those
  values do not actually end up in the output. This appears to be a
  feature/bug of iniparse and configparser in how they handle the DEFAULT
  section: if the DEFAULT section is not present in the initial document,
  subsequent modifications to the DEFAULT section's keys and values will not
  cause a new DEFAULT section to be created (and then output). Since
  nova.conf.sample recently disappeared and the handling of this now returns
  an empty string (without a DEFAULT section), we need to add logic that
  creates a DEFAULT section if it is not present in the initial document, to
  ensure that the keys/values we add/remove/set are actually adjusted and
  output correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1303968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303983] [NEW] Enable ServerGroup scheduler filters by default

2014-04-07 Thread Russell Bryant
Public bug reported:

The Icehouse release includes a server group REST API.  For these groups
to actually function properly, the server group scheduler filters must
be enabled.  So, these filters should be enabled by default since the
API is also enabled by default.  If the API is not used, the scheduler
filters will be a no-op.

http://lists.openstack.org/pipermail/openstack-
dev/2014-April/032068.html
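
A sketch of what the changed default could look like where the scheduler
registers its options (the surrounding filter list is abbreviated and the help
text is paraphrased):

    from oslo.config import cfg

    scheduler_opts = [
        cfg.ListOpt('scheduler_default_filters',
                    default=['RetryFilter',
                             'AvailabilityZoneFilter',
                             'RamFilter',
                             'ComputeFilter',
                             'ComputeCapabilitiesFilter',
                             'ImagePropertiesFilter',
                             # added so server groups work out of the box
                             'ServerGroupAntiAffinityFilter',
                             'ServerGroupAffinityFilter'],
                    help='Filter class names the scheduler uses when the '
                         'request does not specify any.'),
    ]

    cfg.CONF.register_opts(scheduler_opts)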

** Affects: nova
 Importance: High
 Assignee: Russell Bryant (russellb)
 Status: In Progress

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Russell Bryant (russellb)

** Changed in: nova
Milestone: None => icehouse-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303983

Title:
  Enable ServerGroup scheduler filters by default

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The Icehouse release includes a server group REST API.  For these
  groups to actually function properly, the server group scheduler
  filters must be enabled.  So, these filters should be enabled by
  default since the API is also enabled by default.  If the API is not
  used, the scheduler filters will be a no-op.

  http://lists.openstack.org/pipermail/openstack-
  dev/2014-April/032068.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303988] [NEW] Two Error messages should not be shown when creating host aggregate fails

2014-04-07 Thread Alejandro Emanuel Paredes
Public bug reported:

When creating a new host aggregate and, in the same workflow, a host is added to
the aggregate and that step fails, two error messages are shown.
There should be 1 error message (error adding the host to the aggregate) and 1
success message (created new host aggregate).

This bug is solved by removing the return False in the
add_host_to_aggregate exception handler in the handle method of the class
CreateAggregateWorkflow.
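
A hypothetical sketch of the proposed behaviour in the workflow's handle()
method (the API helpers and context keys are assumptions, not the exact
Horizon code):

    from django.utils.translation import ugettext_lazy as _
    from horizon import exceptions
    from openstack_dashboard import api

    def handle(self, request, context):
        # Sketch of the workflow's handle(); the workflow framework shows
        # the success message itself when this returns True.
        try:
            aggregate = api.nova.aggregate_create(
                request,
                name=context['name'],
                availability_zone=context.get('availability_zone') or None)
        except Exception:
            exceptions.handle(request,
                              _('Unable to create the host aggregate.'))
            return False
        for host in context.get('hosts', []):
            try:
                api.nova.add_host_to_aggregate(request, aggregate.id, host)
            except Exception:
                exceptions.handle(request,
                                  _('Error adding the host to the aggregate.'))
                # Intentionally no "return False" here: the aggregate itself
                # was created, so only the host-add failure is reported.
        return True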

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1303988

Title:
  Two Error messages should not be shown when creating host aggregate
  fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a new host aggregate and, in the same workflow, a host is added
to the aggregate and that step fails, two error messages are shown.
  There should be 1 error message (error adding the host to the aggregate) and 1
success message (created new host aggregate).

  This bug is solved by removing the return False in the
  add_host_to_aggregate exception handler in the handle method of the class
  CreateAggregateWorkflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1303988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303986] [NEW] CloudSigma Datasource doesn't handle vendor-data correctly

2014-04-07 Thread Viktor Petersson
Public bug reported:

It appears that the CloudSigma datasource only has support for 'user-
data', and not 'vendor-data'.

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: cloudsigma

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1303986

Title:
  CloudSigma Datasource doesn't handle vendor-data correctly

Status in Init scripts for use on cloud images:
  New

Bug description:
  It appears that the CloudSigma datasource only has support for 'user-
  data', and not 'vendor-data'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1303986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302490] Re: Requirements fail to be synced in milestone-proposed

2014-04-07 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302490

Title:
  Requirements fail to be synced in milestone-proposed

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Object Storage (Swift):
  New

Bug description:
  With our current process around openstack/requirements, no
  requirements sync is ever pushed to milestone-proposed branches. None
  is proposed until the openstack/requirements MP branch is created, and
  when it is, the propose-requirements job fails with:

  git review -t openstack/requirements milestone-proposed
  + OUTPUT='Had trouble running git log --color=auto --decorate --oneline 
milestone-proposed --not remotes/gerrit/milestone-proposed
  fatal: ambiguous argument '\''milestone-proposed'\'': unknown revision or 
path not in the working tree.
  Use '\''--'\'' to separate paths from revisions'

  See https://jenkins.openstack.org/job/propose-requirements-
  updates/153/console as an example (while it lasts)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1302490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303993] [NEW] xenapi: fixup bittorrent plugin for better performance and less logging

2014-04-07 Thread Christopher Lefelhocz
Public bug reported:

The bittorrent plugin currently has some settings which could use some
improvement.  We have this data and would like to get
it upstream.  Also, the plugin logs every second, which is somewhat excessive;
we would like to log only every 10 seconds.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303993

Title:
  xenapi: fixup bittorrent plugin for better performance and less
  logging

Status in OpenStack Compute (Nova):
  New

Bug description:
  The bittorrent plugin currently has some settings which could use some
improvement.  We have this data and would like to get
  it upstream.  Also, the plugin logs every second, which is somewhat
excessive; we would like to log only every 10 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303998] [NEW] vm fails with error vif_type=binding_failed using gre tunnels

2014-04-07 Thread Phil Hopkins
Public bug reported:

I am running Icehouse r-1 on Ubuntu 12.04. Whenever I try to launch a VM
it immediately goes into error state. The log file for nova-compute
shows the following:

 http_log_req /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:173
2014-04-07 19:15:32.888 2866 DEBUG neutronclient.client [-] RESP:{'date': 'Mon, 
07 Apr 2014 19:15:32 GMT', 'status': '204', 'content-length
': '0', 'x-openstack-request-id': 'req-92a58024-6cd6-4ef3-bd81-f579bd057445'} 
 http_log_resp 
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
2014-04-07 19:15:32.888 2866 DEBUG nova.network.api 
[req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a 
f1c5b087cd9e
412daf2360c0cf83a5c6] Updating cache with info: [] 
update_instance_cache_with_nw_info 
/usr/lib/python2.7/dist-packages/nova/network/api.py:
74
2014-04-07 19:15:32.909 2866 ERROR nova.compute.manager 
[req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a 
f1c5b087
cd9e412daf2360c0cf83a5c6] [instance: a85f771d-13d2-4cba-88f6-6c26a5cc7f37] 
Error: Unexpected vif_type=binding_failed


--snip--

2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 858, in 
unplug_vifs
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
self.vif_driver.unplug(instance, vif)
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py, line 798, in unplug
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
_(Unexpected vif_type=%s) % vif_type)
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher NovaException: 
Unexpected vif_type=binding_failed
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] Returning 
exception Unexpected vif_type=binding_failed to caller
2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] Returning 
exception Unexpected vif_type=binding_failed to caller

full log file for nova-compute at: http://paste.openstack.org/show/75244/
Log file for /var/log/neutron/openvswitch-agent.log is at: 
http://paste.openstack.org/show/75245/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303998

Title:
  vm fails with error vif_type=binding_failed using gre tunnels

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am running Icehouse r-1 on Ubuntu 12.04. Whenever I try to launch a
  VM it immediately goes into error state. The log file for nova-compute
  shows the following:

   http_log_req 
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:173
  2014-04-07 19:15:32.888 2866 DEBUG neutronclient.client [-] RESP:{'date': 
'Mon, 07 Apr 2014 19:15:32 GMT', 'status': '204', 'content-length
  ': '0', 'x-openstack-request-id': 'req-92a58024-6cd6-4ef3-bd81-f579bd057445'} 
   http_log_resp 
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
  2014-04-07 19:15:32.888 2866 DEBUG nova.network.api 
[req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a 
f1c5b087cd9e
  412daf2360c0cf83a5c6] Updating cache with info: [] 
update_instance_cache_with_nw_info 
/usr/lib/python2.7/dist-packages/nova/network/api.py:
  74
  2014-04-07 19:15:32.909 2866 ERROR nova.compute.manager 
[req-48a2dbdd-7067-4c48-8c09-62d604160d59 b90c0e0ca1aa4cd79703c50e1ff8684a 
f1c5b087
  cd9e412daf2360c0cf83a5c6] [instance: a85f771d-13d2-4cba-88f6-6c26a5cc7f37] 
Error: Unexpected vif_type=binding_failed

  
  --snip--

  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 858, in 
unplug_vifs
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
self.vif_driver.unplug(instance, vif)
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py, line 798, in unplug
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
_(Unexpected vif_type=%s) % vif_type)
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
NovaException: Unexpected vif_type=binding_failed
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
  2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] 
Returning exception Unexpected vif_type=binding_failed to caller
  2014-04-07 19:15:33.218 2866 TRACE oslo.messaging.rpc.dispatcher 
  2014-04-07 19:15:33.221 2866 ERROR oslo.messaging._drivers.common [-] 
Returning exception Unexpected vif_type=binding_failed to caller

  full log file for nova-compute at: 

[Yahoo-eng-team] [Bug 1304049] [NEW] able to create two users with the same name in the same domain

2014-04-07 Thread Guang Yee
Public bug reported:

Looks like we can create two different users with the same name in the
same domain. That should not be allowed.

gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'X-Auth-Token: 
ADMIN' -H 'Content-Type: application/json' -d '{domain: {name: 
test-domain}}' -XPOST http://localhost:35357/v3/domains | python -mjson.tool
{
domain: {
enabled: true,
id: ebf7d50dbba54e13a1fe881e39ad4409,
links: {
self: 
http://localhost:35357/v3/domains/ebf7d50dbba54e13a1fe881e39ad4409;
},
name: test-domain
}
}
gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'X-Auth-Token: 
ADMIN' -H 'Content-Type: application/json' -d '{user: {name: jacksquat, 
password: jacksquat, domain_id: ebf7d50dbba54e13a1fe881e39ad4409}}' 
-XPOST http://localhost:35357/v3/users | python -mjson.tool
{
user: {
domain_id: ebf7d50dbba54e13a1fe881e39ad4409,
enabled: true,
id: 375ac107d3624752a5a53dc561ba578c,
links: {
self: 
http://localhost:35357/v3/users/375ac107d3624752a5a53dc561ba578c;
},
name: jacksquat
}
}
gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'X-Auth-Token: 
ADMIN' -H 'Content-Type: application/json' -d '{user: {name: jacksquat, 
password: jacksquat-fake, domain_id: 
ebf7d50dbba54e13a1fe881e39ad4409}}' -XPOST http://localhost:35357/v3/users | 
python -mjson.tool
{
user: {
domain_id: ebf7d50dbba54e13a1fe881e39ad4409,
enabled: true,
id: c3bd426062d243d68d5ada2bb5984751,
links: {
self: 
http://localhost:35357/v3/users/c3bd426062d243d68d5ada2bb5984751;
},
name: jacksquat
}
}


Now try to authenticate the user and obviously it will fail.
 
gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'Content-Type: 
application/json' -d '{auth: {identity: {methods: [password], 
password: {user: {name: jacksquat, password: jacksquat, domain: 
{id: ebf7d50dbba54e13a1fe881e39ad4409}}' -XPOST 
http://localhost:35357/v3/auth/tokens | python -mjson.tool
{
error: {
code: 500,
message: An unexpected error prevented the server from fulfilling 
your request.,
title: Internal Server Error
}
}
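
One possible guard, sketched as a helper the create-user path could call before
persisting the new user (get_user_by_name and the exception classes are from
Keystone; the helper itself is hypothetical):

    from keystone import exception

    def assert_user_name_unique(identity_api, domain_id, name):
        # Refuse creation when a user with the same name already exists in
        # the target domain, instead of silently storing a duplicate.
        try:
            identity_api.get_user_by_name(name, domain_id)
        except exception.UserNotFound:
            return
        raise exception.Conflict(type='user',
                                 details='duplicate name %s in domain %s'
                                         % (name, domain_id))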

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1304049

Title:
  able to create two users with the same name in the same domain

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Looks like we can create two different users with the same name in the
  same domain. That should not be allowed.

  gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'X-Auth-Token: 
ADMIN' -H 'Content-Type: application/json' -d '{domain: {name: 
test-domain}}' -XPOST http://localhost:35357/v3/domains | python -mjson.tool
  {
  domain: {
  enabled: true,
  id: ebf7d50dbba54e13a1fe881e39ad4409,
  links: {
  self: 
http://localhost:35357/v3/domains/ebf7d50dbba54e13a1fe881e39ad4409;
  },
  name: test-domain
  }
  }
  gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'X-Auth-Token: 
ADMIN' -H 'Content-Type: application/json' -d '{user: {name: jacksquat, 
password: jacksquat, domain_id: ebf7d50dbba54e13a1fe881e39ad4409}}' 
-XPOST http://localhost:35357/v3/users | python -mjson.tool
  {
  user: {
  domain_id: ebf7d50dbba54e13a1fe881e39ad4409,
  enabled: true,
  id: 375ac107d3624752a5a53dc561ba578c,
  links: {
  self: 
http://localhost:35357/v3/users/375ac107d3624752a5a53dc561ba578c;
  },
  name: jacksquat
  }
  }
  gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'X-Auth-Token: 
ADMIN' -H 'Content-Type: application/json' -d '{user: {name: jacksquat, 
password: jacksquat-fake, domain_id: 
ebf7d50dbba54e13a1fe881e39ad4409}}' -XPOST http://localhost:35357/v3/users | 
python -mjson.tool
  {
  user: {
  domain_id: ebf7d50dbba54e13a1fe881e39ad4409,
  enabled: true,
  id: c3bd426062d243d68d5ada2bb5984751,
  links: {
  self: 
http://localhost:35357/v3/users/c3bd426062d243d68d5ada2bb5984751;
  },
  name: jacksquat
  }
  }

  
  Now try to authenticate the user and obviously it will fail.
   
  gyee@gyee-VirtualBox:~/projects/openstack/keystone$ curl -s -H 'Content-Type: 
application/json' -d '{auth: {identity: {methods: [password], 
password: {user: {name: jacksquat, password: jacksquat, domain: 
{id: ebf7d50dbba54e13a1fe881e39ad4409}}' -XPOST 
http://localhost:35357/v3/auth/tokens | python -mjson.tool
  {
  error: {
  code: 500,
  message: An unexpected error prevented the server from fulfilling 
your request.,
  title: Internal Server Error
  }
  }

To 

[Yahoo-eng-team] [Bug 1304056] [NEW] Resource Usage graph format numbers

2014-04-07 Thread Cindy Lu
Public bug reported:

In the d3 line graph the hover detail should show a formatted number.

For example, if you select image.size and then hover on the line.
Please see attached image.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: 040714 - units.png
   
https://bugs.launchpad.net/bugs/1304056/+attachment/4073119/+files/040714%20-%20units.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1304056

Title:
  Resource Usage graph format numbers

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the d3 line graph the hover detail should show a formatted number.

  For example, if you select image.size and then hover on the line.
  Please see attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1304056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304066] [NEW] Daily Usage report format 'Value (Avg)' column values

2014-04-07 Thread Cindy Lu
Public bug reported:

i.e. 12345 => 12,345
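
A minimal sketch of the formatting this asks for, using Django's humanize
helper (where it gets wired into the usage table column is omitted):

    from django.contrib.humanize.templatetags.humanize import intcomma

    def format_value(value):
        # 12345 -> "12,345"
        return intcomma(value)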

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1304066

Title:
  Daily Usage report format 'Value (Avg)' column values

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  i.e. 12345 => 12,345

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1304066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256043] Re: Need to add Development environment files to ignore list

2014-04-07 Thread Steve Baker
** No longer affects: python-heatclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1256043

Title:
  Need to add Development environment files to ignore list

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  In Progress
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Nova:
  In Progress
Status in Python client library for Swift:
  Won't Fix
Status in OpenStack Object Storage (Swift):
  Won't Fix

Bug description:
  Following files generated by Eclipse development environment should be
  in ignore list to avoid their inclusion during a git push.

  .project
  .pydevproject

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1256043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229324] Re: extraneous vim editor configuration comments

2014-04-07 Thread Steve Baker
** No longer affects: python-heatclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1229324

Title:
  extraneous vim editor configuration comments

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Hacking Guidelines:
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Python client library for Ceilometer:
  In Progress
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for Ironic:
  In Progress
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Swift:
  In Progress
Status in Trove client binding:
  In Progress
Status in Storyboard database creator:
  New
Status in OpenStack Object Storage (Swift):
  In Progress
Status in Taskflow for task-oriented systems.:
  In Progress
Status in Tempest:
  Fix Released
Status in Tuskar:
  In Progress

Bug description:
  Many of the source code files have a beginning line

  # vim: tabstop=4 shiftwidth=4 softtabstop=4

  This should be deleted.

  Many of these lines are in the ceilometer/openstack/common directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1229324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233916] Re: Deleting an instance doesn't check task_state properly

2014-04-07 Thread Jordan Callicoat
*** This bug is a duplicate of bug 1143659 ***
https://bugs.launchpad.net/bugs/1143659

** This bug has been marked a duplicate of bug 1143659
   Deleting instance during snapshot leaves snapshot state in saving

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233916

Title:
  Deleting an instance doesn't check task_state properly

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Issuing a delete on an instance with any task_state other than None
  should either fail, or do cleanup as required. For example, when
  taking a snapshot of an instance, task_state goes to image_snapshot,
  but during the snapshot you can issue a delete on the instance, and
  task_state goes to deleting as the resources are removed.  If you do
  it quickly, while the snapshot is running, it ends up deleting the
  backing disk before the snapshot is complete and the snapshot just
  hangs in SAVING status. I also saw the state transition go image_snapshot
  -> image_pending_upload -> deleting and the snapshot hung in SAVING in
  that case as well.

  Steps to reproduce:

  1. Create instance
  2. Create snapshot
  3. Delete instance while snapshot is running
  4. Hung snapshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304093] [NEW] IPv6 API attributes cannot be used until IPv6 is fully supported

2014-04-07 Thread Salvatore Orlando
Public bug reported:

A few IPv6 blueprints were not implemented in Icehouse.
These blueprints are needed for full IPv6 support.

Therefore the IPv6 attributes for ra_mode and address_mode are not yet
functional.

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304093

Title:
  IPv6 API attributes cannot be used until IPv6 is  fully supported

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A few IPv6 blueprints were not implemented in Icehouse.
  These blueprints are needed for full IPv6 support.

  Therefore the IPv6 attributes for ra_mode and address_mode are not yet
  functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255876] Re: need to ignore swap files from getting into repository

2014-04-07 Thread Steve Baker
** No longer affects: python-heatclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255876

Title:
  need to ignore swap files from getting into repository

Status in OpenStack Telemetry (Ceilometer):
  Invalid
Status in Heat Orchestration Templates and tools:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Won't Fix
Status in Python client library for Ceilometer:
  Fix Committed
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  Fix Committed
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  Fix Committed
Status in Python client library for Nova:
  Fix Released
Status in Python client library for Swift:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Invalid

Bug description:
  need to ignore swap files from getting into the repository
  currently the ignore pattern implemented in .gitignore is *.swp
  however vim generates more than these, so it could be improved to *.sw?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1255876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304099] [NEW] link prefixes are truncated

2014-04-07 Thread Evan Petrie
Public bug reported:

The osapi_glance_link_prefix and osapi_compute_link_prefix configuration
parameters have their paths removed. For instance, if nova.conf contains

osapi_compute_link_prefix = http:/127.0.0.1/compute/

the values displayed in the API response exclude the compute/
component. Other services, such as keystone, retain the path.

This bit of code is where the bug occurs:

https://github.com/openstack/nova/blob/673ecaea3935b6a50294f24f8a964590ca07a959/nova/api/openstack/common.py#L568-L582
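
A sketch of prefix handling that keeps the path component instead of replacing
only the scheme and netloc (this is the suggested behaviour, not the current
Nova helper):

    from six.moves.urllib import parse as urlparse

    def update_link_prefix(orig_url, prefix):
        if not prefix:
            return orig_url
        url_parts = list(urlparse.urlsplit(orig_url))
        prefix_parts = list(urlparse.urlsplit(prefix))
        # Replace scheme and netloc, and prepend the prefix path
        # (e.g. "/compute") rather than discarding it.
        url_parts[0:2] = prefix_parts[0:2]
        url_parts[2] = prefix_parts[2].rstrip('/') + url_parts[2]
        return urlparse.urlunsplit(url_parts)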

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  The osapi_glance_link_prefix and osapi_compute_link_prefix configuration
- parameters have their path's removed. For instance, if nova.conf
- contains
+ parameters have their paths removed. For instance, if nova.conf contains
  
  osapi_compute_link_prefix = http:/127.0.01/compute/
  
  the values displayed in the API response exclude the compute/
  component. Other services, such as keystone, retain the path.
  
  This bit of code is where the bug occurs:
  
  
https://github.com/openstack/nova/blob/673ecaea3935b6a50294f24f8a964590ca07a959/nova/api/openstack/common.py#L568-L582

** Description changed:

  The osapi_glance_link_prefix and osapi_compute_link_prefix configuration
  parameters have their paths removed. For instance, if nova.conf contains
  
- osapi_compute_link_prefix = http:/127.0.01/compute/
+ osapi_compute_link_prefix = http:/127.0.0.1/compute/
  
  the values displayed in the API response exclude the compute/
  component. Other services, such as keystone, retain the path.
  
  This bit of code is where the bug occurs:
  
  
https://github.com/openstack/nova/blob/673ecaea3935b6a50294f24f8a964590ca07a959/nova/api/openstack/common.py#L568-L582

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304099

Title:
  link prefixes are truncated

Status in OpenStack Compute (Nova):
  New

Bug description:
  The osapi_glance_link_prefix and osapi_compute_link_prefix
  configuration parameters have their paths removed. For instance, if
  nova.conf contains

  osapi_compute_link_prefix = http:/127.0.0.1/compute/

  the values displayed in the API response exclude the compute/
  component. Other services, such as keystone, retain the path.

  This bit of code is where the bug occurs:

  
https://github.com/openstack/nova/blob/673ecaea3935b6a50294f24f8a964590ca07a959/nova/api/openstack/common.py#L568-L582

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304105] [NEW] Two duplicated config section: securitygroup and security_group

2014-04-07 Thread Akihiro Motoki
Public bug reported:

There are two duplicated configuration sections: security_group and 
securitygroup.
Reference dev ML thread: 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032086.html

[securitygroup] firewall_driver
[security_group] enable_security_group

The [securitygroup] section exists in Havana and previous releases and it
is the right section name.
When we introduced the enable_security_group option, we seem to have added a new
section
accidentally. We didn't intend to introduce a new section name.

Both firewall_driver and enable_security_group should be placed in
[securitygroup].

It should be fixed before the release.
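
A sketch of what the corrected registration could look like, with both options
declared under the existing [securitygroup] group (defaults and help strings
here are illustrative):

    from oslo.config import cfg

    security_group_opts = [
        cfg.StrOpt('firewall_driver',
                   help='Driver for security groups firewall in the L2 agent'),
        cfg.BoolOpt('enable_security_group', default=True,
                    help='Controls whether the agent applies security groups'),
    ]

    # Register both options under the long-standing section name.
    cfg.CONF.register_opts(security_group_opts, 'securitygroup')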

** Affects: neutron
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: icehouse-rc-potential

** Tags removed: icehouse-rcpo
** Tags added: icehouse-rc-potential

** Changed in: neutron
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304105

Title:
  Two duplicated config section: securitygroup and security_group

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are two duplicated configuration sections: security_group and 
securitygroup.
  Reference dev ML thread: 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032086.html

  [securitygroup] firewall_driver
  [security_group] enable_security_group

  The [securitygroup] section exists in Havana and previous releases and it
is the right section name.
  When we introduced the enable_security_group option, we seem to have added
a new section
  accidentally. We didn't intend to introduce a new section name.

  Both firewall_driver and enable_security_group should be placed in
  [securitygroup].

  It should be fixed before the release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304107] [NEW] Libvirt error launching instance - Device 'virtio-net-pci' could not be initialized

2014-04-07 Thread Matt Kassawara
Public bug reported:

I'm developing the installation guide for Icehouse. In this particular
case, I'm installing and testing RC1 on Ubuntu 12.04 with nova
networking. All nodes in this environment run as VMs and the nova-
compute service uses QEMU due to hardware limitations with nested VMs.
Attempting to launch an instance generates the following error (full
traceback attached):

2014-04-07 17:50:52.235 1220 TRACE nova.compute.manager [instance: 
4574ce1a-e81f-4bfc-a079-b45c2a1f31ae] libvirtError: internal error: process 
exited while connecting to monitor: qemu-system-x86_64: -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:c4:59,bus=pci.0,addr=0x1:
 PCI: slot 1 function 0 not available for virtio-net-pci, in use by PIIX3
2014-04-07 17:50:52.235 1220 TRACE nova.compute.manager [instance: 
4574ce1a-e81f-4bfc-a079-b45c2a1f31ae] qemu-system-x86_64: -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:c4:59,bus=pci.0,addr=0x1:
 Device initialization failed.
2014-04-07 17:50:52.235 1220 TRACE nova.compute.manager [instance: 
4574ce1a-e81f-4bfc-a079-b45c2a1f31ae] qemu-system-x86_64: -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:c4:59,bus=pci.0,addr=0x1:
 Device 'virtio-net-pci' could not be initialized

Package version information:

nova-compute: :2014.1~rc1-0ubuntu1~cloud0
nova-network: :2014.1~rc1-0ubuntu1~cloud0
libvirt-bin: 1.2.2-0ubuntu7~cloud0

Command output:

$ nova service-list
+--++--+-+---++-+
| Binary   | Host   | Zone | Status  | State | Updated_at   
  | Disabled Reason |
+--++--+-+---++-+
| nova-cert| hst-osctl5 | internal | enabled | up| 
2014-04-08T00:33:15.00 | -   |
| nova-consoleauth | hst-osctl5 | internal | enabled | up| 
2014-04-08T00:33:19.00 | -   |
| nova-scheduler   | hst-osctl5 | internal | enabled | up| 
2014-04-08T00:33:13.00 | -   |
| nova-conductor   | hst-osctl5 | internal | enabled | up| 
2014-04-08T00:33:16.00 | -   |
| nova-compute | hst-oscpu5 | nova | enabled | up| 
2014-04-08T00:33:15.00 | -   |
| nova-network | hst-oscpu5 | internal | enabled | up| 
2014-04-08T00:33:13.00 | -   |
+--++--+-+---++-+

$ nova net-list
+--+--+--+
| ID   | Label| CIDR |
+--+--+--+
| 7f849be3-4494-495a-95a1-0f99ccb884c4 | demo-net | 172.24.246.24/29 |
+--+--+--+

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: icehouse-backport-potential

** Attachment added: nova-compute.txt
   
https://bugs.launchpad.net/bugs/1304107/+attachment/4073375/+files/nova-compute.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304107

Title:
  Libvirt error launching instance - Device 'virtio-net-pci' could not
  be initialized

Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm developing the installation guide for Icehouse. In this particular
  case, I'm installing and testing RC1 on Ubuntu 12.04 with nova
  networking. All nodes in this environment run as VMs and the nova-
  compute service uses QEMU due to hardware limitations with nested VMs.
  Attempting to launch an instance generates the following error (full
  traceback attached):

  2014-04-07 17:50:52.235 1220 TRACE nova.compute.manager [instance: 
4574ce1a-e81f-4bfc-a079-b45c2a1f31ae] libvirtError: internal error: process 
exited while connecting to monitor: qemu-system-x86_64: -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:c4:59,bus=pci.0,addr=0x1:
 PCI: slot 1 function 0 not available for virtio-net-pci, in use by PIIX3
  2014-04-07 17:50:52.235 1220 TRACE nova.compute.manager [instance: 
4574ce1a-e81f-4bfc-a079-b45c2a1f31ae] qemu-system-x86_64: -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:c4:59,bus=pci.0,addr=0x1:
 Device initialization failed.
  2014-04-07 17:50:52.235 1220 TRACE nova.compute.manager [instance: 
4574ce1a-e81f-4bfc-a079-b45c2a1f31ae] qemu-system-x86_64: -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:c4:59,bus=pci.0,addr=0x1:
 Device 'virtio-net-pci' could not be initialized

  Package version information:

  nova-compute: :2014.1~rc1-0ubuntu1~cloud0
  nova-network: :2014.1~rc1-0ubuntu1~cloud0
  libvirt-bin: 1.2.2-0ubuntu7~cloud0

  Command output:

  $ nova service-list
  

[Yahoo-eng-team] [Bug 1304127] [NEW] NSX: dhcp port missing on the metadata network

2014-04-07 Thread Armando Migliaccio
Public bug reported:

The DHCP agent used to have a leg on the metadata network, this is a
regression caused by:

https://review.openstack.org/#/c/69465/

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: Confirmed


** Tags: icehouse-backport-potential vmware

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304127

Title:
  NSX: dhcp port missing on the metadata network

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The DHCP agent used to have a leg on the metadata network, this is a
  regression caused by:

  https://review.openstack.org/#/c/69465/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298771] Re: GET /servers/​{server_id}​ should return 400 on invalid server_id but returns 404

2014-04-07 Thread Christopher Yeoh
The URL supplied is invalid if /tenant_id/servers/{server_id} does not
exist because server_id is not a valid server id. So 404 is the
appropriate response code.


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298771

Title:
  GET /servers/​{server_id}​ should return 400 on invalid server_id but
  returns 404

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Currently,

  Call to GET /${tenant_id}/servers/${server_id} validates the server_id
  (uuid_like/int_like) and if found invalid, returns a 404 with
  ''Instance could not be found'' message.

  On invalid server_id, it should return 400 (Bad Request) with the
  message like Invalid server_id

  more info:
  ref: https://review.openstack.org/#/c/72637/13/nova/compute/api.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302796] Re: nova-compute (icehouse) exits with a misleading error when libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85707
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=fdffaab6171562487a404963dbf6b7f1f9469a65
Submitter: Jenkins
Branch:milestone-proposed

commit fdffaab6171562487a404963dbf6b7f1f9469a65
Author: Lars Kellogg-Stedman l...@redhat.com
Date:   Fri Apr 4 14:58:12 2014 -0400

mark vif_driver as deprecated and log warning

Several classes were dropped from nova.virt.libvirt.vif from havana ->
icehouse, leading to invalid configurations if one of these classes was
used in the libvirt_vif_driver setting in nova.conf.  The error message
produced by nova-compute in this situation is misleading.

This patch introduces stubs for all of the classes that were removed.
These stubs inherit from LibvirtGenericVIFDriver and log a deprecation
warning in __init__.

This patch also marks the vif_driver option as deprecated.

Change-Id: I6d6cb9315ce6f3b33d17756bcdc77dccda26fefe
Closed-bug: 1302796
(cherry picked from commit 9f6070e194504cc2ca2b7f2a2aabbf91c6b81897)
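
Each removed class comes back as a thin stub roughly along these lines (the
warning text here is paraphrased, not the exact wording of the patch):

    from nova.openstack.common.gettextutils import _
    from nova.openstack.common import log as logging
    from nova.virt.libvirt.vif import LibvirtGenericVIFDriver

    LOG = logging.getLogger(__name__)

    class LibvirtHybridOVSBridgeDriver(LibvirtGenericVIFDriver):
        """Retained for compatibility; behaves like the generic driver."""

        def __init__(self, *args, **kwargs):
            # Warn once at load time that the old class name is deprecated.
            LOG.warning(_('LibvirtHybridOVSBridgeDriver is deprecated; use '
                          'LibvirtGenericVIFDriver (the default) instead.'))
            super(LibvirtHybridOVSBridgeDriver, self).__init__(*args, **kwargs)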


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302796

Title:
  nova-compute (icehouse) exits with a misleading error when
  libvirt_vif_driver =
  nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In Havana, this was a valid setting:

  libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

  The nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver class has been
  removed in Icehouse; if nova-compute is run with this setting in
  nova.conf, the resulting error is...

 2014-04-04 19:33:55.783 17413 TRACE nova.virt.driver ImportError:
  Class LibvirtDriver cannot be found (['Traceback (most recent call
  last):\n', '  File /usr/lib/python2.6/site-
  packages/nova/openstack/common/importutils.py, line 29, in
  import_class\nreturn getattr(sys.modules[mod_str], class_str)\n',
  AttributeError: 'module' object has no attribute 'LibvirtDriver'\n])

  ...which is misleading, and will cause people to start looking at the
  setting of compute_driver.  The error is caused by the libvirt driver
  attempting to import the vif class:

vif_class =
  importutils.import_class(CONF.libvirt.vif_driver)

  If this configuration option was valid in Havana, then:

  (a) there should probably be a deprecation warning prior to it going away, 
and 
  (b) the error message in icehouse should point at the actual problem rather 
than throwing a misleading exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300325] Re: nic port ordering is not honored

2014-04-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/85711
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f78ee9c53a8cb28f5b8e1dfeb21adfa78be2e09a
Submitter: Jenkins
Branch:milestone-proposed

commit f78ee9c53a8cb28f5b8e1dfeb21adfa78be2e09a
Author: Aaron Rosen aaronoro...@gmail.com
Date:   Mon Mar 31 11:35:29 2014 -0700

Ensure network interfaces are in requested order

_build_network_info_model was iterating current_neutron_ports
instead of port_ids which contains ports in their correctly requested
order. Because of this the requested nic order was no longer being
perserved. This patch fixes this and also changes the order of ports
in test_build_network_info_model() so this case is tested.

Change-Id: Ia9e71364bca6cbc24ebc1c234e6a5af14f51cd62
Closes-bug: #1300325
(cherry picked from commit 721e7f939859fbfe6b0c79ef3a6d5e43c916da65)
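
The essence of the fix as a standalone sketch (names simplified from the Nova
code):

    def order_ports(requested_port_ids, current_neutron_ports):
        # Look ports up by the IDs the user requested, so the resulting list
        # preserves the requested NIC order instead of Neutron's ordering.
        ports_by_id = dict((port['id'], port) for port in current_neutron_ports)
        return [ports_by_id[port_id]
                for port_id in requested_port_ids
                if port_id in ports_by_id]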


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300325

Title:
  nic port ordering is not honored

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This bug was fixed by https://bugs.launchpad.net/nova/+bug/1064524
  previously but broken by version
  46922068ac167f492dd303efb359d0c649d69118.

  Instead of iterating the already ordered port list, the new code
  iterates the list from neutron and the result is random ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304181] [NEW] neutron should validate gateway_ip is in subnet

2014-04-07 Thread Aaron Rosen
Public bug reported:

I don't believe this is actually a valid network configuration; the
gateway_ip (10.0.0.1) falls outside the subnet's CIDR (10.11.12.0/24):

arosen@arosen-MacBookPro:~/devstack$ neutron subnet-show be0a602b-ea52-4b13-8003-207be20187da
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.11.12.1", "end": "10.11.12.254"} |
| cidr             | 10.11.12.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.0.0.1                                       |
| host_routes      |                                                |
| id               | be0a602b-ea52-4b13-8003-207be20187da           |
| ip_version       | 4                                              |
| name             | private-subnet                                 |
| network_id       | 53ec3eac-9404-41d4-a899-da4f32045abd           |
| tenant_id        | f2d9c1726aa940d3bd5a8ee529ea2480               |
+------------------+------------------------------------------------+
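
A minimal sketch of the validation the title asks for, assuming the
netaddr library (widely used in OpenStack); this is not the eventual
Neutron patch:

    import netaddr

    def gateway_in_subnet(cidr, gateway_ip):
        # For the subnet above: 10.0.0.1 is not in 10.11.12.0/24,
        # so that configuration would be rejected.
        return netaddr.IPAddress(gateway_ip) in netaddr.IPNetwork(cidr)

    assert gateway_in_subnet('10.11.12.0/24', '10.11.12.1')
    assert not gateway_in_subnet('10.11.12.0/24', '10.0.0.1')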

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304181

Title:
  neutron should validate gateway_ip is in subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I don't believe this is actually a valid network configuration; the
  gateway_ip (10.0.0.1) falls outside the subnet's CIDR (10.11.12.0/24):

  arosen@arosen-MacBookPro:~/devstack$ neutron subnet-show be0a602b-ea52-4b13-8003-207be20187da
  +------------------+------------------------------------------------+
  | Field            | Value                                          |
  +------------------+------------------------------------------------+
  | allocation_pools | {"start": "10.11.12.1", "end": "10.11.12.254"} |
  | cidr             | 10.11.12.0/24                                  |
  | dns_nameservers  |                                                |
  | enable_dhcp      | True                                           |
  | gateway_ip       | 10.0.0.1                                       |
  | host_routes      |                                                |
  | id               | be0a602b-ea52-4b13-8003-207be20187da           |
  | ip_version       | 4                                              |
  | name             | private-subnet                                 |
  | network_id       | 53ec3eac-9404-41d4-a899-da4f32045abd           |
  | tenant_id        | f2d9c1726aa940d3bd5a8ee529ea2480               |
  +------------------+------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304183] [NEW] check flavor type is disabled before rebuild or not

2014-04-07 Thread jichencom
Public bug reported:

When a flavor is disabled, it can't be used to create a new instance,
but it can still be used to rebuild an instance through the rebuild
interface; this should be checked.

The create function has the following check, but rebuild does not:

    if instance_type['disabled']:
        raise exception.FlavorNotFound(flavor_id=instance_type['id'])
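
For illustration, the same guard could be factored into a helper and
called at the start of the rebuild path; the helper name is an
assumption, the exception comes from the snippet above.

    from nova import exception

    def _check_flavor_enabled(instance_type):
        # Hypothetical helper: reject disabled flavors before rebuild,
        # mirroring the existing check in the create path.
        if instance_type['disabled']:
            raise exception.FlavorNotFound(flavor_id=instance_type['id'])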

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304183

Title:
  check flavor type is disabled before rebuild or not

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a flavor is disabled, it can't be used to create a new instance,
  but it can still be used to rebuild an instance through the rebuild
  interface; this should be checked.

  The create function has the following check, but rebuild does not:

      if instance_type['disabled']:
          raise exception.FlavorNotFound(flavor_id=instance_type['id'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304184] [NEW] instance stuck into rebuild state when nova-compute unexpected restart

2014-04-07 Thread jichencom
Public bug reported:


When rebuilding an instance, if nova-compute dies unexpectedly, the
instance will stay in the REBUILD state forever unless an admin takes
action.

[root@controller ~]# nova list
+--------------------------------------+--------+---------+------------+-------------+----------+
| ID                                   | Name   | Status  | Task State | Power State | Networks |
+--------------------------------------+--------+---------+------------+-------------+----------+
| a9dd1fd6-27fb-4128-92e6-93bcab085a98 | test11 | REBUILD | rebuilding | Running
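
This is not the actual fix, just a sketch of the kind of start-up
cleanup the report implies: when nova-compute comes back, instances left
in the 'rebuilding' task state could be reset so they are not stuck
forever. The helper name and the choice of ERROR vm_state are
assumptions.

    from nova.compute import task_states
    from nova.compute import vm_states

    def _reset_stuck_rebuild(instance):
        # Hypothetical init-time cleanup: clear the stale task state and
        # flag the instance for operator attention.
        if instance.task_state == task_states.REBUILDING:
            instance.task_state = None
            instance.vm_state = vm_states.ERROR
            instance.save()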

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304184

Title:
  instance stuck into rebuild state when nova-compute unexpected restart

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  When rebuilding an instance, if nova-compute dies unexpectedly, the
  instance will stay in the REBUILD state forever unless an admin takes
  action.

  [root@controller ~]# nova list
  +--------------------------------------+--------+---------+------------+-------------+----------+
  | ID                                   | Name   | Status  | Task State | Power State | Networks |
  +--------------------------------------+--------+---------+------------+-------------+----------+
  | a9dd1fd6-27fb-4128-92e6-93bcab085a98 | test11 | REBUILD | rebuilding | Running

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp