[Openstack] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

I am trying to use Ceph with OpenStack (Grizzly) in a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.

But I cannot boot from volumes.
It doesn't matter whether I use Horizon or the CLI; the VM goes to the error state.

From the nova-compute.log I get this.

2013-05-30 16:08:45.224 ERROR nova.compute.manager [req-5679ddfe-79e3-4adb-b220-915f4a38b532 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc] [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block device setup
[...]
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101] ENETUNREACH

What is nova trying to reach, and how can I debug this further?
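
(One rough way to narrow this down is to reproduce the request nova is making directly from the compute node; the host below is only a placeholder for whatever the cinder endpoint in the keystone catalog points at:

root@compute1:~# curl -i http://<cinder-api-host>:8776/

If that also fails with "Network is unreachable", the problem is routing or firewalling between the compute node and that address rather than anything Ceph- or rbd-specific.)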

Full Log included.

-martin

Log:

ceph --version
ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)

root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
ii  ceph-common          0.61-1precise              common utilities to mount and interact with a ceph storage cluster
ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0    python bindings to the OpenStack Volume API


nova-compute.log

2013-05-30 16:08:45.224 ERROR nova.compute.manager [req-5679ddfe-79e3-4adb-b220-915f4a38b532 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc] [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block device setup
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071, in _prep_block_device
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     return self._setup_block_device_mapping(context, instance, bdms)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in _setup_block_device_mapping
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     volume = self.volume_api.get(context, bdm['volume_id'])
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     self._reraise_translated_volume_exception(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     item = cinderclient(context).volumes.get(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     return self._get("/volumes/%s" % volume_id, "volume")
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141, in _get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     resp, body = self.api.client.get(url)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 185, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     return self._cs_request(url, 'GET', **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 153, in _cs_request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 123, in request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]     **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc]   File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
2013-05-30 16:08:45.224

Re: [Openstack] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 07:37 AM, Martin Mailand wrote:

Hi Josh,

I am trying to use Ceph with OpenStack (Grizzly) in a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.

But I cannot boot from volumes.
It doesn't matter whether I use Horizon or the CLI; the VM goes to the error state.

 From the nova-compute.log I get this.

2013-05-30 16:08:45.224 ERROR nova.compute.manager [req-5679ddfe-79e3-4adb-b220-915f4a38b532 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc] [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block device setup
[...]
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101] ENETUNREACH

What is nova trying to reach, and how can I debug this further?


It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).
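
A quick way to check both tables from the controller, assuming the Grizzly-era keystone CLI and admin credentials exported in the environment (just a sketch, not verified against this setup):

# find the id of the volume service, then look at its endpoints
keystone service-list | grep -i volume
keystone endpoint-list

The endpoint row whose service_id matches the volume service's id holds the public, internal and admin URLs; whichever of those nova ends up using has to be reachable from the compute host.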

Josh



Re: [Openstack] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:
 It's trying to talk to the cinder api, and failing to connect at all.
 Perhaps there's a firewall preventing that on the compute host, or
 it's trying to use the wrong endpoint for cinder (check the keystone
 service and endpoint tables for the volume service).

the keystone endpoint looks like this:

| id | region | publicurl | internalurl | adminurl | service_id |
| dd21ed74a9ac4744b2ea498609f0a86e | RegionOne | http://xxx.xxx.240.10:8776/v1/$(tenant_id)s | http://192.168.192.2:8776/v1/$(tenant_id)s | http://192.168.192.2:8776/v1/$(tenant_id)s | 5ad684c5a0154c13b54283b01744181b |

where 192.168.192.2 is the IP from the controller node.

And from the compute node, telnet 192.168.192.2 8776 works.
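
For reference, nova picks one of those three URLs out of the keystone catalog based on the cinder_catalog_info option in nova.conf; on Grizzly the default should be volume:cinder:publicURL. So if only the internal address is reachable from the compute nodes, one experiment (an assumption on my side, not a confirmed fix) would be to point it at the internal URL instead:

# /etc/nova/nova.conf on the compute nodes
# default is volume:cinder:publicURL
cinder_catalog_info=volume:cinder:internalURL

and then restart nova-compute.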

-martin
