There was a hidden bug that I couldn't reproduce. I was using devstack for
openstack, and I had enabled the syslog option to collect the nova and
cinder logs. After a reboot, everything was fine: I was able to create
volumes, and I verified them in rados.
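For reference, this is roughly how the volumes can be checked directly in rados (a sketch, assuming the pool is named `volumes` as in the cinder.conf below):

```shell
# Each cinder volume shows up in the pool as an rbd header object
# plus data objects (pool name "volumes" assumed).
rados -p volumes ls

# Or list the rbd images themselves:
rbd -p volumes ls
```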

Another thing I noticed: I don't have a cinder user, as the devstack script
expects. Hence, I didn't change the owner of the keyring files, and they
are still owned by root. It works fine regardless, though.
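Root-owned keyrings work here because devstack runs the services as root; on a packaged install where cinder runs as its own user, the ownership would need adjusting. A sketch, assuming the keyring path from the rbd-openstack guide (both path and user are assumptions, not from this thread):

```shell
# Hypothetical paths/users; adjust to match the deployment.
sudo chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
sudo chmod 0640 /etc/ceph/ceph.client.volumes.keyring
```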


On Tue, Jul 23, 2013 at 6:19 AM, Sebastien Han
<[email protected]> wrote:

> Can you send your ceph.conf too?
>
> Is /etc/ceph/ceph.conf present? Is the key of user volume present too?
>
> ––––
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
> Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70
> Email: [email protected] – Skype: han.sbastien
> Address: 10, rue de la Victoire – 75009 Paris
> Web: www.enovance.com – Twitter: @enovance
>
> On Jul 23, 2013, at 5:39 AM, johnu <[email protected]> wrote:
>
> Hi,
>      I have a three-node ceph cluster; ceph -w reports health ok. I have
> openstack on the same cluster and am trying to map cinder and glance onto rbd.
>
>
> I have followed steps as given in
> http://ceph.com/docs/next/rbd/rbd-openstack/
>
> New settings added to cinder.conf on each of the three nodes:
>
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_pool=volumes
> glance_api_version=2
> rbd_user=volumes
> rbd_secret_uuid=62d0b384-50ad-2e17-15ed-66bfeda40252 (different for each
> node)
>
>
> Logs seen when I run ./rejoin.sh:
>
> 2013-07-22 20:35:01.900 INFO cinder.service [-] Starting 1 workers
> 2013-07-22 20:35:01.909 INFO cinder.service [-] Started child 2290
> 2013-07-22 20:35:01.965 AUDIT cinder.service [-] Starting cinder-volume
> node (version 2013.2)
> 2013-07-22 20:35:02.129 ERROR cinder.volume.drivers.rbd
> [req-d3bc2e86-e9db-40e8-bcdb-08c609ce44c3 None None] error connecting to
> ceph cluster
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd Traceback (most
> recent call last):
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd   File
> "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 243, in
> check_for_setup_error
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd     with
> RADOSClient(self):
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd   File
> "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 215, in __init__
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd     self.cluster,
> self.ioctx = driver._connect_to_rados(pool)
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd   File
> "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 263, in
> _connect_to_rados
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd
> client.connect()
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd   File
> "/usr/lib/python2.7/dist-packages/rados.py", line 192, in connect
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd     raise
> make_ex(ret, "error calling connect")
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd ObjectNotFound:
> error calling connect
> 2013-07-22 20:35:02.129 TRACE cinder.volume.drivers.rbd
> 2013-07-22 20:35:02.149 ERROR cinder.service
> [req-d3bc2e86-e9db-40e8-bcdb-08c609ce44c3 None None] Unhandled exception
> 2013-07-22 20:35:02.149 TRACE cinder.service Traceback (most recent call
> last):
> 2013-07-22 20:35:02.149 TRACE cinder.service   File
> "/opt/stack/cinder/cinder/service.py", line 228, in _start_child
> 2013-07-22 20:35:02.149 TRACE cinder.service
> self._child_process(wrap.server)
> 2013-07-22 20:35:02.149 TRACE cinder.service   File
> "/opt/stack/cinder/cinder/service.py", line 205, in _child_process
> 2013-07-22 20:35:02.149 TRACE cinder.service
> launcher.run_server(server)
> 2013-07-22 20:35:02.149 TRACE cinder.service   File
> "/opt/stack/cinder/cinder/service.py", line 96, in run_server
> 2013-07-22 20:35:02.149 TRACE cinder.service     server.start()
> 2013-07-22 20:35:02.149 TRACE cinder.service   File
> "/opt/stack/cinder/cinder/service.py", line 359, in start
> 2013-07-22 20:35:02.149 TRACE cinder.service     self.manager.init_host()
> 2013-07-22 20:35:02.149 TRACE cinder.service   File
> "/opt/stack/cinder/cinder/volume/manager.py", line 139, in init_host
> 2013-07-22 20:35:02.149 TRACE cinder.service
> self.driver.check_for_setup_error()
> 2013-07-22 20:35:02.149 TRACE cinder.service   File
> "/opt/stack/cinder/cinder/volume/drivers/rbd.py", line 248, in
> check_for_setup_error
> 2013-07-22 20:35:02.149 TRACE cinder.service     raise
> exception.VolumeBackendAPIException(data=msg)
> 2013-07-22 20:35:02.149 TRACE cinder.service VolumeBackendAPIException:
> Bad or unexpected response from the storage volume backend API: error
> connecting to ceph cluster
> 2013-07-22 20:35:02.149 TRACE cinder.service
> 2013-07-22 20:35:02.191 INFO cinder.service [-] Child 2290 exited with
> status 2
> 2013-07-22 20:35:02.192 INFO cinder.service [-] _wait_child 1
> 2013-07-22 20:35:02.193 INFO cinder.service [-] wait wrap.failed True
>
>
> Can someone suggest some debug points to help solve this?
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
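The ObjectNotFound raised by client.connect() in the trace above generally means librados could not locate the monitors or the key for the configured user. The same connection can be exercised outside cinder — a sketch, assuming a client.volumes user matching rbd_user=volumes (the user name is taken from the config above; the commands are standard ceph CLI):

```shell
# Try the exact identity cinder uses; a failure here reproduces the
# driver error without going through the cinder service.
rados -p volumes --id volumes ls

# Confirm the key actually exists in the cluster:
ceph auth get client.volumes
```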


