Hi,

Sorry to bother you, but I have an urgent situation: I upgraded Ceph from 0.72 to
0.80 (CentOS 6.5), and now none of my CloudStack HOSTS can connect.

I did a basic "yum update ceph" on the MON leader first, and all Ceph
services on that host were restarted - then I did the same on the other Ceph
nodes (I have 1 MON + 2 OSDs per physical host). After that I set the tunables
to optimal with "ceph osd crush tunables optimal", and after some rebalancing
ceph shows HEALTH_OK.
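For reference, the sequence I ran on each node was roughly this (sketch only; service management on CentOS 6.5 assumed to be via the sysvinit "ceph" script):

```shell
# On the MON leader first, then on each remaining node:
yum update ceph                   # upgrade the Ceph packages 0.72 -> 0.80
service ceph restart              # restart the MON + OSD daemons on that host

# Once all nodes were upgraded:
ceph osd crush tunables optimal   # switch CRUSH tunables (triggers rebalancing)
ceph health                       # waited until this reported HEALTH_OK
```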

Also, I can still create new images with "qemu-img -f rbd" against the rbd:/cloudstack pool.
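For example, this kind of command works fine (the image name here is just a placeholder, "cloudstack" is my pool):

```shell
# Create a test image directly in the RBD pool via qemu-img.
# The RBD URI takes the form rbd:<pool>/<image>.
qemu-img create -f raw rbd:cloudstack/test-image 1G
```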

Libvirt 1.2.3 was compiled while Ceph was at 0.72, but Wido told me that I
don't need to REcompile it now with Ceph 0.80...

Libvirt logs:

libvirt: Storage Driver error : Storage pool not found: no storage pool
with matching uuid ‡Îhyš<JŠ~`a*×

Note the strange "uuid" in the log - I'm not sure what is happening there?
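To rule out a pool-definition mismatch on my side, I compared what libvirt has defined against what CloudStack asks for (pool name is a placeholder here):

```shell
# List all storage pools libvirt knows about, active or not:
virsh pool-list --all

# Dump the definition of the RBD pool and check its UUID:
virsh pool-dumpxml <pool-name> | grep -i uuid
```

The UUIDs reported there look normal, which is why the garbage bytes in the error message surprised me.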

Did I forget to do something after the Ceph upgrade?


Any help will be VERY much appreciated...
Andrija
-- 

Andrija Panić
--------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com