Hi,

if you are using Debian, try a recent kernel from backports (> 3.10).
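
For reference, a quick sketch of how you could check and move to a backports kernel (this assumes the standard wheezy-backports repository and an amd64 host; adjust for your mirror and architecture):

# show the running kernel version
uname -r

# enable wheezy-backports and pull in a newer kernel
echo "deb http://http.debian.net/debian wheezy-backports main" >> /etc/apt/sources.list
apt-get update
apt-get -t wheezy-backports install linux-image-amd64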

also check your libleveldb1 version; it should be 1.9.0-1~bpo70+1 (the stock Debian wheezy version is too old)
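
To see what is currently installed, something like:

# print the installed libleveldb1 version
dpkg-query -W -f='${Version}\n' libleveldb1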

I don't see it in the ceph repo:
http://ceph.com/debian-firefly/pool/main/l/leveldb/

(there is only a squeeze build, ~bpo60+1)

but you can take it from our Proxmox repository:
http://download.proxmox.com/debian/dists/wheezy/pve-no-subscription/binary-amd64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
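
For example, to fetch and install it manually (just a sketch, assuming an amd64 wheezy host):

# download and install the backported package
wget http://download.proxmox.com/debian/dists/wheezy/pve-no-subscription/binary-amd64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
dpkg -i libleveldb1_1.9.0-1~bpo70+1_amd64.deb

then restart your ceph daemons so they pick up the new library.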


----- Original Message -----

From: "jan zeller" <[email protected]>
To: [email protected]
Sent: Friday, 23 May 2014 10:50:40
Subject: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean

Dear ceph, 

I am trying to set up ceph 0.80.1 with the following components:

1 x mon - Debian Wheezy (i386) 
3 x osds - Debian Wheezy (i386) 

(all are KVM powered)

Status after the standard setup procedure:

root@ceph-node2:~# ceph -s
    cluster d079dd72-8454-4b4a-af92-ef4c424d96d8
     health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
     monmap e1: 1 mons at {ceph-node1=192.168.123.48:6789/0}, election epoch 2, quorum 0 ceph-node1
     osdmap e11: 3 osds: 3 up, 3 in
      pgmap v18: 192 pgs, 3 pools, 0 bytes data, 0 objects
            103 MB used, 15223 MB / 15326 MB avail
                 192 incomplete

root@ceph-node2:~# ceph health 
HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean 

root@ceph-node2:~# ceph osd tree
# id    weight  type name               up/down reweight
-1      0       root default
-2      0               host ceph-node2
0       0                       osd.0   up      1
-3      0               host ceph-node3
1       0                       osd.1   up      1
-4      0               host ceph-node4
2       0                       osd.2   up      1


root@ceph-node2:~# ceph osd dump
epoch 11
fsid d079dd72-8454-4b4a-af92-ef4c424d96d8
created 2014-05-23 09:00:08.780211
modified 2014-05-23 09:01:33.438001
flags

pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0
max_osd 3

osd.0 up in weight 1 up_from 4 up_thru 5 down_at 0 last_clean_interval [0,0) 192.168.123.49:6800/11373 192.168.123.49:6801/11373 192.168.123.49:6802/11373 192.168.123.49:6803/11373 exists,up 21a7d2a8-b709-4a28-bc3b-850913fe4c6b
osd.1 up in weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.123.50:6800/10542 192.168.123.50:6801/10542 192.168.123.50:6802/10542 192.168.123.50:6803/10542 exists,up c1cd3ad1-b086-438f-a22d-9034b383a1be
osd.2 up in weight 1 up_from 11 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.123.53:6800/6962 192.168.123.53:6801/6962 192.168.123.53:6802/6962 192.168.123.53:6803/6962 exists,up aa06d7e4-181c-4d70-bb8e-018b088c5053


What am I doing wrong here?
Or what additional information should I provide to help troubleshoot this?

thanks, 

--- 

Jan 

P.S. With emperor 0.72.2 I had no such problems.
_______________________________________________ 
ceph-users mailing list 
[email protected] 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 