Hi all,
To test ceph stability, I tried killing OSDs.
In this case, I killed the 3 OSDs (osd.3, osd.2, osd.0) that store the same pg, 2.30.
 
---crush---
osdmap e1342 pool 'rbd' (2) object 'rbd_data.19d92ae8944a.0000000000000000' -> 
pg 2.c59a45b0 (2.30) -> up ([3,2,0], p3) acting ([3,2,0], p3)
[root@cephosd5-gw current]# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.09995 root default
-2      0.01999         host cephosd1-mona
0       0.01999                 osd.0   down    0
-3      0.01999         host cephosd2-monb
1       0.01999                 osd.1   up      1
-4      0.01999         host cephosd3-monc
2       0.01999                 osd.2   down    0
-5      0.01999         host cephosd4-mdsa
3       0.01999                 osd.3   down    0
-6      0.01999         host cephosd5-gw
4       0.01999                 osd.4   up      1
-----
The test results left me with some questions.
 
1.
[root@cephosd5-gw current]# ceph pg 2.30 query
Error ENOENT: i don't have pgid 2.30
 
Why can't I query information about this pg? How can I dump it?
 
2.
#ceph osd map rbd rbd_data.19d92ae8944a.0000000000000000
osdmap e1451 pool 'rbd' (2) object 'rbd_data.19d92ae8944a.0000000000000000' -> 
pg 2.c59a45b0 (2.30) -> up ([4,1], p4) acting ([4,1], p4)
 
Does the 'ceph osd map' command just calculate the mapping, without checking 
the real pg state? I cannot find 2.30 on osd.1 or osd.4.
Now that the client gets the new map, why does the client hang?
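As far as I understand, the pg id in that output is derived purely from the 
object's hash with ceph's stable_mod, with no OSD involved. A small python 
sketch of that step (assuming pg_num = 64 for my 'rbd' pool, which matches 
the 2.30 I see; the function mirrors ceph_stable_mod in the ceph source):

```python
def ceph_stable_mod(x, b, bmask):
    # Ceph's "stable" modulo: fold the 32-bit object hash into a pg id
    # so that pgs split cleanly when pg_num is later increased.
    if (x & bmask) < b:
        return x & bmask
    else:
        return x & (bmask >> 1)

pg_num = 64             # assumption: the pool's pg_num (not shown above)
raw_hash = 0xc59a45b0   # from the 'ceph osd map' output
pgid = ceph_stable_mod(raw_hash, pg_num, pg_num - 1)
print("pg 2.%x" % pgid)  # -> pg 2.30
```

So the command looks like a pure calculation over the osdmap + CRUSH, which 
is why I suspect it does not reflect the real on-disk pg state.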
 
 
Thanks very much.
 
 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
