Hi again,
sorry, please disregard my previous post... see:
osdmap e421: 9 osds: 9 up, 9 in
shows that all nine of your OSDs are up!
Do you have trouble with your journal/filesystem?
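To rule that out, you can check the OSD and kernel logs on each node,
e.g. (a minimal check, assuming the default log path /var/log/ceph/):

  grep -i error /var/log/ceph/ceph-osd.*.log   # OSD-side disk/journal errors
  dmesg | tail -n 50                           # kernel-side disk/filesystem errors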
Udo
On 25.09.2014 08:01, Udo Lembke wrote:
> Hi,
> it looks like some OSDs are down?!
>
> What is the output of "ceph osd tree"?
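>
> For reference, on a healthy cluster the output looks roughly like
> this (host and OSD names here are only examples):
>
> # id    weight  type name       up/down reweight
> -1      2       root default
> -2      1               host node1
> 0       1                       osd.0   up      1
> -3      1               host node2
> 1       1                       osd.1   up      1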
>
> Udo
>
> On 25.09.2014 04:29, Aegeaner wrote:
>> The cluster health state is HEALTH_WARN:
>>
>> health HEALTH_WARN 118 pgs degraded; 8 pgs down; 59 pgs incomplete;
>> 28 pgs peering; 292 pgs stale; 87 pgs stuck inactive; 292 pgs stuck
>> stale; 205 pgs stuck unclean; 22 requests are blocked > 32 sec;
>> recovery 12474/46357 objects degraded (26.909%)
>> monmap e3: 3 mons at
>> {CVM-0-mon01=172.18.117.146:6789/0,CVM-0-mon02=172.18.117.152:6789/0,CVM-0-mon03=172.18.117.153:6789/0},
>> election epoch 24, quorum 0,1,2 CVM-0-mon01,CVM-0-mon02,CVM-0-mon03
>> osdmap e421: 9 osds: 9 up, 9 in
>> pgmap v2261: 292 pgs, 4 pools, 91532 MB data, 23178 objects
>> 330 MB used, 3363 GB / 3363 GB avail
>> 12474/46357 objects degraded (26.909%)
>> 20 stale+peering
>> 87 stale+active+clean
>> 8 stale+down+peering
>> 59 stale+incomplete
>> 118 stale+active+degraded
>>
>>
>> What do these errors mean? Can these PGs be recovered?
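>>
>> (For digging into these, the per-PG detail can be queried as below;
>> the PG id 2.5 is only a placeholder, "ceph health detail" prints the
>> real ids:)
>>
>> ceph health detail         # which PGs are down/incomplete/stale
>> ceph pg dump_stuck stale   # list PGs stuck in the stale state
>> ceph pg 2.5 query          # ask why this particular PG is stuck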
>>
>>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com