Hi guys!
After setting up a cluster of 10 servers and creating an image in a 450 TB pool,
it got stuck at the mkfs step, and I noticed that my entire cluster was failing
in a really weird way: OSDs on different nodes keep going down and coming back
up, over and over, so now I have a lot of PGs degraded and stuck :S I don't know
why the cluster behaves this way (OSDs flapping up and down) — do you think it
is a bug?
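For reference, here is roughly how the image was created (pool and image names
and the exact size below are placeholders, not necessarily the ones I used —
just a sketch assuming the standard rbd CLI):

```shell
# Create a large image in the pool (size in MB; ~450 TB here as a placeholder)
rbd create --pool bigpool --size 471859200 bigimage

# Map it on a client node; this exposes it as a block device, e.g. /dev/rbd0
rbd map bigpool/bigimage

# Make a filesystem on the mapped device -- this is the step where it hung
mkfs.xfs /dev/rbd0
```

Note that mkfs on an image this large generates a burst of I/O spread across
many PGs, which is when the flapping started.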
[root@capricornio ceph-cluster]# ceph status
    cluster d39f6247-1543-432d-9247-6c56f65bb6cd
     health HEALTH_WARN 109 pgs degraded; 251 pgs down; 1647 pgs peering;
            2598 pgs stale; 1618 pgs stuck inactive; 1643 pgs stuck unclean;
            109 pgs undersized; 1 requests are blocked > 32 sec;
            recovery 13/1838 objects degraded (0.707%); 1/107 in osds are down
     monmap e1: 3 mons at {capricornio=192.168.4.44:6789/0,geminis=192.168.4.37:6789/0,tauro=192.168.4.36:6789/0},
            election epoch 50, quorum 0,1,2 tauro,geminis,capricornio
     osdmap e5275: 119 osds: 106 up, 107 in
      pgmap v15826: 8192 pgs, 1 pools, 2016 MB data, 919 objects
            48484 MB used, 388 TB / 388 TB avail
            13/1838 objects degraded (0.707%)
                2162 stale+active+clean
                1031 peering
                4274 active+clean
                   3 stale+remapped+peering
                  32 stale+active+undersized+degraded
                  43 stale+down+peering
                   4 remapped+peering
                  77 active+undersized+degraded
                 208 down+peering
                 358 stale+peering
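In case it helps, these are the checks I can run to narrow down the flapping
(standard Ceph commands; the OSD id and log path below are placeholders
assuming a default install):

```shell
# Which OSDs are currently down, and on which hosts they live
ceph osd tree | grep -i down

# Per-PG detail on the degraded/stuck/down PGs listed in HEALTH_WARN
ceph health detail

# Current epoch and up/in counts, to watch how fast OSDs are being remarked
ceph osd dump | head -20

# On a node with a flapping OSD, check its log for heartbeat failures
# (osd.12 and the path are placeholders for the default log location)
tail -100 /var/log/ceph/ceph-osd.12.log
```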
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
[email protected]
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
Cisco.com
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com