Hi,

I have a Ceph cluster consisting of 4 hosts. Two of them have 3 SSD OSDs
each and the other two have 8 HDD OSDs each. I have different CRUSH rules
for SSD and HDD.

When I first built the cluster, I gave each HDD host a single SSD for
journaling all 8 HDD OSDs. Each host has 10 SATA ports: one is used for the
OS, one for the journal SSD and 8 for OSDs. Now I want to add a second
journal SSD to each of the two HDD hosts, so I need to remove one HDD OSD
from each of them.

Following the docs, I set an OSD as out and the cluster starts rebalancing
data. My problem is that it never reaches the active+clean state; I always
end up with some PGs stuck unclean. If I bring the OSD back in, the cluster
returns to active+clean (with a warning about too many PGs per OSD:
431, max 300).
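For reference, the sequence I'm following from the docs is roughly the one
below (osd.15 is just a placeholder ID, not my actual OSD):

```shell
# Mark the OSD out so its data rebalances onto the remaining OSDs
ceph osd out osd.15

# Watch progress -- this is where I get stuck: some PGs never go active+clean
ceph -s
ceph pg dump_stuck unclean

# Only once the cluster is active+clean again: stop the daemon and
# remove the OSD from CRUSH, auth and the OSD map
systemctl stop ceph-osd@15
ceph osd crush remove osd.15
ceph auth del osd.15
ceph osd rm osd.15
```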

I also run an MDS server and radosgw.

What could be the problem? How can I shrink the cluster to add the two
extra journals?

Should I restart the mons and OSDs after rebalancing?

Thank you!
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
