Hi,
My config:
osd op threads = 8
osd disk threads = 4
osd recovery threads = 1
osd recovery max active = 1
osd recovery op priority = 10
osd client op priority = 100
osd max backfills = 1
I set these to maximize client operation priority and slow down backfill
operations (client first!! :-) )
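(In case it helps: I believe these throttles can also be injected at
runtime without restarting the OSDs, something like the sketch below;
the values here are just placeholders, not a recommendation:

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 10'
)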
Once the OSD holding the RGW index died, after the restart the cluster
got stuck on "25 active+recovery_wait, 1 active+recovering".
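(If more detail would help, I can paste the output of the usual
diagnostics, e.g.:

  ceph -s
  ceph health detail
  ceph pg dump_stuck unclean
)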
Please help me choose optimal values for the osd recovery threads and
priority settings on a Ceph S3-optimized cluster.
Cluster:
12 servers x 12 OSDs each
3 mons, 144 OSDs, 32424 PGs
--
Regards
Dominik