I have the following 4 pools:

pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0
pool 17 'rep2osd' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 154 flags hashpspool stripe_width 0
pool 20 'ec104osd' erasure size 14 min_size 10 crush_ruleset 7 object_hash rjenkins pg_num 256 pgp_num 256 last_change 163 flags hashpspool stripe_width 4160
pool 21 'ec32osd' erasure size 5 min_size 3 crush_ruleset 6 object_hash rjenkins pg_num 256 pgp_num 256 last_change 165 flags hashpspool stripe_width 4128

with 15 OSDs up,

and ceph health tells me I have too many PGs per OSD (375 > 300)

I'm not sure where the 375 comes from: summing pg_num over the four pools gives
128 + 256 + 256 + 256 = 896 PGs, and 896 / 15 OSDs works out to roughly 60 PGs
per OSD.
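For what it's worth, here is the arithmetic sketched in Python. My guess (an assumption on my part, not something I've confirmed in the docs) is that the health check counts every PG replica/shard placed on an OSD, i.e. pg_num × size per pool, rather than raw PGs; with the sizes above that would give exactly 375:

```python
# (pg_num, size) per pool, taken from the `ceph osd dump` output above.
# For the erasure pools, "size" is the total shard count (k + m).
pools = {
    "rep2host": (128, 2),
    "rep2osd":  (256, 2),
    "ec104osd": (256, 14),
    "ec32osd":  (256, 5),
}
num_osds = 15

# Raw PG count, ignoring replication -- this is the ~60 figure.
raw_pgs = sum(pg for pg, _ in pools.values())

# PG copies: each PG lands on `size` OSDs -- assumed counting rule.
pg_copies = sum(pg * size for pg, size in pools.values())

print(raw_pgs, raw_pgs // num_osds)      # 896 PGs, 59 per OSD
print(pg_copies, pg_copies // num_osds)  # 5632 copies, 375 per OSD
```

If that counting rule is right, the 5632 PG copies spread over 15 OSDs would explain the 375 in the warning.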

-- Tom

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
