On 05/28/2017 09:43 PM, David Turner wrote:
> What are your pg numbers for each pool? Your % used in each pool? And
> number of OSDs?
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    89380G      74755G      14625G       16.36
POOLS:
    NAME               ID     USED       %USED     MAX AVAIL     OBJECTS
    replicated_rbd     1      3305G      12.10     24007G        850006
    ec_rbd             2      2674G       5.83     43212G        686555
    ec_cache           3      82281M      0.33     24007G         20765
pool 1 'replicated_rbd' replicated size 3 min_size 2 crush_ruleset 1
    object_hash rjenkins pg_num 256 pgp_num 256 last_change 218
    flags hashpspool stripe_width 0
    removed_snaps [1~3,5~2,8~2,e~2]
pool 2 'ec_rbd' erasure size 5 min_size 4 crush_ruleset 0
    object_hash rjenkins pg_num 256 pgp_num 256 last_change 188 lfor 107
    flags hashpspool tiers 3 read_tier 3 write_tier 3 stripe_width 4128
    removed_snaps [1~5]
pool 3 'ec_cache' replicated size 3 min_size 2 crush_ruleset 1
    object_hash rjenkins pg_num 16 pgp_num 16 last_change 1117
    flags hashpspool,incomplete_clones tier_of 2 cache_mode writeback
    target_bytes 107400000000 target_objects 40000
    hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 0s x1
    decay_rate 0 search_last_n 0 stripe_width 0
    removed_snaps [1~5]
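For completeness: the tier wiring shown above (pool 3 is tier_of 2, with
read_tier/write_tier 3 and cache_mode writeback) corresponds to the standard
cache-tier setup sequence; it was created with something like this, using
our pool names:

ceph osd tier add ec_rbd ec_cache
ceph osd tier cache-mode ec_cache writeback
ceph osd tier set-overlay ec_rbd ec_cache
ceph osd pool set ec_cache hit_set_type bloom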
ec_cache settings:
ceph osd pool set ec_cache target_max_bytes 107400000000 # ~100 GiB
ceph osd pool set ec_cache cache_target_dirty_ratio 0.3
ceph osd pool set ec_cache cache_target_dirty_high_ratio 0.6
ceph osd pool set ec_cache cache_target_full_ratio 0.8
ceph osd pool set ec_cache target_max_objects 40000
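If I understand the cache agent correctly, those ratios are applied against
target_max_bytes / target_max_objects (whichever threshold is crossed first),
so it should kick in roughly here (my arithmetic, not cluster output):

  flushing of dirty objects starts at cache_target_dirty_ratio:
      0.3 * 107400000000 ~= 30 GiB dirty, or 0.3 * 40000 = 12000 objects
  high-speed flushing starts at cache_target_dirty_high_ratio:
      0.6 * 107400000000 ~= 60 GiB dirty, or 24000 objects
  eviction of clean objects starts at cache_target_full_ratio:
      0.8 * 107400000000 ~= 80 GiB, or 32000 objects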
Number of OSDs: 6 OSD nodes * 4 OSDs each = 24 OSDs.
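For the PG-per-OSD math (my arithmetic, from the pg_num and size values
above, assuming no other pools):

  replicated_rbd: 256 PGs * size 3 =  768 PG instances
  ec_rbd:         256 PGs * size 5 = 1280 PG instances
  ec_cache:        16 PGs * size 3 =   48 PG instances
  total:          2096 PG instances / 24 OSDs ~= 87 PGs per OSD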