Hi

I suspect something more sinister may be going on. I have set the values 
(though smaller) on my cluster, but the same issue still occurs. I also find that 
while the VM is trying to start there appears to be an IRQ flood, as processes 
like ksoftirqd use more CPU than they should.
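
In case it helps, the softirq/IRQ load can be checked with standard tools along 
these lines (just a sketch, not output from my cluster):

####################
# per-CPU softirq counters; watch for rows that climb rapidly (e.g. NET_RX, TIMER)
watch -n1 -d 'cat /proc/softirqs'

# per-thread CPU usage of the ksoftirqd kernel threads
top -b -n1 -H | grep ksoftirqd

# hardware interrupt counters, to spot a flooding device
cat /proc/interrupts
####################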

####################
pool 1 'ssd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins 
pg_num 128 pgp_num 128 last_change 60 flags hashpspool,incomplete_clones 
tier_of 0 cache_mode writeback target_bytes 120000000000 target_objects 1000000 
hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1800s 
x1 stripe_width 0
####################
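
For reference, settings like the ones in the dump above would be applied with 
commands along these lines (values taken from the dump; treat this as a sketch):

####################
ceph osd pool set ssd target_max_bytes 120000000000    # shown as target_bytes in the dump
ceph osd pool set ssd target_max_objects 1000000       # shown as target_objects
ceph osd pool set ssd hit_set_type bloom
ceph osd pool set ssd hit_set_period 1800              # 1800s in the dump
ceph osd pool set ssd hit_set_count 1                  # x1 in the dump
####################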

Thanks 

Pieter

On Aug 05, 2015, at 03:37 PM, Burkhard Linke 
<[email protected]> wrote:

Hi,


On 08/05/2015 03:09 PM, Pieter Koorts wrote:
Hi,

This is my OSD dump below

#######################
osc-mgmt-1:~$ sudo ceph osd dump | grep pool
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins 
pg_num 128 pgp_num 128 last_change 43 lfor 43 flags hashpspool tiers 1 
read_tier 1 write_tier 1 stripe_width 0
pool 1 'ssd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins 
pg_num 128 pgp_num 128 last_change 44 flags hashpspool,incomplete_clones 
tier_of 0 cache_mode writeback hit_set bloom{false_positive_probability: 0.05, 
target_size: 0, seed: 0} 0s x0 stripe_width 0
#######################

I have also attached my crushmap (plain text version) if that can provide any 
detail too.

This is the setup of my VM cache pool:
pool 9 'ssd_cache' replicated size 2 min_size 1 crush_ruleset 2 object_hash 
rjenkins pg_num 128 pgp_num 128 last_change 182947 flags 
hashpspool,incomplete_clones tier_of 5 cache_mode writeback target_bytes 
500000000000 target_objects 1000000 hit_set bloom{false_positive_probability: 
0.05, target_size: 0, seed: 0} 3600s x1 min_read_recency_for_promote 1 
stripe_width 0

You probably need to set at least either target_bytes or target_objects. These 
are the values that the flush/evict ratios of cache pools refer to.
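
As a rough sketch (the pool name and target value are taken from the dump above; 
the ratio values are just the usual defaults, not from your cluster), the targets 
and the ratios that act on them are set like this:

####################
ceph osd pool set ssd_cache target_max_bytes 500000000000     # absolute target the ratios refer to
ceph osd pool set ssd_cache cache_target_dirty_ratio 0.4      # start flushing dirty objects at 40% of target
ceph osd pool set ssd_cache cache_target_full_ratio 0.8       # start evicting clean objects at 80% of target
####################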

Best regards,
Burkhard
