[ceph-users] ceph df showing wrong MAX AVAIL for hybrid CRUSH Rule

2017-12-19 Thread Patrick Fruh
Hi, I have the following configuration of OSDs:

ID CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
 0 hdd   5.45599 1.0      5587G 2259G 3327G 40.45 1.10 234
 1 hdd   5.45599 1.0      5587G 2295G 3291G 41.08 1.11 231
 2 hdd   5.45599 1.0      5587G 2321G 3265G 41.56 1.13 232
 3 hdd   5.45
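For context on why MAX AVAIL often looks "wrong": ceph df projects a pool's usable space from its most constrained OSD rather than the pool-wide average, so one relatively full OSD drags the figure down. A minimal sketch of that projection, using the three complete OSD rows above and an assumed pool replica size of 3 (the real calculation additionally accounts for the full ratio and only the OSDs the CRUSH rule can actually select):

```python
# Hedged sketch: approximate how "ceph df" derives MAX AVAIL for a
# replicated pool. The fullest OSD (lowest avail per unit of CRUSH
# weight) limits the whole pool. OSD stats are from the listing above.
osds = [
    {"id": 0, "weight": 5.45599, "avail_g": 3327},
    {"id": 1, "weight": 5.45599, "avail_g": 3291},
    {"id": 2, "weight": 5.45599, "avail_g": 3265},
]

replica_size = 3  # assumed pool size; not stated in the post

total_weight = sum(o["weight"] for o in osds)

# Free space per unit of weight on the most constrained OSD.
worst_ratio = min(o["avail_g"] / o["weight"] for o in osds)

# Usable raw capacity is capped by that OSD, then divided by replication.
max_avail_g = worst_ratio * total_weight / replica_size
print(round(max_avail_g))  # → 3265
```

With equal weights this collapses to the smallest AVAIL value, which is why a hybrid rule spanning OSDs of different fullness can report a much smaller MAX AVAIL than the raw free space suggests.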

[ceph-users] Recommendations for I/O (blk-mq) scheduler for HDDs and SSDs?

2017-12-11 Thread Patrick Fruh
Hi, after reading a lot about I/O schedulers and performance gains with blk-mq, I switched to a custom 4.14.5 kernel with CONFIG_SCSI_MQ_DEFAULT enabled to have blk-mq for all devices on my cluster. This allows me to use the following schedulers for HDDs and SSDs: mq-deadline, kyber, bfq, none
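The blk-mq schedulers are exposed per device through sysfs, where the bracketed entry is the active one. A minimal sketch of that interface, using a throwaway directory in place of the real /sys/block/&lt;dev&gt;/queue so it runs without root (SYSFS_Q is a stand-in path, not a real kernel file; on a live host it would be e.g. /sys/block/sda/queue):

```shell
# Stand-in for /sys/block/<dev>/queue so this sketch needs no root.
SYSFS_Q=$(mktemp -d)
echo "[mq-deadline] kyber bfq none" > "$SYSFS_Q/scheduler"

# The active scheduler is the bracketed entry.
active=$(sed 's/.*\[\([^]]*\)\].*/\1/' "$SYSFS_Q/scheduler")
echo "active: $active"          # active: mq-deadline

# Switching is a plain write of the scheduler name.
echo bfq > "$SYSFS_Q/scheduler"
cat "$SYSFS_Q/scheduler"        # bfq
```

Writes to the real sysfs path do not persist across reboots, so a udev rule or boot script is the usual way to pin, say, bfq on rotational disks and none on SSDs.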

[ceph-users] CRUSH - adding device class to existing rule without causing complete rebalance

2017-11-13 Thread Patrick Fruh
Hi everyone, I only have a single rule in my crushmap and only OSDs classed as hdd (after the luminous update):

rule replicated_ruleset {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step
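For reference: naively editing the rule to "step take default class hdd" makes it select from the class-specific shadow hierarchy, whose different bucket IDs trigger the very rebalance being avoided. Ceph releases after this thread added a crushtool --reclassify mode built for exactly this migration; a sketch of that workflow (assumes a crushtool new enough to have --reclassify, and should be verified with --compare before being injected):

```
# Extract the current CRUSH map.
ceph osd getcrushmap -o original.map

# Rewrite the map so rules rooted at "default" use the hdd class,
# while preserving existing data placement.
crushtool -i original.map --reclassify \
    --reclassify-root default hdd \
    -o adjusted.map

# Check how many mappings would change (ideally none or very few).
crushtool -i original.map --compare adjusted.map

# Only then inject the adjusted map.
ceph osd setcrushmap -i adjusted.map
```

These commands operate on a live cluster, so this is a sketch of the intended sequence rather than something to paste verbatim.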