Testing -rw=write without -sync=1 or -fsync=1 (or -fsync=32 for batch IO, or just fio -ioengine=rbd from outside a VM) is rather pointless - you're benchmarking the RBD cache, not Ceph itself. The RBD cache coalesces your writes into big sequential writes. Of course bluestore is faster in that case - unlike filestore, it has no double write for big writes.
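For reference, this is roughly what I mean - a raw RBD test from outside a VM, plus an in-VM equivalent with fsync. The pool, image, client and device names are just placeholders, and the second command overwrites the device, so point it at a scratch disk:

fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=testimg -direct=1 -rw=randwrite -bs=4k -iodepth=1 -runtime=60 -name=test

fio -ioengine=libaio -direct=1 -fsync=1 -rw=randwrite -bs=4k -iodepth=1 -filename=/dev/vdb -runtime=60 -name=test

With -fsync=1 every write has to be flushed down to the OSDs before fio counts it, so you see Ceph's latency instead of the cache's.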

I'll probably try to test these settings - I'm also interested in random write IOPS in an all-flash bluestore cluster :) but I don't think any rocksdb options will help. I've found bluestore pretty untunable in terms of performance :)

The best thing for me was to disable CPU power saving (set the governor to performance + cpupower idle-set -D 1). Your CPUs become frying pans, but write IOPS increase 2-3 times, especially single-thread write IOPS, which are both the worst-case scenario and the thing applications usually need most. Test it with fio -ioengine=rbd -bs=4k -iodepth=1.
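Concretely, something like this on every OSD node (assuming the cpupower tool is installed; the settings don't survive a reboot unless you persist them yourself):

cpupower frequency-set -g performance
cpupower idle-set -D 1

Then compare the fio -ioengine=rbd -bs=4k -iodepth=1 numbers before and after.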

Another thing I've done on my cluster was to set `bluestore_min_alloc_size_ssd` to 4096. The reason is that it's 16 KB by default, which means all writes below 16 KB use the same deferred write path as on HDDs. Deferred writes only increase the write amplification factor on SSDs and lower performance. You have to recreate the OSDs after changing this option - it's only applied at OSD creation time.
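If you want to try it, put something like this into ceph.conf before (re)creating the OSDs:

[osd]
bluestore_min_alloc_size_ssd = 4096

and check it on a freshly created OSD (osd.0 here is just an example) with:

ceph daemon osd.0 config get bluestore_min_alloc_size_ssd

Note that this only shows the configured value - OSDs created before the change still use the old allocation size until they are redeployed.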

I'm also currently trying another performance fix, kind of... but it involves patching ceph's code, so I'll share it later if I succeed.

Hello list,

While the performance of sequential 4k writes on bluestore is very high
(even higher than on filestore), I was wondering what I can do to
optimize the random write pattern as well.

When running:
fio --rw=write --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4
--filename=/tmp/test --size=10G --runtime=60 --group_reporting
--name=test --direct=1

I get 36000 IOPS on bluestore versus 11500 on filestore.

Using --rw=randwrite gives me 17000 IOPS on filestore and only 9500 on bluestore.

This is on an all-flash / SSD cluster running Luminous 12.2.10.

--
With best regards,
  Vitaliy Filippov
