Thank you very much, Jason. Our cluster's target workload is something like a data center monitoring system; we need to save a lot of video streams into the cluster. I have to reconsider my test cases. Besides, there are a lot of tests to do on the config parameters you mentioned. This helps me a lot, thanks.
On Thu, Nov 15, 2018 at 2:30 PM 赵赵贺东 wrote:
>
> I tested in a 12-OSD cluster, changing objecter_inflight_op_bytes from 100MB
> to 300MB; performance did not change noticeably.
> But librbd performed better in the 12-OSD cluster from the beginning,
> so the change seems meaningless for me.
>
I tested in a 12-OSD cluster, changing objecter_inflight_op_bytes from 100MB
to 300MB; performance did not change noticeably.
But librbd performed better in the 12-OSD cluster from the beginning,
so the change seems meaningless for me.
In a small cluster (12 OSDs), 4M seq write performance for
Thank you for your suggestion.
It really gives me a lot of inspiration.
I will test as you suggest, and browse through src/common/config_opts.h to
see if I can find some performance-related configs.
But our OSD nodes' hardware itself is very poor; that is the truth… we have to
face it.
Attempting to send 256 concurrent 4MiB writes via librbd will pretty
quickly hit the default "objecter_inflight_op_bytes = 100 MiB" limit,
which will drastically slow (stall) librados. I would recommend
re-testing librbd w/ a much higher throttle override.
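For example, the throttle can be raised in a client-side ceph.conf (a sketch; the 300 MiB value mirrors the figure tried earlier in this thread and is illustrative, not a tuned recommendation):

```ini
# Client-side Objecter throttle overrides (illustrative values).
[client]
# Default is 104857600 (100 MiB); raise it so 256 concurrent 4 MiB
# writes are not stalled by the in-flight byte limit.
objecter_inflight_op_bytes = 314572800
# Companion limit on the number of in-flight ops (default 1024).
objecter_inflight_ops = 2048
```

Note that this only affects userspace clients (librados/librbd); KRBD does not read these options.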
On Thu, Nov 15, 2018 at 11:34 AM 赵赵贺东 wrote:
Thank you for your attention.
Our tests are run in a physical machine environment.
Fio for KRBD:
[seq-write]
description="seq-write"
direct=1
ioengine=libaio
filename=/dev/rbd0
numjobs=1
iodepth=256
group_reporting
rw=write
bs=4M
size=10T
runtime=180
*/dev/rbd0 is mapped from rbd_pool/image2.
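For the librbd side of the comparison, the same workload could be driven through fio's built-in rbd engine instead of a mapped device (a sketch; the pool and image names are taken from the mapping above, and clientname=admin assumes the default admin user, so adjust to your setup):

```ini
[seq-write-librbd]
description="seq-write via librbd (fio rbd engine)"
ioengine=rbd
clientname=admin
pool=rbd_pool
rbdname=image2
numjobs=1
iodepth=256
rw=write
bs=4M
size=10T
runtime=180
group_reporting
```

Running both jobs with identical iodepth/bs/runtime keeps the krbd and librbd numbers directly comparable.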
You'll need to provide more data about how your test is configured and run
for us to have a good idea. IIRC librbd is often faster than krbd because
it can support newer features, but krbd may have less overhead
and is not dependent on the VM's driver configuration in QEMU...
On Thu,
Hi cephers,
All our cluster OSDs are deployed on armhf.
Could someone say what a reasonable performance ratio is for
librbd vs. KRBD?
Or a reasonable performance-loss range when we use librbd compared to KRBD?
I googled a lot, but I could not find a solid criterion.
In fact, it