Hi.
It's not Ceph that's to blame!
Linux does not support cached (buffered) asynchronous I/O, except with the
new io_uring interface. That is, it supports the aio calls, but they simply
block when you issue them on an FD opened without O_DIRECT.
So basically what happens when you benchmark it with -ioengine=libaio
-dir
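The fio command above is cut off, but a minimal sketch of the kind of libaio benchmark being described might look like the job file below. The job name, device path, and sizes are placeholders for illustration, not values from this thread; the point is that direct=1 is what makes libaio actually asynchronous.

```ini
; hypothetical fio job illustrating the libaio + O_DIRECT point above
[randwrite-direct]
ioengine=libaio
direct=1          ; opens the target with O_DIRECT; without this, the
                  ; io_submit() calls silently block (buffered I/O path)
rw=randwrite
bs=4k
iodepth=32        ; queue depth only matters if the I/O is truly async
runtime=60
filename=/dev/sdX ; placeholder target device
```

With direct=0 (the default for a regular file), the same job would report latencies that look synchronous regardless of iodepth, which is the behavior the message describes.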
Hi,
I think one of your problems is bcache.
Here is one example:
https://habr.com/en/company/selectel/blog/450818
BR,
Sebastian
> On 16 Aug 2019, at 00:49, Rich Bade wrote:
>
> Unfortunately the scsi reset on this vm happened again last night so this
> hasn't resolved the issue.
> Thanks for
The overall latency in the cluster may be too high, but it was worth a
shot. I've noticed that these settings really narrow the latency
distribution so that it becomes more predictable, and they prevented some
single VMs from hanging for long periods of time while others worked just
fine, usually when on
Unfortunately the scsi reset on this vm happened again last night so this
hasn't resolved the issue.
Thanks for the suggestion though.
Rich
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Thanks Robert, I'm trying those settings to see if they make a difference for
our case. It's usually around the weekend we have issues so should have some
idea by next week.
I've found that these settings give Ceph clients much more consistent
latency than the default scheduler; they also reduce the impact of backfills
and recoveries. They may not give you better performance (although I have
seen them allow all disks to be utilized to 100% rather than only as fast as
th
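The exact settings aren't quoted in this excerpt. Assuming they are the OSD op-queue scheduler options commonly recommended on this list for smoothing client latency (an assumption, since the thread text here doesn't name them), the ceph.conf fragment would look like:

```ini
# Sketch only: assumed to be the scheduler settings under discussion
[osd]
osd_op_queue = wpq          ; weighted priority queue scheduler
osd_op_queue_cut_off = high ; prioritize client ops over recovery/backfill
```

Changing these options requires an OSD restart to take effect.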