On Sun, Feb 17, 2019 at 9:51 PM wrote:
>
> > Probably not related to CephFS. Try to compare the latency you are
> > seeing to the op_r_latency reported by the OSDs.
> >
> > The fast_read option on the pool can also help a lot for this IO pattern.
>
> Magic, that actually cut the read-latency in half - making it more
> aligned with what to expect from
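For reference, fast_read is a per-pool setting (most relevant to erasure-coded pools), and op_r_latency is exposed through each OSD's admin-socket perf counters. A quick way to check both might look like this; the pool name cephfs_data and osd.0 are examples, not taken from the thread:

```shell
# Enable fast_read on the data pool (example pool name): the primary OSD
# answers the read as soon as enough shards have responded, instead of
# waiting for all of them.
ceph osd pool set cephfs_data fast_read 1

# Inspect per-OSD read latency; run on the host carrying osd.0.
# op_r_latency reports avgcount and sum, so sum/avgcount gives the
# average read latency in seconds.
ceph daemon osd.0 perf dump | grep -A 3 '"op_r_latency"'
```

Both commands need a running cluster (and the admin socket for the second), so they are only a sketch of where to look, not something runnable standalone.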
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit
Hi.
I've moved a bunch of "small" files onto CephFS as archive/bulk storage,
and the backup (to tape) now has to spool over them. Tracing the
single-threaded backup client shows this very consistent pattern:
$ sudo strace -T -p 7307 2>&1 | grep -A 7 -B 3 open
write(111,