On 13/08/2018 at 15:55, Jason Dillaman wrote:
>
>
>
>     so problem seems located on "rbd" side  ...
>
>
> That's a pretty big apples-to-oranges comparison (4KiB random IO to
> 4MiB full-object IO). With your RBD workload, the OSDs will be seeking
> after each 4KiB read but w/ your RADOS bench workload, it's reading a
> full 4MiB object before seeking.
>

Yes, you're right, but if we compare cluster to cluster: on the new
cluster, rados bench is 2 times faster, while rbd fio is 7 times slower.

That's why I suppose rbd is the problem here, but I really do not
understand how to fix it. I looked at three old Hammer clusters and two
new Luminous/BlueStore clusters, and the results are consistent. I do
not think Ceph would have made BlueStore the default over FileStore in
Luminous if random reads were 7 times slower ;)
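For context, the fio workload I am comparing against rados bench is a
4KiB random-read job through the rbd ioengine, roughly like the sketch
below (pool and image names are placeholders, not the exact job file I
ran):

```ini
; Sketch of a 4KiB random-read fio job using the rbd ioengine.
; Pool "rbd" and image "testimg" are placeholder names.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
direct=1
rw=randread
bs=4k
iodepth=32
runtime=60
time_based

[rbd-4k-randread]
```

By contrast, `rados bench -p rbd 60 seq` (after a `write --no-cleanup`
pass) reads back full 4MiB objects, which is the apples-to-oranges
difference Jason pointed out above.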


(BTW: thanks for helping me Jason :) ).

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com