On 17 December 2015 at 21:36, Francois Lafont <flafdiv...@free.fr> wrote:

> Hi,
>
> I have a Ceph cluster that is currently unused and I am seeing (to my mind)
> very low performance.
> I'm not an expert at benchmarks, but here is an example of a quick bench:
>
> ---------------------------------------------------------------
> # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
>     --name=readwrite --filename=rw.data --bs=4k --iodepth=64 --size=300MB \
>     --readwrite=randrw --rwmixread=50
> readwrite: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio,
> iodepth=64
> fio-2.1.3
>
>  ...

I am seeing the same sort of issue.
If I run your fio command sequence on my CephFS mount, I see ~120 IOPS.
If I run it on one of the underlying OSDs (e.g. in /var... on the mount
point of the XFS filesystem), I get ~20k IOPS.
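
For comparison, this is roughly the loop I used; a minimal sketch, and the two
mount points (/mnt/cephfs and /var/lib/ceph/osd/ceph-0) are example paths, not
necessarily yours:

---------------------------------------------------------------
# Run the same fio job once per mount point. The paths below are
# examples; substitute your actual CephFS mount and OSD data dir.
for dir in /mnt/cephfs /var/lib/ceph/osd/ceph-0; do
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=readwrite --filename="$dir/rw.data" --bs=4k \
        --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50
    rm -f "$dir/rw.data"   # remove the 300MB test file afterwards
done
---------------------------------------------------------------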

On the single-SSD mount point it completes in ~1 s.
On the CephFS it takes ~17 min.
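Back-of-the-envelope, that matches the IOPS numbers: 300 MB at bs=4k is
300*1024/4 = 76800 requests, so ~120 IOPS works out to roughly 640 s (~11 min,
the same ballpark as my ~17 min), while ~20k IOPS would finish in a few seconds.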

I'm on Ubuntu 15.10 with the 4.3.0-040300-generic kernel.

My 'ceph -w' output while this fio job is running shows ~550 kB/s of reads and writes.
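For what it's worth, 550 kB/s of 4 kB requests is about 140 IOPS, so the
'ceph -w' figure is consistent with what fio reports.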
