On 18 December 2015 at 15:48, Don Waterloo <don.water...@gmail.com> wrote:

>
>
> On 17 December 2015 at 21:36, Francois Lafont <flafdiv...@free.fr> wrote:
>
>> Hi,
>>
>> I have a Ceph cluster that is currently unused, and I am seeing (to my
>> mind) very low performance.
>> I'm not an expert at benchmarks; here is an example of a quick bench:
>>
>> ---------------------------------------------------------------
>> # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
>> --name=readwrite --filename=rw.data --bs=4k --iodepth=64 --size=300MB
>> --readwrite=randrw --rwmixread=50
>> readwrite: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio,
>> iodepth=64
>> fio-2.1.3
>>
>>  ...
>
> I am seeing the same sort of issue.
> If I run your 'fio' command sequence on my cephfs, I see ~120 IOPS.
> If I run it on one of the underlying OSDs (e.g. in /var... on the mount
> point of the XFS filesystem), I get ~20k IOPS.
>
>
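For what it's worth, that cephfs-vs-local-XFS comparison is easy to reproduce
with the same fio job and only --filename changed. The mount points below are
just examples, and the OSD run only makes sense on a test cluster:

# same job against a file on the CephFS mount (example mount point)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=cephfs-test --filename=/mnt/cephfs/rw.data \
    --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50

# same job against a file on the OSD's backing XFS (example path; not
# advisable on a production OSD's data directory)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=xfs-test --filename=/var/lib/ceph/osd/ceph-0/fio-test.data \
    --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50
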
If I run:

rbd -p mypool create speed-test-image --size 1000
rbd -p mypool bench-write speed-test-image

I get

bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern seq
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     79053  79070.82  323874082.50
    2    144340  72178.81  295644410.60
    3    221975  73997.57  303094057.34
elapsed:    10  ops:   262144  ops/sec: 26129.32  bytes/sec: 107025708.32

which is *much* faster than what I see on cephfs.
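
Note that bench-write above uses 16 threads and a sequential 4k pattern, so it
isn't directly comparable to the 4k random read/write fio job. If your fio was
built with RBD support, a closer like-for-like test against librbd would be
something along these lines (the clientname, and reusing the pool/image from
the commands above, are assumptions):

fio --ioengine=rbd --clientname=admin --pool=mypool \
    --rbdname=speed-test-image --invalidate=0 \
    --randrepeat=1 --direct=1 --gtod_reduce=1 --name=rbd-randrw \
    --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50

rados bench (e.g. rados -p mypool bench 10 write -t 16) is also handy for
separating a CephFS/MDS problem from plain RADOS latency.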
