Hello,

I'm doing some tests with fio from a qemu 1.2 guest (virtio disk, cache=none): 
randread with a 4K block size over a small 1G range (so it can be handled 
entirely by the buffer cache on the Ceph cluster).


fio --filename=/dev/vdb --rw=randread --bs=4K --size=1000M --iodepth=40 \
    --group_reporting --name=file1 --ioengine=libaio --direct=1


I can't get more than 5000 IOPS.
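For what it's worth, a back-of-envelope latency check (just a sketch; the 40 and 5000 figures are the iodepth and IOPS from the run above):

```shell
# With --iodepth=40 and ~5000 IOPS, average per-request completion
# latency is roughly iodepth / IOPS (Little's law).
iodepth=40
iops=5000
lat_us=$(( iodepth * 1000000 / iops ))
echo "~${lat_us} us per request"   # ~8000 us, i.e. ~8 ms
```

~8 ms per cached 4K read would point away from the disks themselves, which seems consistent with iostat showing them idle.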


The RBD cluster is:
---------------
3 nodes, each with:
- 6 x 15k OSD drives (xfs), journal on tmpfs, 1 mon
- cpu: 2 x 4-core Intel Xeon [email protected]
rbd 0.53

ceph.conf

        journal dio = false
        filestore fiemap = false
        filestore flusher = false
        osd op threads = 24
        osd disk threads = 24
        filestore op threads = 6

The KVM host is:
------------
4 x 12-core Opterons


During the bench:

on the ceph nodes:
- cpu is around 10% used
- iostat shows no disk activity on the OSDs (so I think the 1G file is being 
served from the Linux page cache)
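One way to confirm the page-cache theory (a sketch, assuming standard Linux tooling and root on the OSD nodes):

```shell
# Drop the page cache on each OSD node, then re-run the fio job in the
# guest; if IOPS drop sharply and iostat starts showing reads on the
# OSD disks, the 1G working set really was being served from memory.
sync
echo 3 > /proc/sys/vm/drop_caches    # requires root
iostat -x 1                          # watch r/s on the osd devices while fio runs
```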


on the kvm host:

- cpu is around 20% used


I really can't see where the bottleneck is.

Any ideas or hints?


Regards,

Alexandre