Dear ceph users,

I just set up a small cluster with two OSDs and three mons
(0.56.4-1~bpo70+1).

The OSDs are XFS (default mkfs options, mounted with defaults,noatime) on
LVM on hardware RAID.

Running dd if=/dev/zero of=... bs=1M count=10000 conv=fdatasync on each
OSD's mounted partition shows 120 MB/s on one server and 50 MB/s on the
other.
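For anyone who wants to reproduce the local disk test, here is a small-scale version of it (target path is just an example; on the cluster the target was each OSD's mounted XFS partition, and count was 10000):

```shell
# Write through the page cache, then fdatasync before dd reports, so the
# figure reflects disk throughput rather than RAM. 10 MiB here for a quick
# check; scale count up (e.g. 10000) for a meaningful number.
out=/tmp/ddtest.bin
dd if=/dev/zero of="$out" bs=1M count=10 conv=fdatasync
ls -l "$out"
rm -f "$out"
```

conv=fdatasync matters: without it, dd reports the speed of writing into the page cache, which can be wildly optimistic.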

iperf between the servers gives 580 Mb/s.

I created an RBD image, mapped it, and ran the same dd against the block
device directly (/dev/rbd/...).
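Roughly, the RBD test was this sequence (image name and size are illustrative, not my exact session; these need a cluster node with the rbd kernel module):

```
# Create and map a test image in the default pool, then benchmark it.
rbd create test --size 10240        # size is in MB, so a 10 GB image
rbd map test                        # appears as /dev/rbd/rbd/test via udev
dd if=/dev/zero of=/dev/rbd/rbd/test bs=1M count=10000 conv=fdatasync
```

The dd invocation is identical to the one used on the raw OSD partitions, so the numbers should be directly comparable.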

I get only 15 MB/s :(


(The network interfaces show ~120-150 Mb/s, and each server shows ~30% I/O wait.)



Any hints for increasing performance so it isn't so far from the non-Ceph
numbers?


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
