Dear all,

I have a Ceph RBD cluster with around 31 OSDs running SSD drives, and I
tried to use the benchmark tools recommended by Sebastien on his blog here:

http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/

Our configuration:

- Ceph version 0.67.7
- 31 OSDs of 500 GB SSD drives each
- Journal for each OSD is configured on the same SSD drive itself
- Journal size 10 GB

After running the tests recommended in the article, I found that,
generally:

- Local disk benchmarks using dd are fast, around 245 MB/s, since we are
using SSDs.
- Network benchmarks using iperf and netcat are also fast; I can get
around 9.9 Gbit/sec since we are on a 10G network.
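
For reference, the local-disk and network checks were along these lines
(the file path and host names are placeholders, not our actual setup):

```shell
# Local disk write throughput. conv=fdatasync forces a flush before dd
# reports, so the figure reflects the drive rather than the page cache.
# /tmp/ddtest.img is a placeholder; point it at an OSD data partition.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=128 conv=fdatasync

# Network throughput between two nodes (run each half on its own host):
#   server:  iperf -s
#   client:  iperf -c <server-ip>
```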

However:

- RADOS bench test (rados bench -p my_pool 300 write) on the whole cluster
is slow, averaging around 112 MB/s for write.
- Individual tests using "ceph tell osd.X bench" give different results
per OSD, but also average only around 110-130 MB/s.
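
One sanity check I tried (a rough sketch; the factor of two is my
assumption about the filestore journal writing every object twice to
the same device, not something I have measured):

```shell
# With the journal co-located on the same SSD, each client write hits
# the drive twice (journal + data), so the effective per-OSD bandwidth
# should be roughly half the raw dd figure.
raw_mb_s=245            # dd result on the bare SSD, from above
journal_write_factor=2  # assumed: journal + filestore write, same disk
expected_mb_s=$((raw_mb_s / journal_write_factor))
echo "$expected_mb_s"   # in the same ballpark as the 110-130 MB/s seen
```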

Can anyone advise what could be the reason why our RADOS/Ceph benchmark
results are slow compared to a direct physical drive test on the OSDs
themselves? Is there anything in the Ceph configuration that we should
optimise further?

Looking forward to your reply, thank you.

Cheers.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com