Hi Ceph users,
I am puzzled by the benchmark results I am getting from my Ceph cluster.
Ceph cluster:
1 mon node and 4 OSD nodes with one 1 TB disk each, and a dedicated journal for each OSD.
All disks are identical and the nodes are connected via 10 GbE. Below are the dd results:
dd if=/dev/zero of=/home/ubuntu/deleteme bs=10G count=1 oflag=direct
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 17.0705 s, 126 MB/s
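(As a side note, dd silently capped that single bs=10G write at 2147479552 bytes, i.e. just under 2 GiB, which is the Linux per-write limit; that is why only 2.1 GB was copied. If it is worth re-running, a variant with smaller blocks writes the full amount to the same scratch file:)

# write 10 GiB in 1 MiB direct writes, avoiding the ~2 GiB single-write cap
dd if=/dev/zero of=/home/ubuntu/deleteme bs=1M count=10240 oflag=direct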
I created one OSD (XFS) on each node as follows:
mkfs.xfs /dev/sdo1                                    # data disk
mount /dev/sdo1 /node/nodeo
mkfs.xfs /dev/sdp1                                    # journal disk
ceph-deploy osd prepare mynode:/node/nodeo:/dev/sdp1
ceph-deploy osd activate mynode:/node/nodeo:/dev/sdp1
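In case it helps to narrow this down, I can also post the output of the usual state checks (standard ceph CLI, nothing exotic):

ceph -s                            # overall health and client IO rates
ceph osd tree                      # confirm all 4 OSDs are up and in
ceph osd pool get test-pool size   # replica count of the benchmark pool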
Now, when I run rados benchmarks, I only get ~4 MB/s for writes and
~40 MB/s for reads. What am I doing wrong?
I have seen Christian's post regarding block sizes and parallelism, and my
benchmark arguments seem to be right.
Replica size of test-pool: 2
Number of PGs: 256
rados -p test-pool bench 120 write -b 4096 -t 16 --no-cleanup
Total writes made: 245616
Write size: 4096
Bandwidth (MB/sec): 3.997
Stddev Bandwidth: 2.19989
Max bandwidth (MB/sec): 8.46094
Min bandwidth (MB/sec): 0
Average Latency: 0.0156332
Stddev Latency: 0.0460168
Max latency: 2.94882
Min latency: 0.001725
rados -p test-pool bench 120 seq -t 16 --no-cleanup
Total reads made: 245616
Read size: 4096
Bandwidth (MB/sec): 40.276
Average Latency: 0.00155048
Max latency: 3.25052
Min latency: 0.000515
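For comparison, I can also re-run the same bench with the default object size (4 MB, if I am not mistaken), which should show whether this is a small-IO latency problem rather than a raw throughput one:

rados -p test-pool bench 60 write -t 16 --no-cleanup
rados -p test-pool bench 60 seq -t 16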
Could someone please help me figure out how to debug this? The results are far below what I expected.
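One thing I plan to try next is measuring the raw O_DSYNC write speed of each journal partition, since a journal that is slow at synced 4k writes would cap write throughput in exactly this way. (Careful: this overwrites the start of the partition, so it should only be run with the OSD stopped and the journal recreated afterwards.)

# destructive test of synced small writes on the journal device
dd if=/dev/zero of=/dev/sdp1 bs=4k count=10000 oflag=direct,dsync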
Thanks
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com