Hi Mark,
The results are below. These numbers look good, but I'm not sure what to
conclude from them.
# rados -p performance_test bench 120 write -b 4194304 -t 100 --no-cleanup
Total time run: 120.133251
Total writes made: 17529
Write size: 4194304
Bandwidth (MB/sec):
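(The bandwidth figure is cut off above, but it follows directly from the totals that did come through: total data written divided by total run time. A quick check with the reported numbers:)

```shell
# 17529 writes x 4 MB (4194304 bytes) each, over 120.133251 seconds
awk 'BEGIN { printf "%.2f MB/s\n", 17529 * 4 / 120.133251 }'
```

which works out to roughly 584 MB/s for this run.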
Hi Jay,
The -b parameter to rados bench controls the size of the object being
written. Previously you were writing out 8KB objects, which behind the
scenes translates into writing out lots of small files on the OSDs.
Your dd tests were doing 1MB writes, which are much larger and far more
efficient for the underlying disks.
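(One way to see the effect is to run the same bench twice and vary only -b. A sketch — the pool name comes from the command earlier in the thread, and the run time is shortened to keep the comparison quick:)

```shell
# Small objects: many operations, low aggregate bandwidth
rados -p performance_test bench 30 write -b 8192 -t 100 --no-cleanup

# Large objects: fewer operations, much higher aggregate bandwidth
rados -p performance_test bench 30 write -b 4194304 -t 100 --no-cleanup

# Remove the benchmark objects afterwards
rados -p performance_test cleanup
```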
Hello Mark,
sorry for barging in here, but are you sure this is correct? In my tests
the -b parameter in rados bench does exactly one thing: it uses the value
in its output to calculate IO bandwidth, taking the OPS value and
multiplying it by the -b value for display. However, it did not appear to
change what was actually written out on the OSDs.
Hi Rene,
The easiest way to check is to create a fresh pool and look at the files
that are created under an OSD for a PG associated with that pool.
Here's an example using firefly:
perf@magna003:/$ ceph-osd --version
ceph version 0.80.7-129-gc069bce (c069bce4e8180da3c0ca4951365032a45df76468)
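(The procedure described above, as a shell sketch. The pool name, PG count, OSD id, and the FileStore path under /var/lib/ceph are all assumptions — substitute your own:)

```shell
# 1. Create a fresh pool so its PGs contain only our test objects
ceph osd pool create benchtest 32 32

# 2. Write a few objects at the size under test
rados -p benchtest bench 10 write -b 4194304 -t 16 --no-cleanup

# 3. Look up the pool id in the OSD map
ceph osd dump | grep benchtest

# 4. On an OSD host, list the files in one of the pool's PG directories
#    (FileStore layout; set POOL_ID to the id reported in step 3)
POOL_ID=42
ls -lh /var/lib/ceph/osd/ceph-0/current/${POOL_ID}.0_head/
```

If -b really controls the object size, the files listed in step 4 should each be the size passed to -b.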
Can someone suggest what I can tune to improve the performance? The
cluster is pushing data at about 13 MB/s with a single copy of data, while
the underlying disks can push 100+ MB/s.
Can anyone help me with this?
*rados bench results:*
Concurrency Replication size Write(MB/s) Seq
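(For reference, a raw-disk figure like the 100+ MB/s above is usually measured with a sequential dd write that forces the data to disk, so the page cache doesn't inflate the result. A sketch — the target path and size are placeholders:)

```shell
# Sequential 1MB writes, flushed with fdatasync before dd reports its rate
dd if=/dev/zero of=/mnt/osd-disk/ddtest.img bs=1M count=1024 conv=fdatasync
rm /mnt/osd-disk/ddtest.img
```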
On 11/19/2014 06:51 PM, Jay Janardhan wrote:
> Can someone help me what I can tune to improve the performance? The
> cluster is pushing data at about 13 MB/s with a single copy of data
> while the underlying disks can push 100+MB/s.
> Can anyone help me with this?
> *rados bench results:*
> Concurrency