Hi, 

we ran some benchmarks of object read/write latencies on the CERN Ceph
installation.

The cluster has 44 nodes and ~1k disks, all on 10GbE, and the pool is
configured with 3 replicas.
Both client and server run 0.67.

The latencies we observe on the idle pool (using tiny objects ... 5 bytes), timed roughly as in the sketch below the list:

write full object (sync)   ~65-80 ms
append to object           ~60-75 ms
set xattr on object        ~65-80 ms
lock object                ~65-80 ms
stat object                  ~1 ms
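
For concreteness, a minimal sketch of how such a sync write can be timed
with the librados C API (the "bench" pool and "obj" object names are
placeholders, error handling omitted):

  #include <rados/librados.h>
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      rados_t cluster;
      rados_ioctx_t io;
      struct timespec t0, t1;
      const char buf[] = "aaaaa";                 /* 5-byte payload */

      rados_create(&cluster, NULL);               /* default client.admin */
      rados_conf_read_file(cluster, NULL);        /* default ceph.conf */
      rados_connect(cluster);
      rados_ioctx_create(cluster, "bench", &io);  /* placeholder pool */

      clock_gettime(CLOCK_MONOTONIC, &t0);
      rados_write_full(io, "obj", buf, sizeof(buf) - 1);  /* sync full write */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      printf("write full: %.1f ms\n",
             (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6);

      rados_ioctx_destroy(io);
      rados_shutdown(cluster);
      return 0;
  }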

We seem to saturate the pool at ~20k object writes/s (with 3 replicas that is ~60k internal writes/s).

Is there an easy explanation for ~80 ms with essentially no payload, and is
there a possible tuning to reduce it?
I measured (append a few bytes + fsync) at around 33 ms directly on such a
disk, which probably explains part of the latency.
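
Roughly, that local test looks like this (sketch only; the path is a
placeholder for a file on one of the OSD data disks, error handling omitted):

  #include <fcntl.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  int main(void)
  {
      struct timespec t0, t1;
      int fd = open("/path/on/osd-disk/testfile",   /* placeholder path */
                    O_WRONLY | O_CREAT | O_APPEND, 0644);

      clock_gettime(CLOCK_MONOTONIC, &t0);
      write(fd, "aaaaa", 5);  /* append a few bytes ...            */
      fsync(fd);              /* ... and force them to the platter */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      printf("append+fsync: %.1f ms\n",
             (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6);

      close(fd);
      return 0;
  }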

Then I tried the async API to see whether there is a measurable difference
between wait_for_complete and wait_for_safe ... shouldn't wait_for_complete
return much sooner? But I always get comparable results ...
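
For reference, a sketch of the async measurement (same placeholder names as
above, io is the open ioctx; wait_for_complete should return once the write
is acked by all replicas in memory, wait_for_safe once it is committed to
disk):

  #include <rados/librados.h>
  #include <stdio.h>
  #include <time.h>

  static double ms_since(const struct timespec *t0)
  {
      struct timespec t1;
      clock_gettime(CLOCK_MONOTONIC, &t1);
      return (t1.tv_sec - t0->tv_sec) * 1e3 + (t1.tv_nsec - t0->tv_nsec) / 1e6;
  }

  void time_async_write(rados_ioctx_t io)
  {
      rados_completion_t c;
      struct timespec t0;
      const char buf[] = "aaaaa";

      rados_aio_create_completion(NULL, NULL, NULL, &c);
      clock_gettime(CLOCK_MONOTONIC, &t0);
      rados_aio_write(io, "obj", c, buf, sizeof(buf) - 1, 0);

      rados_aio_wait_for_complete(c);  /* ack: in memory on all replicas */
      printf("complete: %.1f ms\n", ms_since(&t0));

      rados_aio_wait_for_safe(c);      /* commit: on stable storage      */
      printf("safe:     %.1f ms\n", ms_since(&t0));

      rados_aio_release(c);
  }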

Thanks, Andreas.