Hi folks,

      I am testing the replication performance of ceph-0.26 with
libceph: I write 1 GB of data with ceph_write() and read it back with
ceph_read().

rep_size        1               2               3               4
write:          78.8 MB/s       39.38 MB/s      27.7 MB/s       20.90 MB/s
read:           85.3 MB/s       85.33 MB/s      78.77 MB/s      78.77 MB/s

My understanding is that if the replication strategy is splay or
primary-copy rather than chain, as the thesis says, then the write
speed with 3, 4, or even more replicas should be only a little worse
than with 2 replicas, i.e. close to 39.38 MB/s.
But the write throughput I measured falls off roughly as 1/N with the
replication size (78.8/2 ≈ 39.4, 78.8/3 ≈ 26.3, 78.8/4 ≈ 19.7 MB/s,
close to the measured numbers).

What is the replication strategy in ceph-0.26? Is it not splay? If it
is splay, why is the write speed not close to 39.38 MB/s?

There are 5 OSDs on 2 hosts: 2 on one host and 3 on the other.

Thx!
