Currently, yes, until we can improve the efficiency of the OSD code
further. You can achieve better performance by using the client
writeback cache, if the application allows it.
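
For librbd-based clients, a minimal ceph.conf sketch for enabling the
writeback cache looks like the following (values shown are the stock
defaults; note that the kernel RBD driver does not use these options,
so this only helps librbd/QEMU-style clients):

  [client]
      rbd cache = true
      # stay in writethrough mode until the first flush arrives, so a
      # guest that never flushes is not exposed to silent data loss
      rbd cache writethrough until flush = true
      rbd cache size = 33554432          # 32 MB total cache
      rbd cache max dirty = 25165824     # 24 MB dirty data limit
      rbd cache target dirty = 16777216  # start flushing at 16 MB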
Regards
Ning Yao


2015-12-11 18:00 GMT+08:00 Zhi Zhang <zhang.david2...@gmail.com>:
> Hi Guys,
>
> We have a small 4-node cluster. Here is the hardware configuration:
>
> 11 x 300GB SSDs, 24 cores, and 32GB of memory per node;
> all nodes are connected to a single 1Gb/s network.
>
> So we have one monitor and 44 OSDs, and we are testing kernel RBD
> IOPS using fio. Here are the major fio options (a full sample
> invocation follows the list).
>
> -direct=1
> -rw=randwrite
> -ioengine=psync
> -size=1000M
> -bs=4k
> -numjobs=1
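>
> For reference, the options above correspond to an fio invocation
> along these lines (the job name and device path are hypothetical):
>
>   # single-threaded 4k O_DIRECT random writes against a mapped RBD
>   fio --name=rbd-randwrite --filename=/dev/rbd0 \
>       --direct=1 --rw=randwrite --ioengine=psync \
>       --size=1000M --bs=4k --numjobs=1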
>
> The max IOPS we can achieve for a single writer (numjobs=1) is close
> to 1000. Since psync with numjobs=1 keeps exactly one IO in flight,
> latency is roughly 1 / 1000 IOPS = 1 ms, i.e. each IO from RBD takes
> 1.x ms.
>
> From the OSD logs, we can also observe that most osd_ops take 1.x ms
> in total, including op processing, journal writing, replication,
> etc., before the commit is sent back to the client.
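>
> For context, a simplified view of the replicated write path being
> timed here:
>
>   client  --write op-->  primary OSD
>   (primary queues the op, submits it to its local journal, and
>    sends osd_repop to the replica in parallel)
>   primary --osd_repop--> replica OSD
>   (replica writes its journal, then acks)
>   primary <--repop ack-- replica OSD
>   client  <--commit----- primary, once the local journal write and
>                          all replica acks have completed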
>
> The network RTT is around 0.04 ms.
> Most osd_ops on the primary OSD take around 0.5~0.7 ms; the journal
> write alone takes 0.3 ms.
> Most osd_repops, including the journal write on the peer OSD, take
> around 0.5 ms.
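>
> A rough tally is consistent with this (assuming the repop round trip
> only partially overlaps the primary's own journal write):
>
>   0.5~0.7 ms  op processing and journal write on the primary
>   ~0.5 ms     repop round trip to the peer (incl. its journal write)
>   ~0.1 ms     client<->primary RTTs and remaining overhead
>
> which lands in the observed 1.x ms per IO.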
>
> We even tried modifying the journal to write to the page cache only,
> but didn't get a very significant improvement. Does this mean that
> this is the best result we can get for a single write on a single
> RBD?
>
> Thanks.
>
> --
> Regards,
> Zhi Zhang (David)