On Wed, Nov 7, 2018 at 10:52 PM Raju Rangoju <[email protected]> wrote:

> Hello All,
>
>
>
> I have been collecting performance numbers on our Ceph cluster, and I
> noticed very poor throughput with Ceph async+rdma compared with TCP. I
> was wondering what tunings/settings I should apply to the cluster to
> improve the *ceph rdma* (async+rdma) performance.
>
>
>
> Currently, from what we see, Ceph RDMA throughput is less than half of
> the Ceph TCP throughput (we ran fio over iSCSI-mounted disks).
>
> Our Ceph cluster has 8 nodes and is configured with two networks, a
> cluster network and a client network.
>
>
>
> Can someone please shed some light?
>

Unfortunately the RDMA implementations are still fairly experimental, and
the community doesn't have much experience with them. The last I heard, the
people developing that feature were planning to port it over to a different
RDMA library (though that may be out of date), so it's not something I
would consider a stable implementation. :/
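
If you do want to keep experimenting with it, the messenger is usually
switched over in ceph.conf with settings roughly like the following. This
is only a rough sketch: the exact ms_async_rdma_* option names depend on
your Ceph release, and the device name below is just an example (list
yours with ibv_devices).

    [global]
    # use the async messenger with the RDMA transport instead of TCP
    ms_type = async+rdma
    # RDMA device to bind to (example name; check ibv_devices on your hosts)
    ms_async_rdma_device_name = mlx5_0
    # on RoCE setups you typically also need to pin the local GID
    # (ms_async_rdma_local_gid) to match the address/VLAN in use

That said, I wouldn't expect tuning alone to close that kind of gap given
the current state of the implementation.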
-Greg


>
>
> I’d be glad to provide any further information regarding the setup.
>
>
>
> Thanks in Advance,
>
> Raju
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
