Hi, am I the only person noticing disappointing results from the preliminary RDMA testing, or am I reading the numbers wrong?

Yes, it's true that on a very small cluster you do see a great improvement with RDMA, but in real life RDMA is used in large infrastructure projects, not on a few servers with a handful of OSDs. In fact, from what I've seen in the slides, the RDMA implementation scales horribly, to the point that it becomes slower the more OSDs you throw at it.

From my limited knowledge, I had expected much higher performance gains with RDMA, given that this transport should have much lower latency, lower overhead, and lower CPU utilisation than TCP.

Are we likely to see a great deal of improvement with Ceph and RDMA in the near future? Is there a roadmap for stable and reliable RDMA protocol support? (For anyone who wants to experiment in the meantime, a config sketch follows below the quoted message.)

Thanks

Andrei

----- Original Message -----
> From: "Andrey Korolyov" <[email protected]>
> To: "Somnath Roy" <[email protected]>
> Cc: [email protected], "ceph-devel" <[email protected]>
> Sent: Wednesday, 8 April, 2015 9:28:12 AM
> Subject: Re: [ceph-users] Preliminary RDMA vs TCP numbers
>
> On Wed, Apr 8, 2015 at 11:17 AM, Somnath Roy <[email protected]> wrote:
> >
> > Hi,
> > Please find the preliminary performance numbers of the TCP vs RDMA
> > (XIO) implementation (on top of SSDs) in the following link.
> >
> > http://www.slideshare.net/somnathroy7568/ceph-on-rdma
> >
> > The attachment didn't go through, it seems, so I had to use
> > slideshare.
> >
> > Mark,
> > If we have time, I can present it in tomorrow's performance meeting.
> >
> > Thanks & Regards
> > Somnath
>
> Those numbers are really impressive (for small numbers, at least)!
> What TCP settings are you using? For example, the difference can
> shrink at scale because CUBIC's per-connection ramp-up is less
> pronounced across a larger number of nodes, though I do not believe
> that was the main reason for the observed TCP catch-up on a
> relatively flat workload such as the one fio generates.
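
On the TCP settings question: it would help to see the congestion control setup on the test nodes reported alongside the numbers. On Linux this can be checked and pinned with standard sysctl knobs (nothing Ceph-specific here):

    # show the congestion control algorithm currently in use
    sysctl net.ipv4.tcp_congestion_control

    # list the algorithms this kernel has available
    sysctl net.ipv4.tcp_available_congestion_control

    # pin CUBIC (the Linux default) so both transports are measured
    # against the same TCP behaviour
    sysctl -w net.ipv4.tcp_congestion_control=cubic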
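
And since the TCP catch-up is attributed to the shape of the workload: a flat small-block run of the kind these comparisons usually quote can be reproduced with fio's rbd engine. The parameters below are purely illustrative (the pool and image names are placeholders, and I don't know the exact job file behind the slides):

    # steady 4K random-read load against an RBD image via librbd
    fio --name=rdma-vs-tcp --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimage --rw=randread --bs=4k \
        --iodepth=32 --numjobs=1 --direct=1 --time_based --runtime=60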
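
Finally, the config sketch mentioned above: as far as I understand from the xio messenger work, the RDMA transport is selected per daemon/client via ms_type in ceph.conf. Treat this as a sketch of the prototype branch rather than a supported option; names may differ in your build:

    [global]
    # switch the messenger from the default TCP implementation
    # to the Accelio (XIO) RDMA transport (prototype, not stable)
    ms_type = xio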
