Craig Prescott wrote:
Jim Mott wrote:
Now that SDP is shipping with a non-zero default value for
sdp_zcopy_thresh (64K), I need some feedback from the list.  Does
anybody except me see a performance gain on large messages?

Oh, I see it - absolutely. For SDP on iWARP, you can see the improvement for large messages at:

http://hpc.ufl.edu/benchmarks/iwarp_sdp/

Scan down to the SDP Benchmarks section. The page is not really done yet, but I'm trying to finish it up today.

I'll run the same tests on IB (we have 4X SDR Lion Cubs) shortly and post.


Hi Jim - I ran exactly the same commands as in your post.  Here are the
results for 4X SDR Lion Cubs on dual Opteron 2218s with CentOS 5.0. The
nodes were idle except for the netperf runs. For us, it looks like we should
investigate setting sdp_zcopy_thresh to something higher than the
default, but the effect is still clear.
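For anyone who wants to experiment with a higher threshold, here is a rough sketch. The module name (ib_sdp), the sysfs path, and the modprobe option name are assumptions based on a typical OFED install, so verify them on your own system first:

```shell
# Hypothetical sketch: raise sdp_zcopy_thresh from the 64K default to 128K.
# Paths and option names below are assumptions -- check your OFED install.

THRESH=$((128 * 1024))   # 131072 bytes
echo "$THRESH"

# If the parameter is exposed (and writable) at runtime via sysfs:
echo "$THRESH" | sudo tee /sys/module/ib_sdp/parameters/sdp_zcopy_thresh

# Or persistently, as a module option applied at load time:
echo "options ib_sdp sdp_zcopy_thresh=$THRESH" | sudo tee /etc/modprobe.d/ib_sdp.conf
```

Re-running the same netperf tests after each change should show where the crossover point sits for a given fabric.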

throughput:

              64K    128K      1M
  SDP      7602.40  7560.57  5791.56
  BZCOPY   5454.20  6378.48  7316.28

and for service demand in usec/KB (LCL = local CPU, RMT = remote CPU):

               64K         128K          1M
            LCL   RMT    LCL   RMT    LCL   RMT
  SDP      0.574 1.079  0.836 1.084  1.398 1.244
  BZCOPY   1.518 1.602  1.291 1.331  0.923 1.082
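For reference, a typical way to produce numbers like these is to preload libsdp so netperf's TCP sockets are routed over SDP. This is only a sketch, not Jim's exact commands (those are in his earlier post); the libsdp path and the peer address are placeholders:

```shell
# Hypothetical reproduction sketch. The libsdp path and server address are
# assumptions; adjust for your install. -c/-C report local/remote CPU
# utilization, which is where the LCL/RMT usec/KB figures come from.
export LD_PRELOAD=/usr/lib64/libsdp.so   # route AF_INET sockets over SDP

SERVER=192.168.1.10                       # placeholder peer address
for SIZE in $((64*1024)) $((128*1024)) $((1024*1024)); do
    netperf -H "$SERVER" -t TCP_STREAM -c -C -- -m "$SIZE"
done
```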

Cheers,
Craig
_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general
