Rick,

I have dual-core AMD Opterons, the IB cards are connected to a 4X Mellanox InfiniScale 2400 switch, and I am running SLES 10. The first run below is without any message size (-m). The SDP test didn't work for me and I am trying to figure out why; I had no problem loading the SDP library.
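In case it helps, the SDP attempts use the standard libsdp preload mechanism; the invocation is along these lines (assuming libsdp.so is on the library path):

t1:~ # LD_PRELOAD=libsdp.so netperf -H 192.168.2.100 -c -C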

I also changed the txqueuelen to 4096; the default was 128.
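The change itself is just the usual ifconfig option (ip link set ib0 txqueuelen 4096 would be equivalent):

t1:~ # ifconfig ib0 txqueuelen 4096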

t1:~ # ifconfig ib0
ib0       Link encap:UNSPEC  HWaddr 80-00-04-04-FE-80-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::202:c902:24:2909/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
          RX packets:11673701 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14069560 errors:0 dropped:5 overruns:0 carrier:0
          collisions:0 txqueuelen:4096
          RX bytes:59793476658 (57023.5 Mb)  TX bytes:192547529035 (183627.6 Mb)


t1:~ # netperf -H 192.168.2.100 -c -C --
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.100 (192.168.2.100) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00      3989.07   12.81    23.66    0.526   0.972


Adding -m with 128K shows better results:

netperf -H 192.168.2.100 -c -C -- -m 128K
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  131072   10.00      4646.27   12.91    24.76    0.455   0.873

With -m 1M, the CPU utilization showed a real difference:
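The command for that run (netperf's capital-M suffix gives the 1048576-byte messages shown below):

netperf -H 192.168.2.100 -c -C -- -m 1M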

bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  1048576  10.00      5796.64   21.91    31.91    0.619   0.902


Thanks
H. N. Harake

On 11-Sep-2007, at 19:17, Rick Jones wrote:

SDP

