Hi,
64K is borderline for seeing the bzcopy effect. Using an AMD 6000+ (3 GHz
dual core) in an Asus M2A-VM motherboard with ConnectX running 2.3 firmware
and the OFED 1.3-rc3 stack on a 2.6.23.8 kernel.org kernel, I ran the
test with a 128K message size:
  5546 Mbit/s with sdp_zcopy_thresh=0 (off)
  8709 Mbit/s with sdp_zcopy_thresh=65536
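For reference, the bzcopy gain at this message size works out as follows (a quick back-of-the-envelope calculation, not part of the original measurements):

```python
# Throughputs in 10^6 bits/s taken from the two netperf runs below
off = 5545.69  # sdp_zcopy_thresh=0 (bzcopy disabled)
on = 8708.58   # sdp_zcopy_thresh=65536
ratio = on / off
print(f"bzcopy speedup at 128K messages: {ratio:.2f}x "
      f"({(ratio - 1) * 100:.0f}% more throughput)")
# -> bzcopy speedup at 128K messages: 1.57x (57% more throughput)
```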
For these tests, I just have LD_PRELOAD set in my environment.
=======================
I see that TCP_MAXSEG is not being handled by libsdp and will look into it.
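On Linux, the errno 92 that netperf reports below is ENOPROTOOPT: the getsockopt(TCP_MAXSEG) call fails because libsdp does not translate that TCP-level option for SDP sockets. A small snippet to decode the number (it assumes the Linux errno table):

```python
import errno
import os

# errno 92 as reported by netperf's get_tcp_info() on the SDP socket
err = 92
print(errno.errorcode[err], "-", os.strerror(err))
# On Linux this prints: ENOPROTOOPT - Protocol not available
```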
[EMAIL PROTECTED] ~]# modprobe ib_sdp
[EMAIL PROTECTED] ~]# netperf -v2 -4 -H 193.168.10.198 -l 30 -t TCP_STREAM -c -C -- -m 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 193.168.10.198 (193.168.10.198) port 0 AF_INET
netperf: get_tcp_info: getsockopt TCP_MAXSEG: errno 92
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  131072   30.01      5545.69   51.47    14.43    1.521   1.706

Alignment      Offset         Bytes     Bytes       Sends   Bytes      Recvs
Local  Remote  Local  Remote  Xfered    Per                 Per
Send   Recv    Send   Recv              Send (avg)          Recv (avg)
    8      8      0      0   2.08e+10   131072.00   158690  33135.60   627718

Maximum
Segment
Size (bytes)
-1
[EMAIL PROTECTED] ~]# echo 65536 > /sys/module/ib_sdp/parameters/sdp_zcopy_thresh
[EMAIL PROTECTED] ~]# netperf -v2 -4 -H 193.168.10.198 -l 30 -t TCP_STREAM -c -C -- -m 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 193.168.10.198 (193.168.10.198) port 0 AF_INET
netperf: get_tcp_info: getsockopt TCP_MAXSEG: errno 92
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  131072   30.01      8708.58   50.63    14.55    0.953   1.095

Alignment      Offset         Bytes     Bytes       Sends   Bytes      Recvs
Local  Remote  Local  Remote  Xfered    Per                 Per
Send   Recv    Send   Recv              Send (avg)          Recv (avg)
    8      8      0      0   3.267e+10  131072.00   249228  26348.30   1239807

Maximum
Segment
Size (bytes)
-1
Thanks,
Jim
Jim Mott
Mellanox Technologies Ltd.
mail: [EMAIL PROTECTED]
Phone: 512-294-5481
-----Original Message-----
From: Weikuan Yu [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 24, 2008 9:09 AM
To: Scott Weitzenkamp (sweitzen)
Cc: Jim Mott; [EMAIL PROTECTED]; [email protected]
Subject: Re: [ofa-general] RE: [ewg] Not seeing any SDP performance
changes in OFED 1.3 beta, and I get Oops when enabling sdp_zcopy_thresh
Hi, Scott,
I have been running SDP tests across two woodcrest nodes with 4x DDR
cards using OFED-1.2.5.4. The card/firmware info is below.
CA 'mthca0'
        CA type: MT25208
        Number of ports: 2
        Firmware version: 5.1.400
        Hardware version: a0
        Node GUID: 0x0002c90200228e0c
        System image GUID: 0x0002c90200228e0f
I could not get a bandwidth of more than 5 Gbps like you have shown here.
I wonder whether I need to upgrade to the latest software or firmware? Any
suggestions?
Thanks,
--Weikuan
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.225.77 (192.168.225.77) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

131072 131072  131072   10.00      4918.95   21.29    24.99    1.418   1.665
Scott Weitzenkamp (sweitzen) wrote:
> Jim,
>
> I am trying OFED-1.3-20071231-0600 and RHEL4 x86_64 on a dual CPU
> (single core each CPU) Xeon system. I do not see any performance
> improvement (either throughput or CPU utilization) using netperf when I
> set /sys/module/ib_sdp/sdp_zcopy_thresh to 16384. Can you elaborate on
> your HCA type, and the performance improvement you see?
>
> Here's an example netperf command line when using a Cheetah DDR HCA and
> 1.2.917 firmware (I have also tried ConnectX and 2.3.000 firmware too):
>
> [EMAIL PROTECTED] ~]$ LD_PRELOAD=libsdp.so netperf241 -v2 -4 -H 192.168.1.201 -l 30 -t TCP_STREAM -c -C -- -m 65536
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.201 (192.168.1.201) port 0 AF_INET : histogram : demo
>
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
>  87380  16384   65536   30.01      7267.70   55.06    61.27    1.241   1.381
>
> Alignment      Offset         Bytes     Bytes       Sends   Bytes      Recvs
> Local  Remote  Local  Remote  Xfered    Per                 Per
> Send   Recv    Send   Recv              Send (avg)          Recv (avg)
>     8      8      0      0   2.726e+10  65536.00    415942  48106.01   566648
>
_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general
To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general