What operating system? If Linux, what's the output of uname -r? There are
some kernel settings that could be in play. For example, later kernels do
things to limit bufferbloat:
[root@dhcpl7-sv1-147 ~]# sysctl -a | grep output
net.ipv4.tcp_limit_output_bytes = 256000
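If you want to rule that limit out, a quick sketch (the value here is just an
example, not a recommendation):
  # check the current value
  sysctl net.ipv4.tcp_limit_output_bytes
  # raise it temporarily for a test run (does not persist across reboot)
  sysctl -w net.ipv4.tcp_limit_output_bytes=1048576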
Not sure whether this would affect the bidirectional case or not. There are
other kernel settings that *could* impact TCP performance. It usually takes a
bit of digging to understand why TCP isn't operating at full network capacity.
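The socket-buffer limits are the usual suspects to inspect first (read-only,
safe to run):
  # TCP autotuning ranges and the global socket-buffer caps
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
  sysctl net.core.rmem_max net.core.wmem_max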
TCP's congestion feedback loop is the major difference between it and UDP.
From an iperf application perspective there really isn't much difference
between TCP and UDP (though I wouldn't completely rule it out until after
instrumenting the code).
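For what it's worth, a side-by-side comparison is cheap to run (peer address
and offered rate are placeholders):
  # TCP bidirectional (simultaneous) test
  iperf -c 192.168.1.2 -d -t 30
  # UDP bidirectional at a high offered rate (without -b, UDP defaults to ~1 Mbit/s)
  iperf -c 192.168.1.2 -u -b 9000M -d -t 30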
Bob
From: Varun Sharma [mailto:vsd...@gmail.com]
Sent: Friday, February 21, 2014 9:54 AM
To: Bob (Robert) McMahon
Cc: iperf-users@lists.sourceforge.net; am...@mellanox.com
Subject: Re: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional test.
With UDP packets it works fine.
When I run the TCP bidirectional test over the loopback interface (i.e., server
and client on the same machine), the same decrease on the TCP sending side
happens. Does that mean the problem is related to the TCP/IP stack?
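For reference, the loopback run is along these lines (default port assumed):
  # terminal 1: server
  iperf -s
  # terminal 2: bidirectional client against loopback
  iperf -c 127.0.0.1 -d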
On Thu, Feb 20, 2014 at 11:46 PM, Bob (Robert) McMahon
<rmcma...@broadcom.com> wrote:
It's hard to say without a lot more information (and it's difficult to debug via
email). Do you see the same phenomenon when using UDP packets?
If it's TCP only, a next step may be to analyze the TCP flows with something
like tcptrace: http://www.tcptrace.org/
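A minimal sketch of that workflow (the interface name is an assumption; 5001 is
iperf's default port):
  # capture just the headers of the iperf flows during a test run
  tcpdump -i eth0 -s 96 -w iperf.pcap port 5001
  # per-connection summary; add -G to generate graphs (time-sequence, throughput, etc.)
  tcptrace -l iperf.pcap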
Bob
From: Varun Sharma [mailto:vsd...@gmail.com]
Sent: Thursday, February 20, 2014 1:45 AM
To: Bob (Robert) McMahon
Cc: iperf-users@lists.sourceforge.net; am...@mellanox.com
Subject: Re: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional test.
Hi Bob
Thanks for the reply.
I made the change in iperf as you suggested, but the problem still occurs. Is
there another setting to overcome this problem?
Can you tell me why this happens?
Varun
On Thu, Feb 20, 2014 at 11:11 AM, Bob (Robert) McMahon
<rmcma...@broadcom.com> wrote:
I've had to increase NUM_REPORT_STRUCTS to get better iperf performance in
2.0.5:
improved/iperf] $ svn diff include/*.h
Index: include/Reporter.h
===================================================================
--- include/Reporter.h (revision 2)
+++ include/Reporter.h (working copy)
@@ -61,7 +61,7 @@
#include "Settings.hpp"
-#define NUM_REPORT_STRUCTS 700
+#define NUM_REPORT_STRUCTS 7000
#define NUM_MULTI_SLOTS 5
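If you try that change, a minimal rebuild sketch from a stock 2.0.5 source tree
(paths assumed):
  # edit include/Reporter.h as above, then
  ./configure && make
  # the freshly built binary ends up in src/
  src/iperf -v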
Bob
From: Varun Sharma [mailto:vsd...@gmail.com]
Sent: Wednesday, February 19, 2014 9:15 PM
To: iperf-users@lists.sourceforge.net; am...@mellanox.com
Subject: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional test.
Hi,
Here is my machine's ethtool -i info:
driver: mlx4_en
version: 2.1.6 (Aug 27 2013)
firmware-version: 2.5.0
bus-info: 0000:19:00.0
Even after applying the patch, the problem still occurs. Can you tell me why
the decrease on the sending side happens?
Is this a problem with iperf, the TCP/IP stack, or the NIC?
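One cheap way to start separating the NIC from the stack (interface name is a
placeholder) is to watch the hardware counters during a run:
  # look for drops, pauses, or errors accumulating on the port
  ethtool -S eth2 | grep -iE 'drop|pause|err'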
Regards
Varun Sharma
---------- Forwarded message ----------
From: Amir Ancel <am...@mellanox.com>
Date: Wed, Feb 19, 2014 at 2:36 PM
Subject: RE: [Iperf-users] Sending rate decrease in TCP bidirectional test.
To: Varun Sharma <vsd...@gmail.com>, "iperf-users@lists.sourceforge.net"
<iperf-users@lists.sourceforge.net>
Cc: Sagi Schlanger <sa...@mellanox.com>
Hi Varun,
Can you please share your driver and firmware versions using "ethtool -i ethX"?
Also attached is a patch that fixes a bidirectional functional issue.
Thanks,
Amir Ancel
Performance and Power Group Manager
www.mellanox.com
From: Varun Sharma [mailto:vsd...@gmail.com]
Sent: Wednesday, February 19, 2014 10:44 AM
To: iperf-users@lists.sourceforge.net
Subject: [Iperf-users] Sending rate decrease in TCP bidirectional test.
Hi,
I am using iperf v2.0.5 to test a Mellanox ConnectX VPI card. It's a dual-port
10G card. Two 16-core machines with 64 GB of RAM are connected back to back.
In the TCP bidirectional test case, sending throughput decreases compared to
the TCP unidirectional test case.
All cases use default settings; no extra parameters are set (the invocations
are sketched after the results below).
With 4 client threads:
Unidirectional send --- 9.6 Gbps
Bidirectional send --- 4.9 Gbps
With 8 client threads:
Unidirectional send --- 9.7 Gbps
Bidirectional send --- 6.4 Gbps
With 16 client threads:
Unidirectional send --- 9.7 Gbps
Bidirectional send --- 8 Gbps
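A sketch of the runs (peer address is a placeholder; everything else is
default):
  # unidirectional, 4 client threads
  iperf -c 192.168.1.2 -P 4
  # bidirectional (simultaneous), 4 client threads
  iperf -c 192.168.1.2 -P 4 -d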
Any reason for this outcome?
Regards
Varun