Hi,
I have one observation regarding iperf 2.0.5.

If I turn TSO (TCP segmentation offload) off on the sending side, the TCP
bidirectional test's sending throughput decreases compared with the TCP
unidirectional test, whereas with TSO on, both tests give almost the same
sending throughput.

How does TSO (TCP segmentation offload) affect TCP bidirectional sending
throughput?
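
For reference, this is roughly how I check and toggle the offload setting on
the sending side (ethX stands for the actual interface name):

    ethtool -k ethX | grep tcp-segmentation-offload   # show current TSO state
    ethtool -K ethX tso off                           # turn TSO off
    ethtool -K ethX tso on                            # turn TSO back on
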
Regards
Varun
On Mon, Mar 24, 2014 at 10:48 PM, Varun Sharma <vsd...@gmail.com> wrote:
> Hi,
> I went through the iperf source code and would like to discuss an observation.
>
> In the unidirectional case the sending-side loop runs more iterations than in the
> bidirectional case, so less data is transferred in the bidirectional case. I do not
> understand why it runs fewer iterations when the time the loop runs for is the same
> in both cases, or how this affects the bidirectional sending-side performance.
>
> Regards
> Varun
>
>
> On Sun, Feb 23, 2014 at 2:02 AM, Sandro Bureca <sbur...@gmail.com> wrote:
>
>> Hi,
>> Do you have any kind of iptables or packet-filtering mechanism on the
>> machine? Even if all traffic is permitted, it might still affect the TCP
>> flow somehow. Some people tune the TCP stack with ifconfig (interface
>> buffers / queue length) and sysctl.
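>>
>> For example, something like the following (placeholder values often used as
>> a starting point for 10GbE tuning; ethX and the sizes need to be adapted to
>> your setup):
>>
>>   sysctl -w net.core.rmem_max=16777216
>>   sysctl -w net.core.wmem_max=16777216
>>   sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
>>   sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
>>   ifconfig ethX txqueuelen 10000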
>>
>> See also:
>> http://dak1n1.com/blog/7-performance-tuning-intel-10gbe
>>
>>
>>
>> Kind regards,
>> Sandro
>>
>> On 21 February 2014 18:54, Varun Sharma <vsd...@gmail.com> wrote:
>> > With UDP packets it works fine.
>> > When I perform the TCP bidirectional test over the loopback interface (i.e.
>> > server and client on the same machine), the same decrease on the TCP sending
>> > side happens. Does that mean the problem is related to the TCP/IP stack?
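>> >
>> > For reference, the loopback run is essentially the following (the thread
>> > count is just an example; -d starts the dual/bidirectional test and -L gives
>> > the reverse test a separate port so both listeners can coexist on one host):
>> >
>> >   iperf -s &
>> >   iperf -c 127.0.0.1 -d -L 5002 -P 4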
>> >
>> >
>> > On Thu, Feb 20, 2014 at 11:46 PM, Bob (Robert) McMahon
>> > <rmcma...@broadcom.com> wrote:
>> >>
>> >> It's hard to say without a lot more information (and it's difficult to debug
>> >> via email). Do you see the same phenomenon when using UDP packets?
>> >>
>> >>
>> >>
>> >> If it's TCP only, a next step may be to analyze the TCP flows with
>> >> something like tcptrace: http://www.tcptrace.org/
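>> >>
>> >> A rough workflow would be something like this (the interface name is a
>> >> placeholder, and 5001 is iperf's default port):
>> >>
>> >>   tcpdump -i ethX -s 128 -w bidir.pcap port 5001   # capture while iperf runs
>> >>   tcptrace -l -r bidir.pcap                        # per-connection stats incl. RTT and retransmissions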
>> >>
>> >>
>> >>
>> >> Bob
>> >>
>> >> From: Varun Sharma [mailto:vsd...@gmail.com]
>> >> Sent: Thursday, February 20, 2014 1:45 AM
>> >> To: Bob (Robert) McMahon
>> >> Cc: iperf-users@lists.sourceforge.net; am...@mellanox.com
>> >> Subject: Re: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional test.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Hi Bob
>> >>
>> >> Thanks for the reply.
>> >>
>> >> I made the change in iperf that you suggested, but the problem still occurs.
>> >> Is there another setting that would overcome this problem?
>> >>
>> >> Can you tell me why this happens?
>> >>
>> >>
>> >>
>> >> Varun
>> >>
>> >>
>> >>
>> >> On Thu, Feb 20, 2014 at 11:11 AM, Bob (Robert) McMahon
>> >> <rmcma...@broadcom.com> wrote:
>> >>
>> >> I've had to increase the NUM_REPORT_STRUCTS to get better iperf
>> >> performance in 2.0.5
>> >>
>> >>
>> >>
>> >> improved/iperf] $ svn diff include/*.h
>> >> Index: include/Reporter.h
>> >> ===================================================================
>> >> --- include/Reporter.h (revision 2)
>> >> +++ include/Reporter.h (working copy)
>> >> @@ -61,7 +61,7 @@
>> >>
>> >>  #include "Settings.hpp"
>> >>
>> >> -#define NUM_REPORT_STRUCTS 700
>> >> +#define NUM_REPORT_STRUCTS 7000
>> >>  #define NUM_MULTI_SLOTS 5
>> >>
>> >>
>> >>
>> >> Bob
>> >>
>> >>
>> >>
>> >> From: Varun Sharma [mailto:vsd...@gmail.com]
>> >> Sent: Wednesday, February 19, 2014 9:15 PM
>> >> To: iperf-users@lists.sourceforge.net; am...@mellanox.com
>> >> Subject: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional test.
>> >>
>> >>
>> >>
>> >> Hi,
>> >>
>> >> My machine's "ethtool -i" output:
>> >>
>> >> driver: mlx4_en
>> >>
>> >> version: 2.1.6 (Aug 27 2013)
>> >> firmware-version: 2.5.0
>> >> bus-info: 0000:19:00.0
>> >>
>> >> Even after applying the patch the problem still occurs. Can you tell me why
>> >> the decrease on the sending side happens?
>> >> Is this a problem with iperf, the TCP/IP stack, or the NIC?
>> >>
>> >> Regards
>> >>
>> >> Varun Sharma
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> ---------- Forwarded message ----------
>> >> From: Amir Ancel <am...@mellanox.com>
>> >> Date: Wed, Feb 19, 2014 at 2:36 PM
>> >> Subject: RE: [Iperf-users] Sending rate decrease in TCP bidirectional test.
>> >> To: Varun Sharma <vsd...@gmail.com>, "iperf-users@lists.sourceforge.net"
>> >> <iperf-users@lists.sourceforge.net>
>> >> Cc: Sagi Schlanger <sa...@mellanox.com>
>> >>
>> >> Hi Varun,
>> >>
>> >>
>> >>
>> >> Can you please share your driver and firmware versions using "ethtool -i
>> >> ethX"?
>> >>
>> >> Also attached is a patch that fixes a bidirectional functional issue.
>> >>
>> >>
>> >>
>> >> Thanks,
>> >>
>> >>
>> >>
>> >> Amir Ancel
>> >>
>> >> Performance and Power Group Manager
>> >>
>> >> www.mellanox.com
>> >>
>> >>
>> >>
>> >> From: Varun Sharma [mailto:vsd...@gmail.com]
>> >> Sent: Wednesday, February 19, 2014 10:44 AM
>> >> To: iperf-users@lists.sourceforge.net
>> >> Subject: [Iperf-users] Sending rate decrease in TCP bidirectional test.
>> >>
>> >>
>> >>
>> >> Hi,
>> >>
>> >> I am using iperf 2.0.5 to test a Mellanox ConnectX VPI card. It's a
>> >> dual-port 10G card. Two 16-core machines with 64 GB of RAM are connected
>> >> back to back.
>> >>
>> >> In the TCP bidirectional test the sending throughput decreases compared
>> >> with the TCP unidirectional test.
>> >>
>> >> All cases use the default settings; no extra parameters are set.
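>> >>
>> >> For reference, the runs are essentially the following (the server IP is a
>> >> placeholder and the thread count varies per case; -d starts the
>> >> bidirectional/dual test):
>> >>
>> >>   iperf -s                        # server side
>> >>   iperf -c <server_ip> -P 4       # unidirectional test
>> >>   iperf -c <server_ip> -P 4 -d    # bidirectional (dual) test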
>> >>
>> >>
>> >>
>> >> With 4 client threads:
>> >> Unidirectional send --- 9.6 Gbps
>> >> Bidirectional send  --- 4.9 Gbps
>> >>
>> >> With 8 client threads:
>> >> Unidirectional send --- 9.7 Gbps
>> >> Bidirectional send  --- 6.4 Gbps
>> >>
>> >> With 16 client threads:
>> >> Unidirectional send --- 9.7 Gbps
>> >> Bidirectional send  --- 8 Gbps
>> >>
>> >> Is there any reason for this outcome?
>> >>
>> >> Regards
>> >>
>> >> Varun
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>>
>
>
_______________________________________________
Iperf-users mailing list
Iperf-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iperf-users