Hi Amir

Thanks for the reply.

Even after running the script (set_irq_affinity.sh ethX), the problem still
occurs. Is this problem related to the iperf software or to the TCP/IP stack?
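
One way to check whether the affinity settings actually took effect is to
look at how the card's interrupts are spread across CPUs (a minimal sketch,
assuming the interface is named ethX; the IRQ number below is only a
placeholder):

# per-CPU interrupt counters for the mlx4/ethX queues
grep -e mlx4 -e ethX /proc/interrupts
# CPU mask currently assigned to one of those IRQs
cat /proc/irq/<irq-number>/smp_affinity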

Varun



On Thu, Feb 20, 2014 at 3:30 PM, Amir Ancel <am...@mellanox.com> wrote:

>  Please try the following script which is provided with the package:
>
>
>
> # set_irq_affinity.sh ethX
>
>
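> For context: roughly speaking, the script spreads the NIC's interrupts
> across CPUs by writing a CPU mask to each queue's
> /proc/irq/<N>/smp_affinity entry. A minimal manual equivalent (the IRQ
> number is a placeholder) would be:
>
> # pin one of ethX's receive-queue interrupts to CPU 0
> echo 1 > /proc/irq/<irq-number>/smp_affinity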
>
> Thanks,
>
>
>
> Amir Ancel
>
> Performance and Power Group Manager
>
> www.mellanox.com
>
>
>
> *From:* Varun Sharma [mailto:vsd...@gmail.com]
> *Sent:* Thursday, February 20, 2014 11:46 AM
> *To:* Bob (Robert) McMahon
> *Cc:* iperf-users@lists.sourceforge.net; Amir Ancel
> *Subject:* Re: [Iperf-users] Fwd: Sending rate decrease in TCP
> bidirectional test .
>
>
>
>
>
> Hi Bob
>
> Thanks for the reply.
>
> I made the change in iperf as you suggested, but the problem still occurs.
> Is there any other setting that would overcome this problem?
>
> Can you tell me why this happens?
>
>
>
> Varun
>
>
>
> On Thu, Feb 20, 2014 at 11:11 AM, Bob (Robert) McMahon <
> rmcma...@broadcom.com> wrote:
>
> I've had to increase the NUM_REPORT_STRUCTS to get better iperf
> performance in 2.0.5
>
>
>
> improved/iperf] $ svn diff include/*.h
>
> Index: include/Reporter.h
>
> ===================================================================
>
> --- include/Reporter.h   (revision 2)
>
> +++ include/Reporter.h                (working copy)
>
> @@ -61,7 +61,7 @@
>
>
>
>  #include "Settings.hpp"
>
>
>
> -#define NUM_REPORT_STRUCTS 700
>
> +#define NUM_REPORT_STRUCTS 7000
>
> #define NUM_MULTI_SLOTS    5
>
>
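> To pick up the change, iperf has to be rebuilt; a minimal sketch, assuming
> a standard iperf 2.0.5 source tree built with autotools:
>
> # after editing include/Reporter.h as in the diff above
> ./configure && make
> # then run the tests with the freshly built binary (typically under src/)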
>
> Bob
>
>
>
> *From:* Varun Sharma [mailto:vsd...@gmail.com]
> *Sent:* Wednesday, February 19, 2014 9:15 PM
> *To:* iperf-users@lists.sourceforge.net; am...@mellanox.com
> *Subject:* [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional
> test .
>
>
>
> Hi,
>
> ethtool -i output from my machine:
>
> driver: mlx4_en
>
> version: 2.1.6 (Aug 27 2013)
> firmware-version: 2.5.0
> bus-info: 0000:19:00.0
>
> Even after applying the patch, the problem still occurs. Can you tell me
> why the decrease on the sending side happens?
> Is this a problem with iperf, the TCP/IP stack, or the NIC?
>
> Regards
>
> Varun Sharma
>
>
>
>
>
> ---------- Forwarded message ----------
> From: *Amir Ancel* <am...@mellanox.com>
> Date: Wed, Feb 19, 2014 at 2:36 PM
> Subject: RE: [Iperf-users] Sending rate decrease in TCP bidirectional test
> .
> To: Varun Sharma <vsd...@gmail.com>, "iperf-users@lists.sourceforge.net" <
> iperf-users@lists.sourceforge.net>
> Cc: Sagi Schlanger <sa...@mellanox.com>
>
> Hi Varun,
>
>
>
> Can you please share your driver and firmware versions using "ethtool -i
> ethX" ?
>
> Also attached is a patch that fixes a bidirectional functional issue.
>
>
>
> Thanks,
>
>
>
> Amir Ancel
>
> Performance and Power Group Manager
>
> www.mellanox.com
>
>
>
> *From:* Varun Sharma [mailto:vsd...@gmail.com]
> *Sent:* Wednesday, February 19, 2014 10:44 AM
> *To:* iperf-users@lists.sourceforge.net
> *Subject:* [Iperf-users] Sending rate decrease in TCP bidirectional test .
>
>
>
> Hi,
>
> I am using iperf v2.0.5 to test a Mellanox ConnectX VPI card. It is a
> dual-port 10G card. Two 16-core machines with 64 GB RAM are connected back
> to back.
>
> In the *TCP bidirectional test* case, the sending throughput decreases
> compared to the *TCP unidirectional test* case.
>
> All cases use default settings; no extra parameters are set.
>
>
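> For reference, with default settings the two cases boil down to something
> like the following (a minimal sketch; the server address and thread count
> are placeholders, and -d requests iperf's simultaneous bidirectional test):
>
> # server side
> iperf -s
> # unidirectional send, 4 parallel client threads
> iperf -c <server-ip> -P 4
> # bidirectional (dual) test with the same thread count
> iperf -c <server-ip> -P 4 -d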
>
> Client threads | Unidirectional send | Bidirectional send
> 4              | 9.6 Gbps            | 4.9 Gbps
> 8              | 9.7 Gbps            | 6.4 Gbps
> 16             | 9.7 Gbps            | 8.0 Gbps
>
> Is there any explanation for this outcome?
>
> Regards
>
> Varun
>
>
>
>
>
>
>