Not a direct answer, but a first suggestion towards isolating a network
performance problem is to use a later, supported version of iperf.
iperf 2.0.5 has known performance issues and is no longer actively
supported. Use either iperf 2.0.8 or iperf 3, compiled from the latest
source. With 2.0.8, use -e (enhanced mode) and -i <interval> to get
full TCP stats. I think iperf 3 provides these without requiring -e.
https://en.wikipedia.org/wiki/Iperf
iperf 3: http://software.es.net/iperf/
iperf 2.0.8: https://sourceforge.net/projects/iperf2/
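For example, something along these lines (the interval and duration are just
illustrative, and the IP is the one from your B-to-C test):

    # on the receiver
    iperf -s -e -i 1
    # on the sender
    iperf -c 192.138.4.2 -e -i 1 -t 10

The enhanced client report should show per-interval write counts, retransmits
and Cwnd/RTT, which is exactly the kind of information needed to see which
side is throttling the transfer.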
Another thing that might help is to capture the TCP flow with tcpdump and
then analyze it with tcptrace.
http://www.tcptrace.org/
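A minimal capture/analysis pass, using the interface names and the default
iperf port from your setup (the capture file name is just a placeholder),
might look like:

    # on the sending host, while the test runs; -s 96 keeps headers only
    tcpdump -i eth2 -s 96 -w b_to_c.pcap port 5001
    # afterwards, summarize the connection with long output and RTT stats
    tcptrace -l -r b_to_c.pcap

tcptrace's long output reports retransmitted segments, out-of-order packets,
zero-window advertisements and per-direction RTT, which usually makes it
clear whether the sender or the receiver is the bottleneck.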
Also, run the tests with UDP and compare those results in each direction.
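Something along these lines, run in both directions, takes TCP windowing out
of the picture entirely (the offered rate here is just an example):

    # receiver
    iperf -s -u -i 1
    # sender: offer close to line rate
    iperf -c 192.138.4.3 -u -b 950M -i 1 -t 10

If UDP gets close to 950 Mbits/sec with little loss both ways, the asymmetry
is in TCP (buffers, congestion control, offloads) rather than in the links or
NICs themselves.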
Note: the information provided so far isn't enough to do much analysis.
TCP is a fairly complex protocol, and it takes some digging to understand
what's driving its performance. The windows (-w) may be too small, but
without congestion window information it's hard to know for sure.
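One way to get that information while a test is running is to sample the
connection state on the sender (ss ships with iproute on CentOS 6; the exact
fields shown depend on the version):

    # show cwnd, ssthresh and rtt for the connection to the iperf server
    ss -t -i dst 192.138.4.3

Comparing cwnd and rtt between the two directions, or simply re-running the
slow direction with an explicit window such as "iperf -c 192.138.4.3 -w 256K",
would show whether the 23.2 KByte default client window is what's limiting
the C-to-B run.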
Bob
On Mon, Feb 15, 2016 at 12:08 AM, Prithvi Raj <prithvipsg...@gmail.com>
wrote:
> Hi,
>
> I am trying to use iperf 2.0.5 to measure TCP throughput between two Linux
> systems connected back to back.
>
> The topology I am using is below:
>
> Linux A eth1(192.138.14.1) ---- eth4(192.138.14.4) Linux B
> Linux B eth2(192.138.4.3) ---- eth3(192.138.4.2) Linux C
>
> All links between the Linux systems are 1 Gig links, and all Linux
> interfaces are configured for 1000 Mbps.
>
> Throughput measured from B to C gives:
>
> Linux C# iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  4] local 192.138.4.2 port 5001 connected with 192.138.4.3 port 60918
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.1 sec  1.11 GBytes   941 Mbits/sec
>
> Linux B# iperf -c 192.138.4.2
> ------------------------------------------------------------
> Client connecting to 192.138.4.2, TCP port 5001
> TCP window size: 23.2 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.138.4.3 port 60918 connected with 192.138.4.2 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  1.11 GBytes   952 Mbits/sec
> When I send from C to B,
>
> Linux B# iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  4] local 192.138.4.3 port 5001 connected with 192.138.4.2 port 38576
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.0 sec   970 MBytes   813 Mbits/sec
>
> Linux C# iperf -c 192.138.4.3
> ------------------------------------------------------------
> Client connecting to 192.138.4.3, TCP port 5001
> TCP window size: 23.2 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.138.4.2 port 38576 connected with 192.138.4.3 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec   970 MBytes   814 Mbits/sec
>
> Why am I seeing a marked difference in the throughput measured in the two
> directions between two back-to-back connected systems?
>
> All my sysctl parameters on both Linux systems are the same:
>
> Linux B# sudo vim /etc/sysctl.conf
> # Kernel sysctl configuration file for Red Hat Linux
> #
> # For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
> # sysctl.conf(5) for more details.
> # Controls IP packet forwarding
> net.ipv4.ip_forward = 1
> # Controls source route verification
> net.ipv4.conf.default.rp_filter = 1
> # Do not accept source routing
> net.ipv4.conf.default.accept_source_route = 0
> # Controls the System Request debugging functionality of the kernel
> kernel.sysrq = 0
> # Controls whether core dumps will append the PID to the core filename.
> # Useful for debugging multi-threaded applications.
> kernel.core_uses_pid = 1
> # Controls the use of TCP syncookies
> net.ipv4.tcp_syncookies = 1
> #Enables/Disables TCP SACK (default 1)
> net.ipv4.tcp_sack = 1
> # Window Scaling
> net.ipv4.tcp_window_scaling = 1
> #Maximum Receive window size
> #net.core.rmem_max = 16777216
> # Receive Window Size Min Avg Max
> #net.ipv4.tcp_rmem = 4096 87380 16777216
> # Send Window Size Min Avg Max
> #net.ipv4.tcp_wmem = 4096 16384 16777216
> # Disable netfilter on bridges.
> net.bridge.bridge-nf-call-ip6tables = 0
> net.bridge.bridge-nf-call-iptables = 0
> net.bridge.bridge-nf-call-arptables = 0
> # Controls the default maxmimum size of a mesage queue
> kernel.msgmnb = 65536
> # Controls the maximum size of a message, in bytes
> kernel.msgmax = 65536
> # Controls the maximum shared segment size, in bytes
> kernel.shmmax = 68719476736
> # Controls the maximum number of shared memory segments, in pages
> kernel.shmall = 4294967296
>
> All systems are running CentOS 6.4 with the same kernel,
> 2.6.32-358.el6.x86_64. This means they should all have the same default
> buffer sizes and the same tunable TCP parameters.
>
> On checking the network stats using netstat -s, I found that the number of
> TCP segments sent out differs: Linux B sent 21047 segments (B to C), while
> Linux C sent 16132 segments (C to B). Why is this? Is there anything apart
> from link speed, Linux interface configs, and tunable TCP parameters that
> could be affecting the throughput values?
>