Re: IP performance question

2015-05-27 Thread Bob Proulx
Petter Adsen wrote:
 Reco wrote:
  May I suggest using etckeeper for this? The tool is invaluable if one
  needs to answer a question such as "what exactly did I change a
  couple of days ago?". The usual caveat is that using etckeeper
  requires at least casual knowledge of any RCS that's supported by
  etckeeper (I prefer git for this).
 
 I looked at etckeeper a while back, but I'm not familiar with revision
 control. It is something I could use, to keep track of changes to
 translations I do.

+1 for etckeeper.  It is a tool that I came to only recently, but now,
having used it, I wouldn't be without it.  It is a really useful
safety net.
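
For anyone curious what day-to-day use looks like, here is a minimal
sketch (assuming the default git backend of the Debian package; on
Debian the package normally initialises the repository in /etc for you
at install time):

# apt-get install etckeeper
# cd /etc
# git log --oneline            # one commit per apt run / daily autocommit
# git diff HEAD~1 -- fstab     # what changed in a file since the last commit
# etckeeper commit "raised MTU on eth0"   # record a manual change right away

After that, answering "what did I change last Tuesday?" is just a
matter of reading the log.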

 From what I understand, it seems git is what most people use these
 days, so maybe that is the best one to learn? I just need something
 that is simple to learn and use.

The biggest advantage to git is that it has the critical mass of users
behind it.  There will always be someone to help you with it.  There
is a huge amount of documentation and tutorials written about it.  If
you learn it then you will be able to use it with the majority of
other projects on the net these days.  There does seem to be a lot of
griping about it, but I find it relatively easy to use and so
personally don't understand why some people dislike it so much.

The disadvantage is that people who use hg (Mercurial) and other
systems will complain that their system is easier to use but is
disadvantaged by the huge mass of git users.  (shrug)

Bob




Re: IP performance question

2015-05-27 Thread Petter Adsen
On Tue, 26 May 2015 18:18:15 +0300
Reco recovery...@gmail.com wrote:

 On Tue, May 26, 2015 at 12:42:50PM +0200, Petter Adsen wrote:
  And even worse, after starting to mess with this, browsing is
  _abysmal_. After taking a few speed tests online (speed.io etc),
  upload/download and ping times seem good, but the number of
   connections per minute is severely limited, hovering at ~700. A
  friend on the same network, just down the street and with the same
  connection gets over 1800. We are connected to the same node.
  Whether these tests are trustworthy, though, I have no idea.
 
 And that means jumbo frames bit you. Don't worry, good old iproute
 comes to the rescue.
 
 The starting state of the non-router host (I'm assuming that eth0 has
 MTU > 1500):
 
 # ip ro l
 default via 192.168.32.1 dev eth0  metric 303
 192.168.32.0/20 dev eth0  proto kernel  scope link  src
 192.168.32.227 metric 303
 
 Needed changes:
 
 # ip ro d default
 # ip ro a default via 192.168.32.1 dev eth0 mtu 1500
 
 
 So, you keep non-standard MTU for your network, but set standard MTU
 for outside world.
 
 An example assumes that 192.168.32.0/20 is an internal network and
 192.168.32.1 is a router.
 
 The implementation I'd use is a post-up script in
 /etc/network/interfaces. I'm not that familiar with DHCP so I cannot
 comment if it's possible to advertise different MTUs on different
 routes.

Nice, I didn't know I could set the MTU for each route. I will need to
read the docs for the DHCP server running on the OpenWRT router, to see
if I can set it there. I'm sure I will find some way around this now.
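
(For what it's worth: if the DHCP server on the router is dnsmasq,
which OpenWRT uses by default, it can at least hand out an interface
MTU via DHCP option 26 - a sketch, not tested here:

dhcp-option=option:mtu,7152

That sets the MTU of the whole interface on the client, though, not
per route, so the per-route trick above would still be needed.)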

  I've tried to set everything back to the defaults, as I've
  documented every change I've made, but it doesn't seem to help.
  I'll try to reboot later today if I can, I have so much context up
  right now that I really don't want to lose, but I haven't made any
  permanent changes yet, so it should come up the way it was.
 
 May I suggest using etckeeper for this? The tool is invaluable if one
 needs to answer a question such as "what exactly did I change a
 couple of days ago?". The usual caveat is that using etckeeper
 requires at least casual knowledge of any RCS that's supported by
 etckeeper (I prefer git for this).

I looked at etckeeper a while back, but I'm not familiar with revision
control. It is something I could use, to keep track of changes to
translations I do. From what I understand, it seems git is what most
people use these days, so maybe that is the best one to learn? I just
need something that is simple to learn and use.

Thank you for all your help and advice, I have learned a lot and really
appreciate you taking the time.

Petter

-- 
I'm ionized
Are you sure?
I'm positive.




Re: IP performance question

2015-05-26 Thread Petter Adsen
On Sun, 24 May 2015 16:01:41 +0300
Reco recovery...@gmail.com wrote:

  Hi.
 
 On Sun, 24 May 2015 13:47:48 +0200
 Petter Adsen pet...@synth.no wrote:
 
  On Sun, 24 May 2015 13:26:52 +0200
  Petter Adsen pet...@synth.no wrote:
   Thanks to you, I now get ~880Mbps, which is a lot better. It seems
   increasing the MTU was what had the most effect, so I won't bother
   with TCP window size.
  
  Now, this is a little odd:
  
  petter@monster:/etc$ iperf -i 1 -c fenris -r 
  
  Server listening on TCP port 5001
  TCP window size: 85.3 KByte (default)
  
  
  Client connecting to fenris, TCP port 5001
  TCP window size:  280 KByte (default)
  
   [  5] local 192.168.0.105 port 49636 connected with 192.168.0.103 port 5001
   [ ID] Interval       Transfer     Bandwidth
  [  5]  0.0- 1.0 sec   104 MBytes   875 Mbits/sec
  [  5]  1.0- 2.0 sec  97.8 MBytes   820 Mbits/sec
  [  5]  2.0- 3.0 sec   104 MBytes   868 Mbits/sec
  [  5]  3.0- 4.0 sec   104 MBytes   876 Mbits/sec
  [  5]  4.0- 5.0 sec   104 MBytes   876 Mbits/sec
  [  5]  5.0- 6.0 sec  83.0 MBytes   696 Mbits/sec
  [  5]  6.0- 7.0 sec   105 MBytes   879 Mbits/sec
  [  5]  7.0- 8.0 sec   104 MBytes   875 Mbits/sec
  [  5]  8.0- 9.0 sec   105 MBytes   884 Mbits/sec
  [  5]  9.0-10.0 sec   104 MBytes   877 Mbits/sec
  [  5]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec
   [  4] local 192.168.0.105 port 5001 connected with 192.168.0.103 port 34815
   [  4]  0.0- 1.0 sec  98.5 MBytes   826 Mbits/sec
  [  4]  1.0- 2.0 sec  98.5 MBytes   826 Mbits/sec
  [  4]  2.0- 3.0 sec  97.4 MBytes   817 Mbits/sec
  [  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
  [  4]  4.0- 5.0 sec  98.5 MBytes   827 Mbits/sec
  [  4]  5.0- 6.0 sec  98.1 MBytes   823 Mbits/sec
  [  4]  6.0- 7.0 sec  98.6 MBytes   827 Mbits/sec
  [  4]  7.0- 8.0 sec  98.5 MBytes   826 Mbits/sec
  [  4]  8.0- 9.0 sec  98.5 MBytes   827 Mbits/sec
  [  4]  9.0-10.0 sec  98.5 MBytes   826 Mbits/sec
  [  4]  0.0-10.0 sec   984 MBytes   825 Mbits/sec
  
  I have run it many times, and the results are consistently ~50Mbps
  lower in the other direction. MTU is set to 7152 on both hosts, but
  the window size is back to the default values (212992).
 
 Hmm. A first thought is that you have a different TCP window size on
 client and a server.

Nope. Exactly the same.

 And a second thought is that you probably should check interface
 statistics with ifconfig or 'ip -s link show'. Every packet that is
 not RX or TX means trouble.

Clean. On both hosts.

And even worse, after starting to mess with this, browsing is
_abysmal_. After taking a few speed tests online (speed.io etc),
upload/download and ping times seem good, but the number of connections
per minute is severely limited, hovering at ~700. A friend on the same
network, just down the street and with the same connection gets over
1800. We are connected to the same node. Whether these tests are
trustworthy, though, I have no idea.

I've tried to set everything back to the defaults, as I've documented
every change I've made, but it doesn't seem to help. I'll try to reboot
later today if I can, I have so much context up right now that I really
don't want to lose, but I haven't made any permanent changes yet, so it
should come up the way it was.

I really don't want to lose the extra 150Mbps I gained by increasing
MTU, though, as that would have an impact on my day-to-day workflow.

Petter

-- 
I'm ionized
Are you sure?
I'm positive.




Re: IP performance question

2015-05-26 Thread Petter Adsen
On Sun, 24 May 2015 15:53:17 +0300
Reco recovery...@gmail.com wrote:

  Hi.
 
 On Sun, 24 May 2015 13:26:52 +0200
 Petter Adsen pet...@synth.no wrote:
 
   On Sun, 24 May 2015 11:28:36 +0200
   Petter Adsen pet...@synth.no wrote:
   
 On Sun, 24 May 2015 10:36:39 +0200
 Petter Adsen pet...@synth.no wrote:
 
  I've been trying to improve NFS performance at home, and in
   that process I ran iperf to get an overview of general
  network performance. I have two Jessie hosts connected to a
  dumb switch with Cat-5e. One host uses a Realtek RTL8169
  PCI controller, and the other has an Intel 82583V on the
  motherboard.
  
  iperf maxes out at about 725Mbps. At first I thought maybe
  the switch could be at fault, it's a really cheap one, so I
  connected both hosts to my router instead. Didn't change
  anything, and it had no significant impact on the load on
  the router. I can't try to run iperf on the router
  (OpenWRT), though, as it maxes out the CPU.
  
  Should I be getting more than 725Mbps in the real world?
 
 A quick test in my current environment shows this:
 
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
 
 Two hosts, connected via Cisco 8-port unmanaged switch,
 Realtek 8168e on one host, Atheros Attansic L1 on another.
 
 On the other hand, the same test, Realtek 8139e on one side,
 but with lowly Marvell ARM SOC on the other side shows this:
 
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
 
 So - you can, definitely, and yes, it depends.

That last one, would that be limited because of CPU power?
   
   That too. You cannot extract that much juice from a single-core
   ARM5. Another possibility is that Marvell is unable to design a
   good chipset even in the case it would be a matter of life and
   death :)
  
  That might be why I'm not using the Marvell adapter :) I remember
  reading somewhere that either Marvell or Realtek were bad, but I
  couldn't remember which one, so I kept using the Realtek one since I
  had obviously switched for a reason :)
 
 Both are actually. Realtek *was* good at least 5 years ago, but since
 then they managed to introduce multiple chips that are managed by the
 same r8169 kernel module. Since then it became a matter of luck.
 Either your NIC works flawlessly without any firmware (mine does), or
 you're getting all kinds of weird glitches.

The Realtek is not at all new, but I have no idea just how old it is,
as it was given to me by a friend. 5 years sounds about right, though.
I do have the firmware installed; I haven't tried without it.

I'm slowly beginning to think about getting another NIC, but what? I've
heard good things about Intel, and the Intel in the other box is
behaving well. Are there any specific chipsets to buy or stay away
from? The one I have is a 82583V.

I haven't bought a separate NIC since the days of the DEC 21140 :)

 Try the same test but use UDP instead of TCP.

Only gives me 1.03Mbits/sec :)
   
   iperf(1) says that by default UDP is capped on 1Mbit. Use -b
   option on client side to set desired bandwidth to 1024m like this:
   
   iperf -c server -u -b 1024M
   
   Note that -b should be the last option. Gives me 812 Mbits/sec
   with the default UDP buffer settings.
  
  Didn't notice that. I get 808, so close to what you get.
 
 Good. The only thing left to do is to apply that 'udp' flag to NFS
 clients, and you're set. Just don't mix it with 'async' flag, as Bad
 Things ™ can happen if you do so (see nfs(5) for the gory details).

Yes, I always use 'sync' anyway - performance isn't _that_
important, data integrity is :)

   net.core.rmem_max = 4194304
   net.core.wmem_max = 1048576
  
  OK, I've set them on both sides, but it doesn't change the results,
  no matter what values I give iperf with -w.
 
 Now that's weird. I picked those sysctl values from one of NFS
 performance tuning guides. Maybe I misunderstood something.

I'll do a little more searching online; I need to better understand
what I'm messing with in any case. I seriously dislike setting
parameters I don't understand. In my bookcase is a copy of Computer
Networks by Tanenbaum - I guess that's my next stop.

   Of course, for this to work it would require to increase MTU on
   every host between your two, so that's kind of a last resort
   measure.
  
  Well, both hosts are connected to the same switch (or right now, to
  the router, but I could easily put them back on the switch if that
  matters). One of the hosts would not accept a value larger than
  7152, but it did have quite an effect: I now get up to 880Mbps :)
 
 Consider yourself lucky as MTU experiments on server hardware usually
 lead to a long trip to a datacenter :)

I'd be hard pressed to call any of this server hardware or a
datacenter :)

  Will this 

Re: IP performance question

2015-05-26 Thread Reco
On Tue, May 26, 2015 at 12:42:50PM +0200, Petter Adsen wrote:
 On Sun, 24 May 2015 16:01:41 +0300
 Reco recovery...@gmail.com wrote:
 
   Hi.
  
  On Sun, 24 May 2015 13:47:48 +0200
  Petter Adsen pet...@synth.no wrote:
  
   On Sun, 24 May 2015 13:26:52 +0200
   Petter Adsen pet...@synth.no wrote:
Thanks to you, I now get ~880Mbps, which is a lot better. It seems
increasing the MTU was what had the most effect, so I won't bother
with TCP window size.
   
   Now, this is a little odd:
   
   petter@monster:/etc$ iperf -i 1 -c fenris -r 
   
   Server listening on TCP port 5001
   TCP window size: 85.3 KByte (default)
   
   
   Client connecting to fenris, TCP port 5001
   TCP window size:  280 KByte (default)
   
   [  5] local 192.168.0.105 port 49636 connected with 192.168.0.103
   port 5001 [ ID] Interval   Transfer Bandwidth
   [  5]  0.0- 1.0 sec   104 MBytes   875 Mbits/sec
   [  5]  1.0- 2.0 sec  97.8 MBytes   820 Mbits/sec
   [  5]  2.0- 3.0 sec   104 MBytes   868 Mbits/sec
   [  5]  3.0- 4.0 sec   104 MBytes   876 Mbits/sec
   [  5]  4.0- 5.0 sec   104 MBytes   876 Mbits/sec
   [  5]  5.0- 6.0 sec  83.0 MBytes   696 Mbits/sec
   [  5]  6.0- 7.0 sec   105 MBytes   879 Mbits/sec
   [  5]  7.0- 8.0 sec   104 MBytes   875 Mbits/sec
   [  5]  8.0- 9.0 sec   105 MBytes   884 Mbits/sec
   [  5]  9.0-10.0 sec   104 MBytes   877 Mbits/sec
   [  5]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec
   [  4] local 192.168.0.105 port 5001 connected with 192.168.0.103
   port 34815 [  4]  0.0- 1.0 sec  98.5 MBytes   826 Mbits/sec
   [  4]  1.0- 2.0 sec  98.5 MBytes   826 Mbits/sec
   [  4]  2.0- 3.0 sec  97.4 MBytes   817 Mbits/sec
   [  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
   [  4]  4.0- 5.0 sec  98.5 MBytes   827 Mbits/sec
   [  4]  5.0- 6.0 sec  98.1 MBytes   823 Mbits/sec
   [  4]  6.0- 7.0 sec  98.6 MBytes   827 Mbits/sec
   [  4]  7.0- 8.0 sec  98.5 MBytes   826 Mbits/sec
   [  4]  8.0- 9.0 sec  98.5 MBytes   827 Mbits/sec
   [  4]  9.0-10.0 sec  98.5 MBytes   826 Mbits/sec
   [  4]  0.0-10.0 sec   984 MBytes   825 Mbits/sec
   
   I have run it many times, and the results are consistently ~50Mbps
   lower in the other direction. MTU is set to 7152 on both hosts, but
   the window size is back to the default values (212992).
  
  Hmm. A first thought is that you have a different TCP window size on
  client and a server.
 
 Nope. Exactly the same.
 
  And a second thought is that you probably should check interface
  statistics with ifconfig or 'ip -s link show'. Every packet that is
  not RX or TX means trouble.
 
 Clean. On both hosts.

I'm out of ideas then, sorry.


 And even worse, after starting to mess with this, browsing is
 _abysmal_. After taking a few speed tests online (speed.io etc),
 upload/download and ping times seem good, but the number of connections
 per minute is severely limited, hovering at ~700. A friend on the same
 network, just down the street and with the same connection gets over
 1800. We are connected to the same node. Whether these tests are
 trustworthy, though, I have no idea.

And that means jumbo frames bit you. Don't worry, good old iproute comes
to the rescue.

The starting state of the non-router host (I'm assuming that eth0 has MTU > 1500):

# ip ro l
default via 192.168.32.1 dev eth0  metric 303
192.168.32.0/20 dev eth0  proto kernel  scope link  src 192.168.32.227  metric 303

Needed changes:

# ip ro d default
# ip ro a default via 192.168.32.1 dev eth0 mtu 1500


So, you keep the non-standard MTU for your network, but set the
standard MTU for the outside world.

The example assumes that 192.168.32.0/20 is the internal network and
192.168.32.1 is the router.

The implementation I'd use is a post-up script in
/etc/network/interfaces. I'm not that familiar with DHCP, so I cannot
say whether it's possible to advertise different MTUs on different
routes.
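
For what it's worth, a sketch of how that could look in
/etc/network/interfaces, using the addresses from the example above
(a statically configured interface; the same post-up line should work
under a dhcp stanza too):

allow-hotplug eth0
iface eth0 inet static
    address 192.168.32.227
    netmask 255.255.240.0
    gateway 192.168.32.1
    mtu 7152
    # keep the jumbo MTU on the LAN, clamp the default route to 1500
    post-up ip route replace default via 192.168.32.1 dev eth0 mtu 1500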


 I've tried to set everything back to the defaults, as I've documented
 every change I've made, but it doesn't seem to help. I'll try to reboot
 later today if I can, I have so much context up right now that I really
 don't want to lose, but I haven't made any permanent changes yet, so it
 should come up the way it was.

May I suggest using etckeeper for this? The tool is invaluable if one
needs to answer a question such as "what exactly did I change a couple
of days ago?". The usual caveat is that using etckeeper requires at
least casual knowledge of any RCS that's supported by etckeeper (I
prefer git for this).

Reco





Re: IP performance question

2015-05-26 Thread Reco
On Tue, May 26, 2015 at 12:57:52PM +0200, Petter Adsen wrote:

   I've been trying to improve NFS performance at home, and in
    that process I ran iperf to get an overview of general
   network performance. I have two Jessie hosts connected to a
   dumb switch with Cat-5e. One host uses a Realtek RTL8169
   PCI controller, and the other has an Intel 82583V on the
   motherboard.
   
   iperf maxes out at about 725Mbps. At first I thought maybe
   the switch could be at fault, it's a really cheap one, so I
   connected both hosts to my router instead. Didn't change
   anything, and it had no significant impact on the load on
   the router. I can't try to run iperf on the router
   (OpenWRT), though, as it maxes out the CPU.
   
   Should I be getting more than 725Mbps in the real world?
  
  A quick test in my current environment shows this:
  
  [ ID] Interval   Transfer Bandwidth
  [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
  
  Two hosts, connected via Cisco 8-port unmanaged switch,
  Realtek 8168e on one host, Atheros Attansic L1 on another.
  
  On the other hand, the same test, Realtek 8139e on one side,
  but with lowly Marvell ARM SOC on the other side shows this:
  
  [ ID] Interval   Transfer Bandwidth
  [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
  
  So - you can, definitely, and yes, it depends.
 
 That last one, would that be limited because of CPU power?

That too. You cannot extract that much juice from a single-core
ARM5. Another possibility is that Marvell is unable to design a
good chipset even in the case it would be a matter of life and
death :)
   
   That might be why I'm not using the Marvell adapter :) I remember
   reading somewhere that either Marvell or Realtek were bad, but I
   couldn't remember which one, so I kept using the Realtek one since I
   had obviously switched for a reason :)
  
  Both are actually. Realtek *was* good at least 5 years ago, but since
  then they managed to introduce multiple chips that are managed by the
  same r8169 kernel module. Since then it became a matter of luck.
  Either your NIC works flawlessly without any firmware (mine does), or
  you're getting all kinds of weird glitches.
 
 The Realtek is not at all new, but I have no idea just how old, as it
 was given to me by a friend. 5 years sounds about right, though. I do
 have the firmware installed, haven't tried without it.
 
 I'm slowly beginning to think about getting another NIC, but what? I've
 heard good things about Intel, and the Intel in the other box is
 behaving well. Are there any specific chipsets to buy or stay away
 from? The one I have is a 82583V.
 
 I haven't bought a separate NIC since the days of the DEC 21140 :)

I'd recommend anything Intel 82576-based. Especially the Intel 82576EB:
a server-grade card, multiple ports, goes into PCI-X, sells for about
$50 on eBay near you. Accept no substitutes, as anything else is a toy
NIC anyway :)


Reco





Re: IP performance question

2015-05-24 Thread Petter Adsen
On Sun, 24 May 2015 13:20:04 +0300
Reco recovery...@gmail.com wrote:

  Hi.
 
 On Sun, 24 May 2015 11:28:36 +0200
 Petter Adsen pet...@synth.no wrote:
 
   On Sun, 24 May 2015 10:36:39 +0200
   Petter Adsen pet...@synth.no wrote:
   
I've been trying to improve NFS performance at home, and in that
 process I ran iperf to get an overview of general network
performance. I have two Jessie hosts connected to a dumb switch
with Cat-5e. One host uses a Realtek RTL8169 PCI controller, and
the other has an Intel 82583V on the motherboard.

iperf maxes out at about 725Mbps. At first I thought maybe the
switch could be at fault, it's a really cheap one, so I
connected both hosts to my router instead. Didn't change
anything, and it had no significant impact on the load on the
router. I can't try to run iperf on the router (OpenWRT),
though, as it maxes out the CPU.

Should I be getting more than 725Mbps in the real world?
   
   A quick test in my current environment shows this:
   
   [ ID] Interval   Transfer Bandwidth
   [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
   
   Two hosts, connected via Cisco 8-port unmanaged switch, Realtek
   8168e on one host, Atheros Attansic L1 on another.
   
   On the other hand, the same test, Realtek 8139e on one side, but
   with lowly Marvell ARM SOC on the other side shows this:
   
   [ ID] Interval   Transfer Bandwidth
   [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
   
   So - you can, definitely, and yes, it depends.
  
  That last one, would that be limited because of CPU power?
 
 That too. You cannot extract that much juice from a single-core ARM5.
 Another possibility is that Marvell is unable to design a good chipset
 even in the case it would be a matter of life and death :)

That might be why I'm not using the Marvell adapter :) I remember
reading somewhere that either Marvell or Realtek were bad, but I
couldn't remember which one, so I kept using the Realtek one since I
had obviously switched for a reason :)

   Try the same test but use UDP instead of TCP.
  
  Only gives me 1.03Mbits/sec :)
 
 iperf(1) says that by default UDP is capped on 1Mbit. Use -b option on
 client side to set desired bandwidth to 1024m like this:
 
 iperf -c server -u -b 1024M
 
 Note that -b should be the last option. Gives me 812 Mbits/sec with
 the default UDP buffer settings.

Didn't notice that. I get 808, so close to what you get.

   Increase TCP window size (those net.core.rmem/wmem sysctls) on
   both sides.
  
  It is currently 85KB and 85.3KB, what should I try setting them to?
 
 Try these:
 
 net.core.rmem_max = 4194304
 net.core.wmem_max = 1048576

OK, I've set them on both sides, but it doesn't change the results, no
matter what values I give iperf with -w.

   Try increasing MTU above 1500 on both sides.
  
  Likewise, increase to what?
 
 The amount of your NICs support. No way of knowing the maximum unless
 you try. A magic value seems to be 9000. Any value above 1500 is
 non-standard (so nothing is guaranteed)
 
 A case study (that particular NIC claims to support MTU of 9200 in
 dmesg):
 
 # ip l s dev eth0 mtu 1500
 # ip l s dev eth0 mtu 9000
 # ip l s dev eth0 mtu 65536
 RTNETLINK answers: Invalid argument
 
 Of course, for this to work it would require to increase MTU on every
 host between your two, so that's kind of a last resort measure.

Well, both hosts are connected to the same switch (or right now, to the
router, but I could easily put them back on the switch if that
matters). One of the hosts would not accept a value larger than 7152,
but it did have quite an effect: I now get up to 880Mbps :)

Will this setting have an impact on communication with machines where
the MTU is smaller? In other words, will it have a negative impact on
general network performance, or is MTU adjusted automatically?

And what is the appropriate way of setting it permanently - rc.local?

  Also, the machine with the Realtek PCI adapter has a Marvell
  88E8001 on the motherboard, but I haven't used it for years since
  there were once driver problems. Those are probably fixed now, I
  will try that once I can. Didn't think of it before.
 
 If the Marvell NIC uses the sky2 kernel module - I would not even hope.
 Like I said earlier, Marvell is unable to design a good chip even if
 someone's life depended on it.

Then I will keep using the Realtek card :)

Thanks to you, I now get ~880Mbps, which is a lot better. It seems
increasing the MTU was what had the most effect, so I won't bother with
TCP window size.

Petter

-- 
I'm ionized
Are you sure?
I'm positive.




Re: IP performance question

2015-05-24 Thread Reco
 Hi.

On Sun, 24 May 2015 10:36:39 +0200
Petter Adsen pet...@synth.no wrote:

 I've been trying to improve NFS performance at home, and in that
 process I ran iperf to get an overview of general network performance.
 I have two Jessie hosts connected to a dumb switch with Cat-5e. One
 host uses a Realtek RTL8169 PCI controller, and the other has an Intel
 82583V on the motherboard.
 
 iperf maxes out at about 725Mbps. At first I thought maybe the switch
 could be at fault, it's a really cheap one, so I connected both hosts
 to my router instead. Didn't change anything, and it had no significant
 impact on the load on the router. I can't try to run iperf on the
 router (OpenWRT), though, as it maxes out the CPU.
 
 Should I be getting more than 725Mbps in the real world?

A quick test in my current environment shows this:

[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

Two hosts, connected via Cisco 8-port unmanaged switch, Realtek 8168e
on one host, Atheros Attansic L1 on another.

On the other hand, the same test, Realtek 8139e on one side, but with
lowly Marvell ARM SOC on the other side shows this:

[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec

So - you can, definitely, and yes, it depends.


 Could there be a driver issue, or some settings that aren't optimal?

Check your iptables rules, if any - especially the nat and mangle tables.
Try the same test but use UDP instead of TCP.
Increase TCP window size (those net.core.rmem/wmem sysctls) on both
sides.
Try increasing MTU above 1500 on both sides.
Use crossover cable if everything else fails.


 Unfortunately,
 these are the only two hosts I have with Gbit interfaces (except the
 router), so I can't test with another host.
 
 Could this be a MB/MiB issue? The iperf man page doesn't say which it
 reports. (Well, it says Mbit/Mbyte, so I assume it does not mean MiB)

No. (1024*1024*1024 - 1000*1000*1000)/1024/1024 = 70.32.
You could be off by about 70Mbps at most in this scenario, not by 300.


Reco





Re: IP performance question

2015-05-24 Thread Reco
 Hi.

On Sun, 24 May 2015 11:28:36 +0200
Petter Adsen pet...@synth.no wrote:

  On Sun, 24 May 2015 10:36:39 +0200
  Petter Adsen pet...@synth.no wrote:
  
   I've been trying to improve NFS performance at home, and in that
    process I ran iperf to get an overview of general
   performance. I have two Jessie hosts connected to a dumb switch
   with Cat-5e. One host uses a Realtek RTL8169 PCI controller, and
   the other has an Intel 82583V on the motherboard.
   
   iperf maxes out at about 725Mbps. At first I thought maybe the
   switch could be at fault, it's a really cheap one, so I connected
   both hosts to my router instead. Didn't change anything, and it had
   no significant impact on the load on the router. I can't try to run
   iperf on the router (OpenWRT), though, as it maxes out the CPU.
   
   Should I be getting more than 725Mbps in the real world?
  
  A quick test in my current environment shows this:
  
  [ ID] Interval   Transfer Bandwidth
  [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
  
  Two hosts, connected via Cisco 8-port unmanaged switch, Realtek 8168e
  on one host, Atheros Attansic L1 on another.
  
  On the other hand, the same test, Realtek 8139e on one side, but with
  lowly Marvell ARM SOC on the other side shows this:
  
  [ ID] Interval   Transfer Bandwidth
  [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
  
  So - you can, definitely, and yes, it depends.
 
 That last one, would that be limited because of CPU power?

That too. You cannot extract that much juice from a single-core ARM5.
Another possibility is that Marvell is unable to design a good chipset
even if it were a matter of life and death :)


   Could there be a driver issue, or some settings that aren't optimal?
  
  Check your iptables rules if any. Especially nat and mangle tables.
 
 None. iptables are currently disabled on both sides.

It was worth a try :)


  Try the same test but use UDP instead of TCP.
 
 Only gives me 1.03Mbits/sec :)

iperf(1) says that by default UDP is capped at 1 Mbit/s. Use the -b
option on the client side to set the desired bandwidth, e.g. to 1024M,
like this:

iperf -c server -u -b 1024M

Note that -b should be the last option. Gives me 812 Mbits/sec with the
default UDP buffer settings.
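
For completeness, the server side has to be started in UDP mode as
well, or the client has nothing to talk to - roughly (same iperf
version on both ends assumed):

server$ iperf -s -u
client$ iperf -c server -u -b 1024M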


  Increase TCP window size (those net.core.rmem/wmem sysctls) on both
  sides.
 
 It is currently 85KB and 85.3KB, what should I try setting them to?

Try these:

net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
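
To make them stick across reboots, put the same two lines into a file
under /etc/sysctl.d/ (the file name below is arbitrary) and load it
once by hand - a sketch:

# editor /etc/sysctl.d/90-net-tuning.conf
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576

# sysctl -p /etc/sysctl.d/90-net-tuning.conf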


  Try increasing MTU above 1500 on both sides.
 
 Likewise, increase to what?

The amount your NICs support. There is no way of knowing the maximum
unless you try. A magic value seems to be 9000. Any value above 1500 is
non-standard (so nothing is guaranteed).

A case study (that particular NIC claims to support MTU of 9200 in
dmesg):

# ip l s dev eth0 mtu 1500
# ip l s dev eth0 mtu 9000
# ip l s dev eth0 mtu 65536
RTNETLINK answers: Invalid argument

Of course, for this to work you would need to increase the MTU on every
host between your two, so it's kind of a last-resort measure.
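
If a value does stick, it can be made permanent in
/etc/network/interfaces rather than being reapplied by hand - a sketch
(the addresses are illustrative; for a dhcp stanza a post-up line does
the same job):

iface eth0 inet static
    address 192.168.0.105
    netmask 255.255.255.0
    gateway 192.168.0.1
    mtu 7152
    # method-agnostic alternative:
    # post-up ip link set dev $IFACE mtu 7152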


  Use crossover cable if everything else fails.
 
 If I have one. I read somewhere that newer interfaces will
 auto-negotiate if you use a straight cable as a crossover, is that true?

They should. I have not encountered a desktop-class NIC that was unable
to auto-negotiate the cable in the last 15 years. Consumer-grade
routers, on the other hand … shrugs.

 
 Also, the machine with the Realtek PCI adapter has a Marvell 88E8001 on
 the motherboard, but I haven't used it for years since there were once
 driver problems. Those are probably fixed now, I will try that once I
 can. Didn't think of it before.

If the Marvell NIC uses the sky2 kernel module - I would not even hope.
Like I said earlier, Marvell is unable to design a good chip even if
someone's life depended on it.

Reco





Re: IP performance question

2015-05-24 Thread Petter Adsen
On Sun, 24 May 2015 12:02:32 +0300
Reco recovery...@gmail.com wrote:

  Hi.
 
 On Sun, 24 May 2015 10:36:39 +0200
 Petter Adsen pet...@synth.no wrote:
 
  I've been trying to improve NFS performance at home, and in that
   process I ran iperf to get an overview of general
  performance. I have two Jessie hosts connected to a dumb switch
  with Cat-5e. One host uses a Realtek RTL8169 PCI controller, and
  the other has an Intel 82583V on the motherboard.
  
  iperf maxes out at about 725Mbps. At first I thought maybe the
  switch could be at fault, it's a really cheap one, so I connected
  both hosts to my router instead. Didn't change anything, and it had
  no significant impact on the load on the router. I can't try to run
  iperf on the router (OpenWRT), though, as it maxes out the CPU.
  
  Should I be getting more than 725Mbps in the real world?
 
 A quick test in my current environment shows this:
 
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
 
 Two hosts, connected via Cisco 8-port unmanaged switch, Realtek 8168e
 on one host, Atheros Attansic L1 on another.
 
 On the other hand, the same test, Realtek 8139e on one side, but with
 lowly Marvell ARM SOC on the other side shows this:
 
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
 
 So - you can, definitely, and yes, it depends.

That last one, would that be limited because of CPU power?

  Could there be a driver issue, or some settings that aren't optimal?
 
 Check your iptables rules if any. Especially nat and mangle tables.

None. iptables are currently disabled on both sides.

 Try the same test but use UDP instead of TCP.

Only gives me 1.03Mbits/sec :)

 Increase TCP window size (those net.core.rmem/wmem sysctls) on both
 sides.

It is currently 85KB and 85.3KB, what should I try setting them to?

 Try increasing MTU above 1500 on both sides.

Likewise, increase to what?

 Use crossover cable if everything else fails.

If I have one. I read somewhere that newer interfaces will
auto-negotiate if you use a straight cable as a crossover - is that true?

Also, the machine with the Realtek PCI adapter has a Marvell 88E8001 on
the motherboard, but I haven't used it for years since there were once
driver problems. Those are probably fixed now, I will try that once I
can. Didn't think of it before.

  Unfortunately,
  these are the only two hosts I have with Gbit interfaces (except the
  router), so I can't test with another host.
  
  Could this be a MB/MiB issue? The iperf man page doesn't say which
  it reports. (Well, it says Mbit/Mbyte, so I assume it does not
  mean MiB)
 
 No. (1024*1024*1024 - 1000*1000*1000)/1024/1024 = 70.32.
 You can mistake by 70Mbps at most in this scenario, not by 300.

I knew it wouldn't account for 275Mbps, but it could be a portion of it.

Thanks for your response, though, I'm already learning things :)

Petter

-- 
I'm ionized
Are you sure?
I'm positive.




Re: IP performance question

2015-05-24 Thread Petter Adsen
On Sun, 24 May 2015 13:26:52 +0200
Petter Adsen pet...@synth.no wrote:
 Thanks to you, I now get ~880Mbps, which is a lot better. It seems
 increasing the MTU was what had the most effect, so I won't bother
 with TCP window size.

Now, this is a little odd:

petter@monster:/etc$ iperf -i 1 -c fenris -r 

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)


Client connecting to fenris, TCP port 5001
TCP window size:  280 KByte (default)

[  5] local 192.168.0.105 port 49636 connected with 192.168.0.103 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 1.0 sec   104 MBytes   875 Mbits/sec
[  5]  1.0- 2.0 sec  97.8 MBytes   820 Mbits/sec
[  5]  2.0- 3.0 sec   104 MBytes   868 Mbits/sec
[  5]  3.0- 4.0 sec   104 MBytes   876 Mbits/sec
[  5]  4.0- 5.0 sec   104 MBytes   876 Mbits/sec
[  5]  5.0- 6.0 sec  83.0 MBytes   696 Mbits/sec
[  5]  6.0- 7.0 sec   105 MBytes   879 Mbits/sec
[  5]  7.0- 8.0 sec   104 MBytes   875 Mbits/sec
[  5]  8.0- 9.0 sec   105 MBytes   884 Mbits/sec
[  5]  9.0-10.0 sec   104 MBytes   877 Mbits/sec
[  5]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec
[  4] local 192.168.0.105 port 5001 connected with 192.168.0.103 port 34815
[  4]  0.0- 1.0 sec  98.5 MBytes   826 Mbits/sec
[  4]  1.0- 2.0 sec  98.5 MBytes   826 Mbits/sec
[  4]  2.0- 3.0 sec  97.4 MBytes   817 Mbits/sec
[  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
[  4]  4.0- 5.0 sec  98.5 MBytes   827 Mbits/sec
[  4]  5.0- 6.0 sec  98.1 MBytes   823 Mbits/sec
[  4]  6.0- 7.0 sec  98.6 MBytes   827 Mbits/sec
[  4]  7.0- 8.0 sec  98.5 MBytes   826 Mbits/sec
[  4]  8.0- 9.0 sec  98.5 MBytes   827 Mbits/sec
[  4]  9.0-10.0 sec  98.5 MBytes   826 Mbits/sec
[  4]  0.0-10.0 sec   984 MBytes   825 Mbits/sec

I have run it many times, and the results are consistently ~50Mbps
lower in the other direction. MTU is set to 7152 on both hosts, but the
window size is back to the default values (212992).

Petter

-- 
I'm ionized
Are you sure?
I'm positive.




Re: IP performance question

2015-05-24 Thread Reco
 Hi.

On Sun, 24 May 2015 13:26:52 +0200
Petter Adsen pet...@synth.no wrote:

  On Sun, 24 May 2015 11:28:36 +0200
  Petter Adsen pet...@synth.no wrote:
  
On Sun, 24 May 2015 10:36:39 +0200
Petter Adsen pet...@synth.no wrote:

 I've been trying to improve NFS performance at home, and in that
  process I ran iperf to get an overview of general
 performance. I have two Jessie hosts connected to a dumb switch
 with Cat-5e. One host uses a Realtek RTL8169 PCI controller, and
 the other has an Intel 82583V on the motherboard.
 
 iperf maxes out at about 725Mbps. At first I thought maybe the
 switch could be at fault, it's a really cheap one, so I
 connected both hosts to my router instead. Didn't change
 anything, and it had no significant impact on the load on the
 router. I can't try to run iperf on the router (OpenWRT),
 though, as it maxes out the CPU.
 
 Should I be getting more than 725Mbps in the real world?

A quick test in my current environment shows this:

[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

Two hosts, connected via Cisco 8-port unmanaged switch, Realtek
8168e on one host, Atheros Attansic L1 on another.

On the other hand, the same test, Realtek 8139e on one side, but
with lowly Marvell ARM SOC on the other side shows this:

[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec

So - you can, definitely, and yes, it depends.
   
   That last one, would that be limited because of CPU power?
  
  That too. You cannot extract that much juice from a single-core ARM5.
  Another possibility is that Marvell is unable to design a good chipset
  even in the case it would be a matter of life and death :)
 
 That might be why I'm not using the Marvell adapter :) I remember
 reading somewhere that either Marvell or Realtek were bad, but I
 couldn't remember which one, so I kept using the Realtek one since I
 had obviously switched for a reason :)

Both are, actually. Realtek *was* good at least 5 years ago, but since
then they have managed to introduce multiple chips that are all handled
by the same r8169 kernel module, so it has become a matter of luck:
either your NIC works flawlessly without any firmware (mine does), or
you get all kinds of weird glitches.


Try the same test but use UDP instead of TCP.
   
   Only gives me 1.03Mbits/sec :)
  
  iperf(1) says that by default UDP is capped on 1Mbit. Use -b option on
  client side to set desired bandwidth to 1024m like this:
  
  iperf -c server -u -b 1024M
  
  Note that -b should be the last option. Gives me 812 Mbits/sec with
  the default UDP buffer settings.
 
 Didn't notice that. I get 808, so close to what you get.

Good. The only thing left to do is to apply that 'udp' flag to NFS
clients, and you're set. Just don't mix it with 'async' flag, as Bad
Things ™ can happen if you do so (see nfs(5) for the gory details).
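
A sketch of what that looks like in /etc/fstab (export path and mount
point are made up; note this implies NFSv3, since NFSv4 is TCP-only):

fenris:/srv/export  /mnt/fenris  nfs  udp,sync,hard  0  0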


Increase TCP window size (those net.core.rmem/wmem sysctls) on
both sides.
   
   It is currently 85KB and 85.3KB, what should I try setting them to?
  
  Try these:
  
  net.core.rmem_max = 4194304
  net.core.wmem_max = 1048576
 
 OK, I've set them on both sides, but it doesn't change the results, no
 matter what values I give iperf with -w.

Now that's weird. I picked those sysctl values from one of NFS
performance tuning guides. Maybe I misunderstood something.

 
Try increasing MTU above 1500 on both sides.
   
   Likewise, increase to what?
  
  The amount of your NICs support. No way of knowing the maximum unless
  you try. A magic value seems to be 9000. Any value above 1500 is
  non-standard (so nothing is guaranteed)
  
  A case study (that particular NIC claims to support MTU of 9200 in
  dmesg):
  
  # ip l s dev eth0 mtu 1500
  # ip l s dev eth0 mtu 9000
  # ip l s dev eth0 mtu 65536
  RTNETLINK answers: Invalid argument
  
  Of course, for this to work it would require to increase MTU on every
  host between your two, so that's kind of a last resort measure.
 
 Well, both hosts are connected to the same switch (or right now, to the
 router, but I could easily put them back on the switch if that
 matters). One of the hosts would not accept a value larger than 7152,
 but it did have quite an effect: I now get up to 880Mbps :)

Consider yourself lucky as MTU experiments on server hardware usually
lead to a long trip to a datacenter :)


 Will this setting have an impact on communication with machines where
 the MTU is smaller? In other words, will it have a negative impact on
 general network performance, or is MTU adjusted automatically?

Well, back in the old days it was simple. Hardware simply rejected all
frames whose MTU exceeded its setting (i.e. 1500).
Since then they introduced all those smart switches which presumably
fragment such frames to their MTU (i.e. 1 big 

Re: IP performance question

2015-05-24 Thread Reco
 Hi.

On Sun, 24 May 2015 13:47:48 +0200
Petter Adsen pet...@synth.no wrote:

 On Sun, 24 May 2015 13:26:52 +0200
 Petter Adsen pet...@synth.no wrote:
  Thanks to you, I now get ~880Mbps, which is a lot better. It seems
  increasing the MTU was what had the most effect, so I won't bother
  with TCP window size.
 
 Now, this is a little odd:
 
 petter@monster:/etc$ iperf -i 1 -c fenris -r 
 
 Server listening on TCP port 5001
 TCP window size: 85.3 KByte (default)
 
 
 Client connecting to fenris, TCP port 5001
 TCP window size:  280 KByte (default)
 
 [  5] local 192.168.0.105 port 49636 connected with 192.168.0.103 port
 5001 [ ID] Interval   Transfer Bandwidth
 [  5]  0.0- 1.0 sec   104 MBytes   875 Mbits/sec
 [  5]  1.0- 2.0 sec  97.8 MBytes   820 Mbits/sec
 [  5]  2.0- 3.0 sec   104 MBytes   868 Mbits/sec
 [  5]  3.0- 4.0 sec   104 MBytes   876 Mbits/sec
 [  5]  4.0- 5.0 sec   104 MBytes   876 Mbits/sec
 [  5]  5.0- 6.0 sec  83.0 MBytes   696 Mbits/sec
 [  5]  6.0- 7.0 sec   105 MBytes   879 Mbits/sec
 [  5]  7.0- 8.0 sec   104 MBytes   875 Mbits/sec
 [  5]  8.0- 9.0 sec   105 MBytes   884 Mbits/sec
 [  5]  9.0-10.0 sec   104 MBytes   877 Mbits/sec
 [  5]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec
 [  4] local 192.168.0.105 port 5001 connected with 192.168.0.103 port
 34815 [  4]  0.0- 1.0 sec  98.5 MBytes   826 Mbits/sec
 [  4]  1.0- 2.0 sec  98.5 MBytes   826 Mbits/sec
 [  4]  2.0- 3.0 sec  97.4 MBytes   817 Mbits/sec
 [  4]  3.0- 4.0 sec  98.0 MBytes   822 Mbits/sec
 [  4]  4.0- 5.0 sec  98.5 MBytes   827 Mbits/sec
 [  4]  5.0- 6.0 sec  98.1 MBytes   823 Mbits/sec
 [  4]  6.0- 7.0 sec  98.6 MBytes   827 Mbits/sec
 [  4]  7.0- 8.0 sec  98.5 MBytes   826 Mbits/sec
 [  4]  8.0- 9.0 sec  98.5 MBytes   827 Mbits/sec
 [  4]  9.0-10.0 sec  98.5 MBytes   826 Mbits/sec
 [  4]  0.0-10.0 sec   984 MBytes   825 Mbits/sec
 
 I have run it many times, and the results are consistently ~50Mbps
 lower in the other direction. MTU is set to 7152 on both hosts, but the
 window size is back to the default values (212992).

Hmm. A first thought is that you have a different TCP window size on
the client and the server.
And a second thought is that you should probably check the interface
statistics with ifconfig or 'ip -s link show'. Every packet counted as
anything other than plain RX or TX means trouble.
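
Roughly what to look at (the exact column layout varies a little
between iproute2 versions):

# ip -s link show dev eth0
    ...
    RX: bytes  packets  errors  dropped  overrun mcast
    ...
    TX: bytes  packets  errors  dropped  carrier collsns
    ...

Anything steadily growing in the errors/dropped/overrun/carrier
columns points at a problem; on a healthy link only the bytes and
packets counters should be increasing.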

Reco

