On 03/01/2014 02:35 PM, Alkis Georgopoulos wrote:
> I think that the best solution would be to limit the rate of the data
> that the server sends to the clients (X, NBD, SSHFS/NFS...) to e.g. 90
> Mbps per client at the TCP level, probably using iptables and tc.
>


Yup, success! I verified that this approach works fine.
So we no longer need to throw away our D-Link switches and our Atheros 
cards that don't support disabling Ethernet flow control! :)
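
As an aside, on NICs/drivers that do support it, the pause-frame (flow 
control) settings can be checked and turned off with ethtool; this is 
just a generic example, not part of the workaround itself:

# Show the current flow control (pause frame) settings of eth0
ethtool -a eth0
# Try to turn flow control off (fails on NICs/drivers that can't do it)
ethtool -A eth0 autoneg off rx off tx off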

With the defaults I got a transmission rate of 12 Mbps per LTSP client 
(testing with 3 clients), because flow control kept pausing the server:

[  6] local port 5001 connected with 10.161.254.234 port 44008
[  4] local port 5001 connected with 10.161.254.243 port 32924
[  5] local port 5001 connected with 10.161.254.245 port 44006
[  4]  0.0-10.0 sec   112 MBytes  94.0 Mbits/sec
[  6]  0.0-10.1 sec   113 MBytes  94.1 Mbits/sec
[  5]  0.0-10.0 sec   112 MBytes  93.9 Mbits/sec
[  4] local port 39956 connected with 10.161.254.243 port 5001
[  5] local port 38365 connected with 10.161.254.234 port 5001
[  6] local port 40314 connected with 10.161.254.245 port 5001
[  4]  0.0-10.1 sec  15.4 MBytes  12.7 Mbits/sec
[  5]  0.0-10.1 sec  14.8 MBytes  12.2 Mbits/sec
[  6]  0.0-10.2 sec  15.0 MBytes  12.3 Mbits/sec
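
For reference, output like the above comes from running iperf in server 
mode on the LTSP server and pointing all clients at it at the same time; 
something like this (the exact flags may differ):

# On the LTSP server:
iperf -s
# On each client, started at roughly the same time ("-r" first sends
# client -> server, then measures server -> client, which is the
# direction that flow control hurts):
iperf -c <server-IP> -r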


After limiting the transmission rate to 55 Mbps per client with `tc`, I 
got 45 Mbps per client, almost four times better:

[  7] local port 5001 connected with 10.161.254.234 port 44006
[  4] local port 5001 connected with 10.161.254.245 port 44004
[  5] local port 5001 connected with 10.161.254.243 port 32922
[  5]  0.0-10.0 sec   112 MBytes  93.9 Mbits/sec
[  7]  0.0-10.0 sec   113 MBytes  94.1 Mbits/sec
[  5] local port 39670 connected with 10.161.254.243 port 5001
[  4]  0.0-10.0 sec   112 MBytes  93.9 Mbits/sec
[  7] local port 38079 connected with 10.161.254.234 port 5001
[  4] local port 40028 connected with 10.161.254.245 port 5001
[  5]  0.0-10.0 sec  54.0 MBytes  45.1 Mbits/sec
[  7]  0.0-10.0 sec  54.0 MBytes  45.1 Mbits/sec
[  4]  0.0-10.0 sec  54.0 MBytes  45.1 Mbits/sec


I'm including the demo code I used; it still needs some work so that we 
don't have to list each client IP by hand (see the sketch after the script):

#!/bin/sh
# Demo: cap the server's transmit rate per LTSP client with tc/CBQ.

DEV=eth0
RATE=55   # per-client cap, in Mbps

# Drop any existing root qdisc (ignore the error if there is none),
# then attach a CBQ root qdisc.
tc qdisc del dev "$DEV" root 2>/dev/null
tc qdisc add dev "$DEV" root handle 1: cbq avpkt 1000 bandwidth 1gbit

# One bounded, isolated class per client, each limited to RATE Mbps.
tc class add dev "$DEV" parent 1: classid 1:1 cbq rate ${RATE}mbit allot 1500 prio 5 bounded isolated
tc class add dev "$DEV" parent 1: classid 1:2 cbq rate ${RATE}mbit allot 1500 prio 5 bounded isolated
tc class add dev "$DEV" parent 1: classid 1:3 cbq rate ${RATE}mbit allot 1500 prio 5 bounded isolated

# Send the traffic destined to each client IP into its own class.
tc filter add dev "$DEV" parent 1: protocol ip prio 16 u32 match ip dst 10.161.254.243 flowid 1:1
tc filter add dev "$DEV" parent 1: protocol ip prio 16 u32 match ip dst 10.161.254.245 flowid 1:2
tc filter add dev "$DEV" parent 1: protocol ip prio 16 u32 match ip dst 10.161.254.234 flowid 1:3
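
As a rough sketch of what I mean by "improved": the classes and filters 
can be generated in a loop over the client IPs, so nothing has to be 
listed by hand. The CLIENTS list below is just an example; ideally it 
would come from the LTSP/DHCP range:

#!/bin/sh
# Sketch: one class + filter per client, generated from a list of IPs.
DEV=eth0
RATE=55
CLIENTS="10.161.254.234 10.161.254.243 10.161.254.245"

tc qdisc del dev "$DEV" root 2>/dev/null
tc qdisc add dev "$DEV" root handle 1: cbq avpkt 1000 bandwidth 1gbit

i=0
for ip in $CLIENTS; do
    i=$((i + 1))
    # Note: class minor ids are hexadecimal; with more than 9 clients
    # use something like $(printf '%x' "$i") instead of $i.
    tc class add dev "$DEV" parent 1: classid 1:$i cbq rate ${RATE}mbit allot 1500 prio 5 bounded isolated
    tc filter add dev "$DEV" parent 1: protocol ip prio 16 u32 match ip dst "$ip" flowid 1:$i
done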
