The calculations make sense. I think my first attempt to test this would
be to run parallel netperfs (one netperf per class) and look at the
output of 'shorewall show tc'. That will show you what the actual speeds
are. I don't recall if netperf provides any latency data.
Well, I did quite a bit of testing in the past couple of hours, but I am, quite frankly, unimpressed!

I used netperf's biggest brother - iperf - instead.

The values of dmax and umax do not seem to have any effect on the net speed whatsoever - at least, that is what the tests indicate. The speed limits are very strictly observed, and so is the priority of each class, but the results are roughly the same regardless of whether dmax:umax have been specified in tcclasses (see the attached netspeed-tests.txt).

I did run "shorewall show tc eth0" (as this was the device I was testing on) after each test, but there was no speed indication there - just the number of packets passed through each class, which the iperf output already shows anyway.
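Since "shorewall show tc" only prints cumulative packet/byte counters, one way to get actual per-class speeds is to sample the byte counters twice and divide by the interval. A rough sketch - the classid 1:14, the device eth0 and the 10 s interval are assumptions, so adjust them to whatever "tc -s class show dev eth0" actually reports on your box:

```shell
# Convert a byte-counter delta over an interval into kbit/s.
rate_kbit() {   # $1 = bytes before, $2 = bytes after, $3 = interval (s)
    echo $(( ($2 - $1) * 8 / $3 / 1000 ))
}

# Only attempt the live sampling where tc is actually available.
if command -v tc >/dev/null 2>&1; then
    b0=$(tc -s class show dev eth0 | awk '/class .* 1:14 /{f=1} f && /Sent/{print $2; exit}')
    sleep 10
    b1=$(tc -s class show dev eth0 | awk '/class .* 1:14 /{f=1} f && /Sent/{print $2; exit}')
    echo "class 1:14: $(rate_kbit "$b0" "$b1" 10) kbit/s"
fi
```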
10.1.0.3 (client) -> 10.1.0.7 (server)

tcclasses
a:13    - 320kbit               320kbit 2
a:13:14 - 120kbit:60ms:1500b    full    2
a:13:15 - 80kbit:100ms:1500b    full    3
a:13:16 - 80kbit:224ms:1500b    full    4
a:13:17 - 40kbit:374ms:1500b    full    5
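For what it's worth, my reading of the dmax:umax pair (treat this as an assumption, not gospel) is that it defines the HFSC real-time service curve - "umax bytes must be sent within dmax" - which would govern initial latency rather than the steady-state rate iperf measures. A quick sanity check of the implied burst rate for class a:13:14:

```shell
# Implied burst rate for a:13:14 (umax=1500 bytes, dmax=60 ms),
# assuming the pair means "1500 bytes within 60 ms":
# 1500 bytes * 8 bits / 60 ms = 200 bits/ms = 200 kbit/s.
echo $(( 1500 * 8 / 60 ))    # kbit/s -> 200
```

If that reading is right, it would explain why removing dmax:umax leaves the 60-second iperf bandwidth numbers essentially unchanged.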

tcrules
a:13 - 10.1.0.7 tcp 10013
a:13 - 10.1.0.7 udp 10013
a:14 - 10.1.0.7 tcp 10014
a:14 - 10.1.0.7 udp 10014
a:15 - 10.1.0.7 tcp 10015   
a:15 - 10.1.0.7 udp 10015   
a:16 - 10.1.0.7 tcp 10016   
a:16 - 10.1.0.7 udp 10016   
a:17 - 10.1.0.7 tcp 10017   
a:17 - 10.1.0.7 udp 10017   


iperf tests and results:
   
1. Class 13 (main parent) test - class speed set @ 320kbit/s
   (TCP) (Server) iperf -f k -p 10013 -B 10.1.0.7 -s
   (TCP) (Client) iperf -c 10.1.0.7 -p 10013 -f k -t 60 
   (UDP) (Server) iperf -f k -p 10013 -B 10.1.0.7 -s -u
   (UDP) (Client) iperf -c 10.1.0.7 -p 10013 -f k -t 60 -u -b 400k

         (Result) Interval       Transfer     Bandwidth      Jitter      Lost/Total Datagrams
   (TCP) (Result) 0.0-216.4 sec  7936 KBytes  300 Kbits/sec
   (UDP) (Result) 0.0-61.1 sec   2320 KBytes  311 Kbits/sec  36.268 ms   145/1616 (9%)

2. Classes 14, 15, 16 & 17 simultaneous tests
   (TCP) (Server) iperf -f k -p 10014 -B 10.1.0.7 -s
   (TCP) (Server) iperf -f k -p 10015 -B 10.1.0.7 -s
   (TCP) (Server) iperf -f k -p 10016 -B 10.1.0.7 -s
   (TCP) (Server) iperf -f k -p 10017 -B 10.1.0.7 -s
   (TCP) (Client) iperf -c 10.1.0.7 -p 10014 -f k -t 60 -w 16k &
                  iperf -c 10.1.0.7 -p 10015 -f k -t 60 -w 16k &
                  iperf -c 10.1.0.7 -p 10016 -f k -t 60 -w 16k &
                  iperf -c 10.1.0.7 -p 10017 -f k -t 60 -w 16k
   (UDP) (Server) iperf -f k -p 10014 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10015 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10016 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10017 -B 10.1.0.7 -s -u
   (UDP) (Client) iperf -c 10.1.0.7 -p 10014 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10015 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10016 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10017 -f k -t 60 -w 16k -u -b 400k
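The four parallel clients can also be launched with a loop, which makes it easier to vary ports or flags between runs (same iperf options as in the test above; the command -v guard is only there so the sketch degrades gracefully where iperf is not installed):

```shell
# Launch one UDP iperf client per class port, all in parallel.
for port in 10014 10015 10016 10017; do
    command -v iperf >/dev/null 2>&1 && \
        iperf -c 10.1.0.7 -p "$port" -f k -t 60 -w 16k -u -b 400k &
done
wait
echo "all clients finished"
```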

         (Result) Interval      Transfer     Bandwidth        Jitter       Lost/Total Datagrams
   (TCP) (Res-14) 0.0-74.9 sec  1024 KBytes  112.0 Kbits/sec
   (TCP) (Res-15) 0.0-83.3 sec   768 KBytes   75.6 Kbits/sec
   (TCP) (Res-16) 0.0-83.3 sec   768 KBytes   75.5 Kbits/sec
   (TCP) (Res-17) 0.0-87.6 sec   512 KBytes   47.9 Kbits/sec
   (UDP) (Res-14) 0.0-61.4 sec   876 KBytes  117.0 Kbits/sec  209.105 ms   48/609 (7.9%)
   (UDP) (Res-15) 0.0-61.9 sec   589 KBytes   77.8 Kbits/sec  385.135 ms   48/409 (12%)
   (UDP) (Res-16) 0.0-62.0 sec   589 KBytes   77.8 Kbits/sec  378.021 ms   48/409 (12%)
   (UDP) (Res-17) 0.0-62.9 sec   307 KBytes   40.0 Kbits/sec  1392.421 ms  48/213 (23%)

3. Classes 14, 15, 16 & 17 simultaneous tests (dmax:umax values for classes 16 & 17 have been removed)
   (UDP) (Server) iperf -f k -p 10014 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10015 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10016 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10017 -B 10.1.0.7 -s -u
   (UDP) (Client) iperf -c 10.1.0.7 -p 10014 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10015 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10016 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10017 -f k -t 60 -w 16k -u -b 400k

         (Result) Interval      Transfer     Bandwidth        Jitter       Lost/Total Datagrams
   (UDP) (Res-14) 0.0-61.4 sec  876 KBytes   117.0 Kbits/sec  209.139 ms   48/609 (7.9%)
   (UDP) (Res-15) 0.0-61.9 sec  589 KBytes    77.8 Kbits/sec  467.770 ms   48/409 (12%)
   (UDP) (Res-16) 0.0-62.0 sec  589 KBytes    77.8 Kbits/sec  378.150 ms   48/409 (12%)
   (UDP) (Res-17) 0.0-62.9 sec  307 KBytes    40.0 Kbits/sec  1400.246 ms  48/213 (23%)

4. Classes 14, 15, 16 & 17 simultaneous tests (dmax:umax values for ALL classes have been removed)
   (UDP) (Server) iperf -f k -p 10014 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10015 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10016 -B 10.1.0.7 -s -u
   (UDP) (Server) iperf -f k -p 10017 -B 10.1.0.7 -s -u
   (UDP) (Client) iperf -c 10.1.0.7 -p 10014 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10015 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10016 -f k -t 60 -w 16k -u -b 400k &
                  iperf -c 10.1.0.7 -p 10017 -f k -t 60 -w 16k -u -b 400k

         (Result) Interval      Transfer     Bandwidth        Jitter       Lost/Total Datagrams
   (UDP) (Res-14) 0.0-61.4 sec  874 KBytes   117.0 Kbits/sec  199.642 ms   48/608 (7.9%)
   (UDP) (Res-15) 0.0-62.0 sec  589 KBytes    77.8 Kbits/sec  338.291 ms   48/409 (12%)
   (UDP) (Res-16) 0.0-62.0 sec  589 KBytes    77.8 Kbits/sec  338.802 ms   48/409 (12%)
   (UDP) (Res-17) 0.0-62.6 sec  299 KBytes    40.0 Kbits/sec  1334.606 ms   4/212 (1.9%)

------------------------------------------------------------------------------
_______________________________________________
Shorewall-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/shorewall-users
