On 2.12.2013. 10:05, Andy wrote:
> Hmm surprised by that!
> 
> Henning, could you please confirm for us if the 32bit bandwidth limit
> was lifted in the new queuing subsystem, or if it is just still in place
> whilst dual-running the new and the old?
> 
> I guess considering Hrvoje's findings the limit is still in place until
> ALTQ is removed completely in 5.5??
> 
> Cheers, Andy.
> 



Hi,
the second ix (82599) card is here and I have directly connected the two servers.

With kern.pool_debug=0, net.inet.ip.ifq.maxlen=1024 and mtu 16110 on the ix
cards, the baseline bandwidth without any queues is ~7Gbps.
tcpbench runs with -B 262144 -S 262144.

pf.conf:
set skip on lo
block
pass


10Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 10G max 10G
queue bulk@ix0 parent queue@ix0 bandwidth 10G default

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 1G default qlimit 50

tcpbench shows 1404Mbps
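
That 1404Mbps looks suspiciously like 10G squeezed through a 32-bit field:
10,000,000,000 truncated to 32 bits is 1,410,065,408, i.e. ~1.41G. A minimal
sketch of that assumption (just the arithmetic, I haven't checked the pf code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* assumption: the configured bandwidth (bits/s) ends up in a
     * 32-bit field somewhere, so values above 2^32-1 wrap around */
    uint64_t configured = 10ULL * 1000 * 1000 * 1000;   /* 10G */
    uint32_t truncated  = (uint32_t)configured;

    printf("%llu -> %u (~%.2fG)\n",
        (unsigned long long)configured, truncated, truncated / 1e9);
    /* prints: 10000000000 -> 1410065408 (~1.41G) */
    return 0;
}

That is right about the 1404Mbps tcpbench gets, and pfctl apparently just
rounds the same value down to the 1G it prints.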


9Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 9G max 9G
queue bulk@ix0 parent queue@ix0 default bandwidth 9G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 410M, max 410M qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 410M default qlimit 50

tcpbench shows 206Mbps


8Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 8G max 8G
queue bulk@ix0 parent queue@ix0 default bandwidth 8G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 3G, max 3G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default qlimit 50

tcpbench shows 3690Mbps


7Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 7G max 7G
queue bulk@ix0 parent queue@ix0 default bandwidth 7G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 2G, max 2G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default qlimit 50

tcpbench shows 2695Mbps


6Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 6G max 6G
queue bulk@ix0 parent queue@ix0 default bandwidth 6G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 1G default qlimit 50

tcpbench shows 1699Mbps


5Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 5G max 5G
queue bulk@ix0 parent queue@ix0 default bandwidth 5G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 705M, max 705M qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 705M default qlimit 50

tcpbench shows 218Mbps
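
The same pattern holds for every size above 4G: what pfctl prints matches the
configured bandwidth reduced modulo 2^32, and tcpbench mostly tracks that
truncated figure (the 9G and 5G runs come in even lower for some reason).
A quick check of that assumption for all the sizes tested so far:

#include <stdint.h>
#include <stdio.h>

#define GBIT 1000000000ULL

int main(void)
{
    /* configured queue bandwidths from the tests above, in bits/s */
    uint64_t cfg[] = { 10*GBIT, 9*GBIT, 8*GBIT, 7*GBIT, 6*GBIT, 5*GBIT, 4*GBIT };
    size_t i;

    for (i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++) {
        uint32_t seen = (uint32_t)cfg[i];   /* what a 32-bit field would hold */
        printf("%2lluG -> %10u (~%.2fG)%s\n",
            (unsigned long long)(cfg[i] / GBIT), (unsigned)seen, seen / 1e9,
            cfg[i] > UINT32_MAX ? "  (wrapped)" : "");
    }
    return 0;
}

The truncated values line up with the pfctl -vsq output above (410M for 9G,
705M for 5G, and the whole-G figures rounded down), and 4G is the first size
that still fits in 32 bits, which would explain why everything from 4G down
behaves, as the following results show.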


4Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 4G max 4G
queue bulk@ix0 parent queue@ix0 bandwidth 4G default

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 4G, max 4G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 4G default qlimit 50

tcpbench shows 3986Mbps, which is 99.65% of 4000Mbps.
Could this 0.35%, or 14Mbps, bandwidth loss be interpreted as queue overhead?
If yes, then this is wonderful :)


3Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 3G max 3G
queue bulk@ix0 parent queue@ix0 default bandwidth 3G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 3G, max 3G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 3G default qlimit 50

tcpbench shows 2988Mbps, which is 0.40% bandwidth loss.


2Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 2G max 2G
queue bulk@ix0 parent queue@ix0 default bandwidth 2G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 2G, max 2G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 2G default qlimit 50

tcpbench shows 1993Mbps, which is 0.35% bandwidth loss.


1Gbps queue:
pf.conf
queue queue@ix0 on ix0 bandwidth 1G max 1G
queue bulk@ix0 parent queue@ix0 default bandwidth 1G

pfctl -vsq
queue queue@ix0 on ix0 bandwidth 1G, max 1G qlimit 50
queue bulk@ix0 parent queue@ix0 on ix0 bandwidth 1G default qlimit 50

tcpbench shows 996Mbps, which is 0.4% bandwidth loss.
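
The loss against the configured bandwidth stays below half a percent for all
four non-wrapping sizes. A trivial check of those percentages (numbers copied
from the runs above):

#include <stdio.h>

int main(void)
{
    /* configured vs. measured bandwidth in Mbps, from the runs above */
    const double runs[][2] = {
        { 4000, 3986 }, { 3000, 2988 }, { 2000, 1993 }, { 1000, 996 },
    };
    size_t i;

    for (i = 0; i < sizeof(runs) / sizeof(runs[0]); i++) {
        double loss = (runs[i][0] - runs[i][1]) / runs[i][0] * 100.0;
        printf("%4.0fMbps queue: %4.0fMbps measured, %.2f%% loss\n",
            runs[i][0], runs[i][1], loss);
    }
    return 0;
}

So the queueing overhead looks pretty constant at roughly 0.35-0.40%, which
would be a very nice result if that interpretation is right.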
