On 2010-11-18 16:52, David Coppa wrote:
> 2010/11/18 RLW <seran...@o2.pl>:
>
>> there is no tcpbench in packages for 4.8 and for debian linux
>
> Because it's in base: /usr/bin/tcpbench
>
> ciao,
> david



I removed the Intel NIC and ran the test on the integrated Broadcom Gbit NIC, to see whether the problem lies with the em(4) driver.

bge0 at pci1 dev 11 function 0 "Broadcom BCM5705K" rev 0x03, BCM5705 A3 (0x3003): apic 1 int 16 (irq 5), address XX:XX:XX:XX:XX:XX
brgphy0 at bge0 phy 1: BCM5705 10/100/1000baseT PHY, rev. 2


1.
pf enabled, queue 950 Mbit, qlimit 500
iperf test: 410 Mbits/sec
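
For reference, the queue setup in these tests boils down to an ALTQ ruleset along these lines (a minimal sketch from memory, so treat the exact rules as assumptions; the bandwidth, qlimit and CBQ scheduler match what systat shows below):

  # pf.conf sketch: 1 Gbit CBQ root with one 950 Mbit default queue, qlimit 500
  altq on bge0 cbq bandwidth 1000Mb queue { q_lan }
  queue q_lan bandwidth 950Mb qlimit 500 cbq(default)
  pass out on bge0 keep state queue q_lan

The iperf number comes from a plain client/server run (iperf -s on the receiving box, iperf -c <address> on the sender).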

r...@router-test (/root)# top

load averages:  0.95,  0.53,  0.26
23 processes:  1 running, 21 idle, 1 on processor
CPU states: 1.2% user, 0.0% nice, 84.4% system, 14.4% interrupt, 0.0% idle
Memory: Real: 8972K/42M act/tot  Free: 443M  Swap: 0K/759M used/tot


2. Test made between two OpenBSD 4.8 boxes (there is no tcpbench for Debian)
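
The tcpbench numbers were taken the usual way, one box listening and the other sending (the address is just an example):

  # on the receiving box
  $ tcpbench -s
  # on the sending box
  $ tcpbench 192.168.1.10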


transfers by tcpbench:

Conn:   1 Mbps:      399.972 Peak Mbps:      406.093 Avg Mbps:      399.972
      133996       45932008      370.419  100.00%
Conn:   1 Mbps:      370.419 Peak Mbps:      406.093 Avg Mbps:      370.419
      134999       46833528      373.920  100.00%
Conn:   1 Mbps:      373.920 Peak Mbps:      406.093 Avg Mbps:      373.920
      136074       43531224      323.953  100.00%
Conn:   1 Mbps:      323.953 Peak Mbps:      406.093 Avg Mbps:      323.953
      137002       41013960      353.950  100.00%
Conn:   1 Mbps:      353.950 Peak Mbps:      406.093 Avg Mbps:      353.950
      137996       50500448      406.442  100.00%
Conn:   1 Mbps:      406.442 Peak Mbps:      406.442 Avg Mbps:      406.442


r...@router-test (/root)# top (while running tcpbench)

load averages:  1.26,  0.80,  0.49
22 processes:  1 running, 20 idle, 1 on processor
CPU states: 0.0% user, 0.0% nice, 77.2% system, 15.6% interrupt, 7.2% idle
Memory: Real: 8752K/43M act/tot  Free: 442M  Swap: 0K/759M used/tot


r...@router-test (/root)# systat queue (while running tcpbench)

2 users    Load 0.82 0.69 0.51                      Thu Nov 18 17:13:10 2010

QUEUE         BW   SCH  PR   PKTS  BYTES  DROP_P  DROP_B  QLEN  BORR  SUSP  P/S  B/S
root_bge0   1000M  cbq   0  7300K    10G       0       0     0     0     0  314  47M
 q_lan       950M  cbq      7300K    10G       0       0     0     0     0  314  47M

----

Now back on Intel NIC

1.
pf enabled, queue 950 Mbit, qlimit 500
iperf test: 347 Mbits/sec
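
Same assumed ruleset sketch as above, only with em0 as the root interface (matching the root_em0 queue in the systat output below):

  altq on em0 cbq bandwidth 1000Mb queue { q_lan }
  queue q_lan bandwidth 950Mb qlimit 500 cbq(default)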


2.
transfers by tcpbench:

Conn:   1 Mbps:      328.701 Peak Mbps:      336.374 Avg Mbps:      328.701
       29002       41936224      335.490  100.00%
Conn:   1 Mbps:      335.490 Peak Mbps:      336.374 Avg Mbps:      335.490
       30001       41394096      331.484  100.00%
Conn:   1 Mbps:      331.484 Peak Mbps:      336.374 Avg Mbps:      331.484
       30999       39930144      320.402  100.00%
Conn:   1 Mbps:      320.402 Peak Mbps:      336.374 Avg Mbps:      320.402
       32003       42171560      336.363  100.00%
Conn:   1 Mbps:      336.363 Peak Mbps:      336.374 Avg Mbps:      336.363
       33001       41970888      336.440  100.00%
Conn:   1 Mbps:      336.440 Peak Mbps:      336.440 Avg Mbps:      336.440
       34002       38258208      305.760  100.00%
Conn:   1 Mbps:      305.760 Peak Mbps:      336.440 Avg Mbps:      305.760


r...@router-test (/root)# top (while running tcpbench)

load averages:  1.20,  0.59,  0.25
24 processes:  1 running, 22 idle, 1 on processor
CPU states: 0.2% user, 0.0% nice, 75.6% system, 21.2% interrupt, 3.0% idle
Memory: Real: 8904K/43M act/tot  Free: 442M  Swap: 0K/759M used/tot


r...@router-test (/root)# systat queue  (while running tcpbench)
2 users    Load 0.57 0.54 0.28                      Thu Nov 18 17:25:26 2010

QUEUE         BW   SCH  PR   PKTS  BYTES  DROP_P  DROP_B  QLEN  BORR  SUSP  P/S  B/S
root_em0    1000M  cbq   0  2963K  4381M       0       0     0     0     0  279  42M
 q_lan       950M  cbq      2963K  4381M       0       0     0     0     0  279  42M

----

So: the same machine, a different NIC, and different testing programs, yet the same behavior (~50% of the defined queue speed, very high CPU usage).

The only thing I can try now is moving the Intel NIC and the HDD to another computer with a PCIe slot and running the test there, to see whether it is a hardware (motherboard) problem.


----
best regards,
RLW
