Lennert:

> Did you just look at 'top' for determining this?  Time spent in
> interrupt context isn't counted in that figure :-)
 I used 'vmstat'.
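
For reference, I sampled the load on br-machine roughly like this (only a
sketch; note that vmstat reads /proc/stat too, so on 2.4 the interrupt-time
accounting may have the same caveat as 'top'):

        # sample CPU usage on br-machine once per second during a test;
        # the us/sy/id columns are user/system/idle percentages
        vmstat 1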

Apologies in advance for the rather long mail.

Here are some results; I used both 'iperf' and 'nttcp'. Before you read
further into the details: I got around 520-580 Mbits/sec in both UDP and
TCP (with both iperf and nttcp). Also note that one of my end machines is
running FreeBSD 4.5.

Please read the /Questions/ at the end of the results.

My observation during testing was that the Linux box could not push more
than 580 Mbits/sec. (All machines have Intel 82543 gigabit cards, a P-III
1.26 GHz, a 133 MHz bus and 1 GB of memory.)

Setup: (/nttcp/)
        nttcp sender    : test-machine  (OS: Linux-2.4.18)
        bridge          : br-machine    (OS: Linux-2.4.18)
        nttcp receiver  : test-machine2 (OS: FreeBSD 4.5) // Please Note
        l = local, 1 = remote (receiver)
        nttcp man page:
        http://www.leo.org/~elmar/nttcp/nttcp.1.html
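
In case it helps to reproduce, the invocations were roughly as follows
(only a sketch; here I start the receiver by hand with -r rather than via
the nttcp inetd service):

        # receiver side (test-machine2), UDP, 4096-byte buffers
        nttcp -r -u -l 4096 -n 252144

        # transmitter side (test-machine)
        nttcp -t -u -l 4096 -n 252144 test-machine2

        # for the TCP runs, drop -u on both sides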


1) UDP:
    -n 252144 (252144 buffers sent, each 4096 bytes long)
    Bytes         Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls   Real-C/s  CPU-C/s
l   1032781824    14.37   4.30   575.1554     1921.4546   252147  17552.56  58638.8
1   945160192     14.37   1.51   526.3189     5012.7961   230753  16062.05  152979.1
    br-machine CPU consumption reached 14% (vmstat)

2) TCP:
    -n 252144
    Bytes         Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls   Real-C/s  CPU-C/s
l   1032781824    15.15   2.69   545.2294     3071.4701   252144  16639.08  93733.8
1   1032781824    15.15   2.36   545.1938     3505.5973   299033  19732.02  126876.9
    br-machine CPU consumption reached 43% (vmstat)


Setup: (/iperf/)

        iperf Server   : test-machine2  (OS: FreeBSD 4.5)
        bridge         : br-machine     (OS: Linux 2.4.18)
        iperf Client   : test-machine   (OS: Linux 2.4.18)

1) UDP:
         Server: iperf -s -u
         Client: iperf -c 10.4.2.5 -u -b 900m -t 10

        Server Result:
                Interval:   0.0 - 10.0 sec
                Transfer:   691 MBytes
                B/W     :   580 Mbits/sec
                Jitter  :   0.049 ms
                Lost/Total : 0/454456 (0.0%)

        Client Result:
                Interval:   0.0 - 10 sec
                Transfer:   691 MBytes
                B/W     :   553 Mbits/sec
                Sent    :   493105 datagrams

                bridge used 15% of CPU

2) TCP:
        Server: iperf -s (default window 32KB)
        Client: iperf -c 10.4.2.5 -t 40 (default window 16KB)

        Client/Server Result:
                Interval:       0.0 - 40 sec
                Transfer:       2.6 GBytes
                B/W     :       560 Mbits/sec (Client 534 Mbits/sec)

        bridge used 44% of CPU
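
Since the TCP runs used the default windows, one more thing I plan to try
is larger socket buffers, to rule out a window limit (only a sketch; the
256K size is an arbitrary choice):

        # server with a larger TCP window
        iperf -s -w 256K

        # client with a matching window
        iperf -c 10.4.2.5 -w 256K -t 40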



Questions:

1) Why did the br-machine's (bridge) CPU load go up to 44% during TCP but
stay low (up to 18%) during UDP?

2) Why could the Linux machine not push more than 580 Mbits/sec (even when
I gave 'iperf -c -u -b 900m'), while FreeBSD could push 950 Mbits/sec? (To
be clear: I am not saying that Linux is bad.) Is it a driver issue?

3) Currently I am only doing bridging, but I am planning to do more on top
of it (e.g., shaping and policing based on different attributes), so
packets might go to Layer 3 in that case. Will performance degrade? (See
the tc sketch below, after question 4.)

4) Lennert:

        > Yeah, odd.  I get full wire speed (120 megabytes/sec) over TCP
        > between a dual Athlon 1.2ghz and a Pentium III 1.2ghz (where
        > the athlon is the transmitting host, network card interrupt
        > bound to CPU#0).
If possible, could you tell me what else you used? (Machine/card
specifications, the application you measured performance with, etc.)
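
For question 3, what I have in mind is roughly the following (only a
sketch, assuming the bridge interface is br0 and the kernel has the QoS/tc
options enabled; the rate and buffer values are just examples):

        # shape aggregate output on the bridge interface with a
        # token-bucket filter (example values)
        tc qdisc add dev br0 root tbf rate 400mbit burst 64kb latency 50ms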



Many Thanks,
-Kunal
