Hi Lennert,
I did the following testing initially.
Test 1 ------ On a regular machine
Bridge Machine Spec:
    Processor:     Intel PII 400 MHz, 100 MHz bus speed
    Cache size:    512 KB
    Memory:        128 MB
    Network cards: 3Com 3c905B,
                   Intel 82557 (Ethernet Pro 100)
    Network speed: 100 Mbps
Two end machines - same as above
I ran iperf and got 93.6 Mbps of bandwidth (utilization), so that's really
great. On the strength of that performance we bought high-end servers.
And then,
Test 2 ------- Dell PowerEdge 2550 rack servers
Bridge Machine Spec:
    Processor:     Intel PIII 1.26 GHz, 133 MHz bus speed
    Cache size:    512 KB
    Memory:        1 GB
    Network cards: Intel 82543GC Gigabit Ethernet,
                   Intel 82543GC Gigabit Ethernet
    *Note* The servers came with two Broadcom gigabit cards, but
unfortunately the Broadcom cards don't work in promiscuous mode, so we had
to put in these two Intel gigabit cards instead (whose driver is only
available as a module, as is the Broadcom one).
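(Not from our actual setup, just an illustrative Python sketch of one way to
check whether a driver really reports the IFF_PROMISC flag that bridging
needs; the interface name "eth0" below is only an example.)

import fcntl
import socket
import struct

SIOCGIFFLAGS = 0x8913   # ioctl: get interface flags
IFF_PROMISC  = 0x100    # flag: interface is in promiscuous mode

def is_promisc(ifname):
    # Ask the kernel for the interface flags via a struct ifreq.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ifreq = struct.pack('16sh', ifname.encode(), 0)
    flags = struct.unpack('16sh',
                          fcntl.ioctl(s.fileno(), SIOCGIFFLAGS, ifreq))[1]
    s.close()
    return bool(flags & IFF_PROMISC)

# "eth0" is just an example interface name.
print('eth0 promiscuous: %s' % is_promisc('eth0'))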
I ran iperf and got very bad results.
    TCP: 113 Mbps (with a 32 KB window size), so I increased the window size
by 64 KB and got 260 Mbps. But that's still low for a gigabit network.
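As a sanity check on those window numbers: a single TCP stream can't go
faster than roughly window / round-trip time. A quick Python sketch of that
arithmetic (the 2 ms RTT is just an assumed value for illustration, not
something I measured on the bridge):

# Window-limited throughput of one TCP stream: throughput <= window / RTT.
def window_limit_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8.0 / rtt_seconds / 1e6

assumed_rtt = 0.002  # seconds -- assumed for illustration, not measured
for window_kb in (32, 64, 128, 256):
    print('%3d KB window -> %7.1f Mbps max' %
          (window_kb, window_limit_mbps(window_kb * 1024, assumed_rtt)))

If the real RTT through the bridge is well under a millisecond, the window
shouldn't be the limit at all, which would point to the bottleneck being
somewhere else.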
    UDP: It drops more than 1/3 of the packets. If I try to push 500 Mbps
I get around 150 Mbps of bandwidth, and if I try to push 900 Mbps I get
140 Mbps.
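Just to put numbers on the drops, here is the loss implied by those
offered vs. achieved rates, computed directly:

# Loss fraction implied by offered vs. achieved UDP rate (numbers from above).
def loss_fraction(offered_mbps, achieved_mbps):
    return 1.0 - float(achieved_mbps) / offered_mbps

for offered, achieved in ((500, 150), (900, 140)):
    print('offered %d Mbps, got %d Mbps -> ~%d%% of the traffic lost' %
          (offered, achieved, round(100 * loss_fraction(offered, achieved))))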
Initially I was running with my modified code. Then I tried the original
code (without my modifications) and it didn't make any difference.
We really need to solve this problem, because we are planning to build more
services on top of the bridging, VLAN, and traffic shaping code.
Any idea what the bottleneck is in the above configuration?
Many Thanks,
-Kunal