Hi, I'm not Lennert, but I suspect your problem is due to a fundamental limitation of interrupt-driven I/O. Interrupts are expensive, and when you're pushing 100 Mbit/s of small packets, that can mean on the order of 100k interrupts per second. Eventually the interrupts arrive so frequently that the machine can't do anything useful between them, and the interrupt handlers consume all of the CPU's time (the classic "receive livelock"). I suspect this is what is happening; it would also explain why throughput goes down when you pound the machine harder: more interrupts eat more time, so the machine's capacity to actually process packets decreases.
I don't know what state the project is in, but do a Google search for the Click Modular Router project. They continuously poll the Ethernet cards instead of using interrupts as a way to increase total throughput. Supposedly they can achieve very high throughput quite efficiently this way.

Logan Bowers

Kunal Trivedi wrote:
>
> Hi Lennert,
> I did the following testing initially.
>
> Test 1 ------ On regular machine
> Bridge machine spec:
>     Processor      Intel PII 400 MHz, 100 MHz bus speed
>     Cache size     512 KB
>     Memory         128 MB
>     Network cards  3Com's 3c905B,
>                    Intel's 82557 (Ethernet Pro 100)
>     Network speed  100 Mbps
> Two end machines - same as above
>
> I ran iperf. I got 93.6 Mbps bandwidth (utilization). So that's really
> great. But on the strength of that performance we bought high-end
> servers. And then,
>
> Test 2 ------- Dell PowerEdge 2550 rack servers
> Bridge machine spec:
>     Processor      Intel PIII 1.26 GHz, 133 MHz bus speed
>     Cache size     512 KB
>     Memory         1 GB
>     Network cards  Intel Corp. 82543GC Gigabit Ethernet,
>                    Intel Corp. 82543GC Gigabit Ethernet
> *Note* It came with 2 Broadcom gig cards, but unfortunately the Broadcom
> cards don't work in promiscuous mode, so we had to put in these 2 Intel
> gig cards, whose driver is only available as a module (as is Broadcom's).
>
> I ran iperf. I got very bad results.
> TCP: 113 Mbps (with 32 KB window size); I increased the window size to
> 64 KB and got 260 Mbps. But that's still low on a gig network.
> UDP: It drops more than 1/3 of the packets. If I try to push 500 Mb
> then I get around 150 Mb bandwidth, and if I try to push 900 Mb then I
> get 140 Mb.
>
> Initially I was running with my modified code. Then I got the original
> code (without my modifications) and it didn't make any difference.
>
> We really need to solve this problem, because we are planning to build
> more services on top of the bridging, VLAN, and traffic-shaping code.
>
> Any idea what the bottleneck is in the above configuration?
>
> Many Thanks,
> -Kunal

_______________________________________________
Bridge mailing list
[EMAIL PROTECTED]
http://www.math.leidenuniv.nl/mailman/listinfo/bridge
