A fascinating discussion. Can any machine flood a Gigabit Ethernet segment?
My NetWare server at home has gotten a Gigabit segment up to about 40%
utilization in a test. It's just a PII-450, and nothing else is special in
the least. I haven't tried optimizing it to get more utilization,
primarily 'cause I'd have to borrow the hardware again, and that was a very
special favor. More to the point, I'd be looking for throughput. 1000 Mbit/s
works out to what in bytes, maybe 120 MB/s? Do I have a disk drive channel
capable of delivering data that fast?
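A quick back-of-the-envelope answer (assuming full-size 1500-byte frames and
ignoring the TCP/IP headers carried inside them):

  1000 Mbit/s / 8 bits per byte            = 125 MB/s raw signalling rate
  1500 payload / 1538 bytes on the wire    ~ 0.975 (header, FCS, preamble, gap)
  0.975 * 125 MB/s                         ~ 122 MB/s of Ethernet payload

So "maybe 120 MB/s" is about right, and the disk channel really does become
the limiting question.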
The concept of jumbo frames is also interesting. The network I oversee is
Token-Ring and Ethernet. The overwhelming majority (70%+) of packets are
<128 bytes; most are 64-128 bytes. What earthly good does an 8K frame do me?
The Token segment rarely sees a 4K frame, and then *ONLY* when a NetWare
user gets LIP or packet burst going. All other services, especially SMB, never
get over 1500 bytes. Of course, the Ethernet segment doesn't know a packet
larger than 1536. And the routed packets over our WAN often get chopped
into 576-byte fragments, for no good reason. Jumbo frames must appeal to those
who send massive data streams; for the real world, they do not seem attractive.
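If you want to see where that 1500-byte ceiling actually lives, here is a
minimal sketch (Linux, plain ioctl; "eth0" and the 9000-byte request are just
example values) that reads an interface's MTU and tries to raise it. The set
simply fails unless the NIC and its driver support jumbo frames:

/* Minimal sketch: query an interface's MTU and try to raise it.
 * "eth0" is only an example name; raising the MTU needs root and a
 * NIC/driver combination that actually supports frames that big. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)     /* read the current MTU */
        printf("%s MTU = %d\n", ifr.ifr_name, ifr.ifr_mtu);

    ifr.ifr_mtu = 9000;                       /* ask for jumbo frames */
    if (ioctl(fd, SIOCSIFMTU, &ifr) < 0)      /* refused unless supported */
        perror("SIOCSIFMTU");

    close(fd);
    return 0;
}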
All of the above is IMHO, of course, along with the notion that real-world
performance has as much to do with other factors, such as switch performance,
OS, and application, as it does with raw wire speed. My favorite whine here is
an app that users complain takes too long to load, normally about 10-14
seconds. It spends the first 7 seconds displaying the logo screen. That's all
it does. *Then* it goes about loading code. Can I tell my users to whine to
the vendor, and get rid of the splash?
Rick
ps- great list. great contributors.
At 01:14 PM 2/8/00 +0100, you wrote:
>>>>>> "Ingo" == Ingo Molnar <[EMAIL PROTECTED]> writes:
>
>Ingo> On Tue, 8 Feb 2000, Anton Ivanov wrote:
>
>>> Wrong. All GigE cards I have checked so far have interrupt
>>> mitigation. At init you program them to delay IRQ until that many
>>> packets are in the queue or a timer expires and the timer
>>> value. The only problem is that these are usually not passed as
>>> module params. So you have to recompile if you find your current
>>> mitigation params bad.
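As a concrete picture of the knobs being described, here is a minimal
user-space sketch that reads them through the ethtool coalescing ioctl. It
assumes a driver that actually implements ETHTOOL_GCOALESCE (many do not,
which is exactly why you end up editing the source and recompiling), and
"eth0" is just an example name:

/* Sketch only: read the "interrupt after N frames or T microseconds"
 * mitigation settings via the ethtool coalescing ioctl.  Works only
 * with drivers that implement ETHTOOL_GCOALESCE. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;
    struct ethtool_coalesce ec;

    memset(&ifr, 0, sizeof(ifr));
    memset(&ec, 0, sizeof(ec));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* example interface */
    ec.cmd = ETHTOOL_GCOALESCE;
    ifr.ifr_data = (char *)&ec;

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("rx: irq after %u frames or %u usecs\n",
               ec.rx_max_coalesced_frames, ec.rx_coalesce_usecs);
    else
        perror("ETHTOOL_GCOALESCE (driver may not support it)");

    close(fd);
    return 0;
}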
>
>Ingo> yep, also with jumbo frames (mtu 9000) there is no problem at
>Ingo> all. Eg. the SysKonnect cards i use do just over 20k IRQs/sec
>Ingo> when i'm saturating 107MB/sec TCP bandwidth - this IRQ load is
>Ingo> simply not a problem at all for an APIC controller. I've seen
>Ingo> IRQ rates of 80k/sec as well.
>
>Sorry, but that's *BAD* performance by the SK card. I do around 2.5K
>ints/sec with the Alteon when doing 65MB/sec traffic in one direction
>with regular sized frames. The load is maybe not a problem for the
>APIC, but 80k/sec truly sucks for the CPUs considering the number of
>context saves/restores they have to do.
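A rough way to read those numbers, assuming 1500-byte frames and counting
only the data direction:

  65 MB/s / 1500 bytes per frame    ~ 43,000 frames/s
  43,000 frames/s / 2,500 irq/s     ~ 17 frames handled per interrupt

whereas 107 MB/s with 9000-byte jumbo frames is only about 12,000 frames/s,
so 20k+ interrupts/sec there suggests the SK card is hardly coalescing at all.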
>
>>> See above. If you program a sane GigE NIC correctly you actually
>>> transfer more than 8K at a time. Donald Becker's hamachi driver is
>>> a good example.
>
>Ingo> also other cards are using jumbo frames as well (and it actually
>Ingo> makes sense to increase packet size).
>
>It's the switch vendors who are causing the problems.
>
>Jes