--- Matthew Dillon <[EMAIL PROTECTED]> wrote:

> :The issue is that RX is absolute, as you cannot "decide" to delay or
> :selectively drop since you don't know what's coming. Better to have
> :some latency than dropped packets. But if you don't dedicate to RX,
> :then you have an unknown amount of cpu resources doing "other stuff".
> :The capacity issues always first manifest themselves as rx overruns,
> :and they always happen a lot sooner on MP machines than UP machines.
> :The LINUX camp made the mistake of not making RX important enough,
> :and now their 2.6 kernels drop packets all over the ranch. But audio
> :is nice and smooth...
> :
> :How to do it or why it's difficult is a designer's issue. I've not
> :yet been convinced that MP is something that's suitable for a
> :network-intensive environment, as I've never seen an MP OS that has
> :come close to FreeBSD 4.x UP performance.
> :
> :DT
> 
>     RX interrupts can be hardware moderated just like TX interrupts.
>     The EM device, for example, allows you to set the minimum delay
>     between RX interrupts.
> 
>     For example, let's say a packet comes in and EM interrupts
>     immediately, resulting in a single packet processed on that
>     interrupt.  Once the interrupt has occurred, EM will not generate
>     another interrupt for N microseconds, no matter how many packets
>     come in, where N is programmable.  Of course, N is programmed to
>     a value that will not result in the RX ring overflowing.  The
>     result is that further RX interrupts may bundle 10-50 receive
>     packets on each interrupt, depending on the packet size.
> 
>     This aggregation feature of (nearly all) GigE ethernet devices
>     reduces the effective overhead of interrupt entry and exit to
>     near zero, which means that the device doesn't need to be polled
>     even under the most adverse circumstances.
> 
>     I don't know what the issue you bring up with the Linux kernels
>     is, but at GigE speeds the ethernet hardware is actually
>     flow-controlled.  There should not be any packet loss even if the
>     cpu cannot keep up with a full-bandwidth packet stream.  There is
>     certainly no need to fast-path the network interrupt; it simply
>     needs to be processed in a reasonable period of time.  A few
>     milliseconds of latency occurring every once in a while would not
>     have any adverse effect.

I see you haven't done much empirical testing; the assumption that
"all is well because Intel has it all figured out" is not a sound one.
Interrupt moderation is a given, but at some point you hit a wall, and
my point is that you hit that wall a lot sooner with MP than with UP,
because you have to get back to the ring before it wraps, no matter
what the intervals are. As you increase the intervals (and thus
decrease the ints/second) you'll lose even more packets, because there
is less space left in the ring when the interrupt is generated and
less time for the cpu to get to it.
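
The arithmetic behind that wall is easy to sketch. The numbers below
(a 256-descriptor RX ring, minimum-size frames at GigE line rate) are
illustrative assumptions, not measurements from either system, but
they show how raising the moderation interval N eats the CPU's margin:

```python
# Back-of-envelope sketch (assumed, not measured, values):
# a 256-descriptor RX ring receiving 64-byte frames at 1 Gbit/s.

RING_SIZE = 256           # RX descriptors (assumed ring depth)
WIRE_BYTES = 64 + 20      # min frame + preamble/IFG overhead on the wire
LINK_BPS = 1_000_000_000  # 1 Gbit/s

pps = LINK_BPS / 8 / WIRE_BYTES   # ~1.49 million packets/sec at line rate
us_per_packet = 1_000_000 / pps   # ~0.67 us of wire time per packet

def ring_fill(moderation_us):
    """Packets that land in the ring during one moderation interval N."""
    return moderation_us * pps / 1_000_000

def cpu_deadline_us(moderation_us):
    """Time left for the CPU to start draining before the ring wraps."""
    return (RING_SIZE - ring_fill(moderation_us)) * us_per_packet

for n in (50, 100, 150):
    print(f"N={n}us: {ring_fill(n):.0f} pkts already queued, "
          f"{cpu_deadline_us(n):.0f}us left before overrun")
```

With these assumed numbers, tripling N from 50us to 150us shrinks the
interrupt-to-overrun deadline from roughly 122us to roughly 22us; any
scheduling latency longer than that, and the ring wraps.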

Flow control isn't like XON/XOFF at 9600 baud, where we say "hey, our
buffer is almost full, so let's send some flow control". By the time
you're flow controlling, you've already lost enough packets to piss
off your customer base. Plus, flow controlling a big switch will just
result in the switch dropping the packets instead of you, so what have
you really gained?
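
The reaction-time problem is easy to quantify. A rough sketch, with an
assumed (not measured) 50us gap between "buffer nearly full" and the
802.3x PAUSE actually taking effect at the link partner:

```python
# Rough illustration with assumed numbers: data that keeps arriving at
# GigE line rate while a PAUSE frame is generated, transmitted, and
# acted on by the link partner.

LINK_BPS = 1_000_000_000  # 1 Gbit/s
REACTION_US = 50          # assumed: detect + send PAUSE + partner reacts

bytes_in_flight = LINK_BPS / 8 * REACTION_US / 1_000_000
frames_64b = bytes_in_flight / 84   # 64B frames + 20B wire overhead each

print(f"{bytes_in_flight:.0f} bytes (~{frames_64b:.0f} min-size frames) "
      "arrive before PAUSE takes effect")
# prints: 6250 bytes (~74 min-size frames) arrive before PAUSE takes effect
```

Whatever buffer is left must absorb those in-flight frames, or they
are dropped before flow control helps at all.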

Packet loss is real, no matter how much you deny
it. If you don't believe it, then you need a
better traffic generator.

DT


        
                