> You can get to this same point in -CURRENT, if you are using up to
> date sources, by enabling direct dispatch, which disables NETISR.
> This will help somewhat more than polling, since it will remove the
> normal timer latency between receipt of a packet, and processing of
> the packet through the network stack.  This should reduce overall
> pool retention time for individual mbufs that don't end up on a
> socket so_rcv queue.  Because interrupts on the card are not
> acknowledged until the code runs to completion, this also tends to
> regulate interrupt load.
> 
My sources seem to be a few days older than when this change went in; I
will update and try it out.
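
In the meantime, for anyone else following along: as far as I can tell the
switch is just a sysctl, although the exact name seems to depend on the
vintage of the tree (I have seen net.isr.enable mentioned for this, and
net.isr.direct / net.isr.dispatch in later trees), so take the knob name
below as an assumption.  A minimal sketch of flipping it from a program
instead of with sysctl(8):

/*
 * Minimal sketch, assuming the direct-dispatch knob is the integer sysctl
 * "net.isr.enable" (the name varies across FreeBSD versions; later trees
 * use net.isr.direct or net.isr.dispatch).  Needs root to set.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>

int
main(void)
{
        int enable = 1;

        /* Set the new value; we don't care about the old one here. */
        if (sysctlbyname("net.isr.enable", NULL, NULL,
            &enable, sizeof(enable)) == -1) {
                perror("sysctlbyname(net.isr.enable)");
                return (1);
        }
        printf("direct dispatch enabled; NETISR queueing bypassed\n");
        return (0);
}

The same thing from the command line would of course just be
"sysctl net.isr.enable=1" (or whatever the equivalent knob is called).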

> This also has the desirable side effect that stack processing will
> occur on the same CPU as the interrupt processing occurred.  This
> avoids inter-CPU memory bus arbitration cycles, and ensures that
> you won't engage in a lot of unnecessary L1 cache busting.  Hence
> I prefer this method to polling.
> 
Is there anywhere I could read up on the associated overhead, and on how
this works out in the worst case, where data is DMA'd into memory, read
into CPU1's cache, then into CPU2's, and then discarded?  And are there
any avenues that could be taken to optimize this?
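
To make the question concrete, this is the kind of micro-benchmark I have
in mind for that worst case.  It is only a sketch, and it leans on
cpuset_setaffinity(2), which as far as I know is a newer interface than the
tree being discussed here, so treat the pinning calls (and the buffer size
and round count) as assumptions:

/*
 * Sketch of the worst case above: a buffer is dirtied on one CPU (standing
 * in for the packet data the interrupt/stack CPU just touched) and then
 * read once and discarded on another CPU (the consumer).  Compares the
 * cost of the read when both steps stay on one CPU versus when the dirty
 * cache lines have to be dragged across to a second CPU.
 */
#include <sys/param.h>
#include <sys/cpuset.h>
#include <sys/time.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUFSZ   (256 * 1024)    /* stand-in for a burst of mbuf data */
#define ROUNDS  10000

static char buf[BUFSZ];

static void
pin_to_cpu(int cpu)
{
        cpuset_t mask;

        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);
        /* id -1 means the calling thread. */
        if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
            sizeof(mask), &mask) != 0) {
                perror("cpuset_setaffinity");
                exit(1);
        }
}

int
main(int argc, char **argv)
{
        int wcpu = argc > 1 ? atoi(argv[1]) : 0;   /* "interrupt" CPU */
        int rcpu = argc > 2 ? atoi(argv[2]) : 0;   /* "consumer" CPU  */
        struct timeval t0, t1;
        volatile unsigned long sink = 0;
        double usecs = 0.0;

        for (int i = 0; i < ROUNDS; i++) {
                unsigned long acc = 0;

                /* Dirty the buffer on the writer CPU ("DMA + first touch"). */
                pin_to_cpu(wcpu);
                memset(buf, i & 0xff, BUFSZ);

                /* Read it on the reader CPU, then throw it away. */
                pin_to_cpu(rcpu);
                gettimeofday(&t0, NULL);
                for (size_t j = 0; j < BUFSZ; j++)
                        acc += (unsigned char)buf[j];
                gettimeofday(&t1, NULL);
                sink += acc;    /* keep the read from being optimized out */

                usecs += (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_usec - t0.tv_usec);
        }
        (void)sink;
        printf("avg read: %.2f us per %d KB buffer\n",
            usecs / ROUNDS, BUFSZ / 1024);
        return (0);
}

Running it as "./a.out 0 0" keeps the writer and reader on the same CPU,
while "./a.out 0 1" forces the dirty cache lines to be pulled over to the
second CPU before being discarded, which is the traffic you describe.
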
> 
> You will get much better load capacity scaling out of two cheaper
> boxes, if you implement it correctly, IMO.

Synchronization of the unformatted data can probably never be as good as
it is when you optimize the system for your specific case.  But I agree it
should be better than it is now; it just does not really seem to be getting
any better (unless you consider the EV7 and Opteron approaches better than
the current Intel approach).

Pete

