Peter Jeremy wrote:

On Wed, 2006-Jul-19 22:38:56 -0400, Ed Maste wrote:

- You may have to adjust some parameters in the kern.polling sysctl
tree - specifically, kern.polling.burst_max, kern.polling.each_burst
and kern.polling.user_frac might need tweaking.


Note that increasing kern.polling.burst_max and kern.polling.each_burst
will also increase the number of soft interrupts.
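
For what it's worth, these knobs can also be read and set from a small C
program rather than sysctl(8).  Below is a minimal sketch using
sysctlbyname(3); the value 300 is just a placeholder, not a
recommendation:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int burst_max, each_burst, user_frac, new_burst_max;
	size_t len;

	/* Read the current settings. */
	len = sizeof(burst_max);
	if (sysctlbyname("kern.polling.burst_max", &burst_max, &len,
	    NULL, 0) == -1)
		err(1, "kern.polling.burst_max");
	len = sizeof(each_burst);
	if (sysctlbyname("kern.polling.each_burst", &each_burst, &len,
	    NULL, 0) == -1)
		err(1, "kern.polling.each_burst");
	len = sizeof(user_frac);
	if (sysctlbyname("kern.polling.user_frac", &user_frac, &len,
	    NULL, 0) == -1)
		err(1, "kern.polling.user_frac");
	printf("burst_max=%d each_burst=%d user_frac=%d\n",
	    burst_max, each_burst, user_frac);

	/* Raise burst_max (placeholder value; needs root). */
	new_burst_max = 300;
	if (sysctlbyname("kern.polling.burst_max", NULL, NULL,
	    &new_burst_max, sizeof(new_burst_max)) == -1)
		err(1, "set kern.polling.burst_max");
	return (0);
}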


- The polling feedback algorithm does not work very well if your
workload is focused largely on per-packet tasks (such as routing or
bridging).  You'll find that there is still idle CPU time at the
point you start dropping packets.  I have some work in progress to
address this, but it's not yet committed.


I thought setting kern.polling.idle_poll would let polling use all of
the idle CPU time.  The downside is that the system then always shows
as 100% utilised, so it's very difficult to tell how busy it actually
is.
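
For completeness, the same sysctlbyname(3) approach works for that knob
too; a tiny sketch (needs root, and the comment just restates Peter's
caveat):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>

int
main(void)
{
	int on = 1;

	/*
	 * Enable polling from the idle loop.  Once this is on, the idle
	 * loop spins polling the interfaces, so the "idle" percentage
	 * reported by top(1) and friends stops being a useful measure
	 * of how busy the box really is.
	 */
	if (sysctlbyname("kern.polling.idle_poll", NULL, NULL,
	    &on, sizeof(on)) == -1)
		err(1, "kern.polling.idle_poll");
	return (0);
}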


- Polling's major advantage is the avoidance of livelock on UP systems,
not improved performance.


The limited testing I've done on a Sun V20z at work suggests that you
can get better routing throughput in interrupt mode than in polling
mode.  YMMV, and this is before tweaking the polling parameters.  (My
testing also suggests that I don't really need to do any tweaking,
because the limiting factor is the gigabit interfaces rather than the
V20z.)


This might not apply to bge, but the adaptive polling + fast interrupt
changes that I made to if_em earlier in the year were a huge win over
the standard polling code in terms of CPU utilization and packets per
second.  I think they also survived a load that caused normal polling
to essentially livelock the machine, and they had the advantage of
automatically adapting to bursty loads.
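
Roughly, the idea was a fast interrupt handler that just masks the NIC
and hands the real work to a taskqueue thread, which keeps requeueing
itself while packets are still pending.  The sketch below shows the
general pattern, not the actual if_em code; every mydev_* name is made
up, and only the bus_setup_intr(9)/taskqueue(9) calls are real (FreeBSD
6-era signatures assumed):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/bus.h>
#include <sys/priority.h>
#include <sys/taskqueue.h>

struct mydev_softc {
	device_t		 dev;
	struct resource		*irq_res;
	void			*intr_tag;
	struct task		 rxtx_task;
	struct taskqueue	*tq;
};

/* Hypothetical hardware helpers; the register accesses are omitted. */
static void
mydev_disable_intr(struct mydev_softc *sc)
{
	/* mask the NIC's interrupt sources (hardware-specific) */
}

static void
mydev_enable_intr(struct mydev_softc *sc)
{
	/* unmask the NIC's interrupt sources (hardware-specific) */
}

static int
mydev_rxtx(struct mydev_softc *sc, int burst)
{
	/* process up to 'burst' packets; return how many are still pending */
	return (0);
}

/*
 * INTR_FAST handler: just mask the NIC and kick the taskqueue.
 * Nothing here may sleep or take ordinary mutexes.
 */
static void
mydev_intr_fast(void *arg)
{
	struct mydev_softc *sc = arg;

	mydev_disable_intr(sc);
	taskqueue_enqueue(sc->tq, &sc->rxtx_task);
}

/*
 * Deferred handler: process a burst; if work remains, requeue ourselves
 * (this is where the behaviour adapts to bursty loads), otherwise
 * unmask the interrupt and go back to sleep.
 */
static void
mydev_handle_rxtx(void *arg, int pending)
{
	struct mydev_softc *sc = arg;

	if (mydev_rxtx(sc, 100 /* arbitrary burst size */) != 0)
		taskqueue_enqueue(sc->tq, &sc->rxtx_task);
	else
		mydev_enable_intr(sc);
}

/* In attach(), roughly: */
static int
mydev_setup_intr(struct mydev_softc *sc)
{
	TASK_INIT(&sc->rxtx_task, 0, mydev_handle_rxtx, sc);
	sc->tq = taskqueue_create_fast("mydev_taskq", M_NOWAIT,
	    taskqueue_thread_enqueue, &sc->tq);
	taskqueue_start_threads(&sc->tq, 1, PI_NET, "%s taskq",
	    device_get_nameunit(sc->dev));

	/* FreeBSD 6-style registration of a fast interrupt handler. */
	return (bus_setup_intr(sc->dev, sc->irq_res,
	    INTR_TYPE_NET | INTR_MPSAFE | INTR_FAST,
	    mydev_intr_fast, sc, &sc->intr_tag));
}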

Scott
