John Ronciak wrote:
On 12/7/05, David S. Miller <[EMAIL PROTECTED]> wrote:


Keyword, "this box".

We don't disagree, and never have, with this.  That's why we were asking
the question: find us a case where the prefetch shows a detriment to
performance.  I think Jesse's data and his recommendation of keeping only
the #1, #2 and #5 prefetches seem like the right thing to do, with data
to back it up.  It also lines up with what Robert showed.  Can we just do
this and see if anyone can once again show a problem with it?  It seems
like a reasonable thing to do and may put this to sleep for a while.  :-)

I think that in the case of a network flood, your #1, #2, #3, #4, #5... prefetches could be a gain.
But for most server/desktop workloads, fewer than 8000 packets received per second (i.e. the standard e1000 interrupt mitigation is not triggered and there is one packet per interrupt, hence 8000 interrupts per second), prefetching the next descriptor, next skb data, next-next descriptor and next-next skb data *should* hurt, since it will certainly force more memory ping-pongs, and *should* make IRQ handling more intrusive for the CPU/bus.

So instead of using tests delivering 700,000 packets per second, a load that maybe one machine out of 10,000 sees during its whole life, could we use realistic benchmarks and look at the throughput of a compute- and memory-bound user-land task?

Being able to receive XXX.XXX packets per second just to drop them, instead of injecting them into the stack (deliver or forward), while using 100% of the CPU is not a fair benchmark. It's not even funny. I would prefer to drop 50% at the NIC level and get back 50% of the CPU so that user programs can make progress and clear the socket queues.

Eric
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
