On Jun 29, 2009, at 12:10 PM, Dave Love wrote:

When I test Open-MX, I turn interrupt coalescing off. I run
omx_pingpong to determine the lowest latency (LL). If the NIC's driver
allows one to specify the interrupt value, I set it to LL-1.
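
Roughly, with ethtool, that procedure would look something like this
(eth0 is an assumed interface name, and drivers differ in which
coalescing parameters they actually honour):

    # disable interrupt coalescing entirely and measure the lowest latency (LL)
    ethtool -C eth0 rx-usecs 0 rx-frames 0
    omx_pingpong               # note the small-message latency, call it LL

    # then set the coalescing delay just below LL, e.g. for LL = 15 us
    ethtool -C eth0 rx-usecs 14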

Right, and that's what I did before, with sensible results, I thought.
Repeating it now on CentOS 5.2 and OpenSuSE 10.3, it doesn't behave
sensibly, and I don't know what's different from the previous SuSE
results apart, probably, from the minor kernel version.  If I set
rx-frames=0, I see this:

rx-usecs   latency (µs)
20         34.6
12         26.3
6          20.0
1          14.8

whereas if I just set rx-frames=1, I get 14.7 µs, roughly independently
of rx-usecs.  (Those figures are probably ±∼0.2 µs.)
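
For reference, the two configurations compared above correspond
roughly to the following (eth0 again being an assumed interface name):

    # sweep the coalescing delay with frame-count coalescing disabled
    ethtool -C eth0 rx-frames 0 rx-usecs 6
    # versus interrupting on every frame, which makes rx-usecs largely moot
    ethtool -C eth0 rx-frames 1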

That is odd. I have only tested with the Intel e1000 driver and our myri10ge Ethernet driver. The Intel driver only accepts certain fixed settings (0, 25, etc.), whereas the myri10ge driver allows you to specify any value.

Your results may be specific to that driver.

Brice and Nathalie have a paper describing an adaptive interrupt-coalescing
scheme, so that you do not have to tune anything manually:

Isn't that only relevant if you control the firmware?  I previously
didn't really care about free firmware for devices in the same way as
free software generally, but am beginning to see reasons to care.

True, although I believe they had to make two very small modifications to the myri10ge firmware.

I have heard that some Ethernet drivers do, or will, support adaptive coalescing, which may give better performance than manual tuning, and without modifying the NIC firmware for OMX.
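
Where a driver does implement it, adaptive coalescing is normally
toggled through the same ethtool interface, along these lines (eth0
again an assumed interface name):

    # enable adaptive receive coalescing, if the driver supports it
    ethtool -C eth0 adaptive-rx on
    # verify what the driver actually accepted
    ethtool -c eth0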

Scott