Hello,
I've been working with the e1000 driver for linux and noticed something
very strange:
On the one hand, the TADV register is initialized to 32 once, during NIC
initialization (which translates to approximately 32 microseconds).
On the other hand, ITR is constantly updated according to the type of load
(throughput or latency) to values between 195 and 950 (which translate to
roughly 49 and 243 microseconds, respectively).
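For reference, here is how I am converting the register values to
microseconds - just a small standalone C sketch, assuming the granularities
I read in the 8254x spec (TADV counted in 1.024 us increments, ITR counted
in 256 ns increments). It reproduces the numbers above:

    #include <stdio.h>

    /* Assumed granularities (my reading of the PRO/1000 / 8254x manual,
     * please correct me if these are wrong):
     *   - TADV counts in 1.024 us increments
     *   - ITR  counts in 256 ns increments
     */
    static double tadv_to_us(unsigned int tadv) { return tadv * 1.024; }
    static double itr_to_us(unsigned int itr)   { return itr * 256.0 / 1000.0; }

    int main(void)
    {
        printf("TADV = 32  -> %.1f us\n", tadv_to_us(32));   /* ~32.8 us  */
        printf("ITR  = 195 -> %.1f us\n", itr_to_us(195));   /* ~49.9 us  */
        printf("ITR  = 950 -> %.1f us\n", itr_to_us(950));   /* ~243.2 us */
        return 0;
    }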
According to the spec of the PRO/1000 PCI/PCI-X NIC family, TADV "can be used
to ENSURE that a transmit interrupt occurs at some predefined interval
after a transmit is completed", while ITR is the "Minimum inter-interrupt
interval".
Now let us consider a common throughput case where the NIC is sending many
large packets, so ITR is set to 950 (243 microseconds). It seems that there
is a contradiction between ITR and TADV: TADV demands that an interrupt be
raised no later than 32 microseconds after a transmit completes, while ITR
asks that interrupts be raised at most every 243 microseconds.
My question is: which one wins? What will the actual interrupt rate on the
NIC be? I suspect that ITR wins, since it is changed dynamically by the
driver, but maybe I'm missing something?
Also, why was RADV deprecated (according to the spec) while TADV was not?
Thanks in advance,
Arthur