On Wed, 2007-06-27 at 17:26, Jason Gunthorpe wrote:
> On Wed, Jun 27, 2007 at 05:13:40PM -0400, Hal Rosenstock wrote:
>
> > > - The kernel periodically fetches the performance stats and aggregates
> > >   them into a 64 bit wrapping counter. The kernel sends PMA MADs into
> > >   the Mellanox firmware to read and reset the counters.
> > > - The new 64 bit stats are exported via sysfs/proc/whatever as
> > >   wrapping counters.
> > > - When a PMA packet comes in, the kernel services it rather than
> > >   passing it on to the chip firmware.
> >
> > In this way, both 32 and 64 bit counters could be presented by the PMA,
> > but how would it know when a counter has maxed out in terms of the
> > PMA, and how would a remote clear be handled?
>
> Each time the counter is cleared

So it doesn't matter whether the clear is local (from Linux) or remote
(from IB), right?

> the kernel would store the 64 bit
> value as the 'last PMA counter'. Then the calculation is just
>
>     if ((current - stored) >= saturation)
>         return saturation;
>     return current - stored;
>
> After 2**64 counts the saturation computation will stop working. It
> would take 24 years of constant maxed out data transfer for a 12x QDR
> link to wrap a 64 bit dword byte counter.

Is that even for the 4 octet counts? (I didn't calculate this out.)

> A nice side benefit would be that Linux drivers could present a
> consistent PMA interface with new extended 64 bit counters even with
> older hardware.

Indeed. The question may now be how to get from where we are today to
this model.

-- Hal

> Jason

_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general
To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
