Hi Stephane,
I've been thinking about the per-counter stuff. Most of the counters
have features that occur in groups. Consider SiCortex's 2 CPU counters
and 256 off-core counters, or SGI's CrayLink counters... These groups
have common features that we should be able to exploit for
performance. For instance, all through the code I have things like
if (test_bit(counter, &used_pmcs)); due to the length of the bit
vector, this can add 5-10 instructions for what is a very sparse
vector that usually has a VERY similar pattern. Handling groups would
allow us to efficiently define counters with properties like overflow
bits in different places, different lengths, and different
characteristics/flags. The current implementation is certainly the
most flexible, but there is a price for complete flexibility.
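To sketch what I mean (rough kernel-style C; the struct and names like
pfm_pmd_group are made up for illustration, not anything in the
current tree):

#include <linux/types.h>

struct pfm_pmd_group {
	unsigned int first_pmd;  /* index of the first counter in the group */
	unsigned int num_pmds;   /* number of counters in the group */
	unsigned int width;      /* counter width in bits, shared group-wide */
	u64          ovfl_mask;  /* overflow bit, same position for all */
	unsigned int flags;      /* shared characteristics */
};

/*
 * One unsigned range check replaces a test_bit() into a long, sparse
 * bitmap; the subtraction wraps when cnt < first_pmd, so a single
 * compare covers both bounds.
 */
static inline int pfm_pmd_in_group(const struct pfm_pmd_group *g,
				   unsigned int cnt)
{
	return cnt - g->first_pmd < g->num_pmds;
}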
Phil
On Mar 26, 2007, at 10:06 AM, Stephane Eranian wrote:
Phil,
On Sun, Mar 25, 2007 at 07:12:09PM +0200, Philip Mucci wrote:
Hi folks,
I think it should not be too much work to put the field in the
description table. With a flag, high-level perfmon can just skip
consulting this field and go with a default. I think having both 16-
and 32-bit counters would be useful on the Cell, the 16-bit ones
specifically because one can allocate one counter per VPE. This
functionality is necessary for more than the Cell, i.e. when
supporting off-chip counters that are different from those of the
core.
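Something like this is what I have in mind (a hypothetical sketch
only; the field and flag names are invented, and the 64-bit default is
just an assumption):

#define PFM_PMD_FL_DFL_WIDTH	0x1	/* skip 'width', use the default */

struct pfm_pmd_desc {
	unsigned int flags;
	unsigned int width;	/* counter width in bits: 16, 32, 40, ... */
};

/* high-level perfmon consults the field only when the flag is clear */
static inline unsigned int pfm_pmd_width(const struct pfm_pmd_desc *d)
{
	return (d->flags & PFM_PMD_FL_DFL_WIDTH) ? 64 : d->width;
}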
We would need to make small changes to the generic code to get the
overflow mask per PMD as opposed to global. This adds a small
overhead, but that's probably OK.
The arch-specific routines would need to read this mask on a
per-counter basis as well.
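Roughly, a sketch of what the check becomes (assuming the mask is
derived from a per-PMD width instead of the global constant):

#include <linux/types.h>

/* overflow bit position now depends on the counter's width */
static inline int pfm_pmd_has_ovfl(unsigned int width, u64 val)
{
	u64 ovfl_mask = 1ULL << (width - 1);

	return (val & ovfl_mask) != 0;
}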
Of course, we should certainly encourage folks to support at least 32
bits per counter...
Agreed.
With perfmon2, I believe I saw a mode in the code that did split the
counters between logical processors if requested... this should
probably be a module load-time option.
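E.g. something along these lines (the parameter name is made up; this
is just the standard module_param mechanism):

#include <linux/moduleparam.h>

/* load with e.g.: modprobe perfmon split_counters=1 */
static int split_counters;
module_param(split_counters, int, 0444);
MODULE_PARM_DESC(split_counters,
		 "split the counters between sibling logical processors");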
I am not sure where you saw that. The split is only done in the
P4-specific code.
--
-Stephane
_______________________________________________
perfmon mailing list
[email protected]
http://www.hpl.hp.com/hosted/linux/mail-archives/perfmon/