Corey,
On Thu, Jun 4, 2009 at 8:50 PM, Corey J Ashford <cjash...@us.ibm.com> wrote:

> stephane eranian <eran...@googlemail.com> wrote on 06/04/2009 12:53:02 AM:
> > Corey,
> >
> > On Wed, Jun 3, 2009 at 1:22 AM, Corey Ashford
> > <cjash...@linux.vnet.ibm.com> wrote:
> > >
> > > It seems like you could call the kernel-specific code for event
> > > numbers greater than the PMU-specific hardware event numbers.
> > > Basically, you just need a way to plug in kernel-specific code that
> > > knows about the extra events the kernel exposes. Something like:
> > >
> > > int pfm_kernel_get_num_events();
> > > char *pfm_kernel_get_event(int event);
> > >
> > > For PCL, pfm_kernel_get_num_events() would return the number of
> > > software events + the number of generic events.  libpfm would number
> > > these events last_pmu_hardware_event + 1 .. (last_pmu_hardware_event +
> > > pfm_kernel_get_num_events()).
> > >
> > > So the "linearizing" of the PCL events into a single space (rather
> > > than separate software and generalized event spaces) would be done by
> > > the PCL kernel-specific code.  And the ordering of the PMU-specific
> > > and kernel-specific code would be done by the PCL generic code.
> > >
> > Yes, this is one solution, but it would have to be implemented by the
> > libpfm generic layer, not the PCL-specific layer.
> >
>
> I don't understand this reasoning, unless we are really saying the same
> thing.  If libpfm were being layered on top of a different kernel
> implementation (not PCL) which had, say, six event spaces, would you
> still want the generic layer trying to deal with six event spaces, or
> should the kernel-specific layer deal with unifying those six spaces?  To
> me, this is best dealt with in the kernel-specific layer.
>

I think we are talking about the same thing using different terminology,
and I think yours is better. Yes, the term 'generic' was meant to refer
to PCL generic; thus, it is tied to the OS API.
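
To make this concrete, here is a rough sketch of how the generic layer
could dispatch between the PMU event table and your two kernel hooks.
Everything except pfm_kernel_get_num_events() and pfm_kernel_get_event()
is made up for illustration:

/* Made-up tables standing in for the real PMU and kernel event lists. */
static const char *pmu_events[]    = { "CPU_CLK_UNHALTED", "INST_RETIRED" };
static const char *kernel_events[] = { "PERF_COUNT_CPU_CYCLES",
                                       "PERF_COUNT_CONTEXT_SWITCHES" };

#define NUM_PMU_EVENTS (sizeof(pmu_events)/sizeof(pmu_events[0]))

int pfm_kernel_get_num_events(void)
{
        return sizeof(kernel_events)/sizeof(kernel_events[0]);
}

const char *pfm_kernel_get_event(int event)
{
        return kernel_events[event];
}

/* Generic layer: indices 0..NUM_PMU_EVENTS-1 come from the PMU table,
 * anything above that from the kernel-specific code. */
const char *pfm_get_event_name(int ev)
{
        if (ev < (int)NUM_PMU_EVENTS)
                return pmu_events[ev];
        ev -= NUM_PMU_EVENTS;
        return ev < pfm_kernel_get_num_events() ? pfm_kernel_get_event(ev) : NULL;
}

The kernel-specific code only has to linearize its own software and
generic events; the generic layer just appends that space after the PMU
table.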

> > But here is another issue with PCL generic events: the generic HW
> > events such as PERF_COUNT_CPU_CYCLES and the like. For some
> > PMU models, there may not be any mapping. The kernel can return
> > an error for those events, but what about libpfm? If we implement
> > all PCL generic HW events in a generic layer, then we would still need
> > to customize on a per-PMU basis to turn off certain events which we know
> > are not mapped on the host PMU. I think it would be pretty confusing
> > to have libpfm let you use PERF_COUNT_LAST_LEVEL_CACHE_MISSES
> > if the PCL kernel has no mapping for it.
>
> Ok, so here we have a combination where the kernel-specific layer may not
> know that a particular generic event is not supported on the arch and/or
> PMU.  Perhaps at start-up time, the kernel-specific layer could query all
> of the generic events in PCL and find out which ones are supported, then
> present only the supported ones to the generic layer.  This would cost
> some start-up time.
>

Yes, that's a possibility. Alternatively, PCL could advertise somewhere in
/sys the generic hardware events it supports.
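
A rough sketch of the probing variant, assuming only that the kernel
returns an error when it has no mapping; try_open_generic_event() is a
made-up wrapper around whatever the PCL open syscall ends up being:

#include <unistd.h>

#define NUM_GENERIC_EVENTS 7    /* PERF_COUNT_CPU_CYCLES ... LLC misses */

/* Made-up wrapper: returns a counter fd, or -1 if the kernel cannot map
 * this generic event onto the host PMU. */
extern int try_open_generic_event(int generic_event);

static int generic_supported[NUM_GENERIC_EVENTS];

/* Run once at library init: remember which generic events the kernel can
 * map, and hide the unsupported ones from the user. */
void pfm_probe_generic_events(void)
{
        int i;

        for (i = 0; i < NUM_GENERIC_EVENTS; i++) {
                int fd = try_open_generic_event(i);
                generic_supported[i] = (fd >= 0);
                if (fd >= 0)
                        close(fd);
        }
}

The /sys alternative would avoid even that cost, since the library could
read the supported list without creating any counters.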


> > It is solved if, in a large cluster, PAPI is configured to always use
> > the hardcoded event table, but at the price of losing the flexibility
> > of the external file.
>
> Right.  But if you cache into a file the event data produced by parsing
> the XML file, it would essentially be a C array of event data, nearly
> identical to the hard-coded arrays that we have now.  You only have to
> parse the XML file once (until someone changes the XML file).  You would
> have to read in or mmap one additional file at start-up, but the file
> would be small and require no processing.
>

Yes. That would avoid the burst problem, but only if the generated file
lives on a local filesystem. In general, such a file would be smaller
than the original XML file.
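
Since the cache would be a flat array, reading it back is trivial. Here is
a minimal sketch of the mmap side; the cached_event_t layout is invented
for illustration:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Invented layout: whatever the XML parser produces, dumped verbatim. */
typedef struct {
        char         name[64];
        unsigned int code;
} cached_event_t;

/* Map the pre-parsed event table; returns NULL if the cache is missing,
 * in which case the caller falls back to parsing the XML file. */
const cached_event_t *load_event_cache(const char *path, size_t *nevents)
{
        struct stat st;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
                return NULL;
        if (fstat(fd, &st) < 0) {
                close(fd);
                return NULL;
        }
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);                      /* the mapping survives the close */
        if (p == MAP_FAILED)
                return NULL;
        *nevents = st.st_size / sizeof(cached_event_t);
        return p;                       /* usable directly, no parsing */
}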