Hi Stephane and all,
I'm happy to report that I have a snapshot of PAPI (papi-3-2-0 branch)
running on top of perfmon2 and libpfm from sourceforge. I'm currently
testing it on my laptop, a PIII Coppermine. Thanks to Stephane for all
his hard work in getting this far. So far, just the simple cases are
passing, but it's a step toward getting perfmon/libpfm widely accepted
by the tools community.
Now, a few things. I may have my head screwed on wrong with regard to
one or more of the following points.
1) Repeatability
Ok, so when I build with debugging (-O2 on gcc 4.1 optimizes away the
no-op loop), I get this from self, which does a start/stop/read. Three
runs, reasonably repeatable:
PMD0 7003294746 CPU_CLK_UNHALTED
PMD1 9000003363 INST_RETIRED
PMD0 7004045112 CPU_CLK_UNHALTED
PMD1 9000003709 INST_RETIRED
PMD0 7003623967 CPU_CLK_UNHALTED
PMD1 9000003477 INST_RETIRED
When I use the PAPI test zero, which does start/read/stop, I'm seeing
much higher variation in the counts. Note the test is different from the
no-op loop: it runs 20000000 iterations of c += a*b.
PAPI_FP_INS : 39490396
PAPI_TOT_CYC : 256809659
PAPI_FP_INS : 39663314
PAPI_TOT_CYC : 257938240
PAPI_FP_INS : 38598545
PAPI_TOT_CYC : 250904862
PAPI_FP_INS : 38988574
PAPI_TOT_CYC : 253545113
The difference in this test, aside from the actual compute loop, is that
PAPI does a start/read/stop instead of start/stop/read.
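In pfm terms the two orderings look like this (just a sketch; ctx_fd, pd
and n stand for a loaded context and its PMD array, as in the attached
test case, and do_work() is a stand-in for the measured loop):

/* self-style: start/stop/read, counters are frozen before the read */
pfm_start(ctx_fd, NULL);
do_work();
pfm_stop(ctx_fd);
pfm_read_pmds(ctx_fd, pd, n);

/* PAPI-style: start/read/stop, read while counters are still running */
pfm_start(ctx_fd, NULL);
do_work();
pfm_read_pmds(ctx_fd, pd, n);
pfm_stop(ctx_fd);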
These numbers certainly appear to vary too much, especially when
compared with the perfctr implementation. I'm not ready (at all) to
blame PFM or PERFMON for this yet... just raising eyebrows here. I see
the same behavior on other tests that repeatedly do a start/read/stop.
Here's the output from first, which does start/read/stop twice in a row:
Test type : 1 2
PAPI_FP_INS: 37067718 39519150
PAPI_TOT_CYC: 241024466 260401182
-------------------------------------------
The first run's counts are about 93% of the second's.
Could this be a CONFIG_PREEMPT issue? I'm thinking that somehow the
counters are still running when they shouldn't be.
Anyway, I'm using kernel 2.6.17.1 with the stock perfmon patch.
In the course of writing this, I decided to run two copies of the test
simultaneously. Lo and behold:
PAPI_FP_INS: 143669636 39732686
PAPI_TOT_CYC: 939423588 258242486
That's about a 360% difference between the two copies.
Ok, so something must be going on here. PAPI always sets the counters to
zero on start via pfm_write_pmds, so that can't be it. I'll see if I can
get a pfm test case to do this also.
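For reference, here's roughly what PAPI does on each start (a sketch
built from the same calls as the attached code; npmds is the number of
counting PMDs):

for (i = 0; i < npmds; i++)
        pd[i].reg_value = 0;    /* zero the counts before each start */
if (pfm_write_pmds(ctx_fd, pd, npmds))
        fatal_error("pfm_write_pmds error errno %d\n", errno);
if (pfm_start(ctx_fd, NULL))
        fatal_error("pfm_start error errno %d\n", errno);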
2) Of secondary importance at this point:
I noticed that user-mode mmap'd counter reads are now gone. Are they
coming back? I'm a big advocate of this...
3) Even less important: oh please, when do I get a virtualized cycle
counter?
Many regards,
Phil
P.S. Other differences: I load the context immediately after creation,
and then write the PMCs and PMDs and call start.
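That is, roughly (error checks omitted; pc/pd/npmcs/npmds as in the
attached code):

pfm_create_context(&ctx, NULL, 0);
load_args.load_pid = getpid();
pfm_load_context(ctx.ctx_fd, &load_args);  /* load right after create */
pfm_write_pmcs(ctx.ctx_fd, pc, npmcs);
pfm_write_pmds(ctx.ctx_fd, pd, npmds);
pfm_start(ctx.ctx_fd, NULL);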
P.P.S. I'm attaching a pfm test case which, when four copies are running
simultaneously, seems to show the variance.
[EMAIL PROTECTED] libpfm-3.x]$ [perfsel0=0x510079 emask=0x79 umask=0x0
os=0 usr=1 en=1 int=1 inv=0 edge=0 cnt_mask=0] CPU_CLK_UNHALTED
[perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] INST_RETIRED
[perfsel0=0x510079 emask=0x79 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] CPU_CLK_UNHALTED
[perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] INST_RETIRED
[perfsel0=0x510079 emask=0x79 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] CPU_CLK_UNHALTED
[perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] INST_RETIRED
[perfsel0=0x510079 emask=0x79 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] CPU_CLK_UNHALTED
[perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
edge=0 cnt_mask=0] INST_RETIRED
PMD0 70001343 CPU_CLK_UNHALTED
PMD1 89938804 INST_RETIRED
PMD0 66164489 CPU_CLK_UNHALTED
PMD1 84920320 INST_RETIRED
PMD0 66500635 CPU_CLK_UNHALTED
PMD1 85460041 INST_RETIRED
PMD0 70031244 CPU_CLK_UNHALTED
PMD1 90000092 INST_RETIRED
PMD0 65636983 CPU_CLK_UNHALTED
PMD1 84354349 INST_RETIRED
PMD0 101211878 CPU_CLK_UNHALTED
PMD1 130070012 INST_RETIRED
PMD0 111426597 CPU_CLK_UNHALTED
PMD1 143195770 INST_RETIRED
PMD0 125678282 CPU_CLK_UNHALTED
PMD1 161509204 INST_RETIRED
/*
* self.c - example of a simple self monitoring task
*
* Copyright (c) 2002-2006 Hewlett-Packard Development Company, L.P.
* Contributed by Stephane Eranian <[EMAIL PROTECTED]>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
* of the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
* This file is part of libpfm, a performance monitoring support library for
* applications on Linux.
*/
#include <sys/types.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <errno.h>
#include <unistd.h>
#include <string.h>
#include <signal.h>
#include <perfmon/pfmlib.h>
#include <perfmon/perfmon.h>
#include "detect_pmcs.h"
#define NUM_PMCS PFMLIB_MAX_PMCS
#define NUM_PMDS PFMLIB_MAX_PMDS
#define MAX_EVT_NAME_LEN 128
/*
* our test code (function cannot be made static otherwise it is optimized away)
*/
uint64_t
noploop(uint64_t loop)
{
        while (loop--) {}
        return loop;
}
static void fatal_error(char *fmt,...) __attribute__((noreturn));

static void
fatal_error(char *fmt, ...)
{
        va_list ap;

        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        exit(1);
}
int
main(int argc, char **argv)
{
        char **p;
        unsigned int i, j;
        int ret, ctx_fd;
        pfmlib_input_param_t inp;
        pfmlib_output_param_t outp;
        pfarg_pmd_t pd[NUM_PMDS];
        pfarg_pmc_t pc[NUM_PMCS];
        pfarg_ctx_t ctx;
        pfarg_load_t load_args;
        pfmlib_options_t pfmlib_options;
        unsigned int num_counters;
        char name[MAX_EVT_NAME_LEN];

        /*
         * Initialize pfm library (required before we can use it)
         */
        ret = pfm_initialize();
        if (ret != PFMLIB_SUCCESS)
                fatal_error("Cannot initialize library: %s\n", pfm_strerror(ret));

        pfm_get_num_counters(&num_counters);
        /*
         * pass options to library (optional)
         */
        memset(&pfmlib_options, 0, sizeof(pfmlib_options));
        pfmlib_options.pfm_debug = 0;   /* set to 1 for debug */
        pfmlib_options.pfm_verbose = 1; /* verbose on: dumps the perfsel encodings shown above */
        pfm_set_options(&pfmlib_options);
        memset(pd, 0, sizeof(pd));
        memset(pc, 0, sizeof(pc));
        memset(&ctx, 0, sizeof(ctx));
        memset(&load_args, 0, sizeof(load_args));

        /*
         * prepare parameters to library.
         */
        memset(&inp, 0, sizeof(inp));
        memset(&outp, 0, sizeof(outp));

        /*
         * be nice to user!
         */
        if (argc > 1) {
                p = argv+1;
                for (i=0; *p; i++, p++) {
                        if (pfm_find_event(*p, &inp.pfp_events[i].event) != PFMLIB_SUCCESS)
                                fatal_error("Cannot find %s event\n", *p);
                }
        } else {
                if (pfm_get_cycle_event(&inp.pfp_events[0].event) != PFMLIB_SUCCESS)
                        fatal_error("cannot find cycle event\n");
                if (pfm_get_inst_retired_event(&inp.pfp_events[1].event) != PFMLIB_SUCCESS)
                        fatal_error("cannot find inst retired event\n");
                i = 2;
        }
        /*
         * set the default privilege mode for all counters:
         * PFM_PLM3 : user level only
         */
        inp.pfp_dfl_plm = PFM_PLM3;

        if (i > num_counters) {
                i = num_counters;
                printf("too many events provided (max=%d events), using first %d event(s)\n", num_counters, i);
        }
        /*
         * how many counters we use
         */
        inp.pfp_event_count = i;
        /*
         * now create a new context, per process context.
         * This just creates a new context with some initial state, it is not
         * active nor attached to any process.
         */
        if (pfm_create_context(&ctx, NULL, 0) == -1) {
                if (errno == ENOSYS)
                        fatal_error("Your kernel does not have performance monitoring support!\n");
                fatal_error("Can't create PFM context %s\n", strerror(errno));
        }
        /*
         * extract the unique identifier for our context, a regular file descriptor
         */
        ctx_fd = ctx.ctx_fd;
        /*
         * build the pfp_unavail_pmcs bitmask by looking
         * at what perfmon has available. It is not always
         * the case that all PMU registers are actually available
         * to applications. For instance, on IA-32 platforms, some
         * registers may be reserved for the NMI watchdog timer.
         *
         * With this bitmap, the library knows which registers NOT to
         * use. Of course, it is possible that no valid assignment may
         * be possible if certain PMU registers are not available.
         */
        detect_unavail_pmcs(ctx_fd, &inp.pfp_unavail_pmcs);

        /*
         * let the library figure out the values for the PMCS
         */
        if ((ret = pfm_dispatch_events(&inp, NULL, &outp, NULL)) != PFMLIB_SUCCESS)
                fatal_error("cannot configure events: %s\n", pfm_strerror(ret));
        /*
         * Now prepare the arguments to initialize the PMDs and PMCs.
         * We must use pfp_pmc_count to determine the number of PMCs to initialize.
         * We must use pfp_event_count to determine the number of PMDs to initialize.
         * Some events cause extra PMCs to be used, so pfp_pmc_count may be >= pfp_event_count.
         *
         * This step is new compared to libpfm-2.x. It is necessary because the library no
         * longer knows about the kernel data structures.
         */
        for (i=0; i < outp.pfp_pmc_count; i++) {
                pc[i].reg_num   = outp.pfp_pmcs[i].reg_num;
                pc[i].reg_value = outp.pfp_pmcs[i].reg_value;
        }
        /*
         * figure out pmd mapping from output pmc
         */
        for (i=0, j=0; i < inp.pfp_event_count; i++) {
                pd[i].reg_num = outp.pfp_pmcs[j].reg_pmd_num;
                /* advance j past all PMCs used by event i, so it lands on the next event's first PMC */
                for (; j < outp.pfp_pmc_count; j++)
                        if (outp.pfp_pmcs[j].reg_evt_idx != i)
                                break;
        }
        /*
         * Now program the registers.
         *
         * We don't use the same variable to indicate the number of elements passed to
         * the kernel because, as we said earlier, pc may contain more elements than
         * the number of events (pmd) we specified, i.e., contains more than counting
         * monitors.
         */

        /*
         * now we load (i.e., attach) the context to ourself
         */
        load_args.load_pid = getpid();

        if (pfm_load_context(ctx_fd, &load_args))
                fatal_error("pfm_load_context error errno %d\n", errno);

        if (pfm_write_pmcs(ctx_fd, pc, outp.pfp_pmc_count))
                fatal_error("pfm_write_pmcs error errno %d\n", errno);
        /*
         * To be read, each PMD must be either written or declared
         * as being part of a sample (reg_smpl_pmds)
         */
        if (pfm_write_pmds(ctx_fd, pd, inp.pfp_event_count))
                fatal_error("pfm_write_pmds error errno %d\n", errno);
        /*
         * Let's roll now
         */
        if (pfm_start(ctx_fd, NULL))
                fatal_error("pfm_start error errno %d\n", errno);

        noploop(10000000ULL);

        /*
         * now read the results
         */
        if (pfm_read_pmds(ctx_fd, pd, inp.pfp_event_count))
                fatal_error("pfm_read_pmds error errno %d\n", errno);

        if (pfm_stop(ctx_fd))
                fatal_error("pfm_stop error errno %d\n", errno);

        /*
         * print the results
         */
        for (i=0; i < inp.pfp_event_count; i++) {
                pfm_get_event_name(inp.pfp_events[i].event, name, MAX_EVT_NAME_LEN);
                printf("PMD%u %20"PRIu64" %s\n",
                        pd[i].reg_num,
                        pd[i].reg_value,
                        name);
                pd[i].reg_value = 0ULL;
        }

        if (pfm_write_pmcs(ctx_fd, pc, outp.pfp_pmc_count))
                fatal_error("pfm_write_pmcs error errno %d\n", errno);
        /*
         * To be read, each PMD must be either written or declared
         * as being part of a sample (reg_smpl_pmds)
         */
        if (pfm_write_pmds(ctx_fd, pd, inp.pfp_event_count))
                fatal_error("pfm_write_pmds error errno %d\n", errno);
        /*
         * second round: start again after rewriting the (zeroed) registers
         */
        if (pfm_start(ctx_fd, NULL))
                fatal_error("pfm_start error errno %d\n", errno);

        noploop(10000000ULL);
        /*
         * now read the results
         */
        if (pfm_read_pmds(ctx_fd, pd, inp.pfp_event_count))
                fatal_error("pfm_read_pmds error errno %d\n", errno);

        if (pfm_stop(ctx_fd))
                fatal_error("pfm_stop error errno %d\n", errno);

        /*
         * print the results
         */
        for (i=0; i < inp.pfp_event_count; i++) {
                pfm_get_event_name(inp.pfp_events[i].event, name, MAX_EVT_NAME_LEN);
                printf("PMD%u %20"PRIu64" %s\n",
                        pd[i].reg_num,
                        pd[i].reg_value,
                        name);
                pd[i].reg_value = 0ULL;
        }

        close(ctx_fd);
        return 0;
}
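For anyone trying this at home, a build line along these lines should
work from the libpfm-3.x examples directory (assuming the stock layout
with detect_pmcs.c; adjust paths to your tree):

gcc -O0 -g -I../include -o self self.c detect_pmcs.c -L../lib -lpfm

then run four copies at once, e.g.:

./self & ./self & ./self & ./self & wait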