Phil,
On Tue, Jul 25, 2006 at 05:32:09PM +0200, Philip Mucci wrote:
>
> I'm happy to report that I have a snapshot of PAPI (papi-3-2-0 branch)
> running on top of perfmon2 and libpfm from SourceForge. I'm currently
> testing it on my laptop, a PIII Coppermine. Thanks to Stephane for all
> his hard work in getting this far. So far, just the simple cases are
> passing, but it's a step toward getting perfmon/libpfm widely accepted
> by the tools community.
>
That is very good progress.
> 1) Repeatability
>
> Ok, so when I build with debugging (-O2 optimizes away the no-op loop
> on gcc 4.1), I get the following from self (3 runs, reasonably
> repeatable). The self test does a start/stop/read.
> PMD0 7003294746 CPU_CLK_UNHALTED
> PMD1 9000003363 INST_RETIRED
> PMD0 7004045112 CPU_CLK_UNHALTED
> PMD1 9000003709 INST_RETIRED
> PMD0 7003623967 CPU_CLK_UNHALTED
> PMD1 9000003477 INST_RETIRED
>
> When I use the PAPI test zero, which does start/read/stop, I'm seeing
> much higher variation in the counts. Note this test is different from
> the no-op loop: it does 20000000 iterations of c += a*b.
>
> PAPI_FP_INS : 39490396
> PAPI_TOT_CYC : 256809659
> PAPI_FP_INS : 39663314
> PAPI_TOT_CYC : 257938240
> PAPI_FP_INS : 38598545
> PAPI_TOT_CYC : 250904862
> PAPI_FP_INS : 38988574
> PAPI_TOT_CYC : 253545113
>
> The difference in this test, aside from the actual compute loop, is that
> PAPI does a start/read/stop instead of start/stop/read.
>
> These numbers certainly appear to vary too much, especially when
> compared with the perfctr implementation. I'm not ready (at all) to
> blame PFM or PERFMON for this yet... just raising eyebrows here. I see
> the same behavior on other tests that repeatedly do a start/read/stop.
>
What happens when you:
- modify self to do start/read/stop
- modify your PAPI test to do start/stop/read
That would tell us whether the call ordering itself matters; see the
sketch below.
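
Something like the following is what I have in mind. This is only a
minimal self-monitoring sketch, assuming the perfmon2 syscall names from
the current patch set (pfm_create_context, pfm_write_pmcs,
pfm_write_pmds, pfm_load_context, pfm_start, pfm_stop, pfm_read_pmds);
the PMC programming and all error handling are omitted, and the loop is
just a placeholder:

#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <perfmon/perfmon.h>

static volatile double a = 1.0, b = 2.0, c;

static void compute_loop(void)
{
    long i;
    for (i = 0; i < 20000000L; i++)
        c += a * b;
}

int main(void)
{
    pfarg_ctx_t ctx;
    pfarg_load_t load;
    pfarg_pmd_t pd;
    int fd;

    memset(&ctx, 0, sizeof(ctx));
    memset(&load, 0, sizeof(load));
    memset(&pd, 0, sizeof(pd));

    /* create the context; depending on the snapshot the fd is either
     * returned directly or placed in ctx.ctx_fd */
    fd = pfm_create_context(&ctx, NULL, 0);

    /* pfm_write_pmcs()/pfm_write_pmds() to program the event go here */

    load.load_pid = getpid();       /* attach to ourself */
    pfm_load_context(fd, &load);

    pd.reg_num = 0;                 /* PMD0 */

    /* ordering 1: start/stop/read, as in self */
    pfm_start(fd, NULL);
    compute_loop();
    pfm_stop(fd);
    pfm_read_pmds(fd, &pd, 1);
    printf("stop-then-read: %llu\n", (unsigned long long)pd.reg_value);

    /* ordering 2: start/read/stop, as in PAPI */
    pd.reg_value = 0;
    pfm_write_pmds(fd, &pd, 1);     /* reset PMD0 */
    pfm_start(fd, NULL);
    compute_loop();
    pfm_read_pmds(fd, &pd, 1);      /* read while still counting */
    pfm_stop(fd);
    printf("read-then-stop: %llu\n", (unsigned long long)pd.reg_value);

    return 0;
}

If both orderings show comparable variance on the same loop, the call
ordering is not the culprit and we should look at CONFIG_PREEMPT or
context-switch accounting instead.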
> Could this be a CONFIG_PREEMPT issue? I'm thinking that somehow the
> counters are still running.
>
I have not tested with CONFIG_PREEMPT. Try without this option.
> 2) Of secondary importance at this point,
>
> I noticed that user-mode mmaped counter reads are now gone. Are they
> coming back? I'm a big advocate of this...
>
Wrong, they are still there, but you need to ask for the mapped view
explicitly when creating the context. See self_view.c for an example; a
rough sketch of the idea follows.
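
The gist, from memory of self_view.c (the flag name and the layout of
the mapped page below are assumptions; check the example for the exact
spelling):

#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <perfmon/perfmon.h>

/*
 * Assumed sketch only, modeled from memory on self_view.c: request a
 * user-mappable view of the counters at context creation, then mmap()
 * the context fd. PFM_FL_MAP_SETS and the mapped-page layout are
 * assumptions, not verified against the current snapshot.
 */
static void *map_counter_view(int *fd_out)
{
    pfarg_ctx_t ctx;
    int fd;

    memset(&ctx, 0, sizeof(ctx));
    ctx.ctx_flags = PFM_FL_MAP_SETS;  /* assumed flag name */
    fd = pfm_create_context(&ctx, NULL, 0);

    *fd_out = fd;
    /* counter values can then be read from the mapped page without a
     * read() syscall */
    return mmap(NULL, getpagesize(), PROT_READ, MAP_PRIVATE, fd, 0);
}

The point of the mapped view is to avoid the syscall overhead on every
read, which matters for a high-frequency read path like PAPI's.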
> 3) Even less important, oh please, when do I get a virtualized cycle
> counter?
>
Not yet implemented, but I believe there is enough infrastructure in
place to do it.
> P.S. Other differences: I load the context immediately after creation,
> and then write the PMCs, write the PMDs, and call start.
>
Is that just a modified self?
> P.P.S. I'm attaching a pfm test case which, when four copies are run
> simultaneously, seems to show the variance.
> [EMAIL PROTECTED] libpfm-3.x]$ [perfsel0=0x510079 emask=0x79 umask=0x0
> os=0 usr=1 en=1 int=1 inv=0 edge=0 cnt_mask=0] CPU_CLK_UNHALTED
> [perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] INST_RETIRED
> [perfsel0=0x510079 emask=0x79 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] CPU_CLK_UNHALTED
> [perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] INST_RETIRED
> [perfsel0=0x510079 emask=0x79 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] CPU_CLK_UNHALTED
> [perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] INST_RETIRED
> [perfsel0=0x510079 emask=0x79 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] CPU_CLK_UNHALTED
> [perfsel1=0x5100c0 emask=0xc0 umask=0x0 os=0 usr=1 en=1 int=1 inv=0
> edge=0 cnt_mask=0] INST_RETIRED
> PMD0 70001343 CPU_CLK_UNHALTED
> PMD1 89938804 INST_RETIRED
> PMD0 66164489 CPU_CLK_UNHALTED
> PMD1 84920320 INST_RETIRED
> PMD0 66500635 CPU_CLK_UNHALTED
> PMD1 85460041 INST_RETIRED
> PMD0 70031244 CPU_CLK_UNHALTED
> PMD1 90000092 INST_RETIRED
> PMD0 65636983 CPU_CLK_UNHALTED
> PMD1 84354349 INST_RETIRED
> PMD0 101211878 CPU_CLK_UNHALTED
> PMD1 130070012 INST_RETIRED
> PMD0 111426597 CPU_CLK_UNHALTED
> PMD1 143195770 INST_RETIRED
> PMD0 125678282 CPU_CLK_UNHALTED
> PMD1 161509204 INST_RETIRED
--
-Stephane
_______________________________________________
perfmon mailing list
[email protected]
http://www.hpl.hp.com/hosted/linux/mail-archives/perfmon/