There are two important classes of data load misses in L2 cache on
your Woodcrest chips: those due to "demand" load requests, i.e. actual
load instructions; and those due to requests from hardware prefetchers.
There are separate HW performance events defined for these two cases.
I use an in-house collector that may use different aliases for these
events than those available to you, so I'll give both the alias name
and the actual hex encoding of each event:
   L2_LD.SELF.DEMAND.I_STATE ( 0x29:u0x41 ) - the no. of demand data load
               requests to L2 cache from this core ("SELF") for which the
               response was "Invalid state", i.e. the no. of misses

   L2_LD.SELF.PREFETCH.I_STATE ( 0x29:u0x51 ) - the no. of data prefetch
               requests from HW prefetcher(s) to L2 cache from this core
               for which the response was "Invalid state", i.e. the no. of
               misses
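
If you want to program these encodings directly, here is a minimal sketch.
Note it uses the Linux perf_event_open syscall, which is a newer interface
than the perfmon2 tools discussed on this list, so take it as an
illustration of the encodings rather than a drop-in recipe: on these chips
the raw config is (umask << 8) | event_select, so 0x29:u0x41 becomes
0x4129 and 0x29:u0x51 becomes 0x5129. The helper name open_raw_counter
below is my own, not part of any API. Summing the two counts gives the
total L2 data load misses.

/*
 * Count the two L2 miss events on a Core 2 (Woodcrest) machine via the
 * Linux perf_event_open syscall (requires kernel 2.6.31 or later).
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

static int open_raw_counter(uint64_t config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_RAW;
    attr.size = sizeof(attr);
    attr.config = config;        /* (umask << 8) | event select */
    attr.disabled = 1;           /* start stopped; enable around the workload */
    attr.exclude_kernel = 1;     /* count user-space misses only */
    /* pid 0 = this process, cpu -1 = any cpu, no group, no flags */
    return (int) syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    /* L2_LD.SELF.DEMAND.I_STATE   : event 0x29, umask 0x41 -> 0x4129 */
    int fd_demand = open_raw_counter(0x4129);
    /* L2_LD.SELF.PREFETCH.I_STATE : event 0x29, umask 0x51 -> 0x5129 */
    int fd_prefetch = open_raw_counter(0x5129);
    if (fd_demand < 0 || fd_prefetch < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd_demand, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(fd_prefetch, PERF_EVENT_IOC_ENABLE, 0);

    /* ... the workload you want to measure goes here ... */

    ioctl(fd_demand, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(fd_prefetch, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t demand = 0, prefetch = 0;
    if (read(fd_demand, &demand, sizeof demand) != sizeof demand ||
        read(fd_prefetch, &prefetch, sizeof prefetch) != sizeof prefetch) {
        perror("read");
        return 1;
    }
    printf("demand L2 load misses:   %llu\n", (unsigned long long) demand);
    printf("prefetch L2 load misses: %llu\n", (unsigned long long) prefetch);
    printf("total L2 data load misses: %llu\n",
           (unsigned long long) (demand + prefetch));
    return 0;
}

The counters are opened per process with kernel events excluded; widen
the pid/cpu arguments and the exclude flags as your measurement requires.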

Good luck.

Hugh Caffey

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Stéphane Zuckerman
Sent: Wednesday, May 09, 2007 2:35 AM
To: [EMAIL PROTECTED]
Subject: [perfmon] How to measure all of the L2 DATA MISSES

Hello,

We're trying to measure L2D misses on a Xeon Woodcrest (a dual-CPU,
dual-core machine).

We've tried different hardware counters, namely:

- LAST_LEVEL_CACHE_MISSES, which, according to the documentation, is
equivalent to L2_RQSTS:I_STATE (invalid cache lines) but does not count
hardware prefetches;

- L2_RQSTS:MESI, various combinations of the M/E/S/I options, and
PREFETCH combined with the SELF mask.

We're trying to measure the L2 data misses accurately, but we're getting
inconsistent results. How should we proceed?

Thanks,

-- 
Stéphane Zuckerman
_______________________________________________
perfmon mailing list
[email protected]
http://www.hpl.hp.com/hosted/linux/mail-archives/perfmon/
