Vince,

Thanks for the perf event names. I tried a few things and I am not sure what to make of the results. First I tried the command you provided and got this:
perf stat -a -e \{"uncore_cbox_0/event=0x35,umask=0xa/","uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/"\} /bin/ls

--- ls output removed ---

 Performance counter stats for '/bin/ls':

             5,625 uncore_cbox_0/event=0x35,umask=0xa/                   [26.27%]
   <not supported> uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/

       0.002038929 seconds time elapsed

So this behaved like PAPI/libpfm4: the first event returned a count and the second event got an error. Just for fun, I used the same events in the opposite order:

perf stat -a -e \{"uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/","uncore_cbox_0/event=0x35,umask=0xa/"\} /bin/ls

--- ls output removed ---

 Performance counter stats for '/bin/ls':

     <not counted> uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/
   <not supported> uncore_cbox_0/event=0x35,umask=0xa/

       0.002003219 seconds time elapsed

This caused both events to report an error, which seems to me like a kernel problem. I also tried using each event by itself, and both returned counts. With PAPI/libpfm4 I believe this test would return a count for the first event and an error on the second.

You implied that the "{}" characters may influence whether or how events are grouped, so I tried the command again, in the original order, without them and got this:

perf stat -a -e "uncore_cbox_0/event=0x35,umask=0xa/","uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/" /bin/ls

--- ls output removed ---

 Performance counter stats for '/bin/ls':

            57,288 uncore_cbox_0/event=0x35,umask=0xa/                   [18.05%]
           158,292 uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/   [ 3.07%]

       0.001963151 seconds time elapsed

This time both events give a count. I have never seen this result with PAPI/libpfm4, but I have also never tried these events with grouping enabled when calling the kernel. In PAPI we turned grouping off so that the kernel would allow us to use events from different uncore PMUs at the same time. I can try turning it back on and running these two events to see what happens.

If they work, maybe a better solution is a hybrid form of grouping. We could create a separate group for each uncore PMU and put all the events associated with a given PMU into that PMU's group. We would then call the kernel once for each group rather than once for each event, as we do now. Any idea whether the kernel will let us play the game this way? I considered this approach when I changed the uncore component to not use grouping, but decided to wait and see whether the extra complexity was worth it. If it turns out that these events work better when they are in the same group, then it may be worth doing. A rough sketch of the calls I have in mind is below.
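To make the idea concrete, here is an untested sketch of the perf_event_open() calls for one PMU. The PMU type number is read from sysfs, but the config encodings (event in the low byte, umask shifted left by 8) and the filter_nid position in config1 (bits 10-17, if I am reading the Sandy Bridge-EP format files right) are my assumptions; check them against /sys/bus/event_source/devices/uncore_cbox_0/format/ before trusting any numbers. For multiple uncore PMUs we would repeat this once per PMU, each PMU getting its own group leader:

/* group_per_pmu.c: sketch of one perf_event_open() group per uncore PMU.
 * Hypothetical example, not PAPI code; values marked ASSUMED are machine
 * specific and should really be read from sysfs on the target system.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* The dynamic PMU number lives in
 * /sys/bus/event_source/devices/<pmu>/type */
static int pmu_type(const char *pmu)
{
    char path[256];
    int type = -1;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/bus/event_source/devices/%s/type", pmu);
    f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &type) != 1) { perror(path); exit(1); }
    fclose(f);
    return type;
}

int main(void)
{
    struct perf_event_attr attr;
    uint64_t count;
    int leader, member;

    memset(&attr, 0, sizeof(attr));
    attr.type = pmu_type("uncore_cbox_0");
    attr.size = sizeof(attr);
    attr.config = 0x0a35;          /* ASSUMED: event=0x35, umask=0xa */
    attr.disabled = 1;             /* whole group starts disabled */

    /* group_fd == -1: create a new group with this event as leader */
    leader = perf_event_open(&attr, -1, 0, -1, 0);
    if (leader < 0) { perror("leader"); exit(1); }

    memset(&attr, 0, sizeof(attr));
    attr.type = pmu_type("uncore_cbox_0");
    attr.size = sizeof(attr);
    attr.config = 0x4a35;          /* ASSUMED: event=0x35, umask=0x4a */
    attr.config1 = 0x1ULL << 10;   /* ASSUMED: filter_nid=0x1 in config1:10-17 */

    /* group_fd == leader: add this event to the leader's group */
    member = perf_event_open(&attr, -1, 0, leader, 0);
    if (member < 0) { perror("member"); exit(1); }

    ioctl(leader, PERF_EVENT_IOC_ENABLE, 0);
    sleep(1);                      /* stand-in for the measured workload */
    ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);

    if (read(leader, &count, sizeof(count)) == sizeof(count))
        printf("MISS_ALL:     %llu\n", (unsigned long long)count);
    if (read(member, &count, sizeof(count)) == sizeof(count))
        printf("NID_MISS_ALL: %llu\n", (unsigned long long)count);

    close(leader);
    close(member);
    return 0;
}

Note that uncore events have to be opened system-wide (pid = -1 with an explicit cpu), so this needs root or a permissive perf_event_paranoid setting, and a real implementation would open each group on one cpu per package (see the PMU's cpumask file).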
So I guess I have more stuff to try.

Gary

> -----Original Message-----
> From: Vince Weaver [mailto:vincent.wea...@maine.edu]
> Sent: Monday, September 08, 2014 2:21 PM
> To: Gary Mohr
> Cc: Michel Brown; perfmon2-devel
> Subject: RE: [perfmon2] Error reporting when using invalid combination of
> umasks.
>
> On Thu, 4 Sep 2014, Gary Mohr wrote:
>
> > snbep_unc_cbo0::UNC_C_TOR_INSERTS:MISS_ALL:cpu=0
> > snbep_unc_cbo0::UNC_C_TOR_INSERTS:NID_MISS_ALL:nf=0x1:cpu=0
>
> ...
>
> > I will try using these events with perf. It is not my favorite tool but
> > if it will either show that these events should work together or help
> > convince others that something in the kernel is not working correctly,
> > it is worth the effort.
>
> Sorry for the delay responding, it was the first week of classes and
> things were a bit crazy around here.
>
> If you did want to try these things under perf, the perf names for the
> first two events you list are
>
>   uncore_cbox_0/event=0x35,umask=0xa/
>   uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/
>
> so you can try something like
>
>   perf stat -a -e \{"uncore_cbox_0/event=0x35,umask=0xa/","uncore_cbox_0/event=0x35,umask=0x4a,filter_nid=0x1/"\} /bin/ls
>
> although even with the "{}" syntax (which I thought would force GROUPing
> of the events), it looks like perf doesn't. And the perf code is so
> horrible I haven't forced myself to go looking into why yet.
>
> Vince