And the case can be reduced to:

perf stat -e \{r1000248,r148\} -- sleep 1

 Performance counter stats for 'sleep 1':

              7008      r1000248
   <not supported>      r148

       1.001885804 seconds time elapsed
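
The same group can also be opened directly through perf_event_open(2),
bypassing the perf tool. Below is a minimal sketch (the raw configs are
the two Haswell events from the command above; everything else is left
at defaults, so treat it as an illustration rather than a finished test
program). If the kernel accepts the group but can never schedule it on
the PMU, the read should report time_running == 0 while time_enabled
keeps growing:

/* group_test.c -- open {r1000248,r148} as one group and check whether
 * the kernel ever schedules it on the PMU. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        unsigned long long buf[3];      /* value, time_enabled, time_running */
        int leader, sibling;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_RAW;
        attr.size = sizeof(attr);
        attr.config = 0x1000248;        /* l1d_pend_miss.fb_full (cmask=1) */
        attr.disabled = 1;              /* start the group disabled */
        attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                           PERF_FORMAT_TOTAL_TIME_RUNNING;

        leader = perf_event_open(&attr, 0, -1, -1, 0);
        if (leader < 0) {
                perror("leader");
                return 1;
        }

        attr.config = 0x148;            /* l1d_pend_miss.pending, counter 2 only */
        attr.disabled = 0;              /* siblings follow the leader's state */
        sibling = perf_event_open(&attr, 0, -1, leader, 0);
        if (sibling < 0) {
                perror("sibling");
                return 1;
        }

        ioctl(leader, PERF_EVENT_IOC_ENABLE, 0);
        sleep(1);
        ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);

        /* no PERF_FORMAT_GROUP, so this reads only the sibling's count
         * plus the two timestamps requested in read_format */
        if (read(sibling, buf, sizeof(buf)) != sizeof(buf)) {
                perror("read");
                return 1;
        }
        printf("r148: count=%llu enabled=%llu running=%llu\n",
               buf[0], buf[1], buf[2]);
        return 0;
}

If running stays at 0 there while the same events count fine when opened
individually, the group as a whole is what never fits on the PMU.
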

On Thu, Nov 12, 2015 at 9:58 AM, Yuanfang Chen <cyf...@gmail.com> wrote:
> Sorry, r1000248 on Haswell should be:
>
> L1D_PEND_MISS.FB_FULL     0,1,2,3 (SMT on)     0,1,2,3,4,5,6,7 (SMT off)
>
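> (For reference, those raw codes decode as follows, assuming the usual
> layout -- event in bits 0-7, umask in bits 8-15, cmask in bits 24-31:
>
>   r148     = event 0x48, umask 0x01          -> l1d_pend_miss.pending
>   r1000248 = event 0x48, umask 0x02, cmask=1 -> l1d_pend_miss.fb_full
>   r8d1     = event 0xd1, umask 0x08          -> mem_load_uops_retired.l1_miss
>   r40d1    = event 0xd1, umask 0x40          -> mem_load_uops_retired.hit_lfb)
>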
> On Wed, Nov 11, 2015 at 5:05 PM, Yuanfang Chen <cyf...@gmail.com> wrote:
>> Hello
>>
>> I am using a Haswell box (E3-1231 v3) with HT enabled.
>>
>> perf stat -e \{cycles,r148,r1000248,r8d1,r40d1\} -- sleep 1
>>
>> With the Ubuntu version of perf:
>>
>> Performance counter stats for 'sleep 1':
>>
>>             756066      cycles
>>               5740      r1000248
>>               5516      r8d1
>>               9064      r40d1
>>                  0        r148
>>
>>        1.001770249 seconds time elapsed
>>
>> With a relatively new tip of perf/core:
>>
>>  Performance counter stats for 'sleep 1':
>>
>>             729403      cycles
>>               7250      r1000248
>>               5628      r8d1
>>               9273      r40d1
>>    <not supported>      r148
>>
>>        1.001674174 seconds time elapsed
>>
>> from https://download.01.org/perfmon/HSW/:
>>
>>   Event                          Code     Counters (SMT on)   Counters (SMT off)
>>   cpu_clk_unhalted.thread        cycles   Fixed counter 2     Fixed counter 2
>>   ld_blocks.no_sr                r803     0,1,2,3             0,1,2,3,4,5,6,7
>>   mem_load_uops_retired.l1_miss  r8d1     0,1,2,3             0,1,2,3
>>   mem_load_uops_retired.hit_lfb  r40d1    0,1,2,3             0,1,2,3
>>   l1d_pend_miss.pending          r148     2                   2
>>
>> It seems these five events can't be counted at the same time, even
>> though, given the counter constraints above, the hardware should be
>> able to accommodate all of them at once. Is this a bug or a
>> limitation I should be aware of? Thank you so much.
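>>
>> (As a cross-check, dropping the braces, e.g.
>>
>>   perf stat -e cycles,r148,r1000248,r8d1,r40d1 -- sleep 1
>>
>> lets the kernel schedule the events independently and multiplex them
>> as needed, which should at least show whether r148 counts on its own
>> outside the group.)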
>>
>> Yuanfang