On Fri, Sep 01, 2017 at 07:26:18PM +0200, Jiri Olsa wrote:
> On Thu, Aug 31, 2017 at 12:40:25PM -0700, Andi Kleen wrote:
> 
> SNIP
> 
> > 
> >    % perf stat -M Summary --metric-only -a sleep 1
> >     
> >      Performance counter stats for 'system wide':
> >     
> >     Instructions          CLKS                  CPU_Utilization      GFLOPs               SMT_2T_Utilization   Kernel_Utilization
> >     317614222.0           1392930775.0          0.0                  0.0                  0.2                  0.1
> >     
> >            1.001497549 seconds time elapsed
> >     
> >    % perf stat -M GFLOPs flops
> >     
> >      Performance counter stats for 'flops':
> >     
> >          3,999,541,471      fp_comp_ops_exe.sse_scalar_single #      1.2 GFLOPs                   (66.65%)
> >                     14      fp_comp_ops_exe.sse_scalar_double                                     (66.65%)
> >                      0      fp_comp_ops_exe.sse_packed_double                                     (66.67%)
> >                      0      fp_comp_ops_exe.sse_packed_single                                     (66.70%)
> >                      0      simd_fp_256.packed_double                                             (66.70%)
> >                      0      simd_fp_256.packed_single                                             (66.67%)
> 
> it looks like some of the longer event names are crossing the
> output column boundaries we have:
> 
> [jolsa@krava perf]$ sudo ./perf stat -M SMT -I 1000
> #           time             counts unit events
>      1.000565706        408,879,985      inst_retired.any          #      0.7 
> CoreIPC                  (66.68%)
>      1.000565706      1,120,999,114      cpu_clk_unhalted.thread_any          
>                            (66.68%)
>      1.000565706        701,285,312      cycles                               
>                          (66.68%)
>      1.000565706      1,148,325,740      cpu_clk_unhalted.thread_any # 
> 574162870.0 CORE_CLKS             (66.67%)
>      1.000565706        711,565,247      cpu_clk_unhalted.thread              
>                          (66.66%)
>      1.000565706         24,057,590      
> cpu_clk_thread_unhalted.one_thread_active #      0.3 SMT_2T_Utilization       
> (66.67%)
>      1.000565706         65,753,475      cpu_clk_thread_unhalted.ref_xclk_any 
>                                     (66.67%)
> ^C     1.349436822         21,198,385      inst_retired.any          #      
> 0.1 CoreIPC                  (66.70%)
>      1.349436822        112,740,282      cpu_clk_unhalted.thread_any          
>                            (66.70%)
>      1.349436822         84,509,414      cycles                               
>                          (66.70%)
>      1.349436822        108,181,315      cpu_clk_unhalted.thread_any # 
> 54090657.5 CORE_CLKS              (66.62%)
>      1.349436822         79,700,353      cpu_clk_unhalted.thread              
>                          (66.61%)
>      1.349436822          3,911,698      
> cpu_clk_thread_unhalted.one_thread_active #      0.8 SMT_2T_Utilization       
> (66.69%)
>      1.349436822         14,739,671      cpu_clk_thread_unhalted.ref_xclk_any 
>                                     (66.69%)
> 
> 
> could you please check on that, and maybe shift the alignment to
> account for the longest event name?

other than this, the rest of the patchset looks ok to me

Acked-by: Jiri Olsa <[email protected]>

thanks,
jirka
