Sorry for the delay...

On Thu, Dec 17, 2015 at 10:12 PM, Tian, Kevin <kevin.t...@intel.com> wrote:

> > From: Boris Ostrovsky [mailto:boris.ostrov...@oracle.com]
> > Sent: Tuesday, December 08, 2015 3:14 AM
> >
> > On 11/30/2015 07:39 PM, Brendan Gregg wrote:
> > > This introduces a way to have a restricted VPMU, by specifying one of two
> > > predefined groups of PMCs to make available. For secure environments, this
> > > allows the VPMU to be used without needing to enable all PMCs.
> > >
> > > Signed-off-by: Brendan Gregg <bgr...@netflix.com>
> > > Reviewed-by: Boris Ostrovsky <boris.ostrov...@oracle.com>
> >
> > This needs to be reviewed also by Intel maintainers (copied). Plus x86
> > maintainers.
> >
> > -boris
> >
>
> [...]
>
> > > diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
> > > index 8d83a1a..a6c5545 100644
> > > --- a/xen/arch/x86/cpu/vpmu_intel.c
> > > +++ b/xen/arch/x86/cpu/vpmu_intel.c
> > > @@ -602,12 +602,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
> > >                    "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
> > >           return -EINVAL;
> > >       case MSR_IA32_PEBS_ENABLE:
> > > +        if ( vpmu_features & (XENPMU_FEATURE_IPC_ONLY |
> > > +             XENPMU_FEATURE_ARCH_ONLY) )
> > > +            return -EINVAL;
> > >           if ( msr_content & 1 )
> > >               gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
> > >                        "which is not supported.\n");
> > >           core2_vpmu_cxt->pebs_enable = msr_content;
> > >           return 0;
> > >       case MSR_IA32_DS_AREA:
> > > +        if ( (vpmu_features & (XENPMU_FEATURE_IPC_ONLY |
> > > +             XENPMU_FEATURE_ARCH_ONLY)) &&
> > > +             !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
> > > +            return -EINVAL;
>
> should the check be made just based on BTS?
>

Ah, yes. The BTS check was added after the new modes, but it should be
standalone. I don't think anything other than BTS uses DS_AREA.


> > >           if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
> > >           {
> > >               if ( !is_canonical_address(msr_content) )
> > > @@ -652,12 +659,55 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
> > >           tmp = msr - MSR_P6_EVNTSEL(0);
> > >           if ( tmp >= 0 && tmp < arch_pmc_cnt )
> > >           {
> > > +            bool_t blocked = 0;
> > > +            uint64_t umaskevent;
> > >               struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> > >                   vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> > >
> > >               if ( msr_content & ARCH_CTRL_MASK )
> > >                   return -EINVAL;
> > >
> > > +            /* PMC filters */
> > > +            umaskevent = msr_content & MSR_IA32_CMT_EVTSEL_UE_MASK;
> > > +            if ( vpmu_features & XENPMU_FEATURE_IPC_ONLY ||
> > > +                 vpmu_features & XENPMU_FEATURE_ARCH_ONLY )
> > > +            {
> > > +                blocked = 1;
> > > +                switch ( umaskevent )
> > > +                {
> > > +                /*
> > > +                 * See the Pre-Defined Architectural Performance Events table
> > > +                 * from the Intel 64 and IA-32 Architectures Software
> > > +                 * Developer's Manual, Volume 3B, System Programming Guide,
> > > +                 * Part 2.
> > > +                 */
> > > +                case 0x003c:       /* unhalted core cycles */
>
> Better to copy the same wording from the SDM, e.g. "UnHalted Core Cycles".
> Same for below.
>

Ok, yes.


>
> > > +                case 0x013c:       /* unhalted ref cycles */
> > > +                case 0x00c0:       /* instruction retired */
> > > +                    blocked = 0;
> > > +                default:
> > > +                    break;
> > > +                }
> > > +            }
> > > +
> > > +            if ( vpmu_features & XENPMU_FEATURE_ARCH_ONLY )
> > > +            {
> > > +                /* additional counters beyond IPC only; blocked already set */
> > > +                switch ( umaskevent )
> > > +                {
> > > +                case 0x4f2e:       /* LLC reference */
> > > +                case 0x412e:       /* LLC misses */
> > > +                case 0x00c4:       /* branch instruction retired */
> > > +                case 0x00c5:       /* branch */
> > > +                    blocked = 0;
> > > +                default:
> > > +                    break;
> > > +               }
> > > +            }
> > > +
> > > +            if ( blocked )
> > > +                return -EINVAL;
> > > +
> > >               if ( has_hvm_container_vcpu(v) )
> > >                   vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
> > >                                      &core2_vpmu_cxt->global_ctrl);
> > > diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> > > index b8ad93c..0542064 100644
> > > --- a/xen/include/asm-x86/msr-index.h
> > > +++ b/xen/include/asm-x86/msr-index.h
> > > @@ -328,6 +328,7 @@
> > >
> > >   /* Platform Shared Resource MSRs */
> > >   #define MSR_IA32_CMT_EVTSEL               0x00000c8d
> > > +#define MSR_IA32_CMT_EVTSEL_UE_MASK        0x0000ffff
> > >   #define MSR_IA32_CMT_CTR          0x00000c8e
> > >   #define MSR_IA32_PSR_ASSOC                0x00000c8f
> > >   #define MSR_IA32_PSR_L3_QOS_CFG   0x00000c81
> > > diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
> > > index 7753df0..f9ad7b4 100644
> > > --- a/xen/include/public/pmu.h
> > > +++ b/xen/include/public/pmu.h
> > > @@ -84,9 +84,19 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
> > >
> > >   /*
> > >    * PMU features:
> > > - * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
> > > + * - XENPMU_FEATURE_INTEL_BTS:  Intel BTS support (ignored on AMD)
> > > + * - XENPMU_FEATURE_IPC_ONLY:   Restrict PMC to the most minimum set possible.
>
> PMC -> PMCs
>

Ok.


>
> > > + *                              Instructions, cycles, and ref cycles. Can be
> > > + *                              used to calculate instructions-per-cycle (IPC)
> > > + *                              (ignored on AMD).
> > > + * - XENPMU_FEATURE_ARCH_ONLY:  Restrict PMCs to the Intel Pre-Defined
> > > + *                              Architecteral Performance Events exposed by
>
> Architecteral -> Architectural
>

Ok.


>
> > > + *                              cpuid and listed in the Intel developer's manual
> > > + *                              (ignored on AMD).
> > >    */
> > > -#define XENPMU_FEATURE_INTEL_BTS  1
> > > +#define XENPMU_FEATURE_INTEL_BTS  (1<<0)
> > > +#define XENPMU_FEATURE_IPC_ONLY   (1<<1)
> > > +#define XENPMU_FEATURE_ARCH_ONLY  (1<<2)
> > >
> > >   /*
> > >    * Shared PMU data between hypervisor and PV(H) domains.
>
>
Thanks for checking! New patch (v5) coming...

Brendan

-- 
Brendan Gregg, Senior Performance Architect, Netflix
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
