Re: [PATCH v2 07/10] powerpc/perf: support BHRB disable bit and new filtering modes

2020-07-08 Thread Athira Rajeev


> On 08-Jul-2020, at 5:12 PM, Michael Ellerman  wrote:
> 
> Athira Rajeev writes:
> 
>> PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
>   ^
>   a
>> First is the addition of BHRB disable bit and second new filtering
>  ^
>  is
>> modes for BHRB.
>> 
>> BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
>> bit 26, namely "BHRB Recording Disable (BHRBRD)". This field controls
> 
> Most people call that bit 37.
> 
>> whether BHRB entries are written when BHRB recording is enabled by other
>> bits. Patch implements support for this BHRB disable bit.
>   ^
>   This
> 
>> Secondly PowerISA v3.1 introduce filtering support for
> 
> .. that should be in a separate patch please.
> 
>> PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support
>^
>This
>> for "ind_call" and "cond" in power10_bhrb_filter_map().
>> 
>> 'commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to 
>> userspace via BHRB buffer")'
> 
> That doesn't need single quotes, and should be wrapped at 72 columns
> like the rest of the text.
> 
>> added a check in bhrb_read() to filter the kernel address from BHRB buffer. 
>> Patch here modified
>> it to avoid that check for PowerISA v3.1 based processors, since PowerISA 
>> v3.1 allows
>> only MSR[PR]=1 address to be written to BHRB buffer.
> 
> And that should be a separate patch again please.

Sure, I will split these into separate patches.

> 
>> Signed-off-by: Athira Rajeev 
>> ---
>> arch/powerpc/perf/core-book3s.c   | 27 +--
>> arch/powerpc/perf/isa207-common.c | 13 +
>> arch/powerpc/perf/power10-pmu.c   | 13 +++--
>> arch/powerpc/platforms/powernv/idle.c | 14 ++
>> 4 files changed, 59 insertions(+), 8 deletions(-)
>> 
>> diff --git a/arch/powerpc/perf/core-book3s.c 
>> b/arch/powerpc/perf/core-book3s.c
>> index fad5159..9709606 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -466,9 +466,13 @@ static void power_pmu_bhrb_read(struct perf_event 
>> *event, struct cpu_hw_events *
>>   * addresses at this point. Check the privileges before
>>   * exporting it to userspace (avoid exposure of regions
>>   * where we could have speculative execution)
>> + * Incase of ISA 310, BHRB will capture only user-space
>   ^
>   In case of ISA v3.1,

Ok, 
> 
>> + * address,hence include a check before filtering code
>   ^  ^
>   addresses, hence   .
>>   */
>> -		if (is_kernel_addr(addr) &&
>> -		    perf_allow_kernel(&event->attr) != 0)
>> -			continue;
>> +		if (!(ppmu->flags & PPMU_ARCH_310S))
>> +			if (is_kernel_addr(addr) &&
>> +			    perf_allow_kernel(&event->attr) != 0)
>> +				continue;
> 
> The indentation is weird. You should just check all three conditions
> with &&.

Ok, will correct this.
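
Something along these lines, perhaps (just a sketch of the single
combined check, reusing the names from the patch):

	if (!(ppmu->flags & PPMU_ARCH_310S) &&
	    is_kernel_addr(addr) &&
	    perf_allow_kernel(&event->attr) != 0)
		continue;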
> 
>> 
>>  /* Branches are read most recent first (ie. mfbhrb 0 is
>>   * the most recent branch).
>> @@ -1212,7 +1216,7 @@ static void write_mmcr0(struct cpu_hw_events *cpuhw, 
>> unsigned long mmcr0)
>> static void power_pmu_disable(struct pmu *pmu)
>> {
>>  struct cpu_hw_events *cpuhw;
>> -unsigned long flags, mmcr0, val;
>> +unsigned long flags, mmcr0, val, mmcra = 0;
> 
> You initialise it below.
> 
>>  if (!ppmu)
>>  return;
>> @@ -1245,12 +1249,23 @@ static void power_pmu_disable(struct pmu *pmu)
>>  mb();
>>  isync();
>> 
>> +val = mmcra = cpuhw->mmcr[2];
>> +
> 
> For mmcr0 (above), val is the variable we mutate and mmcr0 is the
> original value. But here you've done the reverse, which is confusing.

Yes, I am altering mmcra here and using val as the original value. I should
have done it the other way around.

> 
>>  /*
>>   * Disable instruction sampling if it was enabled
>>   */
>> -if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE) {
>> -mtspr(SPRN_MMCRA,
>> -  cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE);
>> +if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE)
>> +mmcra = cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE;
> 
> You just loaded cpuhw->mmcr[2] into mmcra, use it rather than referring
> back to cpuhw->mmcr[2] over and over.
> 

Ok,
>> +
>> +/* Disable BHRB via 

Re: [PATCH v2 07/10] powerpc/perf: support BHRB disable bit and new filtering modes

2020-07-08 Thread Athira Rajeev


> On 08-Jul-2020, at 1:13 PM, Gautham R Shenoy  wrote:
> 
> On Tue, Jul 07, 2020 at 05:17:55PM +1000, Michael Neuling wrote:
>> On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
>>> PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
>>> First is the addition of BHRB disable bit and second new filtering
>>> modes for BHRB.
>>> 
>>> BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
>>> bit 26, namely "BHRB Recording Disable (BHRBRD)". This field controls
>>> whether BHRB entries are written when BHRB recording is enabled by other
>>> bits. Patch implements support for this BHRB disable bit.
>> 
>> Probably good to note here that this is backwards compatible. So if you have 
>> a
>> kernel that doesn't know about this bit, it'll clear it and hence you still 
>> get
>> BHRB. 
>> 
>> You should also note why you'd want to disable this (ie. the core will run
>> faster).
>> 
>>> Secondly PowerISA v3.1 introduce filtering support for
>>> PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support
>>> for "ind_call" and "cond" in power10_bhrb_filter_map().
>>> 
>>> 'commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to 
>>> userspace
>>> via BHRB buffer")'
>>> added a check in bhrb_read() to filter the kernel address from BHRB buffer.
>>> Patch here modified
>>> it to avoid that check for PowerISA v3.1 based processors, since PowerISA 
>>> v3.1
>>> allows
>>> only MSR[PR]=1 address to be written to BHRB buffer.
>>> 
>>> Signed-off-by: Athira Rajeev 
>>> ---
>>> arch/powerpc/perf/core-book3s.c   | 27 +--
>>> arch/powerpc/perf/isa207-common.c | 13 +
>>> arch/powerpc/perf/power10-pmu.c   | 13 +++--
>>> arch/powerpc/platforms/powernv/idle.c | 14 ++
>> 
>> This touches the idle code so we should get those guys on CC (adding Vaidy 
>> and
>> Ego).
>> 
>>> 4 files changed, 59 insertions(+), 8 deletions(-)
>>> 
> 
> [..snip..]
> 
> 
>>> diff --git a/arch/powerpc/platforms/powernv/idle.c
>>> b/arch/powerpc/platforms/powernv/idle.c
>>> index 2dd4673..7db99c7 100644
>>> --- a/arch/powerpc/platforms/powernv/idle.c
>>> +++ b/arch/powerpc/platforms/powernv/idle.c
>>> @@ -611,6 +611,7 @@ static unsigned long power9_idle_stop(unsigned long 
>>> psscr,
>>> bool mmu_on)
>>> unsigned long srr1;
>>> unsigned long pls;
>>> unsigned long mmcr0 = 0;
>>> +   unsigned long mmcra_bhrb = 0;
> 
> We are saving the whole of MMCRA aren't we ? We might want to just
> name it mmcra in that case.
> 
>>> struct p9_sprs sprs = {}; /* avoid false used-uninitialised */
>>> bool sprs_saved = false;
>>> 
>>> @@ -657,6 +658,15 @@ static unsigned long power9_idle_stop(unsigned long
>>> psscr, bool mmu_on)
>>>   */
>>> mmcr0   = mfspr(SPRN_MMCR0);
>>> }
>>> +
>>> +   if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>>> +   /* POWER10 uses MMCRA[:26] as BHRB disable bit
>>> +* to disable BHRB logic when not used. Hence Save and
>>> +* restore MMCRA after a state-loss idle.
>>> +*/
> 
> Multi-line comment usually has the first line blank.

Hi Gautham

Thanks for checking. I will change the comment format.
Yes, we are saving the whole of MMCRA.
> 
>   /*
>* Line 1
>* Line 2
>* .
>* .
>* .
>* Line N
>*/
> 
>>> +   mmcra_bhrb  = mfspr(SPRN_MMCRA);
>> 
>> 
>> Why is the bhrb bit of mmcra special here?
> 
> The comment above could include the consequence of not saving and
> restoring MMCRA i.e
> 
> - If the user hasn't asked for the BHRB to be
>   written, the value of MMCRA[BHRBD] = 1.
> 
> - On wakeup from stop, MMCRA[BHRBD] will be 0, since MMCRA is not a
>   privileged resource and will be lost.
> 
> - Thus, if we do not save and restore the MMCRA[BHRBD], the hardware
>  will be needlessly writing to the BHRB in the problem mode.
> 
>> 
>>> +   }
>>> +
>>> if ((psscr & PSSCR_RL_MASK) >= pnv_first_spr_loss_level) {
>>> sprs.lpcr   = mfspr(SPRN_LPCR);
>>> sprs.hfscr  = mfspr(SPRN_HFSCR);
>>> @@ -721,6 +731,10 @@ static unsigned long power9_idle_stop(unsigned long
>>> psscr, bool mmu_on)
>>> mtspr(SPRN_MMCR0, mmcr0);
>>> }
>>> 
>>> +   /* Reload MMCRA to restore BHRB disable bit for POWER10 */
>>> +   if (cpu_has_feature(CPU_FTR_ARCH_31))
>>> +   mtspr(SPRN_MMCRA, mmcra_bhrb);
>>> +
>>> /*
>>>  * DD2.2 and earlier need to set then clear bit 60 in MMCRA
>>>  * to ensure the PMU starts running.
>> 
> 
> --
> Thanks and Regards
> gautham.



Re: [PATCH v2 07/10] powerpc/perf: support BHRB disable bit and new filtering modes

2020-07-08 Thread Michael Ellerman
Athira Rajeev  writes:

> PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
   ^
   a
> First is the addition of BHRB disable bit and second new filtering
  ^
  is
> modes for BHRB.
>
> BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
> bit 26, namely "BHRB Recording Disable (BHRBRD)". This field controls

Most people call that bit 37.

> whether BHRB entries are written when BHRB recording is enabled by other
> bits. Patch implements support for this BHRB disable bit.
   ^
   This

> Secondly PowerISA v3.1 introduce filtering support for

.. that should be in a separate patch please.

> PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support
^
This
> for "ind_call" and "cond" in power10_bhrb_filter_map().
>
> 'commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to userspace 
> via BHRB buffer")'

That doesn't need single quotes, and should be wrapped at 72 columns
like the rest of the text.

> added a check in bhrb_read() to filter the kernel address from BHRB buffer. 
> Patch here modified
> it to avoid that check for PowerISA v3.1 based processors, since PowerISA 
> v3.1 allows
> only MSR[PR]=1 address to be written to BHRB buffer.

And that should be a separate patch again please.

> Signed-off-by: Athira Rajeev 
> ---
>  arch/powerpc/perf/core-book3s.c   | 27 +--
>  arch/powerpc/perf/isa207-common.c | 13 +
>  arch/powerpc/perf/power10-pmu.c   | 13 +++--
>  arch/powerpc/platforms/powernv/idle.c | 14 ++
>  4 files changed, 59 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index fad5159..9709606 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -466,9 +466,13 @@ static void power_pmu_bhrb_read(struct perf_event 
> *event, struct cpu_hw_events *
>* addresses at this point. Check the privileges before
>* exporting it to userspace (avoid exposure of regions
>* where we could have speculative execution)
> +  * Incase of ISA 310, BHRB will capture only user-space
   ^
   In case of ISA v3.1,

> +  * address,hence include a check before filtering code
   ^  ^
   addresses, hence   .
>*/
> -	if (is_kernel_addr(addr) &&
> -	    perf_allow_kernel(&event->attr) != 0)
> -		continue;
> +	if (!(ppmu->flags & PPMU_ARCH_310S))
> +		if (is_kernel_addr(addr) &&
> +		    perf_allow_kernel(&event->attr) != 0)
> +			continue;

The indentation is weird. You should just check all three conditions
with &&.

>  
>   /* Branches are read most recent first (ie. mfbhrb 0 is
>* the most recent branch).
> @@ -1212,7 +1216,7 @@ static void write_mmcr0(struct cpu_hw_events *cpuhw, 
> unsigned long mmcr0)
>  static void power_pmu_disable(struct pmu *pmu)
>  {
>   struct cpu_hw_events *cpuhw;
> - unsigned long flags, mmcr0, val;
> + unsigned long flags, mmcr0, val, mmcra = 0;

You initialise it below.

>   if (!ppmu)
>   return;
> @@ -1245,12 +1249,23 @@ static void power_pmu_disable(struct pmu *pmu)
>   mb();
>   isync();
>  
> + val = mmcra = cpuhw->mmcr[2];
> +

For mmcr0 (above), val is the variable we mutate and mmcr0 is the
original value. But here you've done the reverse, which is confusing.

>   /*
>* Disable instruction sampling if it was enabled
>*/
> - if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE) {
> - mtspr(SPRN_MMCRA,
> -   cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE);
> + if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE)
> + mmcra = cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE;

You just loaded cpuhw->mmcr[2] into mmcra, use it rather than referring
back to cpuhw->mmcr[2] over and over.

> +
> + /* Disable BHRB via mmcra [:26] for p10 if needed */
> + if (!(cpuhw->mmcr[2] & MMCRA_BHRB_DISABLE))

You don't need to check that it's clear AFAICS. Just always set disable
and the check against val below will catch the nop case.
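
Roughly, the whole block could then become (a sketch only; val is the
value we mutate and mmcra keeps the original, mirroring the mmcr0
handling above, and the PPMU_ARCH_310S guard is an assumption since the
disable bit only exists on ISA v3.1):

	val = mmcra = cpuhw->mmcr[2];

	/* Disable instruction sampling if it was enabled */
	if (mmcra & MMCRA_SAMPLE_ENABLE)
		val &= ~MMCRA_SAMPLE_ENABLE;

	/*
	 * Always ask for BHRB disable; the comparison below makes
	 * this a nop if the bit was already set.
	 */
	if (ppmu->flags & PPMU_ARCH_310S)
		val |= MMCRA_BHRB_DISABLE;

	/* Write MMCRA only if something actually changed */
	if (val != mmcra) {
		mtspr(SPRN_MMCRA, val);
		mb();
		isync();
	}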

> + mmcra |= MMCRA_BHRB_DISABLE;
> +
> + /* Write SPRN_MMCRA if mmcra has either disabled

Comment format is wrong.

> +  * 

Re: [PATCH v2 07/10] powerpc/perf: support BHRB disable bit and new filtering modes

2020-07-08 Thread Athira Rajeev


> On 07-Jul-2020, at 12:47 PM, Michael Neuling  wrote:
> 
> On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
>> PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
>> First is the addition of BHRB disable bit and second new filtering
>> modes for BHRB.
>> 
>> BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
>> bit 26, namely "BHRB Recording Disable (BHRBRD)". This field controls
>> whether BHRB entries are written when BHRB recording is enabled by other
>> bits. Patch implements support for this BHRB disable bit.
> 
> Probably good to note here that this is backwards compatible. So if you have a
> kernel that doesn't know about this bit, it'll clear it and hence you still 
> get
> BHRB. 
> 
> You should also note why you'd want to disable this (ie. the core will run
> faster).
> 


Sure Mikey, will add this information to the commit message.

Thanks
Athira


>> Secondly PowerISA v3.1 introduce filtering support for
>> PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support
>> for "ind_call" and "cond" in power10_bhrb_filter_map().
>> 
>> 'commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to userspace
>> via BHRB buffer")'
>> added a check in bhrb_read() to filter the kernel address from BHRB buffer.
>> Patch here modified
>> it to avoid that check for PowerISA v3.1 based processors, since PowerISA 
>> v3.1
>> allows
>> only MSR[PR]=1 address to be written to BHRB buffer.
>> 
>> Signed-off-by: Athira Rajeev 
>> ---
>> arch/powerpc/perf/core-book3s.c   | 27 +--
>> arch/powerpc/perf/isa207-common.c | 13 +
>> arch/powerpc/perf/power10-pmu.c   | 13 +++--
>> arch/powerpc/platforms/powernv/idle.c | 14 ++
> 
> This touches the idle code so we should get those guys on CC (adding Vaidy and
> Ego).
> 
>> 4 files changed, 59 insertions(+), 8 deletions(-)
>> 
>> diff --git a/arch/powerpc/perf/core-book3s.c 
>> b/arch/powerpc/perf/core-book3s.c
>> index fad5159..9709606 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -466,9 +466,13 @@ static void power_pmu_bhrb_read(struct perf_event 
>> *event,
>> struct cpu_hw_events *
>>   * addresses at this point. Check the privileges before
>>   * exporting it to userspace (avoid exposure of regions
>>   * where we could have speculative execution)
>> + * Incase of ISA 310, BHRB will capture only user-space
>> + * address,hence include a check before filtering code
>>   */
>> -		if (is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
>> -			continue;
>> +		if (!(ppmu->flags & PPMU_ARCH_310S))
>> +			if (is_kernel_addr(addr) &&
>> +			    perf_allow_kernel(&event->attr) != 0)
>> +				continue;
>> 
>>  /* Branches are read most recent first (ie. mfbhrb 0 is
>>   * the most recent branch).
>> @@ -1212,7 +1216,7 @@ static void write_mmcr0(struct cpu_hw_events *cpuhw,
>> unsigned long mmcr0)
>> static void power_pmu_disable(struct pmu *pmu)
>> {
>>  struct cpu_hw_events *cpuhw;
>> -unsigned long flags, mmcr0, val;
>> +unsigned long flags, mmcr0, val, mmcra = 0;
>> 
>>  if (!ppmu)
>>  return;
>> @@ -1245,12 +1249,23 @@ static void power_pmu_disable(struct pmu *pmu)
>>  mb();
>>  isync();
>> 
>> +val = mmcra = cpuhw->mmcr[2];
>> +
>>  /*
>>   * Disable instruction sampling if it was enabled
>>   */
>> -if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE) {
>> -mtspr(SPRN_MMCRA,
>> -  cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE);
>> +if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE)
>> +mmcra = cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE;
>> +
>> +/* Disable BHRB via mmcra [:26] for p10 if needed */
>> +if (!(cpuhw->mmcr[2] & MMCRA_BHRB_DISABLE))
>> +mmcra |= MMCRA_BHRB_DISABLE;
>> +
>> +/* Write SPRN_MMCRA if mmcra has either disabled
>> + * instruction sampling or BHRB
>> + */
>> +if (val != mmcra) {
>> +mtspr(SPRN_MMCRA, mmcra);
>>  mb();
>>  isync();
>>  }
>> diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-
>> common.c
>> index 7d4839e..463d925 100644
>> --- a/arch/powerpc/perf/isa207-common.c
>> +++ b/arch/powerpc/perf/isa207-common.c
>> @@ -404,6 +404,12 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
>> 
>>  mmcra = mmcr1 = mmcr2 = mmcr3 = 0;
>> 
>> +/* Disable bhrb unless explicitly requested
>> + * by setting MMCRA [:26] bit.
>> + */

Re: [PATCH v2 07/10] powerpc/perf: support BHRB disable bit and new filtering modes

2020-07-08 Thread Gautham R Shenoy
On Tue, Jul 07, 2020 at 05:17:55PM +1000, Michael Neuling wrote:
> On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
> > PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
> > First is the addition of BHRB disable bit and second new filtering
> > modes for BHRB.
> > 
> > BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
> > bit 26, namely "BHRB Recording Disable (BHRBRD)". This field controls
> > whether BHRB entries are written when BHRB recording is enabled by other
> > bits. Patch implements support for this BHRB disable bit.
> 
> Probably good to note here that this is backwards compatible. So if you have a
> kernel that doesn't know about this bit, it'll clear it and hence you still 
> get
> BHRB. 
> 
> You should also note why you'd want to disable this (ie. the core will run
> faster).
> 
> > Secondly PowerISA v3.1 introduce filtering support for
> > PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support
> > for "ind_call" and "cond" in power10_bhrb_filter_map().
> > 
> > 'commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to 
> > userspace
> > via BHRB buffer")'
> > added a check in bhrb_read() to filter the kernel address from BHRB buffer.
> > Patch here modified
> > it to avoid that check for PowerISA v3.1 based processors, since PowerISA 
> > v3.1
> > allows
> > only MSR[PR]=1 address to be written to BHRB buffer.
> > 
> > Signed-off-by: Athira Rajeev 
> > ---
> >  arch/powerpc/perf/core-book3s.c   | 27 +--
> >  arch/powerpc/perf/isa207-common.c | 13 +
> >  arch/powerpc/perf/power10-pmu.c   | 13 +++--
> >  arch/powerpc/platforms/powernv/idle.c | 14 ++
> 
> This touches the idle code so we should get those guys on CC (adding Vaidy and
> Ego).
> 
> >  4 files changed, 59 insertions(+), 8 deletions(-)
> > 

[..snip..]


> > diff --git a/arch/powerpc/platforms/powernv/idle.c
> > b/arch/powerpc/platforms/powernv/idle.c
> > index 2dd4673..7db99c7 100644
> > --- a/arch/powerpc/platforms/powernv/idle.c
> > +++ b/arch/powerpc/platforms/powernv/idle.c
> > @@ -611,6 +611,7 @@ static unsigned long power9_idle_stop(unsigned long 
> > psscr,
> > bool mmu_on)
> > unsigned long srr1;
> > unsigned long pls;
> > unsigned long mmcr0 = 0;
> > +   unsigned long mmcra_bhrb = 0;

We are saving the whole of MMCRA aren't we ? We might want to just
name it mmcra in that case.

> > struct p9_sprs sprs = {}; /* avoid false used-uninitialised */
> > bool sprs_saved = false;
> >  
> > @@ -657,6 +658,15 @@ static unsigned long power9_idle_stop(unsigned long
> > psscr, bool mmu_on)
> >   */
> > mmcr0   = mfspr(SPRN_MMCR0);
> > }
> > +
> > +   if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> > +   /* POWER10 uses MMCRA[:26] as BHRB disable bit
> > +* to disable BHRB logic when not used. Hence Save and
> > +* restore MMCRA after a state-loss idle.
> > +*/

Multi-line comment usually has the first line blank.

/*
 * Line 1
 * Line 2
 * .
 * .
 * .
 * Line N
 */

> > +   mmcra_bhrb  = mfspr(SPRN_MMCRA);
> 
> 
> Why is the bhrb bit of mmcra special here?

The comment above could include the consequence of not saving and
restoring MMCRA i.e

- If the user hasn't asked for the BHRB to be
  written, the value of MMCRA[BHRBD] = 1.

- On wakeup from stop, MMCRA[BHRBD] will be 0, since MMCRA is not a
  privileged resource and will be lost.

- Thus, if we do not save and restore the MMCRA[BHRBD], the hardware
  will be needlessly writing to the BHRB in the problem mode.
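
Putting the two points together, the save/restore could look roughly
like this (only a sketch; the variable is renamed to mmcra and the
comment text is a suggestion):

	unsigned long mmcra = 0;

	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
		/*
		 * POWER10 uses MMCRA[BHRBRD] to disable BHRB logic
		 * when it is not in use. MMCRA is not preserved over
		 * a state-loss stop, so without a save/restore the
		 * bit comes back as 0 and the hardware needlessly
		 * writes the BHRB in problem mode. Save the whole of
		 * MMCRA here and restore it on wakeup.
		 */
		mmcra = mfspr(SPRN_MMCRA);
	}

	/* ... enter and exit the stop state ... */

	/* Restore MMCRA, and with it the BHRB disable bit, for POWER10 */
	if (cpu_has_feature(CPU_FTR_ARCH_31))
		mtspr(SPRN_MMCRA, mmcra);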

> 
> > +   }
> > +
> > if ((psscr & PSSCR_RL_MASK) >= pnv_first_spr_loss_level) {
> > sprs.lpcr   = mfspr(SPRN_LPCR);
> > sprs.hfscr  = mfspr(SPRN_HFSCR);
> > @@ -721,6 +731,10 @@ static unsigned long power9_idle_stop(unsigned long
> > psscr, bool mmu_on)
> > mtspr(SPRN_MMCR0, mmcr0);
> > }
> >  
> > +   /* Reload MMCRA to restore BHRB disable bit for POWER10 */
> > +   if (cpu_has_feature(CPU_FTR_ARCH_31))
> > +   mtspr(SPRN_MMCRA, mmcra_bhrb);
> > +
> > /*
> >  * DD2.2 and earlier need to set then clear bit 60 in MMCRA
> >  * to ensure the PMU starts running.
> 

--
Thanks and Regards
gautham.


Re: [PATCH v2 07/10] powerpc/perf: support BHRB disable bit and new filtering modes

2020-07-07 Thread Michael Neuling
On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
> PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
> First is the addition of BHRB disable bit and second new filtering
> modes for BHRB.
> 
> BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
> bit 26, namely "BHRB Recording Disable (BHRBRD)". This field controls
> whether BHRB entries are written when BHRB recording is enabled by other
> bits. Patch implements support for this BHRB disable bit.

Probably good to note here that this is backwards compatible. So if you have a
kernel that doesn't know about this bit, it'll clear it and hence you still get
BHRB. 

You should also note why you'd want to disable this (ie. the core will run
faster).

> Secondly PowerISA v3.1 introduce filtering support for
> PERF_SAMPLE_BRANCH_IND_CALL/COND. The patch adds BHRB filter support
> for "ind_call" and "cond" in power10_bhrb_filter_map().
> 
> 'commit bb19af816025 ("powerpc/perf: Prevent kernel address leak to userspace
> via BHRB buffer")'
> added a check in bhrb_read() to filter the kernel address from BHRB buffer.
> Patch here modified
> it to avoid that check for PowerISA v3.1 based processors, since PowerISA v3.1
> allows
> only MSR[PR]=1 address to be written to BHRB buffer.
> 
> Signed-off-by: Athira Rajeev 
> ---
>  arch/powerpc/perf/core-book3s.c   | 27 +--
>  arch/powerpc/perf/isa207-common.c | 13 +
>  arch/powerpc/perf/power10-pmu.c   | 13 +++--
>  arch/powerpc/platforms/powernv/idle.c | 14 ++

This touches the idle code so we should get those guys on CC (adding Vaidy and
Ego).

>  4 files changed, 59 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index fad5159..9709606 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -466,9 +466,13 @@ static void power_pmu_bhrb_read(struct perf_event *event,
> struct cpu_hw_events *
>* addresses at this point. Check the privileges before
>* exporting it to userspace (avoid exposure of regions
>* where we could have speculative execution)
> +  * Incase of ISA 310, BHRB will capture only user-space
> +  * address,hence include a check before filtering code
>*/
> -	if (is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
> -		continue;
> +	if (!(ppmu->flags & PPMU_ARCH_310S))
> +		if (is_kernel_addr(addr) &&
> +		    perf_allow_kernel(&event->attr) != 0)
> +			continue;
>  
>   /* Branches are read most recent first (ie. mfbhrb 0 is
>* the most recent branch).
> @@ -1212,7 +1216,7 @@ static void write_mmcr0(struct cpu_hw_events *cpuhw,
> unsigned long mmcr0)
>  static void power_pmu_disable(struct pmu *pmu)
>  {
>   struct cpu_hw_events *cpuhw;
> - unsigned long flags, mmcr0, val;
> + unsigned long flags, mmcr0, val, mmcra = 0;
>  
>   if (!ppmu)
>   return;
> @@ -1245,12 +1249,23 @@ static void power_pmu_disable(struct pmu *pmu)
>   mb();
>   isync();
>  
> + val = mmcra = cpuhw->mmcr[2];
> +
>   /*
>* Disable instruction sampling if it was enabled
>*/
> - if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE) {
> - mtspr(SPRN_MMCRA,
> -   cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE);
> + if (cpuhw->mmcr[2] & MMCRA_SAMPLE_ENABLE)
> + mmcra = cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE;
> +
> + /* Disable BHRB via mmcra [:26] for p10 if needed */
> + if (!(cpuhw->mmcr[2] & MMCRA_BHRB_DISABLE))
> + mmcra |= MMCRA_BHRB_DISABLE;
> +
> + /* Write SPRN_MMCRA if mmcra has either disabled
> +  * instruction sampling or BHRB
> +  */
> + if (val != mmcra) {
> + mtspr(SPRN_MMCRA, mmcra);
>   mb();
>   isync();
>   }
> diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-
> common.c
> index 7d4839e..463d925 100644
> --- a/arch/powerpc/perf/isa207-common.c
> +++ b/arch/powerpc/perf/isa207-common.c
> @@ -404,6 +404,12 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
>  
>   mmcra = mmcr1 = mmcr2 = mmcr3 = 0;
>  
> + /* Disable bhrb unless explicitly requested
> +  * by setting MMCRA [:26] bit.
> +  */
> + if (cpu_has_feature(CPU_FTR_ARCH_31))
> + mmcra |= MMCRA_BHRB_DISABLE;
> +
>   /* Second pass: assign PMCs, set all MMCR1 fields */
>   for (i = 0; i < n_ev; ++i) {
>   pmc =