> On 24-Feb-2021, at 5:51 PM, Thadeu Lima de Souza Cascardo <casca...@canonical.com> wrote:
>
> EBB events must be under exclusive groups, so there is no mix of EBB and
> non-EBB events on the same PMU. This requirement worked fine as perf core
> would not allow other pinned events to be scheduled together with exclusive
> events.
>
> This assumption was broken by commit 1908dc911792 ("perf: Tweak
> perf_event_attr::exclusive semantics").
>
> After that, the test cpu_event_pinned_vs_ebb_test started succeeding after
> read_events, but worse, the task would not have been given access to PMC1, so
> when it tried to write to it, it was killed with "illegal instruction".
>
> Preventing mixed EBB and non-EBB events from being added to the same PMU will
> just revert to the previous behavior and the test will succeed.

Hi,

Thanks for checking this. I looked at your patch, which fixes "check_excludes" to make sure all events agree on EBB. But in the PMU group constraints, we already have a check for EBB events. It is in arch/powerpc/perf/isa207-common.c (the isa207_get_constraint function):

<<>>
	mask |= CNST_EBB_VAL(ebb);
	value |= CNST_EBB_MASK;
<<>>

But the above settings for mask and value are interchanged; that is what actually needs to be fixed. The patch below should fix this:

diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index e4f577da33d8..8b5eeb6fb2fb 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -447,8 +447,8 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp,
 	 * EBB events are pinned & exclusive, so this should never actually
 	 * hit, but we leave it as a fallback in case.
 	 */
-	mask |= CNST_EBB_VAL(ebb);
-	value |= CNST_EBB_MASK;
+	mask |= CNST_EBB_MASK;
+	value |= CNST_EBB_VAL(ebb);

 	*maskp = mask;
 	*valp = value;

Can you please try with this patch?
Thanks
Athira

>
> Fixes: 1908dc911792 ("perf: Tweak perf_event_attr::exclusive semantics")
> Signed-off-by: Thadeu Lima de Souza Cascardo <casca...@canonical.com>
> ---
> arch/powerpc/perf/core-book3s.c | 20 ++++++++++++++++----
> 1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index 43599e671d38..d767f7944f85 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -1010,9 +1010,25 @@ static int check_excludes(struct perf_event **ctrs, unsigned int cflags[],
> 			  int n_prev, int n_new)
> {
> 	int eu = 0, ek = 0, eh = 0;
> +	bool ebb = false;
> 	int i, n, first;
> 	struct perf_event *event;
>
> +	n = n_prev + n_new;
> +	if (n <= 1)
> +		return 0;
> +
> +	first = 1;
> +	for (i = 0; i < n; ++i) {
> +		event = ctrs[i];
> +		if (first) {
> +			ebb = is_ebb_event(event);
> +			first = 0;
> +		} else if (is_ebb_event(event) != ebb) {
> +			return -EAGAIN;
> +		}
> +	}
> +
> 	/*
> 	 * If the PMU we're on supports per event exclude settings then we
> 	 * don't need to do any of this logic. NB. This assumes no PMU has both
> @@ -1021,10 +1037,6 @@ static int check_excludes(struct perf_event **ctrs, unsigned int cflags[],
> 	if (ppmu->flags & PPMU_ARCH_207S)
> 		return 0;
>
> -	n = n_prev + n_new;
> -	if (n <= 1)
> -		return 0;
> -
> 	first = 1;
> 	for (i = 0; i < n; ++i) {
> 		if (cflags[i] & PPMU_LIMITED_PMC_OK) {
> --
> 2.27.0
>