Re: [PATCH] perf/x86: Fix integer overflow when left shifting an integer more than 32 bits
On 20/04/2021 16:31, Peter Zijlstra wrote:
> On Tue, Apr 20, 2021 at 05:03:03PM +0200, Peter Zijlstra wrote:
>> On Tue, Apr 20, 2021 at 03:29:07PM +0100, Colin King wrote:
>>> From: Colin Ian King
>>>
>>> The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
>>> bit-wise masked with the value (0x03 << i*4). However, the shifted value
>>> is evaluated using 32 bit arithmetic, so will overflow when i > 8.
>>> Fix this by making 0x03 a ULL so that the shift is performed using
>>> 64 bit arithmetic.
>>>
>>> Addresses-Coverity: ("Unintentional integer overflow")
>>
>> Strange tag that, also inaccurate, wide shifts are UB and don't behave
>> consistently.
>>
>> As is, we've not had hardware with that many fixed counters, but yes,
>> worth fixing I suppose.
>
> Patch now reads:
>
> ---
> Subject: perf/x86: Allow for 8<num_fixed_counters<16
> From: Colin Ian King
> Date: Tue, 20 Apr 2021 15:29:07 +0100
>
> From: Colin Ian King
>
> The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
> bit-wise masked with the value (0x03 << i*4). However, the shifted value
> is evaluated using 32 bit arithmetic, so will UB when i > 8. Fix this
> by making 0x03 a ULL so that the shift is performed using 64 bit
> arithmetic.
>
> This makes the arithmetic internally consistent and prepares for the
> day when hardware provides 8<num_fixed_counters<16.
>
> Signed-off-by: Colin Ian King
> Signed-off-by: Peter Zijlstra (Intel)
> Link: https://lkml.kernel.org/r/20210420142907.382417-1-colin.k...@canonical.com
> ---
>  arch/x86/events/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -261,7 +261,7 @@ static bool check_hw_exists(void)
>  	for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
>  		if (fixed_counter_disabled(i))
>  			continue;
> -		if (val & (0x03 << i*4)) {
> +		if (val & (0x03ULL << i*4)) {
>  			bios_fail = 1;
>  			val_fail = val;
>  			reg_fail = reg;
Re: [PATCH] perf/x86: Fix integer overflow when left shifting an integer more than 32 bits
On Tue, Apr 20, 2021 at 05:03:03PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 20, 2021 at 03:29:07PM +0100, Colin King wrote:
> > From: Colin Ian King
> >
> > The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
> > bit-wise masked with the value (0x03 << i*4). However, the shifted value
> > is evaluated using 32 bit arithmetic, so will overflow when i > 8.
> > Fix this by making 0x03 a ULL so that the shift is performed using
> > 64 bit arithmetic.
> >
> > Addresses-Coverity: ("Unintentional integer overflow")
>
> Strange tag that, also inaccurate, wide shifts are UB and don't behave
> consistently.
>
> As is, we've not had hardware with that many fixed counters, but yes,
> worth fixing I suppose.

Patch now reads:

---
Subject: perf/x86: Allow for 8<num_fixed_counters<16
Date: Tue, 20 Apr 2021 15:29:07 +0100
From: Colin Ian King

The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
bit-wise masked with the value (0x03 << i*4). However, the shifted value
is evaluated using 32 bit arithmetic, so will UB when i > 8. Fix this
by making 0x03 a ULL so that the shift is performed using 64 bit
arithmetic.

This makes the arithmetic internally consistent and prepares for the
day when hardware provides 8<num_fixed_counters<16.

Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20210420142907.382417-1-colin.k...@canonical.com
---
 arch/x86/events/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -261,7 +261,7 @@ static bool check_hw_exists(void)
 	for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
 		if (fixed_counter_disabled(i))
 			continue;
-		if (val & (0x03 << i*4)) {
+		if (val & (0x03ULL << i*4)) {
 			bios_fail = 1;
 			val_fail = val;
 			reg_fail = reg;
Re: [PATCH] perf/x86: Fix integer overflow when left shifting an integer more than 32 bits
On Tue, Apr 20, 2021 at 03:29:07PM +0100, Colin King wrote:
> From: Colin Ian King
>
> The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
> bit-wise masked with the value (0x03 << i*4). However, the shifted value
> is evaluated using 32 bit arithmetic, so will overflow when i > 8.
> Fix this by making 0x03 a ULL so that the shift is performed using
> 64 bit arithmetic.
>
> Addresses-Coverity: ("Unintentional integer overflow")

Strange tag that, also inaccurate, wide shifts are UB and don't behave
consistently.

As is, we've not had hardware with that many fixed counters, but yes,
worth fixing I suppose.

> Fixes: a5ebe0ba3dff ("perf/x86: Check all MSRs before passing hw check")
> Signed-off-by: Colin Ian King
> ---
>  arch/x86/events/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index bafd93c54ffa..59c665c8c2e9 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -261,7 +261,7 @@ static bool check_hw_exists(void)
>  	for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
>  		if (fixed_counter_disabled(i))
>  			continue;
> -		if (val & (0x03 << i*4)) {
> +		if (val & (0x03ULL << i*4)) {
>  			bios_fail = 1;
>  			val_fail = val;
>  			reg_fail = reg;
> --
> 2.30.2