Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Tue, Dec 17, 2019 at 10:32:35AM -0800, Linus Torvalds wrote:
> On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds wrote:
> >
> > Let me think about it.
>
> How about we just get rid of the union entirely, and just use
> 'unsigned long' or 'unsigned long long' depending on the size.
>
> Something like the attached patch - it still requires that it be an
> arithmetic type, but now because of the final cast.
>
> But it might still be a cast to a volatile type, of course. Then the
> result will be volatile, but at least now READ_ONCE() won't be taking
> the address of a volatile variable on the stack - does that at least
> fix some of the horrible code generation. Hmm?

Sounds like it according to mpe, but I'll confirm too for arm64.

> This is untested, because I obviously still have the cases of
> structures (page table entries) being accessed once..
>
>              Linus

> include/linux/compiler.h | 33 +
> 1 file changed, 17 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 5e88e7e33abe..8b4282194f16 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -179,18 +179,18 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
>
> #include
>
> -#define __READ_ONCE_SIZE						\
> -({									\
> -	switch (size) {							\
> -	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
> -	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
> -	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
> -	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
> -	default:							\
> -		barrier();						\
> -		__builtin_memcpy((void *)res, (const void *)p, size);	\
> -		barrier();						\
> -	}								\
> +/* "unsigned long" or "unsigned long long" - make it fit in a register if possible */
> +#define __READ_ONCE_TYPE(size) \
> +	__typeof__(__builtin_choose_expr(size > sizeof(0UL), 0ULL, 0UL))

Ha, I wondered when '__builtin_choose_expr()' would make an appearance in
this thread! Nice trick.

I'll try integrating this with what I have and see what I run into next.
Back down the rabbit hole...

Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
Linus Torvalds writes:
> On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds wrote:
>>
>> Let me think about it.
>
> How about we just get rid of the union entirely, and just use
> 'unsigned long' or 'unsigned long long' depending on the size.
>
> Something like the attached patch - it still requires that it be an
> arithmetic type, but now because of the final cast.
>
> But it might still be a cast to a volatile type, of course. Then the
> result will be volatile, but at least now READ_ONCE() won't be taking
> the address of a volatile variable on the stack - does that at least
> fix some of the horrible code generation. Hmm?

Yes it seems to fix it for me. There's no unnecessary stack protector
gunk, and no store/load to the stack variable.

This is my previous example of ext4_resize_begin(), hacked to use a copy
of the generic version of test_and_set_bit_lock(), which in turn was
hacked to use a local version of your READ_ONCE().

c0534390 :
c0534390:	19 01 4c 3c	addis   r2,r12,281
c0534394:	70 c3 42 38	addi    r2,r2,-15504
c0534398:	a6 02 08 7c	mflr    r0
c053439c:	4d 98 b3 4b	bl      c006dbe8 <_mcount>
c05343a0:	a6 02 08 7c	mflr    r0
c05343a4:	f8 ff e1 fb	std     r31,-8(r1)
c05343a8:	f0 ff c1 fb	std     r30,-16(r1)
c05343ac:	78 1b 7f 7c	mr      r31,r3
c05343b0:	18 00 60 38	li      r3,24
c05343b4:	10 00 01 f8	std     r0,16(r1)
c05343b8:	91 ff 21 f8	stdu    r1,-112(r1)
c05343bc:	98 03 df eb	ld      r30,920(r31)
c05343c0:	d9 d3 c0 4b	bl      c0141798
c05343c4:	00 00 00 60	nop
c05343c8:	00 00 a3 2f	cmpdi   cr7,r3,0
c05343cc:	a4 00 9e 41	beq     cr7,c0534470
c05343d0:	98 03 3f e9	ld      r9,920(r31)
c05343d4:	60 00 5e e9	ld      r10,96(r30)
c05343d8:	54 00 fe 80	lwz     r7,84(r30)
c05343dc:	68 00 09 e9	ld      r8,104(r9)
c05343e0:	18 00 4a e9	ld      r10,24(r10)
c05343e4:	14 00 08 81	lwz     r8,20(r8)
c05343e8:	36 3c 4a 7d	srd     r10,r10,r7
c05343ec:	00 40 aa 7f	cmpd    cr7,r10,r8
c05343f0:	b8 00 9e 40	bne     cr7,c05344a8
c05343f4:	a0 00 49 a1	lhz     r10,160(r9)
c05343f8:	02 00 4a 71	andi.   r10,r10,2
c05343fc:	84 00 82 40	bne     c0534480
c0534400:	30 02 49 e9	ld      r10,560(r9)	# simple load of EXT4_SB(sb)->s_ext4_flags
c0534404:	01 00 4a 71	andi.   r10,r10,1
c0534408:	48 00 82 40	bne     c0534450
c053440c:	30 02 e9 38	addi    r7,r9,560
c0534410:	01 00 00 39	li      r8,1
c0534414:	a8 38 40 7d	ldarx   r10,0,r7
c0534418:	78 53 06 7d	or      r6,r8,r10
c053441c:	ad 39 c0 7c	stdcx.  r6,0,r7
c0534420:	f4 ff c2 40	bne-    c0534414
c0534424:	2c 01 00 4c	isync
c0534428:	01 00 49 71	andi.   r9,r10,1
c053442c:	00 00 60 38	li      r3,0
c0534430:	20 00 82 40	bne     c0534450
c0534434:	70 00 21 38	addi    r1,r1,112
c0534438:	10 00 01 e8	ld      r0,16(r1)
c053443c:	f0 ff c1 eb	ld      r30,-16(r1)
c0534440:	f8 ff e1 eb	ld      r31,-8(r1)
c053:		a6 03 08 7c	mtlr    r0
c0534448:	20 00 80 4e	blr
c053444c:	00 00 00 60	nop
c0534450:	70 00 21 38	addi    r1,r1,112
c0534454:	f0 ff 60 38	li      r3,-16
c0534458:	10 00 01 e8	ld      r0,16(r1)
c053445c:	f0 ff c1 eb	ld      r30,-16(r1)
c0534460:	f8 ff e1 eb	ld      r31,-8(r1)
c0534464:	a6 03 08 7c	mtlr    r0
c0534468:	20 00 80 4e	blr
c053446c:	00 00 00 60	nop
c0534470:	ff ff 60 38	li      r3,-1
c0534474:	c0 ff ff 4b	b       c0534434
c0534478:	00 00 00 60	nop
c053447c:	00 00 00 60	nop
c0534480:	8a ff c2 3c	addis   r6,r2,-118
c0534484:	74 ff 82 3c	addis   r4,r2,-140
c0534488:	78 fb e3 7f	mr      r3,r31
c053448c:	7c 00 a0 38	li      r5,124
c0534490:	a8 75 c6 38	addi    r6,r6,30120
c0534494:	f8 0b 84 38	addi    r4,r4,3064
c0534498:
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Wed, Dec 18, 2019 at 11:22:05AM +0100, Christian Borntraeger wrote:
> On 12.12.19 21:49, Linus Torvalds wrote:
> > On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote:
> >>
> >> The root of my concern in all of this, and what started me looking at it in
> >> the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> >> for a pointer means that local variables in macros declared using typeof()
> >> suddenly start generating *hideous* code, particularly when pointless stack
> >> spills get stackprotector all excited.
> >
> > Yeah, removing volatile can be a bit annoying.
> >
> > For the particular case of the bitops, though, it's not an issue.
> > Since you know the type there, you can just cast it.
> >
> > And if we had the rule that READ_ONCE() was an arithmetic type, you could do
> >
> >	typeof(0+(*p)) __var;
> >
> > since you might as well get the integer promotion anyway (on the
> > non-volatile result).
> >
> > But that doesn't work with structures or unions, of course.
>
> We do have a READ_ONCE on the following union in s390 code.
>
> union ipte_control {
>	unsigned long val;
>	struct {
>		unsigned long k  : 1;
>		unsigned long kh : 31;
>		unsigned long kg : 32;
>	};
> };
>
> In fact this one was the original failure case why we changed ACCESS_ONCE.
> see arch/s390/kvm/gaccess.c

Thanks. I think we should be ok just using the 'val' field instead of the
whole union but, then again, when bitfields are involved who knows what
the compiler might do. I thought we usually shied away from using them to
mirror hardware structures like this?

Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On 12.12.19 21:49, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote:
>>
>> The root of my concern in all of this, and what started me looking at it in
>> the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
>> for a pointer means that local variables in macros declared using typeof()
>> suddenly start generating *hideous* code, particularly when pointless stack
>> spills get stackprotector all excited.
>
> Yeah, removing volatile can be a bit annoying.
>
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
>
> And if we had the rule that READ_ONCE() was an arithmetic type, you could do
>
>	typeof(0+(*p)) __var;
>
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
>
> But that doesn't work with structures or unions, of course.

We do have a READ_ONCE on the following union in s390 code.

union ipte_control {
	unsigned long val;
	struct {
		unsigned long k  : 1;
		unsigned long kh : 31;
		unsigned long kg : 32;
	};
};

In fact this one was the original failure case why we changed ACCESS_ONCE.
see arch/s390/kvm/gaccess.c
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds wrote:
>
> Let me think about it.

How about we just get rid of the union entirely, and just use
'unsigned long' or 'unsigned long long' depending on the size.

Something like the attached patch - it still requires that it be an
arithmetic type, but now because of the final cast.

But it might still be a cast to a volatile type, of course. Then the
result will be volatile, but at least now READ_ONCE() won't be taking
the address of a volatile variable on the stack - does that at least
fix some of the horrible code generation. Hmm?

This is untested, because I obviously still have the cases of
structures (page table entries) being accessed once..

             Linus

 include/linux/compiler.h | 33 +
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5e88e7e33abe..8b4282194f16 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,18 +179,18 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 
 #include
 
-#define __READ_ONCE_SIZE						\
-({									\
-	switch (size) {							\
-	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
-	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
-	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
-	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
-	default:							\
-		barrier();						\
-		__builtin_memcpy((void *)res, (const void *)p, size);	\
-		barrier();						\
-	}								\
+/* "unsigned long" or "unsigned long long" - make it fit in a register if possible */
+#define __READ_ONCE_TYPE(size) \
+	__typeof__(__builtin_choose_expr(size > sizeof(0UL), 0ULL, 0UL))
+
+#define __READ_ONCE_SIZE						\
+({									\
+	switch (size) {							\
+	case 1: *(unsigned long *)res = *(volatile __u8 *)p; break;	\
+	case 2: *(unsigned long *)res = *(volatile __u16 *)p; break;	\
+	case 4: *(unsigned long *)res = *(volatile __u32 *)p; break;	\
+	case 8: *(unsigned long long *)res = *(volatile __u64 *)p; break; \
+	}								\
 })
 
 static __always_inline
@@ -258,13 +258,14 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 
 #define __READ_ONCE(x, check)						\
 ({									\
-	union { typeof(x) __val; char __c[1]; } __u;			\
+	__READ_ONCE_TYPE(sizeof(x)) __u;				\
+	compiletime_assert(sizeof(x) <= sizeof(__u), "READ_ONCE type"); \
 	if (check)							\
-		__read_once_size(&(x), __u.__c, sizeof(x));		\
+		__read_once_size(&(x), &__u, sizeof(x));		\
 	else								\
-		__read_once_size_nocheck(&(x), __u.__c, sizeof(x));	\
+		__read_once_size_nocheck(&(x), &__u, sizeof(x));	\
 	smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
-	__u.__val;							\
+	(__typeof__(x))__u;						\
 })
 
 #define READ_ONCE(x) __READ_ONCE(x, 1)
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Tue, Dec 17, 2019 at 10:05:53AM -0800, Linus Torvalds wrote: > On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds > wrote: > > > > Let me think about it. > > .. and in the short term, maybe for code generation, the right thing > is to just do the cast in the bitops, where we can just cast to > "unsigned long *" and remove the volatile that way. Yeah, I think I'll spin that patch series tomorrow anyway, since I don't think we need to hold it up. > I'm still hoping there's a trick, but.. Well, there's always Peter's awful hack [1] but it's really gross. FWIW, I've pushed the handful of patches I have to [2], which drop the GCC 4.8 workaround and introduce a non-atomic version instead of the '__builtin_memcpy()'. Will [1] https://lore.kernel.org/lkml/20191213125618.gd2...@hirez.programming.kicks-ass.net [2] https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=rwonce/cleanup
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds wrote: > > Let me think about it. .. and in the short term, maybe for code generation, the right thing is to just do the cast in the bitops, where we can just cast to "unsigned long *" and remove the volatile that way. I'm still hoping there's a trick, but.. Linus
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Tue, Dec 17, 2019 at 9:07 AM Will Deacon wrote: > > However, I'm really banging my head against the compiler trying to get > your trick above to work for pointer types when the pointed-to-type is > not defined. You are right, of course. The trick works fine with arithmetic types, but since it does use arithmetic, it requires that pointer types be not only declared, but defined. The addition wants the size of the underlying type (even though with an addition of zero it wouldn't be required - but that's not how C works). Let me think about it. Linus
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 12:49:52PM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote:
> >
> > The root of my concern in all of this, and what started me looking at it in
> > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > for a pointer means that local variables in macros declared using typeof()
> > suddenly start generating *hideous* code, particularly when pointless stack
> > spills get stackprotector all excited.
>
> Yeah, removing volatile can be a bit annoying.
>
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
>
> And if we had the rule that READ_ONCE() was an arithmetic type, you could do
>
>	typeof(0+(*p)) __var;
>
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
>
> But that doesn't work with structures or unions, of course.
>
> I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> have it with 64-bit entities on 32-bit machines, but that's ok with
> the "0+" trick.

Other than the two trivial examples Arnd and I spotted, it looks like
we're in for some fun with the page-table types such as pud_t, but that
/should/ be fixable with enough effort.

However, I'm really banging my head against the compiler trying to get
your trick above to work for pointer types when the pointed-to type is
not defined. As a very cut down example (I pulled this back out of the
preprocessor and cleaned it up a bit):

struct dentry {
	struct inode *d_inode;
};

static inline struct inode *d_inode_rcu(struct dentry *dentry)
{
	return ({
		typeof(0 + dentry->d_inode) __x =
			(*(volatile typeof(dentry->d_inode) *)&(dentry->d_inode));
		(typeof(dentry->d_inode))__x;
	});
}

Trying to compile this results in:

  | In function 'd_inode_rcu':
  | error: invalid use of undefined type 'struct inode'

whereas it compiles fine if you remove the '0 +' from the first typeof.
What am I missing? Perhaps the compiler wants the size information of
'struct inode' before it will contemplate the arithmetic, but if so then
I don't think we can use this trick after all. Hmm.

Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Mon, Dec 16, 2019 at 11:28 AM Will Deacon wrote: > On Fri, Dec 13, 2019 at 02:17:08PM +0100, Arnd Bergmann wrote: > > On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds > > wrote: > > > On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote: > > > > The root of my concern in all of this, and what started me looking at > > > > it in > > > > the first place, is the interaction with 'typeof()'. Inheriting > > > > 'volatile' > > > > for a pointer means that local variables in macros declared using > > > > typeof() > > > > suddenly start generating *hideous* code, particularly when pointless > > > > stack > > > > spills get stackprotector all excited. > > > > > > Yeah, removing volatile can be a bit annoying. > > > > > > For the particular case of the bitops, though, it's not an issue. > > > Since you know the type there, you can just cast it. > > > > > > And if we had the rule that READ_ONCE() was an arithmetic type, you could > > > do > > > > > > typeof(0+(*p)) __var; > > > > > > since you might as well get the integer promotion anyway (on the > > > non-volatile result). > > > > > > But that doesn't work with structures or unions, of course. > > > > > > I'm not entirely sure we have READ_ONCE() with a struct. I do know we > > > have it with 64-bit entities on 32-bit machines, but that's ok with > > > the "0+" trick. > > > > I'll have my randconfig builder look for instances, so far I found one, > > see below. My feeling is that it would be better to enforce at least > > the size being a 1/2/4/8, to avoid cases where someone thinks > > the access is atomic, but it falls back on a memcpy. > > I've been using something similar built on compiletime_assert_atomic_type() > and I spotted another instance in the xdp code (xskq_validate_desc()) which > tries to READ_ONCE() on a 128-bit descriptor, although a /very/ quick read > of the code suggests that this probably can't be concurrently modified if > the ring indexes are synchronised properly. That's the only other one I found. 
I have not checked how many are structs that are the size of a normal
u8/u16/u32/u64, or if there are types that have a lower alignment than
their size, such as a __u16[2] that might span a page boundary.

> However, enabling this for 32-bit ARM is total carnage; as Linus mentioned,
> a whole bunch of code appears to be relying on atomic 64-bit access of
> READ_ONCE(); the perf ring buffer, io_uring, the scheduler, pm_runtime,
> cpuidle, ... :(
>
> Unfortunately, at least some of these *do* look like bugs, but I can't see
> how we can fix them, not least because the first two are user ABI afaict. It
> may also be that in practice we get 2x32-bit stores, and that works out fine
> when storing a 32-bit virtual address. I'm not sure what (if anything) the
> compiler guarantees in these cases.

Would it help if 32-bit architectures use atomic64_read() and
atomic64_set() to implement a 64-bit READ_ONCE()/WRITE_ONCE(), or would
that make it worse in other ways? On mips32, riscv32 and some minor
32-bit architectures with SMP support (xtensa, csky, hexagon, openrisc,
parisc32, sparc32 and ppc32 AFAICT) this ends up using a spinlock for
GENERIC_ATOMIC64, but at least ARMv6+, i586+ and most ARC should be fine.

(Side note: the ARMv7 implementation is suboptimal for ARMv7VE+ if LPAE
is disabled; I think we really need to add Kconfig options for ARMv7VE
and 32-bit ARMv8 to improve this and things like integer divide.)

Arnd
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Mon, Dec 16, 2019 at 10:28:06AM +, Will Deacon wrote: > However, enabling this for 32-bit ARM is total carnage; as Linus mentioned, > a whole bunch of code appears to be relying on atomic 64-bit access of > READ_ONCE(); the perf ring buffer, io_uring, the scheduler, pm_runtime, > cpuidle, ... :( > > Unfortunately, at least some of these *do* look like bugs, but I can't see > how we can fix them, not least because the first two are user ABI afaict. It > may also be that in practice we get 2x32-bit stores, and that works out fine > when storing a 32-bit virtual address. I'm not sure what (if anything) the > compiler guarantees in these cases. Perf does indeed have a (known) problem here for the head/tail values. Last time we looked at that nobody could really come up with a sane solution that wouldn't break something. I'll try and dig out that thread. Perhaps casting the value to 'unsigned long' internally might work, I forgot the details.
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Fri, Dec 13, 2019 at 02:17:08PM +0100, Arnd Bergmann wrote: > On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds > wrote: > > On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote: > > > The root of my concern in all of this, and what started me looking at it > > > in > > > the first place, is the interaction with 'typeof()'. Inheriting 'volatile' > > > for a pointer means that local variables in macros declared using typeof() > > > suddenly start generating *hideous* code, particularly when pointless > > > stack > > > spills get stackprotector all excited. > > > > Yeah, removing volatile can be a bit annoying. > > > > For the particular case of the bitops, though, it's not an issue. > > Since you know the type there, you can just cast it. > > > > And if we had the rule that READ_ONCE() was an arithmetic type, you could do > > > > typeof(0+(*p)) __var; > > > > since you might as well get the integer promotion anyway (on the > > non-volatile result). > > > > But that doesn't work with structures or unions, of course. > > > > I'm not entirely sure we have READ_ONCE() with a struct. I do know we > > have it with 64-bit entities on 32-bit machines, but that's ok with > > the "0+" trick. > > I'll have my randconfig builder look for instances, so far I found one, > see below. My feeling is that it would be better to enforce at least > the size being a 1/2/4/8, to avoid cases where someone thinks > the access is atomic, but it falls back on a memcpy. I've been using something similar built on compiletime_assert_atomic_type() and I spotted another instance in the xdp code (xskq_validate_desc()) which tries to READ_ONCE() on a 128-bit descriptor, although a /very/ quick read of the code suggests that this probably can't be concurrently modified if the ring indexes are synchronised properly. 
However, enabling this for 32-bit ARM is total carnage; as Linus mentioned, a whole bunch of code appears to be relying on atomic 64-bit access of READ_ONCE(); the perf ring buffer, io_uring, the scheduler, pm_runtime, cpuidle, ... :( Unfortunately, at least some of these *do* look like bugs, but I can't see how we can fix them, not least because the first two are user ABI afaict. It may also be that in practice we get 2x32-bit stores, and that works out fine when storing a 32-bit virtual address. I'm not sure what (if anything) the compiler guarantees in these cases. Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Fri, Dec 13, 2019 at 1:33 PM Arnd Bergmann wrote:
>
> A few hundred randconfig (x86, arm32 and arm64) builds later I
> still only found one other instance:

Just send me the pull request to make READ_ONCE() and WRITE_ONCE() be
arithmetic types, and your two trivial fixes, and let's get this over
with.

With that, you can remove the 'volatile' with my simple 'typeof(0+*(p))'
trick, and we're all good, and we don't need to worry about compiler
versions either.

I'm willing to take that after the merge window as a "sanity fix".

             Linus
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Fri, Dec 13, 2019 at 2:17 PM Arnd Bergmann wrote:
>
> On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds wrote:
>
> I'll have my randconfig builder look for instances, so far I found one,
> see below. My feeling is that it would be better to enforce at least
> the size being a 1/2/4/8, to avoid cases where someone thinks
> the access is atomic, but it falls back on a memcpy.
>
>        Arnd
>
> diff --git a/drivers/xen/time.c b/drivers/xen/time.c
> index 0968859c29d0..adb492c0aa34 100644
> --- a/drivers/xen/time.c
> +++ b/drivers/xen/time.c
> @@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
>	do {
>		state_time = get64(&state->state_entry_time);
>		rmb();	/* Hypervisor might update data. */
> -		*res = READ_ONCE(*state);
> +		memcpy(res, state, sizeof(*res));
>		rmb();	/* Hypervisor might update data. */
>	} while (get64(&state->state_entry_time) != state_time ||
>		 (state_time & XEN_RUNSTATE_UPDATE));

A few hundred randconfig (x86, arm32 and arm64) builds later I still
only found one other instance:

diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index eddae4688862..1c1f33447e96 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -304,7 +304,9 @@ static inline struct xdp_desc *xskq_validate_desc(struct xsk_queue *q,
 	struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
 	unsigned int idx = q->cons_tail & q->ring_mask;
 
-	*desc = READ_ONCE(ring->desc[idx]);
+	barrier();
+	memcpy(desc, &ring->desc[idx], sizeof(*desc));
+	barrier();
 	if (xskq_is_valid_desc(q, desc, umem))
 		return desc;

       Arnd
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
Segher Boessenkool writes:
> Hi!
>
> On Fri, Dec 13, 2019 at 11:07:55PM +1100, Michael Ellerman wrote:
>> I tried this:
>>
>>> @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
>>>  */
>>> #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>>>
>>> +#else /* GCC_VERSION < 40800 */
>>> +
>>> +#define READ_ONCE_NOCHECK(x)					\
>>> +({								\
>>> +	typeof(x) __x = *(volatile typeof(x))&(x);		\
>>
>> Didn't compile, needed:
>>
>>	typeof(x) __x = *(volatile typeof(x) *)&(x);		\
>>
>>> +	smp_read_barrier_depends();				\
>>> +	__x;							\
>>> +})
>>
>> And that works for me. No extra stack check stuff.
>>
>> I guess the question is does that version of READ_ONCE() implement the
>> read once semantics. Do we have a good way to test that?
>>
>> The only differences are because of the early return in the generic
>> test_and_set_bit_lock():
>
> No, there is another difference:
>
>>   30	ld	r10,560(r9)
>>   31	std	r10,104(r1)
>>   32	ld	r10,104(r1)
>>   33	andi.	r10,r10,1
>>   34	bne	29	bne
>
> The stack var is volatile, so it is read back immediately after writing
> it, here. This is a bad idea for performance, in general.

Argh, yuck. Thanks, I shouldn't try to read asm listings at 11pm.

So that just confirms what Will was saying further up the thread about
the volatile pointer, rather than READ_ONCE() per se.

cheers
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Fri, Dec 13, 2019 at 01:56:18PM +0100, Peter Zijlstra wrote:
>
> Excellent! I had to change it to something like:
>
> #define unqual_typeof(x) typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))
>
> but that does indeed work!
>
> Now I suppose we should wrap that in a symbol that indicates our
> compiler does indeed support _Atomic, otherwise things will come apart.
>
> That is, my gcc-4.6 doesn't seem to have it, while gcc-4.8 does, which
> is exactly the range that needs the daft READ_ONCE() construct, how
> convenient :/
>
> Something a little like this perhaps?

Yes, this looks good to me. Just a small nit here below.

> ---
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 7d9cc5ec4971..c389af602da8 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -75,9 +75,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
>
> #define __smp_store_release(p, v)					\
> do {									\
> -	typeof(p) __p = (p);						\
> -	union { typeof(*p) __val; char __c[1]; } __u =			\
> -		{ .__val = (__force typeof(*p)) (v) };			\
> +	unqual_typeof(p) __p = (p);					\
> +	union { unqual_typeof(*p) __val; char __c[1]; } __u =		\
> +		{ .__val = (__force unqual_typeof(*p)) (v) };		\

The two trailing backslashes are now off by one tab.

-- Luc
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
Hi! On Fri, Dec 13, 2019 at 11:07:55PM +1100, Michael Ellerman wrote: > I tried this: > > > @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, > > int size) > > */ > > #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0) > > > > +#else /* GCC_VERSION < 40800 */ > > + > > +#define READ_ONCE_NOCHECK(x) > > \ > > +({ \ > > + typeof(x) __x = *(volatile typeof(x))&(x); \ > > Didn't compile, needed: > > typeof(x) __x = *(volatile typeof())&(x); \ > > > > + smp_read_barrier_depends(); \ > > + __x; > > +}) > > > And that works for me. No extra stack check stuff. > > I guess the question is does that version of READ_ONCE() implement the > read once semantics. Do we have a good way to test that? > > The only differences are because of the early return in the generic > test_and_set_bit_lock(): No, there is another difference: > 30 ld r10,560(r9) > 31 std r10,104(r1) > 32 ld r10,104(r1) > 33 andi. r10,r10,1 > 34 bne29 bne > The stack var is volatile, so it is read back immediately after writing it, here. This is a bad idea for performance, in general. Segher
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote:
> > The root of my concern in all of this, and what started me looking at it in
> > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > for a pointer means that local variables in macros declared using typeof()
> > suddenly start generating *hideous* code, particularly when pointless stack
> > spills get stackprotector all excited.
>
> Yeah, removing volatile can be a bit annoying.
>
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
>
> And if we had the rule that READ_ONCE() was an arithmetic type, you could do
>
>	typeof(0+(*p)) __var;
>
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
>
> But that doesn't work with structures or unions, of course.
>
> I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> have it with 64-bit entities on 32-bit machines, but that's ok with
> the "0+" trick.

I'll have my randconfig builder look for instances, so far I found one,
see below. My feeling is that it would be better to enforce at least
the size being a 1/2/4/8, to avoid cases where someone thinks
the access is atomic, but it falls back on a memcpy.

       Arnd

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 0968859c29d0..adb492c0aa34 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
 	do {
 		state_time = get64(&state->state_entry_time);
 		rmb();	/* Hypervisor might update data. */
-		*res = READ_ONCE(*state);
+		memcpy(res, state, sizeof(*res));
 		rmb();	/* Hypervisor might update data. */
 	} while (get64(&state->state_entry_time) != state_time ||
 		 (state_time & XEN_RUNSTATE_UPDATE));

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5e88e7e33abe..f4ae360efdba 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,6 +179,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 
 #include
 
+extern void __broken_access_once(void *, const void *, unsigned long);
+
 #define __READ_ONCE_SIZE						\
 ({									\
 	switch (size) {							\
@@ -187,9 +189,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
 	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
 	default:							\
-		barrier();						\
-		__builtin_memcpy((void *)res, (const void *)p, size);	\
-		barrier();						\
+		__broken_access_once((void *)res, (const void *)p, size); \
 	}								\
 })
 
@@ -225,9 +225,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 	case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
 	case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
 	default:
-		barrier();
-		__builtin_memcpy((void *)p, (const void *)res, size);
-		barrier();
+		__broken_access_once((void *)p, (const void *)res, size);
 	}
 }
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Fri, Dec 13, 2019 at 11:47:06AM +0100, Luc Van Oostenryck wrote:
> On Thu, Dec 12, 2019 at 09:53:38PM +0100, Peter Zijlstra wrote:
> > Now, looking at the current GCC source:
> >
> >   https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707
> >
> > it seems that __typeof__() is supposed to strip all qualifiers from
> > _Atomic types. That led me to try:
> >
> >	typeof(_Atomic typeof(p)) __p = (p);
> >
> > But alas, I still get the same junk you got for ool_store_release() :/
>
> I was checking this to see if Sparse was ready to support this.
> I was a bit surprised because at first sight GCC was doing as
> it claims (typeof stripping const & volatile on _Atomic types)
> but your example wasn't working. But it's working if an
> intermediate var is used:
>	_Atomic typeof(p) tmp;
>	typeof(tmp) __p = (p);
> or, uglier but probably more practical:
>	typeof(({_Atomic typeof(p) tmp; })) __p = (p);
>
> Go figure!

Excellent! I had to change it to something like:

  #define unqual_typeof(x) typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))

but that does indeed work!

Now I suppose we should wrap that in a symbol that indicates our
compiler does indeed support _Atomic, otherwise things will come apart.

That is, my gcc-4.6 doesn't seem to have it, while gcc-4.8 does, which
is exactly the range that needs the daft READ_ONCE() construct, how
convenient :/

Something a little like this perhaps?
---
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 7d9cc5ec4971..c389af602da8 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -75,9 +75,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
 
 #define __smp_store_release(p, v)					\
 do {									\
-	typeof(p) __p = (p);						\
-	union { typeof(*p) __val; char __c[1]; } __u =			\
-	    { .__val = (__force typeof(*p)) (v) };			\
+	unqual_typeof(p) __p = (p);					\
+	union { unqual_typeof(*p) __val; char __c[1]; } __u =		\
+	    { .__val = (__force unqual_typeof(*p)) (v) };		\
 	compiletime_assert_atomic_type(*p);				\
 	kasan_check_write(__p, sizeof(*p));				\
 	switch (sizeof(*p)) {						\
@@ -110,8 +110,8 @@ do {									\
 
 #define __smp_load_acquire(p)						\
 ({									\
-	union { typeof(*p) __val; char __c[1]; } __u;			\
-	typeof(p) __p = (p);						\
+	union { unqual_typeof(*p) __val; char __c[1]; } __u;		\
+	unqual_typeof(p) __p = (p);					\
 	compiletime_assert_atomic_type(*p);				\
 	kasan_check_read(__p, sizeof(*p));				\
 	switch (sizeof(*p)) {						\
@@ -141,8 +141,8 @@ do {									\
 
 #define smp_cond_load_relaxed(ptr, cond_expr)				\
 ({									\
-	typeof(ptr) __PTR = (ptr);					\
-	typeof(*ptr) VAL;						\
+	unqual_typeof(ptr) __PTR = (ptr);				\
+	unqual_typeof(*ptr) VAL;					\
 	for (;;) {							\
 		VAL = READ_ONCE(*__PTR);				\
 		if (cond_expr)						\
@@ -154,8 +154,8 @@ do {									\
 
 #define smp_cond_load_acquire(ptr, cond_expr)				\
 ({									\
-	typeof(ptr) __PTR = (ptr);					\
-	typeof(*ptr) VAL;						\
+	unqual_typeof(ptr) __PTR = (ptr);				\
+	unqual_typeof(*ptr) VAL;					\
 	for (;;) {							\
 		VAL = smp_load_acquire(__PTR);				\
 		if (cond_expr)						\
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
Peter Zijlstra writes:
> On Thu, Dec 12, 2019 at 10:07:56AM +, Will Deacon wrote:
>
>> > So your proposed change _should_ be fine. Will, I'm assuming you never
>> > saw this on your ARGH64 builds when you did this code ?
>>
>> I did see it, but (a) looking at the code out-of-line makes it look a lot
>> worse than it actually is (so the ext4 example is really helpful -- thanks
>> Michael!) and (b) I chalked it up to a crappy compiler.
>>
>> However, see this comment from Arnd on my READ_ONCE series from the other
>> day:
>>
>>   https://lore.kernel.org/lkml/CAK8P3a0f=wvsqsbq4t0fmekcfe_mc3oarxaetvitsksa-d2...@mail.gmail.com
>>
>> In which case, I'm thinking that we should be doing better in READ_ONCE()
>> for non-buggy compilers which would also keep the KCSAN folks happy for this
>> code (and would help with [1] too).
>
> So something like this then? Although I suppose that should be moved
> into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

I tried this:

> @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
>   */
>  #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>
> +#else /* GCC_VERSION < 40800 */
> +
> +#define READ_ONCE_NOCHECK(x)					\
> +({								\
> +	typeof(x) __x = *(volatile typeof(x))&(x);		\

Didn't compile, needed:

	typeof(x) __x = *(volatile typeof(x) *)&(x);		\

> +	smp_read_barrier_depends();				\
> +	__x;
> +})

And that works for me. No extra stack check stuff.

I guess the question is does that version of READ_ONCE() implement the
read once semantics. Do we have a good way to test that?
The only differences are because of the early return in the generic
test_and_set_bit_lock():

   1  :                           1  :
   2  addis  r2,r12,281           2  addis  r2,r12,281
   3  addi   r2,r2,-22368         3  addi   r2,r2,-22064
   4  mflr   r0                   4  mflr   r0
   5  bl     <_mcount>            5  bl     <_mcount>
   6  mflr   r0                   6  mflr   r0
   7  std    r31,-8(r1)           7  std    r31,-8(r1)
   8  std    r30,-16(r1)          8  std    r30,-16(r1)
   9  mr     r31,r3               9  mr     r31,r3
  10  li     r3,24               10  li     r3,24
  11  std    r0,16(r1)           11  std    r0,16(r1)
  12  stdu   r1,-128(r1)         12  stdu   r1,-112(r1)
  13  ld     r30,920(r31)        13  ld     r30,920(r31)
  14  bl                         14  bl
  15  nop                        15  nop
  16  cmpdi  cr7,r3,0            16  cmpdi  cr7,r3,0
  17  beq    cr7,                17  beq    cr7,
  18  ld     r9,920(r31)         18  ld     r9,920(r31)
  19  ld     r10,96(r30)         19  ld     r10,96(r30)
  20  lwz    r7,84(r30)          20  lwz    r7,84(r30)
  21  ld     r8,104(r9)          21  ld     r8,104(r9)
  22  ld     r10,24(r10)         22  ld     r10,24(r10)
  23  lwz    r8,20(r8)           23  lwz    r8,20(r8)
  24  srd    r10,r10,r7          24  srd    r10,r10,r7
  25  cmpd   cr7,r10,r8          25  cmpd   cr7,r10,r8
  26  bne    cr7,                26  bne    cr7,
  27  lhz    r10,160(r9)         27  lhz    r10,160(r9)
  28  andi.  r10,r10,2           28  andi.  r10,r10,2
  29  bne
  30  ld     r10,560(r9)
  31  std    r10,104(r1)
  32  ld     r10,104(r1)
  33  andi.  r10,r10,1
  34  bne                        29  bne
  35  addi   r7,r9,560           30  addi   r9,r9,560
  36  li     r8,1                31  li     r10,1
  37  ldarx  r10,0,r7            32  ldarx  r3,0,r9,1
  38  or     r6,r8,r10           33  or     r8,r3,r10
  39  stdcx. r6,0,r7             34  stdcx. r8,0,r9
  40  bne-                       35  bne-
  41  isync                      36
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 09:53:38PM +0100, Peter Zijlstra wrote:
> Now, looking at the current GCC source:
>
>   https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707
>
> it seems that __typeof__() is supposed to strip all qualifiers from
> _Atomic types. That led me to try:
>
>	typeof(_Atomic typeof(p)) __p = (p);
>
> But alas, I still get the same junk you got for ool_store_release() :/

I was checking this to see if Sparse was ready to support this.
I was a bit surprised because at first sight GCC was doing as
it claims (typeof stripping const & volatile on _Atomic types)
but your example wasn't working. But it's working if an
intermediate var is used:
	_Atomic typeof(p) tmp;
	typeof(tmp) __p = (p);
or, uglier but probably more practical:
	typeof(({_Atomic typeof(p) tmp; })) __p = (p);

Go figure!

OTOH, at least on GCC 8.3, it seems to always do the same with
volatiles as it does with consts.

-- Luc Van Oostenryck
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 09:21:57PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 07:34:01PM +, Will Deacon wrote:
> > void ool_store_release(volatile unsigned long *ptr, unsigned long val)
> > {
> > 	smp_store_release(ptr, val);
> > }
> >
> > <ool_store_release>:
> >    0:	a9be7bfd	stp	x29, x30, [sp, #-32]!
> >    4:	9002	adrp	x2, 0 <__stack_chk_guard>
> >    8:	9142	add	x2, x2, #0x0
> >    c:	910003fd	mov	x29, sp
> >   10:	f9400043	ldr	x3, [x2]
> >   14:	f9000fa3	str	x3, [x29, #24]
> >   18:	d283	mov	x3, #0x0	// #0
> >   1c:	c89ffc01	stlr	x1, [x0]
> >   20:	f9400fa1	ldr	x1, [x29, #24]
> >   24:	f9400040	ldr	x0, [x2]
> >   28:	ca20	eor	x0, x1, x0
> >   2c:	b560	cbnz	x0, 38
> >   30:	a8c27bfd	ldp	x29, x30, [sp], #32
> >   34:	d65f03c0	ret
> >   38:	9400	bl	0 <__stack_chk_fail>
> >
> > It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
> > I was looking at getting rid of volatile where it's not strictly needed.
> > I'm certainly open to other suggestions, I just haven't managed to think
> > of anything else.
>
> We could move the kernel to C++ and write:
>
>	std::remove_volatile::type __p = (p);
>
> /me runs like hell...

Also, the GCC __auto_type thing strips _Atomic and const qualifiers but
for some obscure raisin forgets to strip volatile :/

  https://gcc.gnu.org/ml/gcc-patches/2013-11/msg01378.html

Now, looking at the current GCC source:

  https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707

it seems that __typeof__() is supposed to strip all qualifiers from
_Atomic types. That led me to try:

	typeof(_Atomic typeof(p)) __p = (p);

But alas, I still get the same junk you got for ool_store_release() :/
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 11:34 AM Will Deacon wrote: > > The root of my concern in all of this, and what started me looking at it in > the first place, is the interaction with 'typeof()'. Inheriting 'volatile' > for a pointer means that local variables in macros declared using typeof() > suddenly start generating *hideous* code, particularly when pointless stack > spills get stackprotector all excited. Yeah, removing volatile can be a bit annoying. For the particular case of the bitops, though, it's not an issue. Since you know the type there, you can just cast it. And if we had the rule that READ_ONCE() was an arithmetic type, you could do typeof(0+(*p)) __var; since you might as well get the integer promotion anyway (on the non-volatile result). But that doesn't work with structures or unions, of course. I'm not entirely sure we have READ_ONCE() with a struct. I do know we have it with 64-bit entities on 32-bit machines, but that's ok with the "0+" trick. Linus
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 07:34:01PM +, Will Deacon wrote:
> void ool_store_release(volatile unsigned long *ptr, unsigned long val)
> {
> 	smp_store_release(ptr, val);
> }
>
> <ool_store_release>:
>    0:	a9be7bfd	stp	x29, x30, [sp, #-32]!
>    4:	9002	adrp	x2, 0 <__stack_chk_guard>
>    8:	9142	add	x2, x2, #0x0
>    c:	910003fd	mov	x29, sp
>   10:	f9400043	ldr	x3, [x2]
>   14:	f9000fa3	str	x3, [x29, #24]
>   18:	d283	mov	x3, #0x0	// #0
>   1c:	c89ffc01	stlr	x1, [x0]
>   20:	f9400fa1	ldr	x1, [x29, #24]
>   24:	f9400040	ldr	x0, [x2]
>   28:	ca20	eor	x0, x1, x0
>   2c:	b560	cbnz	x0, 38
>   30:	a8c27bfd	ldp	x29, x30, [sp], #32
>   34:	d65f03c0	ret
>   38:	9400	bl	0 <__stack_chk_fail>
>
> It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
> I was looking at getting rid of volatile where it's not strictly needed.
> I'm certainly open to other suggestions, I just haven't managed to think
> of anything else.

We could move the kernel to C++ and write:

	std::remove_volatile::type __p = (p);

/me runs like hell...
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
Hi Linus, On Thu, Dec 12, 2019 at 10:43:05AM -0800, Linus Torvalds wrote: > On Thu, Dec 12, 2019 at 10:06 AM Will Deacon wrote: > > > > I'm currently trying to solve the issue by removing volatile from the bitop > > function signatures > > I really think that's the wrong thing to do. > > The bitop signature really should be "volatile" (and it should be > "const volatile" for test_bit, but I'm not sure anybody cares). Agreed on the "const" part, although I do think the "volatile" aspect has nasty side-effects despite being a visual indicator that we're eliding locks. More below. > Exactly because it's simply valid to say "hey, my data is volatile, > but do an atomic test of this bit". So it might be volatile in the > caller. That's fair, although the cases I've run into so far for the bitops are usually just that the functions have been wrapped, and volatile could easily be dropped from the caller as well (e.g. assign_bit(), __node_clear(), linkmode_test_bit()). > Now, I generally frown on actual volatile data structures - because > the data structure volatility often depends on _context_. The same > data might be volatile in one context (when you do some optimistic > test on it without locking), but 100% stable in another (when you do > have a lock). There are cases in driver code where it looks as though data members are being declared volatile specifically because of the bitops type signatures (e.g. 'wrapped' in 'struct mdp5_mdss', 'context_flag' in 'struct drm_device', 'state' in 'struct s2io_nic'). Yeah, it's bogus, but I think that having the modifier in the function signature is still leading people astray. > So I don't want to see "volatile" on data definitions ("jiffies" being > the one traditional exception), but marking things volatile in code > (because you know you're working with unlocked data) and then passing > them down to various helper functions - including the bitops ones - is > quite traditional and accepted. 
> In other words, 'volatile" should be treated the same way "const" is
> largely treated in C.
>
> A pointer to "const" data doesn't mean that the data is read-only, or
> that it cannot be modified _elsewhere_, it means that within this
> particular context and this copy of the pointer we promise not to
> write to it.
>
> Similarly, a pointer to "volatile" data doesn't mean that the data
> might not be stable once you take a lock, for example. So it's ok to
> have volatile pointers even if the data declaration itself isn't
> volatile - you're stating something about the context, not something
> fundamental about the data.
>
> And in the context of the bit operations, "volatile" is the correct thing
> to do.

The root of my concern in all of this, and what started me looking at it
in the first place, is the interaction with 'typeof()'. Inheriting
'volatile' for a pointer means that local variables in macros declared
using typeof() suddenly start generating *hideous* code, particularly
when pointless stack spills get stackprotector all excited.

Even if we simplify READ_ONCE() back to its old incantation, the
acquire/release accessors will have the exact same issues on
architectures that implement them. For example, consider this code on
arm64:

  void ool_store_release(unsigned long *ptr, unsigned long val)
  {
  	smp_store_release(ptr, val);
  }

This compiles to a single instruction plus return, which is what we want:

  <ool_store_release>:
     0:	c89ffc01	stlr	x1, [x0]
     4:	d65f03c0	ret

Now, see what happens if we make the 'ptr' argument volatile:

  void ool_store_release(volatile unsigned long *ptr, unsigned long val)
  {
  	smp_store_release(ptr, val);
  }

  <ool_store_release>:
     0:	a9be7bfd	stp	x29, x30, [sp, #-32]!
     4:	9002	adrp	x2, 0 <__stack_chk_guard>
     8:	9142	add	x2, x2, #0x0
     c:	910003fd	mov	x29, sp
    10:	f9400043	ldr	x3, [x2]
    14:	f9000fa3	str	x3, [x29, #24]
    18:	d283	mov	x3, #0x0	// #0
    1c:	c89ffc01	stlr	x1, [x0]
    20:	f9400fa1	ldr	x1, [x29, #24]
    24:	f9400040	ldr	x0, [x2]
    28:	ca20	eor	x0, x1, x0
    2c:	b560	cbnz	x0, 38
    30:	a8c27bfd	ldp	x29, x30, [sp], #32
    34:	d65f03c0	ret
    38:	9400	bl	0 <__stack_chk_fail>

It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
I was looking at getting rid of volatile where it's not strictly needed.
I'm certainly open to other suggestions, I just haven't managed to think
of anything else.

Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 10:06 AM Will Deacon wrote: > > I'm currently trying to solve the issue by removing volatile from the bitop > function signatures I really think that's the wrong thing to do. The bitop signature really should be "volatile" (and it should be "const volatile" for test_bit, but I'm not sure anybody cares). Exactly because it's simply valid to say "hey, my data is volatile, but do an atomic test of this bit". So it might be volatile in the caller. Now, I generally frown on actual volatile data structures - because the data structure volatility often depends on _context_. The same data might be volatile in one context (when you do some optimistic test on it without locking), but 100% stable in another (when you do have a lock). So I don't want to see "volatile" on data definitions ("jiffies" being the one traditional exception), but marking things volatile in code (because you know you're working with unlocked data) and then passing them down to various helper functions - including the bitops ones - is quite traditional and accepted. In other words, 'volatile" should be treated the same way "const" is largely treated in C. A pointer to "const" data doesn't mean that the data is read-only, or that it cannot be modified _elsewhere_, it means that within this particular context and this copy of the pointer we promise not to write to it. Similarly, a pointer to "volatile" data doesn't mean that the data might not be stable once you take a lock, for example. So it's ok to have volatile pointers even if the data declaration itself isn't volatile - you're stating something about the context, not something fundamental about the data. And in the context of the bit operations, "volatile" is the correct thing to do. Linus
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On 12.12.19 19:06, Will Deacon wrote:
> On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
>> On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra wrote:
>>>
>>> +#ifdef GCC_VERSION < 40800
>>
>> Where does that 4.8 version check come from, and why?
>>
>> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
>> bugzilla is down, so
>>
>>    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
>>
>> currently gives an "Internal Server Error" for me.
>>
>> [ Delete the horrid code we have because of gcc bugs ]
>>
>>> +#else /* GCC_VERSION < 40800 */
>>> +
>>> +#define READ_ONCE_NOCHECK(x)				\
>>> +({							\
>>> +	typeof(x) __x = *(volatile typeof(x))&(x);	\
>>
>> I think we can/should just do this unconditionally if it helps the issue.
>
> I'm currently trying to solve the issue by removing volatile from the bitop
> function signatures, but it's grotty because there are quite a few callers
> to fix up. I'm still trying to do it, because removing volatile fields from
> structures is generally a "good thing", but I'd be keen to simplify
> READ_ONCE() as you suggest regardless.

As I am the one who added the foundation of READ_ONCE's ugliness, I am
now in favour of re-simplifying it again. I was first a bit scared about
re-introducing bugs, but the gcc testsuite has this particular case
covered, so hopefully we should not see the issue with volatile and
aggregate types again.

Christian
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra wrote:
> >
> > +#ifdef GCC_VERSION < 40800
>
> Where does that 4.8 version check come from, and why?
>
> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
> bugzilla is down, so
>
>    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
>
> currently gives an "Internal Server Error" for me.
>
> [ Delete the horrid code we have because of gcc bugs ]
>
> > +#else /* GCC_VERSION < 40800 */
> > +
> > +#define READ_ONCE_NOCHECK(x)				\
> > +({							\
> > +	typeof(x) __x = *(volatile typeof(x))&(x);	\
>
> I think we can/should just do this unconditionally if it helps the issue.

I'm currently trying to solve the issue by removing volatile from the
bitop function signatures, but it's grotty because there are quite a few
callers to fix up. I'm still trying to do it, because removing volatile
fields from structures is generally a "good thing", but I'd be keen to
simplify READ_ONCE() as you suggest regardless.

> Maybe add a warning about how gcc < 4.8 might mis-compile the kernel -
> those versions are getting close to being unacceptable for kernel
> builds anyway.
>
> We could also look at being stricter for the normal READ/WRITE_ONCE(),
> and require that they are
>
>  (a) regular integer types
>
>  (b) fit in an atomic word
>
> We actually did (b) for a while, until we noticed that we do it on
> loff_t's etc and relaxed the rules. But maybe we could have a
> "non-atomic" version of READ/WRITE_ONCE() that is used for the
> questionable cases?

That makes a lot of sense to me, and it would allow us to use
compiletime_assert_atomic_type() as we do for the acquire/release
accessors.

Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote: > Yeah, I know, but this really wants a comment. Sadly it looks like gcc > bugzilla is down, so > >https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 > > currently gives an "Internal Server Error" for me. We're being DoSsed again. Reload, it will work after a while :-/ Segher
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra wrote:
>
> +#ifdef GCC_VERSION < 40800

Where does that 4.8 version check come from, and why?

Yeah, I know, but this really wants a comment. Sadly it looks like gcc
bugzilla is down, so

   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

currently gives an "Internal Server Error" for me.

[ Delete the horrid code we have because of gcc bugs ]

> +#else /* GCC_VERSION < 40800 */
> +
> +#define READ_ONCE_NOCHECK(x)				\
> +({							\
> +	typeof(x) __x = *(volatile typeof(x))&(x);	\

I think we can/should just do this unconditionally if it helps the issue.

Maybe add a warning about how gcc < 4.8 might mis-compile the kernel -
those versions are getting close to being unacceptable for kernel
builds anyway.

We could also look at being stricter for the normal READ/WRITE_ONCE(),
and require that they are

 (a) regular integer types

 (b) fit in an atomic word

We actually did (b) for a while, until we noticed that we do it on
loff_t's etc and relaxed the rules. But maybe we could have a
"non-atomic" version of READ/WRITE_ONCE() that is used for the
questionable cases?

Linus
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 05:04:27PM +, Will Deacon wrote: > On Thu, Dec 12, 2019 at 11:46:10AM +0100, Peter Zijlstra wrote: > > On Thu, Dec 12, 2019 at 10:07:56AM +, Will Deacon wrote: > > > > > > So your proposed change _should_ be fine. Will, I'm assuming you never > > > > saw this on your ARGH64 builds when you did this code ? > > > > > > I did see it, but (a) looking at the code out-of-line makes it look a lot > > > worse than it actually is (so the ext4 example is really helpful -- thanks > > > Michael!) and (b) I chalked it up to a crappy compiler. > > > > > > However, see this comment from Arnd on my READ_ONCE series from the other > > > day: > > > > > > https://lore.kernel.org/lkml/CAK8P3a0f=wvsqsbq4t0fmekcfe_mc3oarxaetvitsksa-d2...@mail.gmail.com > > > > > > In which case, I'm thinking that we should be doing better in READ_ONCE() > > > for non-buggy compilers which would also keep the KCSAN folks happy for > > > this > > > code (and would help with [1] too). > > > > So something like this then? Although I suppose that should be moved > > into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so. > > Ah wait, I think we've been looking at this wrong. The volatile pointer > argument is actually the problem here, not READ_ONCE()! The use of typeof() > means that the temporary variable to which __READ_ONCE_SIZE writes ends up > being a volatile store, so it can't be optimised away. This is why we get > a stack access and why stack protector then wrecks the codegen for us. Hmm, it's actually probably the volatile read which is causing the problem, since __READ_ONCE_SIZE has casted that away and just uses "void *", but you get the idea. Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 11:46:10AM +0100, Peter Zijlstra wrote: > On Thu, Dec 12, 2019 at 10:07:56AM +, Will Deacon wrote: > > > > So your proposed change _should_ be fine. Will, I'm assuming you never > > > saw this on your ARGH64 builds when you did this code ? > > > > I did see it, but (a) looking at the code out-of-line makes it look a lot > > worse than it actually is (so the ext4 example is really helpful -- thanks > > Michael!) and (b) I chalked it up to a crappy compiler. > > > > However, see this comment from Arnd on my READ_ONCE series from the other > > day: > > > > https://lore.kernel.org/lkml/CAK8P3a0f=wvsqsbq4t0fmekcfe_mc3oarxaetvitsksa-d2...@mail.gmail.com > > > > In which case, I'm thinking that we should be doing better in READ_ONCE() > > for non-buggy compilers which would also keep the KCSAN folks happy for this > > code (and would help with [1] too). > > So something like this then? Although I suppose that should be moved > into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so. Ah wait, I think we've been looking at this wrong. The volatile pointer argument is actually the problem here, not READ_ONCE()! The use of typeof() means that the temporary variable to which __READ_ONCE_SIZE writes ends up being a volatile store, so it can't be optimised away. This is why we get a stack access and why stack protector then wrecks the codegen for us. I'll cook a patch getting rid of those volatiles. Will
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
Hi,

On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> Some of the generic versions don't generate good code compared to our
> versions, but that's because READ_ONCE() is triggering stack protector
> to be enabled.

The *big* difference is the generic code has a special path that does
not do an atomic access at all. Either that is a good idea or not, but
we probably should not change the behaviour here, not without
benchmarking anyway.

> For example, comparing an out-of-line copy of the generic and ppc
> versions of test_and_set_bit_lock():

(With what GCC version, and what exact flags?)

(A stand-alone testcase would be nice too, btw).

(Michael gave me one, thanks!)

> If you squint, the generated code for the actual logic is pretty similar, but
> the stack protector gunk makes a big mess.

And with stack protector it cannot shrink-wrap the exit, one of the
bigger performance costs of the stack protector. The extra branch in
the generic code isn't fun either (but maybe it is good for
performance?)

> It's particularly bad here
> because the ppc version doesn't even need a stack frame.

You are hit by this:

  if (... || (RECORD_OR_UNION_TYPE_P (var_type)
	      && record_or_union_type_has_array_p (var_type)) ...)

(in the GCC code, stack_protect_decl_p (), cfgexpand.c) for the
variable __u from

  #define __READ_ONCE(x, check)						\
  ({									\
	union { typeof(x) __val; char __c[1]; } __u;			\
	__read_once_size(&(x), __u.__c, sizeof(x));			\
	smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
	__u.__val;							\
  })

This is all optimised away later, but at the point this decision is
made GCC does not know that.

> So READ_ONCE() + STACKPROTECTOR_STRONG is problematic. The root cause is
> presumably that READ_ONCE() does an access to an on-stack variable,
> which triggers the heuristics in the compiler that the stack needs
> protecting.

Not exactly, but the problem is READ_ONCE alright.
> It seems like a compiler "mis-feature" that a constant-sized access to the stack
> triggers the stack protector logic, especially when the access is eventually
> optimised away. But I guess that's probably what we get for doing tricks like
> READ_ONCE() in the first place :/

__c is an array. That is all that matters. I don't think it is very
reasonable to fault GCC for this.

> I tried going back to the version of READ_ONCE() that doesn't use a
> union, ie. effectively reverting dd36929720f4 ("kernel: make READ_ONCE()
> valid on const arguments") to get:
>
>   #define READ_ONCE(x) \
>	({ typeof(x) __val; __read_once_size(&(x), &__val, sizeof(__val)); __val; })

With that, it is that the address of __val is taken:

  ... || TREE_ADDRESSABLE (var) ...

> But it makes no difference, the stack protector stuff still triggers. So
> I guess it's simply taking the address of a stack variable that triggers
> it.

Not in the earlier testcase. Btw, there is no such thing as a "stack
variable" at that point in the compiler: it just is a local var.

> There seems to be a function attribute to enable stack protector for a
> function, but not one to disable it:
>
>   https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/Common-Function-Attributes.html#index-stack_005fprotect-function-attribute

Yes.

> That may not be a good solution even if it did exist, because it would
> potentially disable stack protector in places where we do want it
> enabled.

Right, I don't think we want that, such an attribute invites people to
write dangerous code. (You already can just put the functions that you
want to be unsafe in a separate source file... It sounds even sillier
that way, heh).

Segher
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 10:07:56AM +, Will Deacon wrote:
> > So your proposed change _should_ be fine. Will, I'm assuming you never
> > saw this on your ARGH64 builds when you did this code ?
>
> I did see it, but (a) looking at the code out-of-line makes it look a lot
> worse than it actually is (so the ext4 example is really helpful -- thanks
> Michael!) and (b) I chalked it up to a crappy compiler.
>
> However, see this comment from Arnd on my READ_ONCE series from the other
> day:
>
>   https://lore.kernel.org/lkml/CAK8P3a0f=wvsqsbq4t0fmekcfe_mc3oarxaetvitsksa-d2...@mail.gmail.com
>
> In which case, I'm thinking that we should be doing better in READ_ONCE()
> for non-buggy compilers which would also keep the KCSAN folks happy for this
> code (and would help with [1] too).

So something like this then? Although I suppose that should be moved
into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

---
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ad8c76144a3c..8326e2cf28b4 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,20 +179,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 
 #include
 #include
-
-#define __READ_ONCE_SIZE						\
-({									\
-	switch (size) {							\
-	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
-	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
-	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
-	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
-	default:							\
-		barrier();						\
-		__builtin_memcpy((void *)res, (const void *)p, size);	\
-		barrier();						\
-	}								\
-})
+#include
+#include
 
 #ifdef CONFIG_KASAN
 /*
@@ -222,6 +210,22 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 #define __no_sanitize_or_inline __always_inline
 #endif
 
+#ifdef GCC_VERSION < 40800
+
+#define __READ_ONCE_SIZE						\
+({									\
+	switch (size) {							\
+	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
+	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
+	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
+	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
+	default:							\
+		barrier();						\
+		__builtin_memcpy((void *)res, (const void *)p, size);	\
+		barrier();						\
+	}								\
+})
+
 static __no_kcsan_or_inline
 void __read_once_size(const volatile void *p, void *res, int size)
 {
@@ -274,9 +278,6 @@ void __write_once_size(volatile void *p, void *res, int size)
  * with an explicit memory barrier or atomic instruction that provides the
  * required ordering.
  */
-#include
-#include
-
 #define __READ_ONCE(x, check)						\
 ({									\
 	union { typeof(x) __val; char __c[1]; } __u;			\
@@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
  */
 #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
 
+#else /* GCC_VERSION < 40800 */
+
+#define READ_ONCE_NOCHECK(x)						\
+({									\
+	typeof(x) __x = *(volatile typeof(x))&(x);			\
+	smp_read_barrier_depends();					\
+	__x;
+})
+
+#define READ_ONCE(x)							\
+({									\
+	kcsan_check_atomic_read(&(x), sizeof(x));			\
+	READ_ONCE_NOCHECK(x);						\
+})
+
+#endif /* GCC_VERSION < 40800 */
+
 static __no_kasan_or_inline
 unsigned long read_word_at_a_time(const void *addr)
 {
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 09:01:05AM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> > Peter Zijlstra writes:
> > > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> > Some of the generic versions don't generate good code compared to our
> > versions, but that's because READ_ONCE() is triggering stack protector
> > to be enabled.
>
> Bah, there's never anything simple, is there :/
>
> > For example, comparing an out-of-line copy of the generic and ppc
> > versions of test_and_set_bit_lock():
> >
> >    1:                                  1:
> >    2  addis   r2,r12,361
> >    3  addi    r2,r2,-4240
> >    4  stdu    r1,-48(r1)
> >    5  rlwinm  r8,r3,29,3,28
> >    6  clrlwi  r10,r3,26                2  rldicl  r10,r3,58,6
> >    7  ld      r9,3320(r13)
> >    8  std     r9,40(r1)
> >    9  li      r9,0
> >   10  li      r9,1                     3  li      r9,1
> >                                        4  clrlwi  r3,r3,26
> >                                        5  rldicr  r10,r10,3,60
> >   11  sld     r9,r9,r10                6  sld     r3,r9,r3
> >   12  add     r10,r4,r8                7  add     r4,r4,r10
> >   13  ldx     r8,r4,r8
> >   14  and.    r8,r9,r8
> >   15  bne     34f
> >   16  ldarx   r7,0,r10                 8  ldarx   r9,0,r4,1
> >   17  or      r8,r9,r7                 9  or      r10,r9,r3
> >   18  stdcx.  r8,0,r10                10  stdcx.  r10,0,r4
> >   19  bne-    16b                     11  bne-    8b
> >   20  isync                           12  isync
> >   21  and     r9,r7,r9                13  and     r3,r3,r9
> >   22  addic   r7,r9,-1                14  addic   r9,r3,-1
> >   23  subfe   r7,r7,r9                15  subfe   r3,r9,r3
> >   24  ld      r9,40(r1)
> >   25  ld      r10,3320(r13)
> >   26  xor.    r9,r9,r10
> >   27  li      r10,0
> >   28  mr      r3,r7
> >   29  bne     36f
> >   30  addi    r1,r1,48
> >   31  blr                             16  blr
> >   32  nop
> >   33  nop
> >   34  li      r7,1
> >   35  b       24b
> >   36  mflr    r0
> >   37  std     r0,64(r1)
> >   38  bl      <__stack_chk_fail+0x8>
> >
> > If you squint, the generated code for the actual logic is pretty similar,
> > but the stack protector gunk makes a big mess. It's particularly bad here
> > because the ppc version doesn't even need a stack frame.
> >
> > I've also confirmed that even when test_and_set_bit_lock() is inlined
> > into an actual call site the stack protector logic still triggers.
> >
> > If I change the READ_ONCE() in test_and_set_bit_lock():
> >
> >   if (READ_ONCE(*p) & mask)
> >           return 1;
> >
> > to a regular pointer access:
> >
> >   if (*p & mask)
> >           return 1;
> >
> > Then the generated code looks more or less the same, except for the extra
> > early return in the generic version of test_and_set_bit_lock(), and
> > different handling of the return code by the compiler.
>
> So given that the function signature is:
>
>   static inline int test_and_set_bit_lock(unsigned int nr,
>                                           volatile unsigned long *p)
>
> @p already carries the required volatile qualifier, so READ_ONCE() does
> not add anything here (except for easier to read code and poor code
> generation).
>
> So your proposed change _should_ be fine. Will, I'm assuming you never
> saw this on your ARGH64 builds when you did this code ?

I did see it, but (a) looking at the code out-of-line makes it look a lot
worse than it actually is (so the ext4 example is really helpful -- thanks
Michael!) and (b) I chalked it up to a crappy compiler.

However, see this comment from Arnd on my READ_ONCE series from the other
day:

  https://lore.kernel.org/lkml/CAK8P3a0f=wvsqsbq4t0fmekcfe_mc3oarxaetvitsksa-d2...@mail.gmail.com

In which case, I'm thinking that we should be doing better in READ_ONCE()
for non-buggy compilers which would also keep the KCSAN folks happy for this
code (and would help with [1] too).

Will

[1] https://lkml.org/lkml/2019/11/12/898
Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> [ trimmed CC a bit ]
>
> Peter Zijlstra writes:
> > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> ...
> > you write:
> >
> > "Currently bitops-instrumented.h assumes that the architecture provides
> > atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> > This is true on x86 and s390, but is not always true: there is a
> > generic bitops/non-atomic.h header that provides generic non-atomic
> > operations, and also a generic bitops/lock.h for locking operations."
> >
> > Is there any actual benefit for PPC to using their own atomic bitops
> > over bitops/lock.h ? I'm thinking that the generic code is fairly
> > optimal for most LL/SC architectures.
>
> Yes and no :)
>
> Some of the generic versions don't generate good code compared to our
> versions, but that's because READ_ONCE() is triggering stack protector
> to be enabled.

Bah, there's never anything simple, is there :/

> For example, comparing an out-of-line copy of the generic and ppc
> versions of test_and_set_bit_lock():
>
>    1:                                  1:
>    2  addis   r2,r12,361
>    3  addi    r2,r2,-4240
>    4  stdu    r1,-48(r1)
>    5  rlwinm  r8,r3,29,3,28
>    6  clrlwi  r10,r3,26                2  rldicl  r10,r3,58,6
>    7  ld      r9,3320(r13)
>    8  std     r9,40(r1)
>    9  li      r9,0
>   10  li      r9,1                     3  li      r9,1
>                                        4  clrlwi  r3,r3,26
>                                        5  rldicr  r10,r10,3,60
>   11  sld     r9,r9,r10                6  sld     r3,r9,r3
>   12  add     r10,r4,r8                7  add     r4,r4,r10
>   13  ldx     r8,r4,r8
>   14  and.    r8,r9,r8
>   15  bne     34f
>   16  ldarx   r7,0,r10                 8  ldarx   r9,0,r4,1
>   17  or      r8,r9,r7                 9  or      r10,r9,r3
>   18  stdcx.  r8,0,r10                10  stdcx.  r10,0,r4
>   19  bne-    16b                     11  bne-    8b
>   20  isync                           12  isync
>   21  and     r9,r7,r9                13  and     r3,r3,r9
>   22  addic   r7,r9,-1                14  addic   r9,r3,-1
>   23  subfe   r7,r7,r9                15  subfe   r3,r9,r3
>   24  ld      r9,40(r1)
>   25  ld      r10,3320(r13)
>   26  xor.    r9,r9,r10
>   27  li      r10,0
>   28  mr      r3,r7
>   29  bne     36f
>   30  addi    r1,r1,48
>   31  blr                             16  blr
>   32  nop
>   33  nop
>   34  li      r7,1
>   35  b       24b
>   36  mflr    r0
>   37  std     r0,64(r1)
>   38  bl      <__stack_chk_fail+0x8>
>
> If you squint, the generated code for the actual logic is pretty similar,
> but the stack protector gunk makes a big mess. It's particularly bad here
> because the ppc version doesn't even need a stack frame.
>
> I've also confirmed that even when test_and_set_bit_lock() is inlined
> into an actual call site the stack protector logic still triggers.
>
> If I change the READ_ONCE() in test_and_set_bit_lock():
>
>   if (READ_ONCE(*p) & mask)
>           return 1;
>
> to a regular pointer access:
>
>   if (*p & mask)
>           return 1;
>
> Then the generated code looks more or less the same, except for the extra
> early return in the generic version of test_and_set_bit_lock(), and
> different handling of the return code by the compiler.

So given that the function signature is:

  static inline int test_and_set_bit_lock(unsigned int nr,
                                          volatile unsigned long *p)

@p already carries the required volatile qualifier, so READ_ONCE() does
not add anything here (except for easier to read code and poor code
generation).

So your proposed change _should_ be fine. Will, I'm assuming you never
saw this on your ARGH64 builds when you did this code ?

---
diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index dd90c9792909..10264e8808f8 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -35,7 +35,7 @@ static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (READ_ONCE(*p) & mask)
+	if (*p & mask)
 		return 1;
 
 	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
@@ -48,7 +48,7 @@ static inline int test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (!(READ_ONCE(*p)
READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
[ trimmed CC a bit ]

Peter Zijlstra writes:
> On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
...
> you write:
>
> "Currently bitops-instrumented.h assumes that the architecture provides
> atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> This is true on x86 and s390, but is not always true: there is a
> generic bitops/non-atomic.h header that provides generic non-atomic
> operations, and also a generic bitops/lock.h for locking operations."
>
> Is there any actual benefit for PPC to using their own atomic bitops
> over bitops/lock.h ? I'm thinking that the generic code is fairly
> optimal for most LL/SC architectures.

Yes and no :)

Some of the generic versions don't generate good code compared to our
versions, but that's because READ_ONCE() is triggering stack protector
to be enabled.

For example, comparing an out-of-line copy of the generic and ppc
versions of test_and_set_bit_lock():

   1:                                  1:
   2  addis   r2,r12,361
   3  addi    r2,r2,-4240
   4  stdu    r1,-48(r1)
   5  rlwinm  r8,r3,29,3,28
   6  clrlwi  r10,r3,26                2  rldicl  r10,r3,58,6
   7  ld      r9,3320(r13)
   8  std     r9,40(r1)
   9  li      r9,0
  10  li      r9,1                     3  li      r9,1
                                       4  clrlwi  r3,r3,26
                                       5  rldicr  r10,r10,3,60
  11  sld     r9,r9,r10                6  sld     r3,r9,r3
  12  add     r10,r4,r8                7  add     r4,r4,r10
  13  ldx     r8,r4,r8
  14  and.    r8,r9,r8
  15  bne     34f
  16  ldarx   r7,0,r10                 8  ldarx   r9,0,r4,1
  17  or      r8,r9,r7                 9  or      r10,r9,r3
  18  stdcx.  r8,0,r10                10  stdcx.  r10,0,r4
  19  bne-    16b                     11  bne-    8b
  20  isync                           12  isync
  21  and     r9,r7,r9                13  and     r3,r3,r9
  22  addic   r7,r9,-1                14  addic   r9,r3,-1
  23  subfe   r7,r7,r9                15  subfe   r3,r9,r3
  24  ld      r9,40(r1)
  25  ld      r10,3320(r13)
  26  xor.    r9,r9,r10
  27  li      r10,0
  28  mr      r3,r7
  29  bne     36f
  30  addi    r1,r1,48
  31  blr                             16  blr
  32  nop
  33  nop
  34  li      r7,1
  35  b       24b
  36  mflr    r0
  37  std     r0,64(r1)
  38  bl      <__stack_chk_fail+0x8>

If you squint, the generated code for the actual logic is pretty similar,
but the stack protector gunk makes a big mess. It's particularly bad here
because the ppc version doesn't even need a stack frame.

I've also confirmed that even when test_and_set_bit_lock() is inlined
into an actual call site the stack protector logic still triggers. eg, if
I make two versions of ext4_resize_begin() which call the generic or ppc
version of test_and_set_bit_lock(), the generic version gets a bunch of
extra stack protector code.

   1 c05336e0:                         1 c05335b0:
   2  addis   r2,r12,281               2  addis   r2,r12,281
   3  addi    r2,r2,-12256             3  addi    r2,r2,-11952
   4  mflr    r0                       4  mflr    r0
   5  bl      <_mcount>                5  bl      <_mcount>
   6  mflr    r0                       6  mflr    r0
   7  std     r31,-8(r1)               7  std     r31,-8(r1)
   8  std     r30,-16(r1)              8  std     r30,-16(r1)
   9  mr      r31,r3                   9  mr      r31,r3
  10  li      r3,24                   10  li      r3,24
  11  std     r0,16(r1)               11  std     r0,16(r1)
  12  stdu    r1,-128(r1)             12  stdu    r1,-112(r1)
  13  ld      r9,3320(r13)
  14  std     r9,104(r1)
  15  li      r9,0
  16  ld      r30,920(r31)            13  ld      r30,920(r31)
  17  bl                              14  bl
  18  nop                             15  nop
  19  cmpdi   cr7,r3,0                16  cmpdi   cr7,r3,0
  20  beq     cr7,                    17  beq     cr7,
  21  ld      r9,920(r31)             18  ld      r9,920(r31)
  22  ld      r10,96(r30)             19  ld
Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
Peter Zijlstra writes: > On Tue, Dec 10, 2019 at 04:38:54PM +1100, Michael Ellerman wrote: > >> Good question, I'll have a look. >> >> There seems to be confusion about what the type of the bit number is, >> which is leading to sign extension in some cases and not others. > > Shiny. > >> It looks like the type should be unsigned long? > > I'm thinking unsigned makes most sense, I mean, negative bit offsets > should 'work' but that's almost always guaranteed to be an out-of-bound > operation. Yeah I agree. > As to 'long' vs 'int', I'm not sure, 4G bits is a long bitmap. But I > suppose since the bitmap itself is 'unsigned long', we might as well use > 'unsigned long' for the bitnr too. 4G is a lot of bits, but it's not *that* many. eg. If we had a bit per 4K page on a 32T machine that would be 8G bits. So unsigned long seems best. >> Documentation/core-api/atomic_ops.rst: void __clear_bit_unlock(unsigned >> long nr, unsigned long *addr); >> arch/mips/include/asm/bitops.h:static inline void >> __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr) >> arch/powerpc/include/asm/bitops.h:static inline void >> arch___clear_bit_unlock(int nr, volatile unsigned long *addr) >> arch/riscv/include/asm/bitops.h:static inline void >> __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr) >> arch/s390/include/asm/bitops.h:static inline void >> arch___clear_bit_unlock(unsigned long nr, >> include/asm-generic/bitops/instrumented-lock.h:static inline void >> __clear_bit_unlock(long nr, volatile unsigned long *addr) >> include/asm-generic/bitops/lock.h:static inline void >> __clear_bit_unlock(unsigned int nr, >> >> So I guess step one is to convert our versions to use unsigned long, so >> we're at least not tripping over that difference when comparing the >> assembly. > > Yeah, I'll look at fixing the generic code, bitops/atomic.h and > bitops/non-atomic.h don't even agree on the type of bitnr. Thanks. cheers
Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
On Tue, Dec 10, 2019 at 04:38:54PM +1100, Michael Ellerman wrote: > Good question, I'll have a look. > > There seems to be confusion about what the type of the bit number is, > which is leading to sign extension in some cases and not others. Shiny. > It looks like the type should be unsigned long? I'm thinking unsigned makes most sense, I mean, negative bit offsets should 'work' but that's almost always guaranteed to be an out-of-bound operation. As to 'long' vs 'int', I'm not sure, 4G bits is a long bitmap. But I suppose since the bitmap itself is 'unsigned long', we might as well use 'unsigned long' for the bitnr too. > Documentation/core-api/atomic_ops.rst: void __clear_bit_unlock(unsigned > long nr, unsigned long *addr); > arch/mips/include/asm/bitops.h:static inline void > __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr) > arch/powerpc/include/asm/bitops.h:static inline void > arch___clear_bit_unlock(int nr, volatile unsigned long *addr) > arch/riscv/include/asm/bitops.h:static inline void > __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr) > arch/s390/include/asm/bitops.h:static inline void > arch___clear_bit_unlock(unsigned long nr, > include/asm-generic/bitops/instrumented-lock.h:static inline void > __clear_bit_unlock(long nr, volatile unsigned long *addr) > include/asm-generic/bitops/lock.h:static inline void > __clear_bit_unlock(unsigned int nr, > > So I guess step one is to convert our versions to use unsigned long, so > we're at least not tripping over that difference when comparing the > assembly. Yeah, I'll look at fixing the generic code, bitops/atomic.h and bitops/non-atomic.h don't even agree on the type of bitnr.
Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
Peter Zijlstra writes: > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote: >> -BEGIN PGP SIGNED MESSAGE- >> Hash: SHA256 >> >> Hi Linus, >> >> Please pull another powerpc update for 5.5. >> >> As you'll see from the diffstat this is mostly not powerpc code. In order to >> do >> KASAN instrumentation of bitops we needed to juggle some of the generic >> bitops >> headers. >> >> Because those changes potentially affect several architectures I wasn't >> confident putting them directly into my tree, so I've had them sitting in a >> topic branch. That branch (topic/kasan-bitops) has been in linux-next for a >> month, and I've not had any feedback that it's caused any problems. >> >> So I think this is good to merge, but it's a standalone pull so if anyone >> does >> object it's not a problem. > > No objections, but here: > > > https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=topic/kasan-bitops=81d2c6f81996e01fbcd2b5aeefbb519e21c806e9 > > you write: > > "Currently bitops-instrumented.h assumes that the architecture provides > atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit). > This is true on x86 and s390, but is not always true: there is a > generic bitops/non-atomic.h header that provides generic non-atomic > operations, and also a generic bitops/lock.h for locking operations." > > Is there any actual benefit for PPC to using their own atomic bitops > over bitops/lock.h ? I'm thinking that the generic code is fairly > optimal for most LL/SC architectures. Good question, I'll have a look. There seems to be confusion about what the type of the bit number is, which is leading to sign extension in some cases and not others. 
eg, comparing the generic clear_bit_unlock() vs ours:

   1 c0031890:                         1 c00319a0:
                                       2  extsw   r3,r3
                                       3  li      r10,1
                                       4  srawi   r9,r3,6
                                       5  addze   r9,r9
                                       6  rlwinm  r8,r9,6,0,25
                                       7  extsw   r9,r9
                                       8  subf    r3,r8,r3
   2  rlwinm  r9,r3,29,3,28            9  rldicr  r9,r9,3,60
                                      10  sld     r3,r10,r3
   3  add     r4,r4,r9                11  add     r4,r4,r9
   4  lwsync                          12  lwsync
   5  li      r9,-2
   6  clrlwi  r3,r3,26
   7  rotld   r3,r9,r3
   8  ldarx   r9,0,r4                 13  ldarx   r9,0,r4
   9  and     r10,r3,r9               14  andc    r9,r9,r3
  10  stdcx.  r10,0,r4                15  stdcx.  r9,0,r4
  11  bne-                            16  bne-
  12  blr                             17  blr

It looks like in actual usage it often doesn't matter, ie. when we pass a
constant bit number it all gets inlined and the compiler works it out.

It looks like the type should be unsigned long?

  Documentation/core-api/atomic_ops.rst:
      void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
  arch/mips/include/asm/bitops.h:
      static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
  arch/powerpc/include/asm/bitops.h:
      static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
  arch/riscv/include/asm/bitops.h:
      static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
  arch/s390/include/asm/bitops.h:
      static inline void arch___clear_bit_unlock(unsigned long nr,
  include/asm-generic/bitops/instrumented-lock.h:
      static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
  include/asm-generic/bitops/lock.h:
      static inline void __clear_bit_unlock(unsigned int nr,

So I guess step one is to convert our versions to use unsigned long, so
we're at least not tripping over that difference when comparing the
assembly.

cheers
Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
The pull request you sent on Fri, 06 Dec 2019 23:46:11 +1100: > https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git > tags/powerpc-5.5-2 has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/43a2898631a8beee66c1d64c1e860f43d96b2e91 Thank you! -- Deet-doot-dot, I am a bot. https://korg.wiki.kernel.org/userdoc/prtracker
Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Hi Linus, > > Please pull another powerpc update for 5.5. > > As you'll see from the diffstat this is mostly not powerpc code. In order to > do > KASAN instrumentation of bitops we needed to juggle some of the generic bitops > headers. > > Because those changes potentially affect several architectures I wasn't > confident putting them directly into my tree, so I've had them sitting in a > topic branch. That branch (topic/kasan-bitops) has been in linux-next for a > month, and I've not had any feedback that it's caused any problems. > > So I think this is good to merge, but it's a standalone pull so if anyone does > object it's not a problem. No objections, but here: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=topic/kasan-bitops=81d2c6f81996e01fbcd2b5aeefbb519e21c806e9 you write: "Currently bitops-instrumented.h assumes that the architecture provides atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit). This is true on x86 and s390, but is not always true: there is a generic bitops/non-atomic.h header that provides generic non-atomic operations, and also a generic bitops/lock.h for locking operations." Is there any actual benefit for PPC to using their own atomic bitops over bitops/lock.h ? I'm thinking that the generic code is fairly optimal for most LL/SC architectures. I've been meaning to audit the various architectures and move them over, but alas, it's something I've not yet had time for...
[GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi Linus,

Please pull another powerpc update for 5.5.

As you'll see from the diffstat this is mostly not powerpc code. In order to do
KASAN instrumentation of bitops we needed to juggle some of the generic bitops
headers.

Because those changes potentially affect several architectures I wasn't
confident putting them directly into my tree, so I've had them sitting in a
topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
month, and I've not had any feedback that it's caused any problems.

So I think this is good to merge, but it's a standalone pull so if anyone does
object it's not a problem.

cheers

The following changes since commit da0c9ea146cbe92b832f1b0f694840ea8eb33cce:

  Linux 5.4-rc2 (2019-10-06 14:27:30 -0700)

are available in the git repository at:

  https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git tags/powerpc-5.5-2

for you to fetch changes up to 4f4afc2c9599520300b3f2b3666d2034fca03df3:

  docs/core-api: Remove possibly confusing sub-headings from Bit Operations (2019-12-04 21:20:28 +1100)

- ----------------------------------------------------------------
powerpc updates for 5.5 #2

A few commits splitting the KASAN instrumented bitops header in three, to
match the split of the asm-generic bitops headers. This is needed on powerpc
because we use asm-generic/bitops/non-atomic.h, for the non-atomic bitops,
whereas the existing KASAN instrumented bitops assume all the underlying
operations are provided by the arch as arch_foo() versions.

Thanks to:
  Daniel Axtens & Christophe Leroy.

- ----------------------------------------------------------------
Daniel Axtens (2):
      kasan: support instrumented bitops combined with generic bitops
      powerpc: support KASAN instrumentation of bitops

Michael Ellerman (1):
      docs/core-api: Remove possibly confusing sub-headings from Bit Operations

 Documentation/core-api/kernel-api.rst                |   8 +-
 arch/powerpc/include/asm/bitops.h                    |  51 ++--
 arch/s390/include/asm/bitops.h                       |   4 +-
 arch/x86/include/asm/bitops.h                        |   4 +-
 include/asm-generic/bitops-instrumented.h            | 263
 include/asm-generic/bitops/instrumented-atomic.h     | 100
 include/asm-generic/bitops/instrumented-lock.h       |  81 ++
 include/asm-generic/bitops/instrumented-non-atomic.h | 114 +
 8 files changed, 337 insertions(+), 288 deletions(-)
 delete mode 100644 include/asm-generic/bitops-instrumented.h
 create mode 100644 include/asm-generic/bitops/instrumented-atomic.h
 create mode 100644 include/asm-generic/bitops/instrumented-lock.h
 create mode 100644 include/asm-generic/bitops/instrumented-non-atomic.h

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEJFGtCPCthwEv2Y/bUevqPMjhpYAFAl3qSS4ACgkQUevqPMjh
pYCp1Q//TrG2tPMDPHpWqCzNdWoh96zpIo2UsauDcc8l+XT7shkwHcGnpoECgCfK
NjhP77qqXI61E+5qUCfO16/j5g6PbvvG/E/xlQEdgX7lIxBeGs4IkoRU8QjkJ9w5
wAjG/XwaMJ21CQY2F51dn9NPQUvFxKV0o6QJ+/pIFBnv0eeYCtRWno7+tZGIiMhk
ExfJhR0rnBdBc6oonNOTAfWn5u51FRRqUeICeo4iFoICu5v4cTbPiU3/8bZYzhSb
wM9WdG+/IUs02PffIQF4GDyMmzi/Qm3Ujl3tUIEaFHlfN9pF6X7Yog7Co26CShJj
No4wJK5rS3ECXmwo7Yd69sV9FZrMZZvGY9x7p7bEE7mqk1fHMaM3DMXvR8Gx6UGM
NCXX2QIIigz3RUTbj3CW2iZa9R/FTSFXs3Ih4YDDJdPNanYpcX3/wE6mpwsco8do
lxWcN1AMGXLiaNdQ8IkRZ6hOLH/Po34RvDo1P1mS06NzfyyTZW7JNiUtU2HSqPRs
vjIkHDM7585ika6jeDHU4cJaLy7bsCNV2fLsHWDE3Xno43g7qcKGOx+PtO25XubZ
iP1vojR4Qml+e3ySf6dDiOIDltSWZwjCGtbi2gmdErHiLdLeJX2XGjC36Qnep6u6
15HIWzX41tg8y4QRJDmPyeDm3Ccbabz+m4LaccbdObgGWVwxwgA=
=06Wr
-----END PGP SIGNATURE-----