On Mon, Mar 06, 2017 at 01:58:51PM +0100, Peter Zijlstra wrote:
> On Mon, Mar 06, 2017 at 01:50:47PM +0100, Dmitry Vyukov wrote:
> > On Mon, Mar 6, 2017 at 1:42 PM, Dmitry Vyukov <[email protected]> wrote:
> > > KASAN uses compiler instrumentation to intercept all memory accesses.
> > > But it does not see memory accesses done in assembly code.
> > > One notable user of assembly code is atomic operations. Frequently,
> > > for example, an atomic reference decrement is the last access to an
> > > object and a good candidate for a racy use-after-free.
> > >
> > > Add manual KASAN checks to atomic operations.
> > > Note: we need checks only before asm blocks and don't need them
> > > in atomic functions composed of other atomic functions
> > > (e.g. load-cmpxchg loops).
> > 
> > Peter also pointed me at arch/x86/include/asm/bitops.h. Will add
> > them in v2.
> > 
> 
> > >  static __always_inline void atomic_add(int i, atomic_t *v)
> > >  {
> > > +       kasan_check_write(v, sizeof(*v));
> > >         asm volatile(LOCK_PREFIX "addl %1,%0"
> > >                      : "+m" (v->counter)
> > >                      : "ir" (i));
> 
> 
> So the problem is doing load/stores from asm bits, and GCC
> (traditionally) doesn't try and interpret APP asm bits.
> 
> However, could we not write a GCC plugin that does exactly that?
> Something that interprets the APP asm bits and generates these KASAN
> bits that go with it?

Another suspect is the per-cpu stuff, that's all asm foo as well.
