Re: [kernel-hardening] [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection
On Tue, Jul 25, 2017 at 5:03 AM, Li Kun wrote:
> Hi Kees,
>
> on 2017/7/25 2:35, Kees Cook wrote:
>>
>> +static __always_inline __must_check
>> +int __refcount_add_unless(refcount_t *r, int a, int u)
>> +{
>> +	int c, new;
>> +
>> +	c = atomic_read(&(r->refs));
>> +	do {
>> +		if (unlikely(c == u))
>> +			break;
>> +
>> +		asm volatile("addl %2,%0\n\t"
>> +			REFCOUNT_CHECK_LT_ZERO
>> +			: "=r" (new)
>> +			: "0" (c), "ir" (a),
>> +			  [counter] "m" (r->refs.counter)
>> +			: "cc", "cx");
>> +
>> +	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
>> +
>> +	return c;
>> +}
>
> here when the result is LT_ZERO, you will saturate r->refs.counter and
> make the atomic_try_cmpxchg(&(r->refs), &c, new) bound to fail the first
> time.
>
> maybe we can just saturate the value of the variable "new"?

Oh, good catch! Thanks. Actually, it's even worse than that: we'll exit
this function without the refcount being correctly saturated. The final
result will be INT_MIN / 2 + a. It's not terrible, but I should have
noticed this in testing. (There was a gap in my testing for the
_not_zero() overflows, which I've fixed now...)

I'll figure this out or revert to the logic I was using in v6...

-Kees

-- 
Kees Cook
Pixel Security
Re: [kernel-hardening] [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection
Hi Kees,

on 2017/7/25 2:35, Kees Cook wrote:
> +static __always_inline __must_check
> +int __refcount_add_unless(refcount_t *r, int a, int u)
> +{
> +	int c, new;
> +
> +	c = atomic_read(&(r->refs));
> +	do {
> +		if (unlikely(c == u))
> +			break;
> +
> +		asm volatile("addl %2,%0\n\t"
> +			REFCOUNT_CHECK_LT_ZERO
> +			: "=r" (new)
> +			: "0" (c), "ir" (a),
> +			  [counter] "m" (r->refs.counter)
> +			: "cc", "cx");

here when the result is LT_ZERO, you will saturate r->refs.counter and
make the atomic_try_cmpxchg(&(r->refs), &c, new) bound to fail the first
time.

maybe we can just saturate the value of the variable "new"?

> +
> +	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
> +
> +	return c;
> +}
> +

-- 
Best Regards
Li Kun