On Tue, Mar 03, 2026 at 11:29:32AM -0500, Chuck Lever wrote:
> Profiling NFSD under an iozone workload showed that hardened
> usercopy checks consume roughly 1.3% of CPU in the TCP receive
> path. The runtime check in check_object_size() validates that
> copy buffers reside in expected slab regions, which is
> meaningful when data crosses the user/kernel boundary but adds
> no value when both source and destination are kernel addresses.
I'm not sure I'd go as far as "no value".  I could see an attack which
managed to trick the kernel into copying past the end of a slab object
and sending the contents of that buffer across the network to an
attacker.  Or I guess in this case you're talking about copying _to_ a
slab object.  Then we could see a network attacker somehow confusing
the kernel into copying past the end of the object they allocated,
overwriting slab metadata and/or the contents of the next object in
the slab.

Limited value, sure.  And the performance change you're showing here
certainly isn't nothing!

> Split check_copy_size() so that copy_to_iter() can bypass the
> runtime check_object_size() call for kernel-only iterators
> (ITER_BVEC, ITER_KVEC). Existing callers of check_copy_size()
> are unaffected; user-backed iterators still receive the full
> usercopy validation.
>
> This benefits all kernel consumers of copy_to_iter(), including
> the TCP receive path used by the NFS client and server,
> NVMe-TCP, and any other subsystem that uses ITER_BVEC or
> ITER_KVEC receive buffers.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>  include/linux/ucopysize.h | 10 +++++++++-
>  include/linux/uio.h       |  9 +++++++--
>  2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
> index 41c2d9720466..b3eacb4869a8 100644
> --- a/include/linux/ucopysize.h
> +++ b/include/linux/ucopysize.h
> @@ -42,7 +42,7 @@ static inline void copy_overflow(int size, unsigned long count)
>  }
>  
>  static __always_inline __must_check bool
> -check_copy_size(const void *addr, size_t bytes, bool is_source)
> +check_copy_size_nosec(const void *addr, size_t bytes, bool is_source)
>  {
>  	int sz = __builtin_object_size(addr, 0);
>  	if (unlikely(sz >= 0 && sz < bytes)) {
> @@ -56,6 +56,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
>  	}
>  	if (WARN_ON_ONCE(bytes > INT_MAX))
>  		return false;
> +	return true;
> +}
> +
> +static __always_inline __must_check bool
> +check_copy_size(const void *addr, size_t bytes, bool is_source)
> +{
> +	if (!check_copy_size_nosec(addr, bytes, is_source))
> +		return false;
>  	check_object_size(addr, bytes, is_source);
>  	return true;
>  }
> diff --git a/include/linux/uio.h b/include/linux/uio.h
> index a9bc5b3067e3..f860529abfbe 100644
> --- a/include/linux/uio.h
> +++ b/include/linux/uio.h
> @@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
>  static __always_inline __must_check
>  size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>  {
> -	if (check_copy_size(addr, bytes, true))
> -		return _copy_to_iter(addr, bytes, i);
> +	if (user_backed_iter(i)) {
> +		if (check_copy_size(addr, bytes, true))
> +			return _copy_to_iter(addr, bytes, i);
> +	} else {
> +		if (check_copy_size_nosec(addr, bytes, true))
> +			return _copy_to_iter(addr, bytes, i);
> +	}
>  	return 0;
> }
>
> -- 
> 2.53.0
