On Mon, Oct 28, 2024 at 06:56:14PM -0700, Josh Poimboeuf wrote:
> The barrier_nospec() in 64-bit copy_from_user() is slow.  Instead, use
> pointer masking to force the user pointer to all 1's if access_ok()
> mispredicted true for an invalid address.
> 
> The kernel test robot reports a 2.6% improvement in the per_thread_ops
> benchmark (see link below).
> 
> To avoid regressing powerpc and 32-bit x86, move their barrier_nospec()
> calls to their respective raw_copy_from_user() implementations so
> there's no functional change there.
> 
> Note that for safety on some AMD CPUs, this relies on recent commit
> 86e6b1547b3d ("x86: fix user address masking non-canonical speculation
> issue").
> 
> Link: https://lore.kernel.org/202410281344.d02c72a2-oliver.s...@intel.com
> Signed-off-by: Josh Poimboeuf <jpoim...@kernel.org>

Acked-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>

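For anyone skimming the thread, here is a rough, userspace-style sketch of
the masking idea being acked above.  It is illustrative only: the names
mask_user_ptr_sketch and USER_PTR_MAX_EXAMPLE are made up for this note and
are not the kernel's identifiers.  The point is that an out-of-range user
pointer gets clamped to all 1's, so an access_ok() misprediction can only
reach a guaranteed-faulting address and no speculation barrier is needed.

	/*
	 * Illustrative sketch only (not the kernel's actual implementation):
	 * clamp an out-of-range user pointer to all 1's so that a mispredicted
	 * access_ok() can only dereference a guaranteed-faulting address.
	 * USER_PTR_MAX_EXAMPLE is a made-up stand-in for the real limit.
	 */
	#include <stdint.h>

	#define USER_PTR_MAX_EXAMPLE	0x00007ffffffff000ULL	/* hypothetical */

	static inline uint64_t mask_user_ptr_sketch(uint64_t uptr)
	{
		/*
		 * If uptr is above the user-space limit, force it to all 1's.
		 * The kernel uses a short branchless asm sequence for this so
		 * there is no new branch to mispredict; a C conditional like
		 * the one below usually lowers to a conditional move, but that
		 * is not guaranteed.
		 */
		return (uptr > USER_PTR_MAX_EXAMPLE) ? ~0ULL : uptr;
	}
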
-- 
  Kiryl Shutsemau / Kirill A. Shutemov
