Hi All,

As I reported at [1], kstack offset randomisation suffers from a couple
of bugs and, on arm64 at least, its performance is poor. This series
attempts to fix both: patch 1 provides back-portable fixes for the
functional bugs, and patches 2-3 propose a performance improvement.
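At a high level, the improvement is to stop calling an expensive (or
arch-specific) random source on every syscall and instead draw the
offset bits from a cheap, locally seeded PRNG. Below is a minimal
sketch of the idea, illustrative only: the kstack_rnd_*() names are
made up here, only the prandom_*() and per-cpu helpers are real APIs,
and the preemption/noinstr details that the real patches must handle
are ignored for brevity.

#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/prandom.h>
#include <linux/random.h>

static DEFINE_PER_CPU(struct rnd_state, kstack_rnd_state);

static int __init kstack_rnd_init(void)
{
	int cpu;

	/* One expensive crng call per cpu at boot, not per syscall. */
	for_each_possible_cpu(cpu)
		prandom_seed_state(per_cpu_ptr(&kstack_rnd_state, cpu),
				   get_random_u64());
	return 0;
}
late_initcall(kstack_rnd_init);

/*
 * Called on the syscall path: a handful of ALU ops, no locks and no
 * crng involvement. Patch 2 makes prandom_u32_state() __always_inline
 * so that a helper like this can live in noinstr code.
 */
static __always_inline u32 kstack_rnd(void)
{
	return prandom_u32_state(this_cpu_ptr(&kstack_rnd_state));
}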
I've looked at a few different options but ultimately decided that
Jeremy's original prng approach is the fastest. I made the argument
that this approach is secure "enough" in the RFC [2] and the responses
indicated agreement. More details are in the commit logs.

Performance
===========

Mean and tail performance of 3 "small" syscalls was measured: each
syscall was made 10 million times, and every invocation was
individually timed and binned (a simplified sketch of the measurement
loop is included after the x86 results). These results have low noise,
so I'm confident that they are trustworthy.

The baseline is v6.18-rc5 with stack randomization turned *off*. So I'm
showing the performance cost of turning it on without any changes to
the implementation, then the reduced performance cost of turning it on
with my changes applied.

**NOTE**: The results below were generated using the RFC patches, but
there is no meaningful difference with the final series, so the numbers
are still valid.

arm64 (AWS Graviton3):

+-----------------+--------------+-------------+---------------+
| Benchmark       | Result Class | v6.18-rc5   | per-task-prng |
|                 |              | rndstack-on |               |
|                 |              |             |               |
+=================+==============+=============+===============+
| syscall/getpid  | mean (ns)    | (R) 15.62%  | (R) 3.43%     |
|                 | p99 (ns)     | (R) 155.01% | (R) 3.20%     |
|                 | p99.9 (ns)   | (R) 156.71% | (R) 2.93%     |
+-----------------+--------------+-------------+---------------+
| syscall/getppid | mean (ns)    | (R) 14.09%  | (R) 2.12%     |
|                 | p99 (ns)     | (R) 152.81% | 1.55%         |
|                 | p99.9 (ns)   | (R) 153.67% | 1.77%         |
+-----------------+--------------+-------------+---------------+
| syscall/invalid | mean (ns)    | (R) 13.89%  | (R) 3.32%     |
|                 | p99 (ns)     | (R) 165.82% | (R) 3.51%     |
|                 | p99.9 (ns)   | (R) 168.83% | (R) 3.77%     |
+-----------------+--------------+-------------+---------------+

Because arm64 was previously using get_random_u16(), a syscall became
expensive whenever there were no buffered bits left and the slow path
had to call into the crng. That is what caused the enormous tail
latency.

x86 (AWS Sapphire Rapids):

+-----------------+--------------+-------------+---------------+
| Benchmark       | Result Class | v6.18-rc5   | per-task-prng |
|                 |              | rndstack-on |               |
|                 |              |             |               |
+=================+==============+=============+===============+
| syscall/getpid  | mean (ns)    | (R) 13.32%  | (R) 4.60%     |
|                 | p99 (ns)     | (R) 13.38%  | (R) 18.08%    |
|                 | p99.9 (ns)   | 16.26%      | (R) 19.38%    |
+-----------------+--------------+-------------+---------------+
| syscall/getppid | mean (ns)    | (R) 11.96%  | (R) 5.26%     |
|                 | p99 (ns)     | (R) 11.83%  | (R) 8.35%     |
|                 | p99.9 (ns)   | (R) 11.42%  | (R) 22.37%    |
+-----------------+--------------+-------------+---------------+
| syscall/invalid | mean (ns)    | (R) 10.58%  | (R) 2.91%     |
|                 | p99 (ns)     | (R) 10.51%  | (R) 4.36%     |
|                 | p99.9 (ns)   | (R) 10.35%  | (R) 21.97%    |
+-----------------+--------------+-------------+---------------+

I was surprised to see that the baseline cost on x86 is 10-12%, since
it is just using rdtsc. But as I say, I believe the results are
accurate.
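For reference, the measurement loop promised above is essentially the
following. This is a simplified sketch, not the actual harness (the
real one also bins the samples as described): each invocation is timed
individually and the percentiles are read out of the sorted samples.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

#define ITERS	10000000UL

static int cmp_i64(const void *a, const void *b)
{
	int64_t x = *(const int64_t *)a, y = *(const int64_t *)b;
	return (x > y) - (x < y);
}

int main(void)
{
	int64_t *ns = malloc(ITERS * sizeof(*ns));
	struct timespec t0, t1;
	double mean = 0.0;

	if (!ns)
		return 1;

	for (unsigned long i = 0; i < ITERS; i++) {
		clock_gettime(CLOCK_MONOTONIC, &t0);
		syscall(SYS_getpid);	/* or SYS_getppid, or -1 (invalid) */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		ns[i] = (t1.tv_sec - t0.tv_sec) * 1000000000LL +
			(t1.tv_nsec - t0.tv_nsec);
		mean += ns[i];
	}

	/* Sort the per-invocation samples to read off the percentiles. */
	qsort(ns, ITERS, sizeof(*ns), cmp_i64);
	printf("mean:  %.2f ns\n", mean / ITERS);
	printf("p99:   %lld ns\n", (long long)ns[ITERS / 100 * 99]);
	printf("p99.9: %lld ns\n", (long long)ns[ITERS / 1000 * 999]);
	free(ns);
	return 0;
}

Note that the tables above report each configuration as a percentage
change relative to the baseline, not the raw nanosecond values this
prints.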
Changes since v1 (RFC) [2]
==========================

- Introduced patch 2 to make prandom_u32_state() __always_inline
  (needed since it's called from noinstr code)
- In patch 3, prng is now per-cpu instead of per-task (per Ard)

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/all/[email protected]/

Thanks,
Ryan

Ryan Roberts (3):
  randomize_kstack: Maintain kstack_offset per task
  prandom: Convert prandom_u32_state() to __always_inline
  randomize_kstack: Unify random source across arches

 arch/Kconfig                         |  5 +--
 arch/arm64/kernel/syscall.c          | 11 -----
 arch/loongarch/kernel/syscall.c      | 11 -----
 arch/powerpc/kernel/syscall.c        | 12 ------
 arch/riscv/kernel/traps.c            | 12 ------
 arch/s390/include/asm/entry-common.h |  8 ----
 arch/x86/include/asm/entry-common.h  | 12 ------
 include/linux/prandom.h              | 19 ++++++++-
 include/linux/randomize_kstack.h     | 61 ++++++++++++----------------
 init/main.c                          |  2 +-
 kernel/fork.c                        |  1 +
 lib/random32.c                       | 19 ---------
 12 files changed, 49 insertions(+), 124 deletions(-)

--
2.43.0
