CVSROOT:	/cvs
Module name:	src
Changes by:	js...@cvs.openbsd.org	2025/01/27 07:02:32
Modified files:
	lib/libcrypto/arch/aarch64: opensslconf.h
	lib/libcrypto/arch/alpha: opensslconf.h
	lib/libcrypto/arch/amd64: opensslconf.h
	lib/libcrypto/arch/arm: opensslconf.h
	lib/libcrypto/arch/hppa: opensslconf.h
	lib/libcrypto/arch/i386: opensslconf.h
	lib/libcrypto/arch/m88k: opensslconf.h
	lib/libcrypto/arch/mips64: opensslconf.h
	lib/libcrypto/arch/powerpc: opensslconf.h
	lib/libcrypto/arch/powerpc64: opensslconf.h
	lib/libcrypto/arch/riscv64: opensslconf.h
	lib/libcrypto/arch/sh: opensslconf.h
	lib/libcrypto/arch/sparc64: opensslconf.h
	lib/libcrypto/rc4: rc4.c

Log message:
Mop up RC4_INDEX.

The RC4_INDEX define switches between base pointer indexing and per-byte
pointer increment. This supposedly made a huge difference to performance
on x86 at some point; however, compilers have improved somewhat since then.
There is no change (or effectively no change) in generated assembly on the
majority of LLVM platforms, and even where there is some change (e.g. aarch64),
there is no noticeable performance difference.

Simplify the (still messy) macros/code and mop up RC4_INDEX.

ok tb@
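[Editor's note: for readers unfamiliar with the define, the following is a
minimal illustrative sketch, not the actual libcrypto macros or the code
touched by this commit. It shows the flavor of the two addressing styles a
define like RC4_INDEX selects between: plain array indexing into the RC4
state versus pointer arithmetic on it. The function and variable names
(rc4_stream, d, x, y, tx, ty) are made up for the example.]

#include <stddef.h>

static void
rc4_stream(unsigned int *d, unsigned int *xp, unsigned int *yp,
    const unsigned char *in, unsigned char *out, size_t len)
{
	unsigned int x = *xp, y = *yp, tx, ty;

	while (len-- > 0) {
#ifdef RC4_INDEX
		/* Base pointer indexing: always address the state as d[i]. */
		x = (x + 1) & 0xff;
		tx = d[x];
		y = (tx + y) & 0xff;
		ty = d[y];
		d[x] = ty;
		d[y] = tx;
		*(out++) = *(in++) ^ (unsigned char)d[(tx + ty) & 0xff];
#else
		/* Pointer arithmetic on the state array instead of indexing. */
		x = (x + 1) & 0xff;
		tx = *(d + x);
		y = (tx + y) & 0xff;
		ty = *(d + y);
		*(d + x) = ty;
		*(d + y) = tx;
		*(out++) = *(in++) ^ (unsigned char)*(d + ((tx + ty) & 0xff));
#endif
	}
	*xp = x;
	*yp = y;
}

[Both branches implement the same RC4 keystream step; as the log message
notes, modern compilers generate essentially the same machine code for
either form, which is why only one style needs to remain.]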