Hi Sasha,
On 30/03/2022 at 00:14, Sasha Krassovsky wrote:
I've noticed that we include xsimd as an abstraction over all of the SIMD
architectures. I'd like to propose a different solution that would result
in fewer lines of code while being more readable.
My thinking is that anything simple enough to abstract with xsimd can be
auto-vectorized by the compiler. Any more interesting SIMD algorithm is
usually tailored to the target instruction set and can't be abstracted away
with xsimd anyway.
As a matter of fact, we already rely on auto-vectorization in a couple
of places (mostly `aggregate_basic_avx2.cc` and friends).
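To make that concrete, the pattern in those files is essentially a plain
scalar loop placed in a translation unit compiled with AVX2 flags, which we
trust the compiler to vectorize. A simplified sketch (illustrative, not the
actual Arrow code):

```cpp
#include <cstddef>
#include <cstdint>

// Simplified sketch of the aggregate style: a straightforward scalar
// reduction in a file built with -mavx2, relying on auto-vectorization.
int64_t SumInt64(const int64_t* values, std::size_t length) {
  int64_t sum = 0;
  for (std::size_t i = 0; i < length; ++i) {
    sum += values[i];
  }
  return sum;
}
```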
The main concern with doing this is that auto-vectorization makes
performance difficult to predict and ensure. Some compilers or compiler
versions may fail to vectorize a given piece of code (how does MSVC fare
these days?). As a consequence, some Arrow builds may be 2x to 8x faster
than others on the same machine and the same workload. This is not an
optimal user experience, and in many cases it can be difficult or
impossible to change the compiler version.
In fact, on x86 our baseline instruction set is SSE4.2, so we already get
some auto-vectorization and can look at concrete numbers. On my AMD Zen 2
CPU, I get the following results on the pairwise
addition kernel (this is with clang 12.0):
https://gist.github.com/pitrou/dbfa3d97afb4ebbe096bab69cb6bb4d5
Given that PADDB, PADDW, PADDD and PADDQ all have the same throughput
on that CPU (according to
https://www.agner.org/optimize/instruction_tables.pdf), and that each of
them processes the same 16 bytes per instruction regardless of element
width, all these data types should show the same performance in
bytes/second. Yet int64 gets half the throughput of int16 and int32, and
int8 is far behind.
Looking at the disassembly, the int16 and int32 versions are unrolled
and vectorized by clang 12.0, while the int8 and int64 versions are not...
Why? How do we ensure that common compilers optimize this routine as
expected?
Personally, I have no idea.
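One thing we can do, at least, is ask the compiler to report its
vectorization decisions; clang has optimization remarks for this (GCC has
-fopt-info-vec-missed as a rough equivalent). That only observes the
problem, it doesn't solve it. A hedged sketch of the kind of loop and
invocation involved (not the actual benchmark code):

```cpp
// Compile with remarks enabled to see which loops clang vectorized and why
// others were skipped, e.g.:
//   clang++ -O3 -msse4.2 -Rpass=loop-vectorize \
//           -Rpass-missed=loop-vectorize -Rpass-analysis=loop-vectorize add.cc
#include <cstddef>
#include <cstdint>

void AddInt8(const int8_t* a, const int8_t* b, int8_t* out, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) {
    // int8 addition promotes to int; the narrowing cast keeps the result int8.
    out[i] = static_cast<int8_t>(a[i] + b[i]);
  }
}
```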
As for why xsimd rather than hand-written intrinsics, it's a middle
ground in terms of maintainability and readability. Code using
intrinsics quickly gets hard to follow for anyone but the full-time SIMD
specialist, and it's also annoying to port to another architecture (if only
because of gratuitous naming differences).
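For reference, here is roughly what an xsimd-based kernel looks like with
the generic batch API (a minimal sketch against the xsimd 8 API, not code
from the Arrow tree):

```cpp
#include <cstddef>
#include <xsimd/xsimd.hpp>

// Element-wise addition written once against xsimd's generic batch type;
// the concrete instruction set is picked at compile time.
void VectorAdd(const float* a, const float* b, float* out, std::size_t n) {
  using batch = xsimd::batch<float>;          // default architecture for this build
  constexpr std::size_t width = batch::size;  // number of lanes per batch
  std::size_t vec_end = n - n % width;
  for (std::size_t i = 0; i < vec_end; i += width) {
    batch va = batch::load_unaligned(a + i);
    batch vb = batch::load_unaligned(b + i);
    (va + vb).store_unaligned(out + i);
  }
  for (std::size_t i = vec_end; i < n; ++i) {  // scalar tail
    out[i] = a[i] + b[i];
  }
}
```

The same code can be instantiated for different architectures, which is the
readability/portability middle ground mentioned above.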
As for why xsimd is not used much across the codebase, the initial
intent was to have more SIMD-accelerated code, but nobody actually got
around to doing it, due to other priorities. In the end, we spend more time
adding features than hand-optimizing the existing code.
Regards
Antoine.
With that in mind, I'd like to propose the following strategy:
1. Write a single source file with simple, element-at-a-time for-loop
implementations of each function.
2. Compile this same source file several times with different compile flags
for different vectorization targets (e.g. if we're on an x86 machine that
supports AVX2 and AVX512, we'd compile once with -mavx2 and once with
-mavx512vl).
3. Functions compiled with different instruction sets can be differentiated
by a namespace, which gets defined during the compiler invocation. For
example, for AVX2 we'd invoke the compiler with -DNAMESPACE=AVX2 and then
for something like elementwise addition of two arrays, we'd call
arrow::AVX2::VectorAdd.
I believe this would let us remove xsimd as a dependency while also giving
us lots of vectorized kernels at the cost of some extra CMake magic. After
that, it would just be a matter of making the function registry point to
these new functions.
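For concreteness, a rough sketch of what one such translation unit could
look like (the NAMESPACE macro and the -mavx2/-mavx512vl flags are from the
proposal above; the function name and exact invocations are just
illustrative):

```cpp
// vector_add.cc -- compiled once per instruction set, e.g.:
//   clang++ -c -O3 -mavx2     -DNAMESPACE=AVX2   vector_add.cc -o vector_add_avx2.o
//   clang++ -c -O3 -mavx512vl -DNAMESPACE=AVX512 vector_add.cc -o vector_add_avx512.o
#include <cstddef>
#include <cstdint>

namespace arrow {
namespace NAMESPACE {  // expands to AVX2, AVX512, ... depending on the build

// Plain element-at-a-time loop; each compilation relies on the compiler
// auto-vectorizing it for the instruction set enabled on the command line.
void VectorAdd(const int32_t* a, const int32_t* b, int32_t* out, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) {
    out[i] = a[i] + b[i];
  }
}

}  // namespace NAMESPACE
}  // namespace arrow
```

The function registry would then dispatch to arrow::AVX2::VectorAdd or
arrow::AVX512::VectorAdd based on the CPU features detected at runtime.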
Please let me know your thoughts!
Thanks,
Sasha Krassovsky