Dandandan commented on PR #9542: URL: https://github.com/apache/arrow-rs/pull/9542#issuecomment-4050094511
> I'm curious what a SIMD gather (AVX2, SVE) intrinsics' performance is like vs. unrolling manually on modern architectures. The former likely ends up as similar μops anyway. I don't love specializing the code like this (in either direction, SIMD intrinsics or unrolling, especially with ARM increasingly prevalent in the cloud), but if today it's faster I'm okay with it. It's also a pretty well-understood technique at this point.

Agreed. Auto-vectorization is preferred; in an ideal scenario we wouldn't need to specialize the code at all, and we would instead write our APIs so that LLVM can generate good code automatically.

Ideally, I think the `interleave` and `take` APIs would also have a way of skipping bounds checks - in that case the compiler could do a much better job of vectorizing the code and avoiding the branches. I am thinking we can at least remove the bounds checks on the outer batch dimension of `interleave` by doing them upfront - I'll check that now.
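To illustrate the idea (this is a hypothetical sketch, not the actual arrow-rs `take` kernel or its API): validating every index once up front lets the hot loop use unchecked indexing, so there is no branch per element for LLVM to work around when vectorizing the gather.

```rust
/// Hypothetical gather: one upfront validation pass replaces the
/// per-element bounds check inside the hot loop.
fn take_unchecked(values: &[i64], indices: &[u32]) -> Vec<i64> {
    // Validate all indices once, before the loop.
    assert!(
        indices.iter().all(|&i| (i as usize) < values.len()),
        "index out of bounds"
    );
    indices
        .iter()
        // SAFETY: every index was validated above, so unchecked
        // access cannot go out of bounds.
        .map(|&i| unsafe { *values.get_unchecked(i as usize) })
        .collect()
}

fn main() {
    let values = [10i64, 20, 30, 40];
    let indices = [3u32, 0, 2];
    let taken = take_unchecked(&values, &indices);
    println!("{:?}", taken); // [40, 10, 30]
}
```

The same shape applies to `interleave`'s outer batch dimension: check the array indices once against the batch count, then let the inner per-row loop run branch-free.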
