[Adding Herbert Xu to CC since he is the maintainer of the crypto
subsystem]

On 10.03.2018 20:17, Andiry Xu wrote:
<snip>

> +static inline u32 nova_crc32c(u32 crc, const u8 *data, size_t len)
> +{
> +     u8 *ptr = (u8 *) data;
> +     u64 acc = crc; /* accumulator, crc32c value in lower 32b */
> +     u32 csum;
> +
> +     /* x86 instruction crc32 is part of SSE-4.2 */
> +     if (static_cpu_has(X86_FEATURE_XMM4_2)) {
> +             /* This inline assembly implementation should be equivalent
> +              * to the kernel's crc32c_intel_le_hw() function used by
> +              * crc32c(), but this performs better on test machines.
> +              */
> +             while (len > 8) {
> +                     asm volatile(/* 64b quad words */
> +                             "crc32q (%1), %0"
> +                             : "=r" (acc)
> +                             : "r"  (ptr), "0" (acc)
> +                     );
> +                     ptr += 8;
> +                     len -= 8;
> +             }
> +
> +             while (len > 0) {
> +                     asm volatile(/* trailing bytes */
> +                             "crc32b (%1), %0"
> +                             : "=r" (acc)
> +                             : "r"  (ptr), "0" (acc)
> +                     );
> +                     ptr++;
> +                     len--;
> +             }
> +
> +             csum = (u32) acc;
> +     } else {
> +             /* The kernel's crc32c() function should also detect and use the
> +              * crc32 instruction of SSE-4.2. But calling in to this function
> +              * is about 3x to 5x slower than the inline assembly version on
> +              * some test machines.

That is really odd. Did you try to characterize why this is the case? Is
it purely the overhead of dispatching to the correct backend function?
That's a rather big performance hit.
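
For reference, the crc32c() library wrapper in lib/libcrc32c.c goes through
the generic shash interface; from memory it looks roughly like the sketch
below (paraphrased, not a verbatim copy; "tfm" is the crc32c shash transform
the library allocates at module init). The only fixed per-call cost I can
see is the on-stack descriptor setup plus the indirect call into the
crc32c-intel update handler, which should be negligible for buffers of any
real size, so a 3x-5x gap is surprising:

	/* Rough sketch of crc32c() from lib/libcrc32c.c, paraphrased
	 * from memory, not copied from the tree.
	 */
	u32 crc32c(u32 crc, const void *address, unsigned int length)
	{
		SHASH_DESC_ON_STACK(shash, tfm);
		u32 *ctx = (u32 *)shash_desc_ctx(shash);
		int err;

		shash->tfm = tfm;
		shash->flags = 0;
		*ctx = crc;		/* seed the running CRC */

		/* Indirect call; with crc32c-intel loaded this lands in
		 * crc32c_intel_le_hw(), which uses the same crc32 insn.
		 */
		err = crypto_shash_update(shash, address, length);
		BUG_ON(err);

		return *ctx;
	}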

> +              */
> +             csum = crc32c(crc, data, len);
> +     }
> +
> +     return csum;
> +}
> +

<snip>
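
Coming back to the question above: if it helps, something along these lines
in a throwaway test module should show whether the gap is per-call dispatch
overhead or something in the bulk path. This is an untested sketch;
crc_compare() and its parameters are made up for illustration, and
nova_crc32c() is the helper from the patch above:

	#include <linux/kernel.h>
	#include <linux/crc32c.h>
	#include <linux/ktime.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	/* Time both paths over the same buffer; vary "len" to separate
	 * fixed per-call cost from per-byte cost.
	 */
	static void crc_compare(size_t len, int iters)
	{
		u8 *buf = kmalloc(len, GFP_KERNEL);
		u64 t0, t1, t2;
		u32 c1 = ~0, c2 = ~0;
		int i;

		if (!buf)
			return;
		memset(buf, 0x5a, len);

		t0 = ktime_get_ns();
		for (i = 0; i < iters; i++)
			c1 = crc32c(c1, buf, len);
		t1 = ktime_get_ns();

		for (i = 0; i < iters; i++)
			c2 = nova_crc32c(c2, buf, len);
		t2 = ktime_get_ns();

		pr_info("len %zu: crc32c() %llu ns, inline asm %llu ns (csum %#x/%#x)\n",
			len, t1 - t0, t2 - t1, c1, c2);

		kfree(buf);
	}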
