On Sun, Mar 11, 2018 at 02:00:13PM +0200, Nikolay Borisov wrote:
> [Adding Herbert Xu to CC since he is the maintainer of the crypto subsystem]
> 
> On 10.03.2018 20:17, Andiry Xu wrote:
> <snip>
> 
> > +static inline u32 nova_crc32c(u32 crc, const u8 *data, size_t len)
> > +{
> > +   u8 *ptr = (u8 *) data;
> > +   u64 acc = crc; /* accumulator, crc32c value in lower 32b */
> > +   u32 csum;
> > +
> > +   /* x86 instruction crc32 is part of SSE-4.2 */
> > +   if (static_cpu_has(X86_FEATURE_XMM4_2)) {
> > +           /* This inline assembly implementation should be equivalent
> > +            * to the kernel's crc32c_intel_le_hw() function used by
> > +            * crc32c(), but this performs better on test machines.
> > +            */
> > +           while (len > 8) {
> > +                   asm volatile(/* 64b quad words */
> > +                           "crc32q (%1), %0"
> > +                           : "=r" (acc)
> > +                           : "r"  (ptr), "0" (acc)
> > +                   );
> > +                   ptr += 8;
> > +                   len -= 8;
> > +           }
> > +
> > +           while (len > 0) {
> > +                   asm volatile(/* trailing bytes */
> > +                           "crc32b (%1), %0"
> > +                           : "=r" (acc)
> > +                           : "r"  (ptr), "0" (acc)
> > +                   );
> > +                   ptr++;
> > +                   len--;
> > +           }
> > +
> > +           csum = (u32) acc;
> > +   } else {
> > +           /* The kernel's crc32c() function should also detect and use the
> > +            * crc32 instruction of SSE-4.2. But calling into this function
> > +            * is about 3x to 5x slower than the inline assembly version on
> > +            * some test machines.
> 
> That is really odd. Did you try to characterize why this is the case? Is
> it purely the overhead of dispatching to the correct backend function?
> That's a rather big performance hit.
> 
> > +            */
> > +           csum = crc32c(crc, data, len);
> > +   }
> > +
> > +   return csum;
> > +}
> > +

Are you sure that CONFIG_CRYPTO_CRC32C_INTEL was enabled during your tests and
that the accelerated version was being called?  Or, perhaps CRC32C_PCL_BREAKEVEN
(defined in arch/x86/crypto/crc32c-intel_glue.c) needs to be adjusted.  Please
don't hack around performance problems like this; if they exist, they need to be
fixed for everyone.
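
If it would help pin down where the time is going, a throwaway test module
along these lines could time the library crc32c() against the open-coded
crc32q loop on the same buffer.  This is only a sketch: the module and
function names, buffer size and iteration count are made up, and it assumes
x86 with SSE4.2 plus a kernel that provides crc32c() (LIBCRC32C enabled).

/*
 * Throwaway benchmark sketch: time the library crc32c() against an
 * open-coded SSE4.2 crc32q loop over the same buffer, so the reported
 * 3x-5x gap can be attributed to either the dispatch path or the
 * backend that was actually selected.  Names and sizes are arbitrary.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/crc32c.h>
#include <linux/ktime.h>
#include <asm/cpufeature.h>

#define BUF_LEN 4096
#define ITERS   100000

/* Same idea as the patch's inline-asm path, with explicit memory operands. */
static u32 crc32c_asm(u32 crc, const u8 *data, size_t len)
{
        u64 acc = crc;

        while (len >= 8) {
                asm("crc32q %1, %0" : "+r" (acc) : "m" (*(const u64 *)data));
                data += 8;
                len -= 8;
        }
        while (len) {
                asm("crc32b %1, %0" : "+r" (acc) : "m" (*data));
                data++;
                len--;
        }
        return (u32)acc;
}

static int __init crc_bench_init(void)
{
        u8 *buf;
        u32 csum = 0;
        ktime_t t0;
        s64 ns_lib, ns_asm;
        int i;

        if (!static_cpu_has(X86_FEATURE_XMM4_2))
                return -ENODEV;

        buf = kmalloc(BUF_LEN, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;
        memset(buf, 0x5a, BUF_LEN);

        t0 = ktime_get();
        for (i = 0; i < ITERS; i++)
                csum = crc32c(csum, buf, BUF_LEN);
        ns_lib = ktime_to_ns(ktime_sub(ktime_get(), t0));

        t0 = ktime_get();
        for (i = 0; i < ITERS; i++)
                csum = crc32c_asm(csum, buf, BUF_LEN);
        ns_asm = ktime_to_ns(ktime_sub(ktime_get(), t0));

        pr_info("crc_bench: lib crc32c() %lld ns, inline asm %lld ns (csum %#x)\n",
                ns_lib, ns_asm, csum);

        kfree(buf);
        return 0;
}

static void __exit crc_bench_exit(void)
{
}

module_init(crc_bench_init);
module_exit(crc_bench_exit);
MODULE_LICENSE("GPL");

Checking /proc/crypto for the crc32c drivers that are registered, and their
priorities, would also show which backend the library call actually ends up
using on the test machines.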

Eric
