> From: Francesco Fusco
...
> One possible direction to achieve higher performance consists
> of replacing jhash2() with an architecture specific hash
> function pretty much like the Intel folks have proposed in
> DPDK. DPDK provides a very fast hash function that leverages
> the 32bit crc32l instruction part of the Intel SSE4.2
> instruction set.
...

IIRC the CPU can execute multiple crc32l instructions in parallel,
since the instruction is pipelined (multi-cycle latency, single-cycle
throughput), so independent dependency chains overlap.
If you use a software lookup table to advance a CRC over (say) 256
zero bytes, it should be possible to CRC chunks of a large buffer in
parallel and then combine the partial results.

> +#ifdef CONFIG_X86_64

Why not also i386?

> +static inline u32 ovs_flow_hash_crc_4b(u32 crc, u32 val)
> +{
> +     asm ("crc32l %[val], %[crc]\n"
> +             : [crc] "+r" (crc)
> +             : [val] "rm" (val));
> +     return crc;
> +}
> +
> +static inline u32 ovs_flow_hash_crc(const u32 *data, u32 len, u32 seed)
> +{
> +     const u32 *p32 = (const u32 *) data;
> +     u32 i, tmp = 0;
> +
> +     for (i = 0; i < len; i++)
> +             seed = ovs_flow_hash_crc_4b(*p32++, seed);

Doesn't that pass the arguments in the wrong order?
It happens not to affect the result -- the full-word update reduces to
a CRC of (crc ^ val), so the two operands enter symmetrically -- but
the asm pattern works better the other way around, with the running
value in the "+r" accumulator operand.

> +     switch (3 - ((len * 4) & 0x03)) {

This looked like an obscure way to calculate the remainder, then I
noticed the 'len * 4': len counts u32 words, so (len * 4) & 0x03 is
always 0 and the switch expression is always 3. There is no case 3,
so the residual-byte code is never reached (and with a word count
there is no residual anyway -- the whole switch is dead code).

> +     case 0:
> +             tmp |= *((const u8 *) p32 + 2) << 16;
> +             /* Fallthrough */
> +     case 1:
> +             tmp |= *((const u8 *) p32 + 1) << 8;
> +             /* Fallthrough */
> +     case 2:
> +             tmp |= *((const u8 *) p32);
> +             seed = ovs_flow_hash_crc_4b(tmp, seed);
> +     default:
> +             break;
> +     }
> +
> +     return seed;
> +}
> +#endif

        David



_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev