On Wed, Mar 18 2015, Denys Vlasenko <vda.li...@googlemail.com> wrote:


> Your code does four 16-bit stores.
> The version below does two 32-bit ones instead,
> and it is also marginally smaller.
>
> char *put_dec_full8(char *buf, unsigned r)
> {
>         unsigned q;
>         u32 v;
>
>         /* 0 <= r < 10^8 */
>         q = (r * (u64)0x28f5c29) >> 32;
>         v = (u32)decpair[r - 100*q] << 16;
>
>         /* 0 <= q < 10^6 */
>         r = (q * (u64)0x28f5c29) >> 32;
>         v = v | decpair[q - 100*r];
>         ((u32*)buf)[0] = v;
>
>         /* 0 <= r < 10^4 */
>         q = (r * 0x147b) >> 19;
>         v = (u32)decpair[r - 100*q] << 16;
>
>         /* 0 <= q < 100 */
>         v = v | decpair[q];
>         ((u32*)buf)[1] = v;
>
>         return buf + 8;
> }
>
> It may be faster not only because it does fewer stores,
> but also because, on x86, this code (moving 16-bit halves):
>
>         movw    decpair(%ebx,%ebx), %dx
>         movw    %dx, 4(%eax)
>         movw    decpair(%ecx,%ecx), %dx
>         movw    %dx, 6(%eax)
>
> suffers from a register merge stall when a 16-bit value
> is read into the lower part of %edx. The 32-bit code
> has no such stalls:
>
>         movzwl  decpair(%ebx,%ebx), %edx
>         sall    $16, %edx
>         movzwl  decpair(%ecx,%ecx), %ecx
>         orl     %ecx, %edx
>         movl    %edx, 4(%eax)
>

[On little-endian, I'm pretty sure the <<16 should be applied to the
second and fourth decpair value.]
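
That is, keeping your structure, something like this (untested sketch;
the only change is which pair gets shifted into the high half of each
32-bit word, so that on little-endian the least significant pair still
lands at the lowest address, as in the four-store version):

char *put_dec_full8(char *buf, unsigned r)
{
        unsigned q;
        u32 v;

        /* 0 <= r < 10^8 */
        q = (r * (u64)0x28f5c29) >> 32;
        v = decpair[r - 100*q];

        /* 0 <= q < 10^6 */
        r = (q * (u64)0x28f5c29) >> 32;
        v |= (u32)decpair[q - 100*r] << 16;
        ((u32*)buf)[0] = v;

        /* 0 <= r < 10^4 */
        q = (r * 0x147b) >> 19;
        v = decpair[r - 100*q];

        /* 0 <= q < 100 */
        v |= (u32)decpair[q] << 16;
        ((u32*)buf)[1] = v;

        return buf + 8;
}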

Thanks for the suggestion. However, I don't see any change in the size
of the generated code (gcc 4.7), and, at least on my Xeon machine,
converting both ULONG_MAX and uniformly random u64s becomes slightly
slower (54 vs. 56 cycles for ULONG_MAX and 61 vs. 65 cycles for random
u64s).
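
For what it's worth, the reciprocal constants do hold over the stated
ranges: 0x28f5c29 is ceil(2^32/100) and 0x147b is ceil(2^19/100), so
(r * 0x28f5c29) >> 32 == r/100 for all r < 10^8 and
(r * 0x147b) >> 19 == r/100 for all r < 10^4. A throwaway userspace
check (my own, not part of any patch) confirms it by brute force:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t r;

        /* (r * 0x28f5c29) >> 32 must equal r/100 for 0 <= r < 10^8 */
        for (r = 0; r < 100000000; r++)
                if ((uint32_t)((r * (uint64_t)0x28f5c29) >> 32) != r / 100) {
                        printf("mismatch at r = %u\n", r);
                        return 1;
                }

        /* (r * 0x147b) >> 19 must equal r/100 for 0 <= r < 10^4 */
        for (r = 0; r < 10000; r++)
                if ((r * 0x147b) >> 19 != r / 100) {
                        printf("mismatch at r = %u\n", r);
                        return 1;
                }

        printf("both constants verified\n");
        return 0;
}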

Rasmus