On 12/15/2012 12:46 PM, Ronald S. Bultje wrote:
> -
> -#define PREFETCH(name, op)                      \
> -static void name(void *mem, int stride, int h)  \
> -{                                               \
> -    const uint8_t *p = mem;                     \
> -    do {                                        \
> -        __asm__ volatile (#op" %0" :: "m"(*p)); \
> -        p += stride;                            \
> -    } while (--h);                              \
> -}
> -
> -PREFETCH(prefetch_mmxext, prefetcht0)
> -PREFETCH(prefetch_3dnow, prefetch)
> -#undef PREFETCH
> -
[...]
> +%macro PREFETCH_FN 1
> +cglobal prefetch, 3, 3, 0, buf, stride, h
> +.loop:
> +    %1      [bufq]
> +    add      bufq, strideq
> +    dec        hd
> +    jg .loop
> +    REP_RET
> +%endmacro
> +
> +INIT_MMX mmxext
> +PREFETCH_FN prefetcht0
> +%if ARCH_X86_32
> +INIT_MMX 3dnow
> +PREFETCH_FN prefetch
> +%endif

x86 prefetch() looks ok.
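
For anyone comparing the asm against the C block it replaces: the loop is
the same walk over h rows at the given stride, and only the prefetch
instruction itself is picked by the INIT_MMX line. A rough portable
equivalent, purely as an illustration and not something in (or proposed
for) this patch, would be:

    #include <stdint.h>

    static void prefetch_c(void *mem, int stride, int h)
    {
        const uint8_t *p = mem;
        do {
            /* GCC/clang builtin; read hint, default temporal locality */
            __builtin_prefetch(p);
            p += stride;
        } while (--h);
    }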

-Justin
