ffmpeg | branch: master | Rémi Denis-Courmont <r...@remlab.net> | Sun Nov 19 13:24:29 2023 +0200| [3a134e82994ff49b784056d2dfce0230a8256ebd] | committer: Rémi Denis-Courmont

lavu/fixed_dsp: optimise R-V V fmul_reverse

Gathers are (unsurprisingly) a notable exception to the rule that R-V V
gets faster with larger group multipliers. So roll the function to
speed it up.

Before:
vector_fmul_reverse_fixed_c: 2840.7
vector_fmul_reverse_fixed_rvv_i32: 2430.2

After:
vector_fmul_reverse_fixed_c: 2841.0
vector_fmul_reverse_fixed_rvv_i32: 962.2

It might be possible to further optimise the function by moving the
reverse-subtract out of the loop and adding ad-hoc tail handling.

> http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=3a134e82994ff49b784056d2dfce0230a8256ebd
---

 libavutil/riscv/fixed_dsp_rvv.S | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/libavutil/riscv/fixed_dsp_rvv.S b/libavutil/riscv/fixed_dsp_rvv.S
index 5b666016a0..68de6d7e1b 100644
--- a/libavutil/riscv/fixed_dsp_rvv.S
+++ b/libavutil/riscv/fixed_dsp_rvv.S
@@ -83,16 +83,17 @@ endfunc
 
 func ff_vector_fmul_reverse_fixed_rvv, zve32x
         csrwi   vxrm, 0
-        vsetvli t0, zero, e16, m4, ta, ma
+        // e16/m4 and e32/m8 are possible but slow the gathers down.
+        vsetvli t0, zero, e16, m1, ta, ma
         sh2add  a2, a3, a2
         vid.v   v0
         vadd.vi v0, v0, 1
 1:
-        vsetvli t0, a3, e16, m4, ta, ma
+        vsetvli t0, a3, e16, m1, ta, ma
         slli    t1, t0, 2
         vrsub.vx v4, v0, t0 // v4[i] = [VL-1, VL-2... 1, 0]
         sub     a2, a2, t1
-        vsetvli zero, zero, e32, m8, ta, ma
+        vsetvli zero, zero, e32, m2, ta, ma
         vle32.v v8, (a2)
         sub     a3, a3, t0
         vle32.v v16, (a1)
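
For context, the routine is the Q31 fixed-point variant of vector_fmul_reverse:
each output element is src0[i] multiplied by src1 read back to front. Below is
a hedged scalar C sketch of those semantics, modelled on my reading of the
generic fixed_dsp C code; the helper name is made up for illustration, and the
rounding shown is the round-to-nearest mode the assembly selects with
"csrwi vxrm, 0".

/*
 * Hedged sketch, not the actual FFmpeg source: a scalar reference for what
 * ff_vector_fmul_reverse_fixed_rvv computes.  Name and exact rounding are
 * assumptions based on the generic fixed_dsp implementation.
 */
#include <stdint.h>

static void vector_fmul_reverse_fixed_ref(int32_t *dst, const int32_t *src0,
                                          const int32_t *src1, int len)
{
    src1 += len - 1;                      /* walk src1 back to front */
    for (int i = 0; i < len; i++)
        /* Q31 fixed-point product, rounded to nearest. */
        dst[i] = (int32_t)(((int64_t)src0[i] * src1[-i] + 0x40000000) >> 31);
}

The reversed src1 access is what the index vector built by vrsub.vx (and,
presumably, a vrgather later in the function body not shown in this hunk)
implements per chunk; keeping the group multiplier at m1/m2 is what keeps that
gather cheap, per the timings above.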