On 6/4/2025 4:59 PM, Bruce Richardson wrote:
On Fri, May 30, 2025 at 02:57:19PM +0100, Anatoly Burakov wrote:
Currently, for the 32-byte descriptor format, only the SSE instruction set is
supported. Add implementations for the AVX2 and AVX512 instruction sets. Since
we are using Rx descriptor definitions from common code, we can just use
the generic descriptor definition, as we only ever write the first 16 bytes
of it, and the layout is always the same for that part.
Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---
Like the idea. Feedback inline below.
/Bruce
<snip>
- /**
- * merge 0 & 1, by casting 0 to 256-bit and inserting 1
- * into the high lanes. Similarly for 2 & 3
- */
- const __m256i vaddr0_256 = _mm256_castsi128_si256(vaddr0);
- const __m256i vaddr2_256 = _mm256_castsi128_si256(vaddr2);
+ const __m128i vaddr0 = _mm_loadu_si128((const __m128i *)&mb0->buf_addr);
+ const __m128i vaddr1 = _mm_loadu_si128((const __m128i *)&mb1->buf_addr);
Minor nit, but do we need to use unaligned loads here? The mbuf is marked
as cache-aligned, and buf_addr is the first field in it.
It was like that in the original code I think (unless it was a copypaste
error), but sure, I can make it aligned.
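Something like this, presumably (untested sketch; relies on buf_addr staying
the first field of the cache-line-aligned mbuf, so the 16-byte load is
naturally aligned):

	/* mbufs are cache-line aligned and buf_addr is at offset 0, so an
	 * aligned 16-byte load is safe here
	 */
	const __m128i vaddr0 = _mm_load_si128((const __m128i *)&mb0->buf_addr);
	const __m128i vaddr1 = _mm_load_si128((const __m128i *)&mb1->buf_addr);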
- __m256i addr0_1 = _mm256_inserti128_si256(vaddr0_256, vaddr1, 1);
- __m256i addr2_3 = _mm256_inserti128_si256(vaddr2_256, vaddr3, 1);
+ reg0 = _ci_rxq_rearm_desc_avx2(vaddr0, zero);
+ reg1 = _ci_rxq_rearm_desc_avx2(vaddr1, zero);
The compiler may optimize this away, but rather than calling this function
with a zero register, we can save the call to insert the zero into the high
register half by just using the SSE/AVX-128 function, and casting the
result (which should be a no-op).
Good idea actually, will do.
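Roughly along these lines, I assume (sketch only; the helper name
_ci_rxq_rearm_desc_sse is a placeholder for whatever the SSE/128-bit path's
descriptor-building function is actually called):

	/* build the 16-byte descriptor with the 128-bit helper, then widen it;
	 * _mm256_castsi128_si256() emits no instructions, and the upper lane is
	 * simply left unspecified rather than explicitly zeroed
	 */
	reg0 = _mm256_castsi128_si256(_ci_rxq_rearm_desc_sse(vaddr0));
	reg1 = _mm256_castsi128_si256(_ci_rxq_rearm_desc_sse(vaddr1));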
+ } else {
+ /* 16 byte descriptor times four */
+ const struct rte_mbuf *mb0 = rxp[0].mbuf;
+ const struct rte_mbuf *mb1 = rxp[1].mbuf;
+ const struct rte_mbuf *mb2 = rxp[2].mbuf;
+ const struct rte_mbuf *mb3 = rxp[3].mbuf;
- /* add headroom to address values */
- addr0_1 = _mm256_add_epi64(addr0_1, hdroom);
- addr2_3 = _mm256_add_epi64(addr2_3, hdroom);
+ const __m128i vaddr0 = _mm_loadu_si128((const __m128i *)&mb0->buf_addr);
+ const __m128i vaddr1 = _mm_loadu_si128((const __m128i *)&mb1->buf_addr);
+ const __m128i vaddr2 = _mm_loadu_si128((const __m128i *)&mb2->buf_addr);
+ const __m128i vaddr3 = _mm_loadu_si128((const __m128i *)&mb3->buf_addr);
-#if RTE_IOVA_IN_MBUF
- /* extract IOVA addr into Packet Buffer Address, erase Header Buffer Address */
- addr0_1 = _mm256_unpackhi_epi64(addr0_1, zero);
- addr2_3 = _mm256_unpackhi_epi64(addr2_3, zero);
-#else
- /* erase Header Buffer Address */
- addr0_1 = _mm256_unpacklo_epi64(addr0_1, zero);
- addr2_3 = _mm256_unpacklo_epi64(addr2_3, zero);
-#endif
+ reg0 = _ci_rxq_rearm_desc_avx2(vaddr0, vaddr1);
+ reg1 = _ci_rxq_rearm_desc_avx2(vaddr2, vaddr3);
+ }
- /* flush desc with pa dma_addr */
- _mm256_store_si256(RTE_CAST_PTR(__m256i *, &rxdp[0]), addr0_1);
- _mm256_store_si256(RTE_CAST_PTR(__m256i *, &rxdp[2]), addr2_3);
+ /* flush descriptors */
+ _mm256_store_si256(RTE_CAST_PTR(__m256i *, &rxdp[0]), reg0);
+ _mm256_store_si256(RTE_CAST_PTR(__m256i *, &rxdp[2]), reg1);
This should be rxdp[desc_per_reg], not rxdp[2].
Right, will fix.
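For clarity, the fixed stores would look something like this (sketch, using
the desc_per_reg count referred to above):

	/* flush descriptors; step by however many descriptors fit in one
	 * 256-bit register (2 for 16-byte descriptors, 1 for 32-byte ones)
	 */
	_mm256_store_si256(RTE_CAST_PTR(__m256i *, &rxdp[0]), reg0);
	_mm256_store_si256(RTE_CAST_PTR(__m256i *, &rxdp[desc_per_reg]), reg1);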
<snip>
- /**
- * merge 0 & 1, by casting 0 to 256-bit and inserting 1
- * into the high lanes. Similarly for 2 & 3, and so on.
- */
- const __m256i addr0_256 = _mm256_castsi128_si256(vaddr0);
- const __m256i addr2_256 = _mm256_castsi128_si256(vaddr2);
- const __m256i addr4_256 = _mm256_castsi128_si256(vaddr4);
- const __m256i addr6_256 = _mm256_castsi128_si256(vaddr6);
+ const __m128i vaddr0 = _mm_loadu_si128((const __m128i *)&mb0->buf_addr);
+ const __m128i vaddr1 = _mm_loadu_si128((const __m128i *)&mb1->buf_addr);
+ const __m128i vaddr2 = _mm_loadu_si128((const __m128i *)&mb2->buf_addr);
+ const __m128i vaddr3 = _mm_loadu_si128((const __m128i *)&mb3->buf_addr);
- const __m256i addr0_1 = _mm256_inserti128_si256(addr0_256, vaddr1, 1);
- const __m256i addr2_3 = _mm256_inserti128_si256(addr2_256, vaddr3, 1);
- const __m256i addr4_5 = _mm256_inserti128_si256(addr4_256, vaddr5, 1);
- const __m256i addr6_7 = _mm256_inserti128_si256(addr6_256, vaddr7, 1);
+ reg0 = _ci_rxq_rearm_desc_avx512(vaddr0, zero, vaddr1, zero);
+ reg1 = _ci_rxq_rearm_desc_avx512(vaddr2, zero, vaddr3, zero);
I can't help thinking we can probably do a little better than this for
merging in zeros by using AVX-512 mask registers, e.g. the
_mm256_maskz_broadcastq_epi64() intrinsic, but it will be ok for now! :-)
You're welcome to submit patches; this is a very welcoming community!
(seriously though, I'll look into it)
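For reference, the rough shape of that idea for a single 32-byte descriptor
(untested sketch; desc0 is a placeholder name, and this only covers the layout
where the address to be written sits in the low qword of vaddr0):

	/* broadcast the low qword of vaddr0, and let the 0x1 k-mask keep only
	 * lane 0 while zeroing the other three lanes, instead of merging in a
	 * separate zero register
	 */
	const __m256i desc0 = _mm256_maskz_broadcastq_epi64(0x1, vaddr0);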
<snip>
-#if RTE_IOVA_IN_MBUF
- /* extract IOVA addr into Packet Buffer Address, erase Header Buffer Address */
- addr0_3 = _mm512_unpackhi_epi64(addr0_3, zero);
- addr4_7 = _mm512_unpackhi_epi64(addr4_7, zero);
-#else
- /* erase Header Buffer Address */
- addr0_3 = _mm512_unpacklo_epi64(addr0_3, zero);
- addr4_7 = _mm512_unpacklo_epi64(addr4_7, zero);
-#endif
+ reg0 = _ci_rxq_rearm_desc_avx512(vaddr0, vaddr1, vaddr2, vaddr3);
+ reg1 = _ci_rxq_rearm_desc_avx512(vaddr4, vaddr5, vaddr6, vaddr7);
To shorten the code (and this applies elsewhere too), we can remove the
vaddr* temporary variables and just do the loads implicitly in the function
calls, e.g.
	reg0 = _ci_rxq_rearm_desc_avx512((const __m128i *)&mb0->buf_addr,
			(const __m128i *)&mb1->buf_addr,
			(const __m128i *)&mb2->buf_addr,
			(const __m128i *)&mb3->buf_addr);
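The callee side would then take pointers and do the loads internally,
something like this (sketch only; everything past the loads stays as in the
patch):

	static __rte_always_inline __m512i
	_ci_rxq_rearm_desc_avx512(const __m128i *p0, const __m128i *p1,
			const __m128i *p2, const __m128i *p3)
	{
		const __m128i vaddr0 = _mm_loadu_si128(p0);
		const __m128i vaddr1 = _mm_loadu_si128(p1);
		const __m128i vaddr2 = _mm_loadu_si128(p2);
		const __m128i vaddr3 = _mm_loadu_si128(p3);
		/* ... rest of the helper body unchanged ... */
	}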
+ }
/* flush desc with pa dma_addr */
- _mm512_store_si512(RTE_CAST_PTR(__m512i *, &rxdp[0]), addr0_3);
- _mm512_store_si512(RTE_CAST_PTR(__m512i *, &rxdp[4]), addr4_7);
+ _mm512_store_si512(RTE_CAST_PTR(__m512i *, &rxdp[0]), reg0);
+ _mm512_store_si512(RTE_CAST_PTR(__m512i *, &rxdp[4]), reg1);
Again, the "4" needs to be adjusted based on desc size.
Right, yes.
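i.e. the same pattern as before, just for the 512-bit stores (sketch):

	/* 512-bit registers hold 4 descriptors at 16 bytes each, 2 at 32 bytes */
	_mm512_store_si512(RTE_CAST_PTR(__m512i *, &rxdp[0]), reg0);
	_mm512_store_si512(RTE_CAST_PTR(__m512i *, &rxdp[desc_per_reg]), reg1);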
}
}
#endif /* __AVX512VL__ */
-#endif /* RTE_NET_INTEL_USE_16BYTE_DESC */
static __rte_always_inline void
ci_rxq_rearm(struct ci_rx_queue *rxq, const enum ci_rx_vec_level vec_level)
@@ -254,7 +292,6 @@ ci_rxq_rearm(struct ci_rx_queue *rxq, const enum ci_rx_vec_level vec_level)
if (_ci_rxq_rearm_get_bufs(rxq) < 0)
return;
<snip>
--
Thanks,
Anatoly