In vPMD, when the Rx descriptors are loaded with
_mm_loadu_si128, the volatile pointer is cast to a
non-volatile pointer, so GCC is allowed to reorder
the load instructions. However, Rx read correctness
relies on these loads being performed strictly in
backward order, so add compiler barriers to prevent
the compiler from reordering them.

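For reference, a minimal standalone sketch of the pattern the barriers
enforce (illustration only, not the driver code; load_descs_backward and
its arguments are hypothetical, while rte_compiler_barrier() is the DPDK
macro from rte_atomic.h used in the patch):

/*
 * Sketch: load four descriptors in backward order. The casts below drop
 * the volatile qualifier, so without the barriers GCC may reorder the
 * loads; rte_compiler_barrier() expands to an empty asm statement with a
 * "memory" clobber, which pins the 3 -> 2 -> 1 -> 0 order.
 */
#include <emmintrin.h>
#include <rte_atomic.h>

static inline void
load_descs_backward(volatile void *ring, __m128i descs[4])
{
	volatile __m128i *rxdp = (volatile __m128i *)ring;

	descs[3] = _mm_loadu_si128((const __m128i *)(rxdp + 3));
	rte_compiler_barrier();
	descs[2] = _mm_loadu_si128((const __m128i *)(rxdp + 2));
	rte_compiler_barrier();
	descs[1] = _mm_loadu_si128((const __m128i *)(rxdp + 1));
	rte_compiler_barrier();
	descs[0] = _mm_loadu_si128((const __m128i *)(rxdp + 0));
}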
Fixes: 7092be8437bd ("fm10k: add vector Rx")

Signed-off-by: Qi Zhang <qi.z.zhang at intel.com>
---

v2:
- fix check-git-log.sh warning.
- add more detail commit message.

 drivers/net/fm10k/fm10k_rxtx_vec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index b724486..27f3e43 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -478,6 +478,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
                /* Read desc statuses backwards to avoid race condition */
                /* A.1 load 4 pkts desc */
                descs0[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+               rte_compiler_barrier();

                /* B.2 copy 2 mbuf point into rx_pkts  */
                _mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
@@ -486,8 +487,10 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
                mbp2 = _mm_loadu_si128((__m128i *)&mbufp[pos+2]);

                descs0[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+               rte_compiler_barrier();
                /* B.1 load 2 mbuf point */
                descs0[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+               rte_compiler_barrier();
                descs0[0] = _mm_loadu_si128((__m128i *)(rxdp));

                /* B.2 copy 2 mbuf point into rx_pkts  */
-- 
2.7.4
