> <snip>
> > >
> > > > >
> > > > > > > > > > > @@ -1790,9 +1792,9 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
> > > > > > > > > > >  void *buf_addr;
> > > > > > > > > > >
> > > > > > > > > > >  /* Increment the refcnt of the whole chunk. */
> > > > > > > > > > > -rte_atomic16_add_return(&buf->refcnt, 1);
> > > > > > > > rte_atomic16_add_return includes a full barrier along with the
> > > > > > > > atomic operation. But is a full barrier required here? For example,
> > > > > > > > __atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED) will offer
> > > > > > > > atomicity, but no barrier. Would that be enough?
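> > > > > > > >
> > > > > > > > For illustration only, a minimal self-contained sketch of the two
> > > > > > > > variants being compared (the struct and function names below are
> > > > > > > > made up, not the real mlx5 code):
> > > > > > > >
> > > > > > > > #include <stdint.h>
> > > > > > > >
> > > > > > > > /* Hypothetical stand-in for the mprq buffer; only the refcount
> > > > > > > >  * matters for this comparison. */
> > > > > > > > struct demo_buf {
> > > > > > > > 	uint16_t refcnt;
> > > > > > > > };
> > > > > > > >
> > > > > > > > /* Full-barrier increment, roughly what rte_atomic16_add_return()
> > > > > > > >  * does in its generic implementation (__sync builtins imply a
> > > > > > > >  * full barrier). */
> > > > > > > > static inline void
> > > > > > > > demo_buf_ref_full(struct demo_buf *buf)
> > > > > > > > {
> > > > > > > > 	__sync_add_and_fetch(&buf->refcnt, 1);
> > > > > > > > }
> > > > > > > >
> > > > > > > > /* Relaxed increment: still atomic, but places no ordering
> > > > > > > >  * constraint on the surrounding loads and stores. */
> > > > > > > > static inline void
> > > > > > > > demo_buf_ref_relaxed(struct demo_buf *buf)
> > > > > > > > {
> > > > > > > > 	__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > > > > > > }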
> > > > > > > >
> > > > > > > > > > > -MLX5_ASSERT((uint16_t)rte_atomic16_read(&buf->refcnt) <=
> > > > > > > > > > > -    strd_n + 1);
> > > > > > > > > > > +__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_ACQUIRE);
> > > > > > >
> > > > > > > The atomic load in MLX5_ASSERT() accesses the same memory space
> > > > > > > as the previous __atomic_add_fetch() does. They will access this
> > > > > > > memory space in program order when MLX5_PMD_DEBUG is enabled, so
> > > > > > > the ACQUIRE barrier in __atomic_add_fetch() becomes unnecessary.
> > > > > > >
> > > > > > > By changing it to RELAXED ordering, this patch got a 7.6%
> > > > > > > performance improvement on N1 (making it generate A72-like
> > > > > > > instructions).
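> > > > > > >
> > > > > > > For reference, the pattern the patch ends up with, excerpted from
> > > > > > > the hunks in this thread, with the reasoning above added as
> > > > > > > comments (the codegen note is my expectation, not measured here):
> > > > > > >
> > > > > > > /* Same thread, same memory location: the increment and the
> > > > > > >  * debug-only check are already ordered by program order, so no
> > > > > > >  * acquire barrier is needed between them. */
> > > > > > > __atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > > > > > MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
> > > > > > > 	    __ATOMIC_RELAXED) <= strd_n + 1);
> > > > > > >
> > > > > > > /* On AArch64 with LSE atomics, a relaxed 16-bit fetch-add
> > > > > > >  * typically compiles to ldaddh, while the acquire/acq-rel forms
> > > > > > >  * use ldaddah/ldaddalh; without LSE it is an exclusive-pair loop
> > > > > > >  * (ldxrh/stxrh vs. ldaxrh/stlxrh). */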
> > > > > > >
> > > > > > > Could you please also try it on your testbed, Alex?
> > > > > >
> > > > > > Situation got better with this modification, here are the results:
> > > > > >  - no patch:          3.0 Mpps, CPU cycles/packet=51.52
> > > > > >  - original patch:    2.1 Mpps, CPU cycles/packet=71.05
> > > > > >  - modified patch:    2.9 Mpps, CPU cycles/packet=52.79
> > > > > > Also, I found that the degradation is there only when I enable burst
> > > > > > stats.
> > > > >
> > > > >
> > > > > Great! So this patch will not hurt the normal datapath performance.
> > > > >
> > > > >
> > > > > > Could you please turn on the following config options and see
> > > > > > if you can reproduce this as well?
> > > > > > CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
> > > > > > CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=y
> > > > >
> > > > > Thanks, Alex. Some updates.
> > > > >
> > > > > A slight (about 1%) throughput degradation was detected after we
> > > > > enabled these two config options on the N1 SoC.
> > > > >
> > > > > If we look inside the perf stats results, with this patch both
> > > > > mlx5_rx_burst and mlx5_tx_burst consume fewer CPU cycles than the
> > > > > original code. However, __memcpy_generic takes more cycles. I think
> > > > > that might be the reason for the increase in CPU cycles per packet
> > > > > after applying this patch.
> > > > >
> > > > > Original code:
> > > > > 98.07%--pkt_burst_io_forward
> > > > >         |
> > > > >         |--44.53%--__memcpy_generic
> > > > >         |
> > > > >         |--35.85%--mlx5_rx_burst_mprq
> > > > >         |
> > > > >         |--15.94%--mlx5_tx_burst_none_empw
> > > > >         |          |
> > > > >         |          |--7.32%--mlx5_tx_handle_completion.isra.0
> > > > >         |          |
> > > > >         |           --0.50%--__memcpy_generic
> > > > >         |
> > > > >          --1.14%--memcpy@plt
> > > > >
> > > > > Use C11 with RELAXED ordering:
> > > > > 99.36%--pkt_burst_io_forward
> > > > >         |
> > > > >         |--47.40%--__memcpy_generic
> > > > >         |
> > > > >         |--34.62%--mlx5_rx_burst_mprq
> > > > >         |
> > > > >         |--15.55%--mlx5_tx_burst_none_empw
> > > > >         |          |
> > > > >         |           --7.08%--mlx5_tx_handle_completion.isra.0
> > > > >         |
> > > > >          --1.17%--memcpy@plt
> > > > >
> > > > > BTW, none of the atomic operations in this patch are the hotspot.
> > > >
> > > > Phil, unfortunately we are seeing a much worse degradation on our ARM
> > > > platform. I don't think the discrepancy in memcpy can explain this
> > > > behavior; your patch does not touch that area of the code. Let me
> > > > collect some perf stats on our side.
> > > Are you testing the patch as is or have you made the changes that
> > > were discussed in the thread?
> > >
> >
> > Yes, I made the changes you suggested. It really gets better with them.
> > Could you please respin the patch to make sure I got it right in my
> > environment?
> 
> Thanks, Alex.
> Please check the new version here.
> http://patchwork.dpdk.org/patch/76335/

This patch is definitely better, I do not see a degradation anymore, thank you.

Acked-by: Alexander Kozyrev <akozy...@nvidia.com>

> 
> >
> > > >
> > > > >
> > > > > >
> > > > > > > >
> > > > > > > > Can you replace just the above line with the following lines and
> > > > > > > > test it?
> > > > > > > >
> > > > > > > > __atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > > > > > > __atomic_thread_fence(__ATOMIC_ACQ_REL);
> > > > > > > >
> > > > > > > > This should make the generated code the same as before this patch.
> > > > > > > > Let me know if you would prefer us to re-spin the patch instead
> > > > > > > > (for testing).
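> > > > > > > >
> > > > > > > > If it helps, a tiny stand-alone test case (function names made up)
> > > > > > > > that can be compiled with, e.g., -O2 -S to compare the assembly
> > > > > > > > generated for the two forms:
> > > > > > > >
> > > > > > > > #include <stdint.h>
> > > > > > > >
> > > > > > > > /* Relaxed RMW followed by a standalone full fence, i.e. the
> > > > > > > >  * combination suggested above, intended to approximate the
> > > > > > > >  * ordering of the original full-barrier add. */
> > > > > > > > void ref_fenced(uint16_t *refcnt)
> > > > > > > > {
> > > > > > > > 	__atomic_add_fetch(refcnt, 1, __ATOMIC_RELAXED);
> > > > > > > > 	__atomic_thread_fence(__ATOMIC_ACQ_REL);
> > > > > > > > }
> > > > > > > >
> > > > > > > > /* Relaxed RMW only, i.e. the RELAXED-ordering variant discussed
> > > > > > > >  * earlier in the thread. */
> > > > > > > > void ref_relaxed(uint16_t *refcnt)
> > > > > > > > {
> > > > > > > > 	__atomic_add_fetch(refcnt, 1, __ATOMIC_RELAXED);
> > > > > > > > }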
> > > > > > > >
> > > > > > > > > > > +MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
> > > > > > > > > > > +    __ATOMIC_RELAXED) <= strd_n + 1);
> > > > > > > > > > >  buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
> > > > > > > > > > >  /*
> > > > > > > > > > >   * MLX5 device doesn't use iova but it is necessary in a
> > > > > > > > > > > diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> > > > > > > > > > > index 26621ff..0fc15f3 100644
> > > > > > > > > > > --- a/drivers/net/mlx5/mlx5_rxtx.h
> > > > > > > > > > > +++ b/drivers/net/mlx5/mlx5_rxtx.h
> > > > > <snip>
> > > > > > > >
