On Mon, Nov 03, 2025 at 12:40:14PM -0500, Nabil S. Alramli wrote:
> On 11/3/25 11:38, Maciej Fijalkowski wrote:
> > On Thu, Oct 09, 2025 at 03:28:30PM -0400, Nabil S. Alramli wrote:
> >> This commit adds support for `ndo_xdp_xmit` in skb mode to the ixgbe
> >> ethernet driver by allowing the call to transmit the packets using
> >> `dev_direct_xmit`.
> >>
> >> Previously, the driver did not support the operation in skb mode. The
> >> handler `ixgbe_xdp_xmit` had the following condition:
> >>
> >> ```
> >>    ring = adapter->xdp_prog ? ixgbe_determine_xdp_ring(adapter) : NULL;
> >>    if (unlikely(!ring))
> >>            return -ENXIO;
> >> ```
> >>
> >> That only works in native mode. In skb mode, `adapter->xdp_prog == NULL`, so
> >> the call returns an error, which prevents sending packets using
> >> `bpf_prog_test_run_opts` with the `BPF_F_TEST_XDP_LIVE_FRAMES` flag.
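> >>
> >> For reference, the userspace side of such a call (an illustrative sketch
> >> only; `prog_fd`, `pkt`, `pkt_len` and `ifindex` are placeholders, not part
> >> of this patch) looks roughly like this:
> >>
> >> ```
> >> #include <linux/bpf.h>
> >> #include <bpf/bpf.h>
> >>
> >> /* Run prog_fd on one prebuilt Ethernet frame and let the kernel actually
> >>  * transmit whatever the program XDP_TXes or redirects (live frames mode).
> >>  */
> >> static int send_one_frame(int prog_fd, void *pkt, __u32 pkt_len, int ifindex)
> >> {
> >>         struct xdp_md ctx_in = {
> >>                 .data_end = pkt_len,
> >>                 .ingress_ifindex = ifindex,
> >>         };
> >>         LIBBPF_OPTS(bpf_test_run_opts, opts,
> >>                     .data_in = pkt,
> >>                     .data_size_in = pkt_len,
> >>                     .ctx_in = &ctx_in,
> >>                     .ctx_size_in = sizeof(ctx_in),
> >>                     .flags = BPF_F_TEST_XDP_LIVE_FRAMES,
> >>                     .repeat = 1);
> >>
> >>         return bpf_prog_test_run_opts(prog_fd, &opts);
> >> }
> >> ```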
> > 
> > Hi Nabil,
> > 
> > What stops you from loading a dummy XDP program onto the interface? This
> > has been the approach we follow when we want to use anything that utilizes
> > XDP resources (XDP Tx queues).
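> >
> > A dummy program of that kind can be as small as the sketch below (the file
> > name is hypothetical and nothing in it is ixgbe specific; its only job is
> > to be attached in native mode so the driver sets up its XDP Tx rings):
> >
> > ```
> > /* xdp_dummy.bpf.c - pass every packet unchanged; attaching it natively,
> >  * e.g. with "ip link set dev <dev> xdpdrv obj xdp_dummy.bpf.o sec xdp",
> >  * is enough to make the driver allocate its XDP resources.
> >  */
> > #include <linux/bpf.h>
> > #include <bpf/bpf_helpers.h>
> >
> > SEC("xdp")
> > int xdp_dummy(struct xdp_md *ctx)
> > {
> >         return XDP_PASS;
> > }
> >
> > char _license[] SEC("license") = "GPL";
> > ```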
> > 
> 
> Hi Maciej,
> 
> Thank you for your response. In one use case we have multiple XDP programs
> already loaded on an interface in SKB mode using the dispatcher, and we want
> to use bpf_prog_test_run_opts to egress packets from another XDP program. We
> want to avoid having to unload the dispatcher or be forced to use it in native
> mode. Without this patch, that does not seem possible currently, correct?

Why does it have to be bpf_prog_test_run_opts?
You're trying to use, from a different layer, an interface that was designed
for native XDP. Generic XDP has support for redirect and tx.
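
For illustration only (a sketch, not part of the patch under review): a
program attached in generic mode, e.g. with
"ip link set dev <dev> xdpgeneric obj xdp_bounce.bpf.o sec xdp", can bounce
frames with XDP_TX (or redirect them with bpf_redirect()), and the generic
XDP path transmits them without touching the driver's ndo_xdp_xmit:

```
/* xdp_bounce.bpf.c - swap the Ethernet addresses and send the frame back
 * out of the device it arrived on. Works from the generic (skb mode) hook.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_bounce(struct xdp_md *ctx)
{
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;
        unsigned char tmp[ETH_ALEN];

        if (data + sizeof(*eth) > data_end)
                return XDP_PASS;

        __builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
        __builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
        __builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);

        return XDP_TX;
}

char _license[] SEC("license") = "GPL";
```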

> 
> >>
> >> Signed-off-by: Nabil S. Alramli <[email protected]>
> >> ---
> >>  drivers/net/ethernet/intel/ixgbe/ixgbe.h      |  8 ++++
> >>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 43 +++++++++++++++++--
> >>  2 files changed, 47 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> >> index e6a380d4929b..26c378853755 100644
> >> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> >> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> >> @@ -846,6 +846,14 @@ struct ixgbe_ring *ixgbe_determine_xdp_ring(struct ixgbe_adapter *adapter)
> >>    return adapter->xdp_ring[index];
> >>  }
> >>  
> >> +static inline
> >> +struct ixgbe_ring *ixgbe_determine_tx_ring(struct ixgbe_adapter *adapter)
> >> +{
> >> +  int index = ixgbe_determine_xdp_q_idx(smp_processor_id());
> >> +
> >> +  return adapter->tx_ring[index];
> >> +}
> >> +
> >>  static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter)
> >>  {
> >>    switch (adapter->hw.mac.type) {
> >> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> index 467f81239e12..fed70cbdb1b2 100644
> >> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> @@ -10748,7 +10748,8 @@ static int ixgbe_xdp_xmit(struct net_device *dev, int n,
> >>    /* During program transitions its possible adapter->xdp_prog is assigned
> >>     * but ring has not been configured yet. In this case simply abort xmit.
> >>     */
> >> -  ring = adapter->xdp_prog ? ixgbe_determine_xdp_ring(adapter) : NULL;
> >> +  ring = adapter->xdp_prog ? ixgbe_determine_xdp_ring(adapter) :
> >> +          ixgbe_determine_tx_ring(adapter);
> >>    if (unlikely(!ring))
> >>            return -ENXIO;
> >>  
> >> @@ -10762,9 +10763,43 @@ static int ixgbe_xdp_xmit(struct net_device *dev, int n,
> >>            struct xdp_frame *xdpf = frames[i];
> >>            int err;
> >>  
> >> -          err = ixgbe_xmit_xdp_ring(ring, xdpf);
> >> -          if (err != IXGBE_XDP_TX)
> >> -                  break;
> >> +          if (adapter->xdp_prog) {
> >> +                  err = ixgbe_xmit_xdp_ring(ring, xdpf);
> >> +                  if (err != IXGBE_XDP_TX)
> >> +                          break;
> >> +          } else {
> >> +                  struct xdp_buff xdp = {0};
> >> +                  unsigned int metasize = 0;
> >> +                  unsigned int size = 0;
> >> +                  unsigned int truesize = 0;
> >> +                  struct sk_buff *skb = NULL;
> >> +
> >> +                  xdp_convert_frame_to_buff(xdpf, &xdp);
> >> +                  size = xdp.data_end - xdp.data;
> >> +                  metasize = xdp.data - xdp.data_meta;
> >> +                  truesize = SKB_DATA_ALIGN(xdp.data_end - xdp.data_hard_start) +
> >> +                             SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> >> +
> >> +                  skb = napi_alloc_skb(&ring->q_vector->napi, truesize);
> >> +                  if (likely(skb)) {
> >> +                          skb_reserve(skb, xdp.data - xdp.data_hard_start);
> >> +                          skb_put_data(skb, xdp.data, size);
> >> +                          build_skb_around(skb, skb->data, truesize);
> >> +                          if (metasize)
> >> +                                  skb_metadata_set(skb, metasize);
> >> +                          skb->dev = dev;
> >> +                          skb->queue_mapping = ring->queue_index;
> >> +
> >> +                          err = dev_direct_xmit(skb, ring->queue_index);
> >> +                          if (!dev_xmit_complete(err))
> >> +                                  break;
> >> +                  } else {
> >> +                          break;
> >> +                  }
> >> +
> >> +                  xdp_return_frame_rx_napi(xdpf);
> >> +          }
> >> +
> >>            nxmit++;
> >>    }
> >>  
> >> -- 
> >> 2.43.0
> >>
> >>
> 
