> -----Original Message-----
> From: Intel-wired-lan <[email protected]> On Behalf
> Of Mina Almasry
> Sent: Saturday, November 22, 2025 3:09 PM
> To: [email protected]; [email protected]; linux-
> [email protected]
> Cc: YiFei Zhu <[email protected]>; Alexei Starovoitov
> <[email protected]>; Daniel Borkmann <[email protected]>; David S.
> Miller <[email protected]>; Jakub Kicinski <[email protected]>; Jesper
> Dangaard Brouer <[email protected]>; John Fastabend
> <[email protected]>; Stanislav Fomichev <[email protected]>;
> Nguyen, Anthony L <[email protected]>; Kitszel, Przemyslaw
> <[email protected]>; Andrew Lunn <[email protected]>;
> Eric Dumazet <[email protected]>; Paolo Abeni <[email protected]>;
> Lobakin, Aleksander <[email protected]>; Richard Cochran
> <[email protected]>; [email protected]; Mina
> Almasry <[email protected]>
> Subject: [Intel-wired-lan] [PATCH net-next v1] idpf: export RX
> hardware timestamping information to XDP
> 
> From: YiFei Zhu <[email protected]>
> 
> The logic is similar to idpf_rx_hwtstamp, but the data is exported as
> a BPF kfunc instead of appended to an skb.
> 
> An idpf_queue_has(PTP, rxq) condition is added to check that the queue
> supports PTP, similar to idpf_rx_process_skb_fields.
> 
> Cc: [email protected]
> 
> Signed-off-by: YiFei Zhu <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>
> ---
>  drivers/net/ethernet/intel/idpf/xdp.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/intel/idpf/xdp.c
> index 21ce25b0567f..850389ca66b6 100644
> --- a/drivers/net/ethernet/intel/idpf/xdp.c
> +++ b/drivers/net/ethernet/intel/idpf/xdp.c
> @@ -2,6 +2,7 @@
>  /* Copyright (C) 2025 Intel Corporation */
> 
>  #include "idpf.h"
> +#include "idpf_ptp.h"
>  #include "idpf_virtchnl.h"
>  #include "xdp.h"
>  #include "xsk.h"
> @@ -369,6 +370,31 @@ int idpf_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
>                                      idpf_xdp_tx_finalize);
>  }
> 
> +static int idpf_xdpmo_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
> +{
> +     const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
> +     const struct libeth_xdp_buff *xdp = (typeof(xdp))ctx;
> +     const struct idpf_rx_queue *rxq;
> +     u64 cached_time, ts_ns;
> +     u32 ts_high;
> +
> +     rx_desc = xdp->desc;
> +     rxq = libeth_xdp_buff_to_rq(xdp, typeof(*rxq), xdp_rxq);
> +
> +     if (!idpf_queue_has(PTP, rxq))
> +             return -ENODATA;
> +     if (!(rx_desc->ts_low & VIRTCHNL2_RX_FLEX_TSTAMP_VALID))
> +             return -ENODATA;
The RX flex descriptor fields are little-endian. You already convert
ts_high with le32_to_cpu(), but ts_low is tested directly against the
mask. On big-endian systems that can misdetect the valid bit and
spuriously return -ENODATA. Please convert ts_low to host order before
the bit test; see the existing idpf/ice patterns where descriptor words
are leXX_to_cpu()-converted before FIELD_GET() or bit checks.

Also, per the XDP RX metadata kfunc documentation, -ENODATA must mean
the per-packet metadata is genuinely absent, so the test needs to be
endianness-correct to preserve that semantic.
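
If ts_low is indeed a multi-byte little-endian field, the fix would look
roughly like the sketch below (the __le16 width is my assumption from
the surrounding code, not the actual descriptor layout; if ts_low turns
out to be a single u8, the existing test is already endian-safe and can
stay as is):

	/* Sketch only: assumes ts_low is a __le16 descriptor word.
	 * Convert to host order before testing the valid bit, matching
	 * the le32_to_cpu() conversion already done for ts_high.
	 */
	if (!(le16_to_cpu(rx_desc->ts_low) & VIRTCHNL2_RX_FLEX_TSTAMP_VALID))
		return -ENODATA;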

> +
> +     cached_time = READ_ONCE(rxq->cached_phc_time);
> +
> +     ts_high = le32_to_cpu(rx_desc->ts_high);
> +     ts_ns = idpf_ptp_tstamp_extend_32b_to_64b(cached_time, ts_high);
> +
> +     *timestamp = ts_ns;
> +     return 0;
> +}
> +
>  static int idpf_xdpmo_rx_hash(const struct xdp_md *ctx, u32 *hash,
>                             enum xdp_rss_hash_type *rss_type)
>  {
> @@ -392,6 +418,7 @@ static int idpf_xdpmo_rx_hash(const struct xdp_md *ctx, u32 *hash,
>  }
> 
>  static const struct xdp_metadata_ops idpf_xdpmo = {
> +     .xmo_rx_timestamp       = idpf_xdpmo_rx_timestamp,
>       .xmo_rx_hash            = idpf_xdpmo_rx_hash,
>  };
> 
> 
> base-commit: e05021a829b834fecbd42b173e55382416571b2c
> --
> 2.52.0.rc2.455.g230fcf2819-goog
