Jakub Kicinski wrote:
> On Tue, 10 Feb 2026 22:15:25 -0500 Willem de Bruijn wrote:
> > > It's a bit of an opportunistic optimization.
> > > 
> > > I initially intended it for the "long sequence of packets"
> > > test. But I failed to get AF_PACKET+FQ to cooperate sufficiently
> > > to queue all of the packets in the same bucket. Otherwise FQ "sorts"
> > > the packets, and breaks what the test is trying to do :(  
> > 
> > I wonder what's going wrong here.
> > 
> > fq_classify should pick the queue based on skb->sk also for packet
> > sockets.
> > 
> > And flow_queue_add should add the packets to the tail of the linear
> > list if the delivery time is identical to that of the tail.
> 
> It works but requires that we either modify the qdisc config to set
> an orphan_mask of 1, or somehow set the skb->hash on the AF_PACKET skbs.

Oh right, fq_classify does not use skb->sk for packet sockets, because
their sk_state stays at the default TCP_CLOSE.

And this is by design, and clearly documented: packet sockets cannot be
assumed to carry a single flow:

        } else if (sk->sk_state == TCP_CLOSE) {
                unsigned long hash = skb_get_hash(skb) & q->orphan_mask;
                /*
                 * Sockets in TCP_CLOSE are non connected.
                 * Typical use case is UDP sockets, they can send packets
                 * with sendto() to many different destinations.
                 * We probably could use a generic bit advertising
                 * non connected sockets, instead of sk_state == TCP_CLOSE,
                 * if we care enough.
                 */
                sk = (struct sock *)((hash << 1) | 1UL);
        }

An orphan_mask of 1 sounds like an effective workaround.

I don't see a way to force a specific skb_get_hash result across
flows, given hashrnd.

> The test sends out multiple flows (src ports) so if we let fq compute
> the real hash we end up in different buckets.
