From: "Simon Horman" <[email protected]>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
Full review at: https://sashiko.dev
---
bridge: mcast: Fix a false positive lockdep splat

This commit defers the transmission of multicast queries to a high priority
workqueue to avoid a false positive lockdep splat caused by nested bridge
multicast lock acquisitions. It queues skbs to a per-context query_queue
and flushes them asynchronously.

Closes: https://lore.kernel.org/netdev/[email protected]/

> diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
> index 881d866d687a..252c46977ed5 100644
> --- a/net/bridge/br_multicast.c
> +++ b/net/bridge/br_multicast.c

[ ... ]

> @@ -1804,9 +1826,8 @@ static void __br_multicast_send_query(struct net_bridge_mcast *brmctx,
>               skb->dev = pmctx->port->dev;
>               br_multicast_count(brmctx->br, pmctx->port, skb, igmp_type,
>                                  BR_MCAST_DIR_TX);
> -             NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT,
> -                     dev_net(pmctx->port->dev), NULL, skb, NULL, skb->dev,
> -                     br_dev_queue_push_xmit);
> +             skb_queue_tail(&pmctx->query_queue, skb);
> +             queue_work(system_highpri_wq, &pmctx->query_queue_work);

Can this code cause an out-of-memory condition if flooded with IGMP/MLD
Leave packets?

In the IGMP data path, br_multicast_leave_group() is triggered for each
received Leave packet. If the bridge is the active querier, it directly calls
__br_multicast_send_query() for each packet, which allocates a new query skb
and queues it here using skb_queue_tail().

Since the actual transmission (and its networking overhead) now runs in the
work item, could softirq RX processing enqueue query skbs faster than the
workqueue can drain them?

skb_queue_tail() imposes no limit on the queue length, so this looks like it
could result in unbounded accumulation of skbs in the query queue, bypassing
the standard qdisc backpressure mechanisms and eventually exhausting system
memory.

>  
>               if (over_lmqt && with_srcs && sflag) {
>                       over_lmqt = false;
> @@ -1816,7 +1837,8 @@ static void __br_multicast_send_query(struct net_bridge_mcast *brmctx,
>               br_multicast_select_own_querier(brmctx, group, skb);
>               br_multicast_count(brmctx->br, NULL, skb, igmp_type,
>                                  BR_MCAST_DIR_RX);
> -             netif_rx(skb);
> +             skb_queue_tail(&brmctx->query_queue, skb);
> +             queue_work(system_highpri_wq, &brmctx->query_queue_work);
>       }
>  }
