February 28, 2026 at 11:01, "Jay Vosburgh" <[email protected]> wrote:


> 
> Jiayuan Chen <[email protected]> wrote:
> 
> > 
> > From: Jiayuan Chen <[email protected]>
> > 
> > bond_rr_gen_slave_id() dereferences bond->rr_tx_counter without a NULL
> > check. rr_tx_counter is a per-CPU counter only allocated in bond_open()
> > when the bond mode is round-robin. If the bond device was never brought
> > up, rr_tx_counter remains NULL, causing a null-ptr-deref.
> > 
> > The XDP redirect path can reach this code even when the bond is not up:
> > bpf_master_redirect_enabled_key is a global static key, so when any bond
> > device has native XDP attached, the XDP_TX -> xdp_master_redirect()
> > interception is enabled for all bond slaves system-wide. This allows the
> > path xdp_master_redirect() -> bond_xdp_get_xmit_slave() ->
> > bond_xdp_xmit_roundrobin_slave_get() -> bond_rr_gen_slave_id() to be
> > reached on a bond that was never opened.
> > 
> > The normal TX path (bond_xmit_roundrobin) is not affected because TX
> > requires the bond to be UP, which guarantees rr_tx_counter is allocated.
> > However, bond_xmit_get_slave() (ndo_get_xmit_slave) has the same code
> > pattern via bond_xmit_roundrobin_slave_get() and could theoretically
> > hit the same issue.
> > 
>  As a practical matter, though, I don't think the
> ndo_get_xmit_slave path can actually hit the issue, as that looks to
> only be called from Infiniband, which is only supported in bonding for
> active-backup mode.
> 
> > 
> > Fix this by allocating rr_tx_counter unconditionally in bond_init()
> > (ndo_init), which is called by register_netdevice() and covers both
> > device creation paths (bond_create() and bond_newlink()). This also
> > handles the case where bond mode is changed to round-robin after device
> > creation. The conditional allocation in bond_open() is removed. Since
> > bond_destructor() already unconditionally calls
> > free_percpu(bond->rr_tx_counter), the lifecycle is clean: allocate at
> > ndo_init, free at destructor.
> > 
> > Fixes: 879af96ffd72 ("net, core: Add support for XDP redirection to slave device")
> > Reported-by: [email protected]
> > Closes: https://lore.kernel.org/all/[email protected]/T/
> > Signed-off-by: Jiayuan Chen <[email protected]>
> > 
>  My only concern is that this will waste a percpu u32 per bond
> device for the majority of bonding use cases (which use modes other than
> balance-rr), which could be a few hundred bytes on a large machine.
> 
>  Does everything work reliably if the rr_tx_counter allocation
> happens conditionally on mode == BOND_MODE_ROUNDROBIN in bond_setup, as
> well as in bond_option_mode_set?
> 

Hi Jay,

Thanks for the review.

bond_setup() is not suitable here: it is a void callback with no error return
path, so an alloc_percpu() failure cannot be propagated to the caller.

An alternative would be to allocate conditionally in bond_init() (since the
default mode is round-robin) and manage allocation/deallocation in
bond_option_mode_set() when the mode changes.

This is a trade-off between the added complexity of conditional alloc/free
across multiple code paths and saving a per-CPU u32 for non-round-robin bonds.

As for the per-CPU u32 overhead, it is only 4 extra bytes per CPU per bond
device (for example, 512 bytes on a 128-CPU machine), and machines with that
many CPUs tend to have plenty of memory to match.

I don't have a strong preference either way.

Thanks

>  -J
> 
> > 
> > ---
> >  drivers/net/bonding/bond_main.c | 12 ++++++------
> >  1 file changed, 6 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> > index 78cff904cdc3..9f63f67d8418 100644
> > --- a/drivers/net/bonding/bond_main.c
> > +++ b/drivers/net/bonding/bond_main.c
> > @@ -4279,12 +4279,6 @@ static int bond_open(struct net_device *bond_dev)
> >  struct list_head *iter;
> >  struct slave *slave;
> >  
> > - if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
> > - bond->rr_tx_counter = alloc_percpu(u32);
> > - if (!bond->rr_tx_counter)
> > - return -ENOMEM;
> > - }
> > -
> >  /* reset slave->backup and slave->inactive */
> >  if (bond_has_slaves(bond)) {
> >  bond_for_each_slave(bond, slave, iter) {
> > @@ -6411,6 +6405,12 @@ static int bond_init(struct net_device *bond_dev)
> >  if (!bond->wq)
> >  return -ENOMEM;
> >  
> > + bond->rr_tx_counter = alloc_percpu(u32);
> > + if (!bond->rr_tx_counter) {
> > + destroy_workqueue(bond->wq);
> > + return -ENOMEM;
> > + }
> > +
> >  bond->notifier_ctx = false;
> >  
> >  spin_lock_init(&bond->stats_lock);
> > -- 
> > 2.43.0
> > 
> ---
>  -Jay Vosburgh, [email protected]
>
