Re: vmm guest crash in vio

2024-01-09 Thread Alexander Bluhm
On Tue, Jan 09, 2024 at 07:49:16PM +0100, Stefan Fritsch wrote:
> @bluhm: Does the attached patch fix the panic? 

Yes.  My test does not crash the patched guest anymore.

bluhm

> The fdt part is completely untested, testers welcome.
> 
> diff --git a/sys/dev/fdt/virtio_mmio.c b/sys/dev/fdt/virtio_mmio.c
> index 4f1e9eba9b7..27fb17d6102 100644
> --- a/sys/dev/fdt/virtio_mmio.c
> +++ b/sys/dev/fdt/virtio_mmio.c
> @@ -200,11 +200,19 @@ virtio_mmio_set_status(struct virtio_softc *vsc, int status)
>   struct virtio_mmio_softc *sc = (struct virtio_mmio_softc *)vsc;
>   int old = 0;
>  
> - if (status != 0)
> + if (status == 0) {
> + bus_space_write_4(sc->sc_iot, sc->sc_ioh, VIRTIO_MMIO_STATUS,
> + 0);
> + while (bus_space_read_4(sc->sc_iot, sc->sc_ioh,
> + VIRTIO_MMIO_STATUS) != 0) {
> + CPU_BUSY_CYCLE();
> + }
> + } else  {
>   old = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
> -VIRTIO_MMIO_STATUS);
> - bus_space_write_4(sc->sc_iot, sc->sc_ioh, VIRTIO_MMIO_STATUS,
> -   status|old);
> + VIRTIO_MMIO_STATUS);
> + bus_space_write_4(sc->sc_iot, sc->sc_ioh, VIRTIO_MMIO_STATUS,
> + status|old);
> + }
>  }
>  
>  int
> diff --git a/sys/dev/pci/virtio_pci.c b/sys/dev/pci/virtio_pci.c
> index 398dc960f6d..ef95c834823 100644
> --- a/sys/dev/pci/virtio_pci.c
> +++ b/sys/dev/pci/virtio_pci.c
> @@ -282,15 +282,29 @@ virtio_pci_set_status(struct virtio_softc *vsc, int status)
>   int old = 0;
>  
>   if (sc->sc_sc.sc_version_1) {
> - if (status != 0)
> + if (status == 0) {
> + CWRITE(sc, device_status, 0);
> + while (CREAD(sc, device_status) != 0) {
> + CPU_BUSY_CYCLE();
> + }
> + } else {
>   old = CREAD(sc, device_status);
> - CWRITE(sc, device_status, status|old);
> + CWRITE(sc, device_status, status|old);
> + }
>   } else {
> - if (status != 0)
> + if (status == 0) {
> + bus_space_write_1(sc->sc_iot, sc->sc_ioh,
> + VIRTIO_CONFIG_DEVICE_STATUS, status|old);
> + while (bus_space_read_1(sc->sc_iot, sc->sc_ioh,
> + VIRTIO_CONFIG_DEVICE_STATUS) != 0) {
> + CPU_BUSY_CYCLE();
> + }
> + } else {
>   old = bus_space_read_1(sc->sc_iot, sc->sc_ioh,
>   VIRTIO_CONFIG_DEVICE_STATUS);
> - bus_space_write_1(sc->sc_iot, sc->sc_ioh,
> - VIRTIO_CONFIG_DEVICE_STATUS, status|old);
> + bus_space_write_1(sc->sc_iot, sc->sc_ioh,
> + VIRTIO_CONFIG_DEVICE_STATUS, status|old);
> + }
>   }
>  }
>  



Re: vmm guest crash in vio

2024-01-09 Thread Dave Voutila


Mark Kettenis  writes:

>> From: Dave Voutila 
>> Date: Tue, 09 Jan 2024 09:19:56 -0500
>>
>> Stefan Fritsch  writes:
>>
>> > On 08.01.24 22:24, Alexander Bluhm wrote:
>> >> Hi,
>> >> When running a guest in vmm and doing ifconfig operations on vio
>> >> interface, I can crash the guest.
>> >> I run these loops in the guest:
>> >> while doas ifconfig vio1 inet 10.188.234.74/24; do :; done
>> >> while doas ifconfig vio1 -inet; do :; done
>> >> while doas ifconfig vio1 down; do :; done
>> >> And from host I ping the guest:
>> >> ping -f 10.188.234.74
>> >
>> > I suspect there is a race condition in vmd. The vio(4) kernel driver
>> > resets the device and then frees all the mbufs from the tx and rx
>> > rings. If vmd continues doing dma for a bit after the reset, this
>> > could result in corruption. From this code in vmd's vionet.c
>> >
>> > case VIODEV_MSG_IO_WRITE:
>> > /* Write IO: no reply needed */
>> > if (handle_io_write(, dev) == 1)
>> > virtio_assert_pic_irq(dev, 0);
>> > break;
>> >
>> > it looks like the main vmd process will just send a pio write message
>> > to the vionet process but does not wait for the vionet process to
>> > actually execute the device reset. The pio write instruction in the
>> > vcpu must complete after the device reset is complete.
>>
>> Are you saying we need to wait for the emulation of the OUT instruction
>> that the vcpu is executing? I don't believe we should be blocking the
>> vcpu here as that's not how port io works with real hardware. It makes
>> no sense to block on an OUT until the device finishes emulation.
>
> Well, I/O address space is highly synchronous.  See 16.6 "Ordering
> I/O" in the Intel SDM.  There it clearly states that execution of the
> next instruction after an OUT instruction is delayed until the store
> completes.  Now that isn't necessarily the same as completing all
> device emulation for the device.  But it does mean the store has to
> reach the device register before the next instruction gets executed.
>

Interesting. I think we are covered in this case: even if the very next
instruction is an IN to read from the same register, the access is
serialized in the virtio device process in vmd. While the vcpu may
continue forward immediately after the OUT event is relayed to the
device, an IN *does* block in the current multi-process design and
waits for the response of the register value from the device process.

Since the virtio network device is single threaded (currently), ordering
should be preserved and we should always be capable of providing the
value written via OUT as a response to the IN. Assuming no external
event in the device mutates the register value in between.

I'm not ruling out a bug in the device reset code by any means, but I'm
not convinced that vmd is violating any guarantees of the Intel
architecture with the current design.

> Yes, this is slow.  Avoid I/O address space if you can; use
> Memory-Mapped I/O instead.

Well in hypervisor-land that replaces one problem with another :)

-dv



Re: vmm guest crash in vio

2024-01-09 Thread Mark Kettenis
> From: Dave Voutila 
> Date: Tue, 09 Jan 2024 09:19:56 -0500
> 
> Stefan Fritsch  writes:
> 
> > On 08.01.24 22:24, Alexander Bluhm wrote:
> >> Hi,
> >> When running a guest in vmm and doing ifconfig operations on vio
> >> interface, I can crash the guest.
> >> I run these loops in the guest:
> >> while doas ifconfig vio1 inet 10.188.234.74/24; do :; done
> >> while doas ifconfig vio1 -inet; do :; done
> >> while doas ifconfig vio1 down; do :; done
> >> And from host I ping the guest:
> >> ping -f 10.188.234.74
> >
> > I suspect there is a race condition in vmd. The vio(4) kernel driver
> > resets the device and then frees all the mbufs from the tx and rx
> > rings. If vmd continues doing dma for a bit after the reset, this
> > could result in corruption. From this code in vmd's vionet.c
> >
> > case VIODEV_MSG_IO_WRITE:
> > /* Write IO: no reply needed */
> > if (handle_io_write(, dev) == 1)
> > virtio_assert_pic_irq(dev, 0);
> > break;
> >
> > it looks like the main vmd process will just send a pio write message
> > to the vionet process but does not wait for the vionet process to
> > actually execute the device reset. The pio write instruction in the
> > vcpu must complete after the device reset is complete.
> 
> Are you saying we need to wait for the emulation of the OUT instruction
> that the vcpu is executing? I don't believe we should be blocking the
> vcpu here as that's not how port io works with real hardware. It makes
> no sense to block on an OUT until the device finishes emulation.

Well, I/O address space is highly synchronous.  See 16.6 "Ordering
I/O" in the Intel SDM.  There it clearly states that execution of the
next instruction after an OUT instruction is delayed until the store
completes.  Now that isn't necessarily the same as completing all
device emulation for the device.  But it does mean the store has to
reach the device register before the next instruction gets executed.

Yes, this is slow.  Avoid I/O address space if you can; use
Memory-Mapped I/O instead.

> I *do* think there could be something wrong in the device status
> register emulation, but blocking the vcpu on an OUT isn't the way to
> solve this. In fact, that's what previously happened before I split
> device emulation out into subprocesses...so if there's a bug in the
> emulation logic, it was hiding it.
> 
> >
> > I could not reproduce this issue with kvm/qemu.
> >
> 
> Thanks!
> 
> >
> >> Then I see various kind of mbuf corruption:
> >> kernel: protection fault trap, code=0
> >> Stopped at  pool_do_put+0xc9:   movq0x8(%rcx),%rcx
> >> ddb> trace
> >> pool_do_put(82519e30,fd807db89000) at pool_do_put+0xc9
> >> pool_put(82519e30,fd807db89000) at pool_put+0x53
> >> m_extfree(fd807d330300) at m_extfree+0xa5
> >> m_free(fd807d330300) at m_free+0x97
> >> soreceive(fd806f33ac88,0,80002a3e97f8,0,0,80002a3e9724,76299c7990301bf1) at soreceive+0xa3e
> >> soo_read(fd807ed4a168,80002a3e97f8,0) at soo_read+0x4a
> >> dofilereadv(80002a399548,7,80002a3e97f8,0,80002a3e98c0) at dofilereadv+0x143
> >> sys_read(80002a399548,80002a3e9870,80002a3e98c0) at sys_read+0x55
> >> syscall(80002a3e9930) at syscall+0x33a
> >> Xsyscall() at Xsyscall+0x128
> >> end of kernel
> >> end trace frame: 0x7469f8836930, count: -10
> >> pool_do_put(8259a500,fd807e7fa800) at pool_do_put+0xc9
> >> pool_put(8259a500,fd807e7fa800) at pool_put+0x53
> >> m_extfree(fd807f838a00) at m_extfree+0xa5
> >> m_free(fd807f838a00) at m_free+0x97
> >> m_freem(fd807f838a00) at m_freem+0x38
> >> vio_txeof(80030118) at vio_txeof+0x11d
> >> vio_tx_intr(80030118) at vio_tx_intr+0x31
> >> virtio_check_vqs(80024800) at virtio_check_vqs+0x102
> >> virtio_pci_legacy_intr(80024800) at virtio_pci_legacy_intr+0x65
> >> intr_handler(80002a52dae0,80081000) at intr_handler+0x3c
> >> Xintr_legacy5_untramp() at Xintr_legacy5_untramp+0x1a3
> >> Xspllower() at Xspllower+0x1d
> >> vio_ioctl(800822a8,80206910,80002a52dd00) at vio_ioctl+0x16a
> >> ifioctl(fd807c0ba7a0,80206910,80002a52dd00,80002a41c810) at ifioctl+0x721
> >> sys_ioctl(80002a41c810,80002a52de00,80002a52de50) at sys_ioctl+0x2ab
> >> syscall(80002a52dec0) at syscall+0x33a
> >> Xsyscall() at Xsyscall+0x128
> >> end of kernel
> >> end trace frame: 0x7b3d36d55eb0, count: -17
> >> panic: pool_do_get: mcl2k free list modified: page 0xfd80068bd000; item addr 0xfd80068bf800; offset 0x0=0xa != 0x83dcdb591c6b8bf
> >> Stopped at  db_enter+0x14:  popq%rbp
> >>  TIDPIDUID PRFLAGS PFLAGS  CPU  COMMAND
> >> *143851  19121  0 0x3  00  ifconfig
> >> db_enter() at db_enter+0x14
> >> 

Re: vmm guest crash in vio

2024-01-09 Thread Stefan Fritsch
On Tue, 9 Jan 2024, Dave Voutila wrote:

> 
> Stefan Fritsch  writes:
> 
> > On 08.01.24 22:24, Alexander Bluhm wrote:
> >> Hi,
> >> When running a guest in vmm and doing ifconfig operations on vio
> >> interface, I can crash the guest.
> >> I run these loops in the guest:
> >> while doas ifconfig vio1 inet 10.188.234.74/24; do :; done
> >> while doas ifconfig vio1 -inet; do :; done
> >> while doas ifconfig vio1 down; do :; done
> >> And from host I ping the guest:
> >> ping -f 10.188.234.74
> >
> > I suspect there is a race condition in vmd. The vio(4) kernel driver
> > resets the device and then frees all the mbufs from the tx and rx
> > rings. If vmd continues doing dma for a bit after the reset, this
> > could result in corruption. From this code in vmd's vionet.c
> >
> > case VIODEV_MSG_IO_WRITE:
> > /* Write IO: no reply needed */
> > if (handle_io_write(, dev) == 1)
> > virtio_assert_pic_irq(dev, 0);
> > break;
> >
> > it looks like the main vmd process will just send a pio write message
> > to the vionet process but does not wait for the vionet process to
> > actually execute the device reset. The pio write instruction in the
> > vcpu must complete after the device reset is complete.
> 
> Are you saying we need to wait for the emulation of the OUT instruction
> that the vcpu is executing? I don't believe we should be blocking the
> vcpu here as that's not how port io works with real hardware. It makes
> no sense to block on an OUT until the device finishes emulation.
> 
> I *do* think there could be something wrong in the device status
> register emulation, but blocking the vcpu on an OUT isn't the way to
> solve this. In fact, that's what previously happened before I split
> device emulation out into subprocesses...so if there's a bug in the
> emulation logic, it was hiding it.

I am pretty sure that this is what qemu is doing with the OUT instruction. 
This is the safe thing to do, because virtio 0.9 to 1.1 do not specify 
exactly when the reset is complete. However, virtio 1.2 states:

  The driver SHOULD consider a driver-initiated reset complete when it 
  reads device status as 0.

Linux reads the value back once after writing 0. 

So, the virtio kernel driver should read the value back, too. What vmd 
should do is debatable. Blocking the OUT instruction for the device reset 
would be more robust, but that's not a strong opinion.

@bluhm: Does the attached patch fix the panic? 

The fdt part is completely untested, testers welcome.

diff --git a/sys/dev/fdt/virtio_mmio.c b/sys/dev/fdt/virtio_mmio.c
index 4f1e9eba9b7..27fb17d6102 100644
--- a/sys/dev/fdt/virtio_mmio.c
+++ b/sys/dev/fdt/virtio_mmio.c
@@ -200,11 +200,19 @@ virtio_mmio_set_status(struct virtio_softc *vsc, int status)
struct virtio_mmio_softc *sc = (struct virtio_mmio_softc *)vsc;
int old = 0;
 
-   if (status != 0)
+   if (status == 0) {
+   bus_space_write_4(sc->sc_iot, sc->sc_ioh, VIRTIO_MMIO_STATUS,
+   0);
+   while (bus_space_read_4(sc->sc_iot, sc->sc_ioh,
+   VIRTIO_MMIO_STATUS) != 0) {
+   CPU_BUSY_CYCLE();
+   }
+   } else  {
old = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
-  VIRTIO_MMIO_STATUS);
-   bus_space_write_4(sc->sc_iot, sc->sc_ioh, VIRTIO_MMIO_STATUS,
- status|old);
+   VIRTIO_MMIO_STATUS);
+   bus_space_write_4(sc->sc_iot, sc->sc_ioh, VIRTIO_MMIO_STATUS,
+   status|old);
+   }
 }
 
 int
diff --git a/sys/dev/pci/virtio_pci.c b/sys/dev/pci/virtio_pci.c
index 398dc960f6d..ef95c834823 100644
--- a/sys/dev/pci/virtio_pci.c
+++ b/sys/dev/pci/virtio_pci.c
@@ -282,15 +282,29 @@ virtio_pci_set_status(struct virtio_softc *vsc, int status)
int old = 0;
 
if (sc->sc_sc.sc_version_1) {
-   if (status != 0)
+   if (status == 0) {
+   CWRITE(sc, device_status, 0);
+   while (CREAD(sc, device_status) != 0) {
+   CPU_BUSY_CYCLE();
+   }
+   } else {
old = CREAD(sc, device_status);
-   CWRITE(sc, device_status, status|old);
+   CWRITE(sc, device_status, status|old);
+   }
} else {
-   if (status != 0)
+   if (status == 0) {
+   bus_space_write_1(sc->sc_iot, sc->sc_ioh,
+   VIRTIO_CONFIG_DEVICE_STATUS, status|old);
+   while (bus_space_read_1(sc->sc_iot, sc->sc_ioh,
+   VIRTIO_CONFIG_DEVICE_STATUS) != 0) {
+   CPU_BUSY_CYCLE();
+   }
+   } else {
old = 

Re: vmm guest crash in vio

2024-01-09 Thread Dave Voutila


Stefan Fritsch  writes:

> On 08.01.24 22:24, Alexander Bluhm wrote:
>> Hi,
>> When running a guest in vmm and doing ifconfig operations on vio
>> interface, I can crash the guest.
>> I run these loops in the guest:
>> while doas ifconfig vio1 inet 10.188.234.74/24; do :; done
>> while doas ifconfig vio1 -inet; do :; done
>> while doas ifconfig vio1 down; do :; done
>> And from host I ping the guest:
>> ping -f 10.188.234.74
>
> I suspect there is a race condition in vmd. The vio(4) kernel driver
> resets the device and then frees all the mbufs from the tx and rx
> rings. If vmd continues doing dma for a bit after the reset, this
> could result in corruption. From this code in vmd's vionet.c
>
> case VIODEV_MSG_IO_WRITE:
> /* Write IO: no reply needed */
> if (handle_io_write(, dev) == 1)
> virtio_assert_pic_irq(dev, 0);
> break;
>
> it looks like the main vmd process will just send a pio write message
> to the vionet process but does not wait for the vionet process to
> actually execute the device reset. The pio write instruction in the
> vcpu must complete after the device reset is complete.

Are you saying we need to wait for the emulation of the OUT instruction
that the vcpu is executing? I don't believe we should be blocking the
vcpu here as that's not how port io works with real hardware. It makes
no sense to block on an OUT until the device finishes emulation.

I *do* think there could be something wrong in the device status
register emulation, but blocking the vcpu on an OUT isn't the way to
solve this. In fact, that's what previously happened before I split
device emulation out into subprocesses...so if there's a bug in the
emulation logic, it was hiding it.

>
> I could not reproduce this issue with kvm/qemu.
>

Thanks!

>
>> Then I see various kind of mbuf corruption:
>> kernel: protection fault trap, code=0
>> Stopped at  pool_do_put+0xc9:   movq0x8(%rcx),%rcx
>> ddb> trace
>> pool_do_put(82519e30,fd807db89000) at pool_do_put+0xc9
>> pool_put(82519e30,fd807db89000) at pool_put+0x53
>> m_extfree(fd807d330300) at m_extfree+0xa5
>> m_free(fd807d330300) at m_free+0x97
>> soreceive(fd806f33ac88,0,80002a3e97f8,0,0,80002a3e9724,76299c7990301bf1) at soreceive+0xa3e
>> soo_read(fd807ed4a168,80002a3e97f8,0) at soo_read+0x4a
>> dofilereadv(80002a399548,7,80002a3e97f8,0,80002a3e98c0) at dofilereadv+0x143
>> sys_read(80002a399548,80002a3e9870,80002a3e98c0) at sys_read+0x55
>> syscall(80002a3e9930) at syscall+0x33a
>> Xsyscall() at Xsyscall+0x128
>> end of kernel
>> end trace frame: 0x7469f8836930, count: -10
>> pool_do_put(8259a500,fd807e7fa800) at pool_do_put+0xc9
>> pool_put(8259a500,fd807e7fa800) at pool_put+0x53
>> m_extfree(fd807f838a00) at m_extfree+0xa5
>> m_free(fd807f838a00) at m_free+0x97
>> m_freem(fd807f838a00) at m_freem+0x38
>> vio_txeof(80030118) at vio_txeof+0x11d
>> vio_tx_intr(80030118) at vio_tx_intr+0x31
>> virtio_check_vqs(80024800) at virtio_check_vqs+0x102
>> virtio_pci_legacy_intr(80024800) at virtio_pci_legacy_intr+0x65
>> intr_handler(80002a52dae0,80081000) at intr_handler+0x3c
>> Xintr_legacy5_untramp() at Xintr_legacy5_untramp+0x1a3
>> Xspllower() at Xspllower+0x1d
>> vio_ioctl(800822a8,80206910,80002a52dd00) at vio_ioctl+0x16a
>> ifioctl(fd807c0ba7a0,80206910,80002a52dd00,80002a41c810) at ifioctl+0x721
>> sys_ioctl(80002a41c810,80002a52de00,80002a52de50) at sys_ioctl+0x2ab
>> syscall(80002a52dec0) at syscall+0x33a
>> Xsyscall() at Xsyscall+0x128
>> end of kernel
>> end trace frame: 0x7b3d36d55eb0, count: -17
>> panic: pool_do_get: mcl2k free list modified: page 0xfd80068bd000; item addr 0xfd80068bf800; offset 0x0=0xa != 0x83dcdb591c6b8bf
>> Stopped at  db_enter+0x14:  popq%rbp
>>  TIDPIDUID PRFLAGS PFLAGS  CPU  COMMAND
>> *143851  19121  0 0x3  00  ifconfig
>> db_enter() at db_enter+0x14
>> panic(8206e651) at panic+0xb5
>> pool_do_get(824a1b30,2,80002a4a55d4) at pool_do_get+0x320
>> pool_get(824a1b30,2) at pool_get+0x7d
>> m_clget(fd807c4e4f00,2,800) at m_clget+0x18d
>> rtm_msg1(e,80002a4a56f0) at rtm_msg1+0xde
>> rtm_ifchg(800822a8) at rtm_ifchg+0x65
>> if_down(800822a8) at if_down+0xa4
>> ifioctl(fd8006898978,80206910,80002a4a58c0,80002a474ff0) at ifioctl+0xcd5
>> sys_ioctl(80002a474ff0,80002a4a59c0,80002a4a5a10) at sys_ioctl+0x2ab
>> syscall(80002a4a5a80) at syscall+0x33a
>> Xsyscall() at Xsyscall+0x128
>> end of kernel
>> end trace frame: 0x7f6c22492130, count: 3
>> OpenBSD 7.4-current (GENERIC) #3213: Mon Jan  8 22:05:58 CET 2024
>>  
>> 

Re: bnxt panic - HWRM_RING_ALLOC command returned RESOURCE_ALLOC_ERROR error.

2024-01-09 Thread Alexander Bluhm
On Tue, Jan 09, 2024 at 12:04:17PM +1000, Jonathan Matthew wrote:
> On Wed, Jan 03, 2024 at 10:14:12AM +0100, Hrvoje Popovski wrote:
> > On 3.1.2024. 7:51, Jonathan Matthew wrote:
> > > On Wed, Jan 03, 2024 at 01:50:06AM +0100, Alexander Bluhm wrote:
> > >> On Wed, Jan 03, 2024 at 12:26:26AM +0100, Hrvoje Popovski wrote:
> > >>> While testing kettenis@ ipl diff from tech@ and doing iperf3 to bnxt
> > >>> interface and ifconfig bnxt0 down/up at the same time I can trigger
> > >>> panic. Panic can be triggered without kettenis@ diff...
> > >> It is easy to reproduce.  ifconfig bnxt1 down/up a few times while
> > >> receiving TCP traffic with iperf3.  Machine still has kettenis@ diff.
> > >> My panic looks different.
> > > It looks like I wasn't trying very hard when I wrote bnxt_down().
> > > I think there's also a problem with bnxt_up() unwinding after failure
> > > in various places, but that's a different issue.
> > > 
> > > This makes it more resilient for me, though it still logs
> > > 'bnxt0: unexpected completion type 3' a lot if I take the interface
> > > down while it's in use.  I'll look at that separately.
> > 
> > Hi,
> > 
> > with this diff I can still panic the box with ifconfig up/down, but
> > not as fast as without it
> 
> Right, this is the other problem where bnxt_up() wasn't cleaning up properly
> after failing part way through.  This diff should fix that, but I don't think
> it will fix the 'HWRM_RING_ALLOC command returned RESOURCE_ALLOC_ERROR error'
> problem, so the interface will still stop working at that point.

OK bluhm@

> Index: if_bnxt.c
> ===
> RCS file: /cvs/src/sys/dev/pci/if_bnxt.c,v
> retrieving revision 1.39
> diff -u -p -r1.39 if_bnxt.c
> --- if_bnxt.c 10 Nov 2023 15:51:20 -  1.39
> +++ if_bnxt.c 9 Jan 2024 01:59:38 -
> @@ -1073,7 +1081,7 @@ bnxt_up(struct bnxt_softc *sc)
>   if (bnxt_hwrm_vnic_ctx_alloc(sc, >sc_vnic.rss_id) != 0) {
>   printf("%s: failed to allocate vnic rss context\n",
>   DEVNAME(sc));
> - goto down_queues;
> + goto down_all_queues;
>   }
>  
>   sc->sc_vnic.id = (uint16_t)HWRM_NA_SIGNATURE;
> @@ -1139,8 +1147,11 @@ dealloc_vnic:
>   bnxt_hwrm_vnic_free(sc, >sc_vnic);
>  dealloc_vnic_ctx:
>   bnxt_hwrm_vnic_ctx_free(sc, >sc_vnic.rss_id);
> +
> +down_all_queues:
> + i = sc->sc_nqueues;
>  down_queues:
> - for (i = 0; i < sc->sc_nqueues; i++)
> + while (i-- > 0)
>   bnxt_queue_down(sc, >sc_queues[i]);
>  
>   bnxt_dmamem_free(sc, sc->sc_rx_cfg);