On Fri, 27 Mar 2020, Paul Goyette wrote:

With a -current kernel built from sources updated just a few hours
ago (on 2020-03-27 at 16:13:55 UTC), I get a panic during shutdown.
The stack trace doesn't seem to be saved, but I manually transcribed
it:

        vpanic + 0x178
        kern_assert + 0x48
        config_detach + 0x65
        mii_detach + 0x109
        wm_detach + 0xb0
        config_detach + 0xe5
        config_detach_all + 0x97
        cpu_reboot + 0x198
        sys_reboot + 0x63
        syscall + 0x299
        (syscall #208)

Unfortunately, the actual panic message had scrolled off the screen,
but it included "bad device fstate".

The only mii on my machine should be

        ihphy0 at wm0 phy 2: i217 10/100/1000 media interface, rev. 5

and is configured as

        ihphy0 at mii? phy ?

Anyone got any ideas?

For now I have reverted to my previous 9.99.46 kernel...

A bit more info...

The panic message is from the KASSERTMSG() at line 1742 of
kern/subr_autoconf.c rev 1.269, very early in config_detach():

        KASSERTMSG((cf == NULL || cf->cf_fstate == FSTATE_FOUND ||
                cf->cf_fstate == FSTATE_STAR),
            "config_detach: %s: bad device fstate: %d",
            device_xname(dev), cf ? cf->cf_fstate : -1);

The actual panic message is

        config_detach: ihphy0: bad device fstate: 0

According to the #defines in sys/device.h we have

        #define FSTATE_NOTFOUND         0       /* has not been found */

Just idle curiosity (I am unfamiliar with these drivers): I wonder if
it's possible for the ihphy to be detached twice? Perhaps it is first
detached by config_detach_all(), and then the wm driver tries to
detach it again?


+--------------------+--------------------------+-----------------------+
| Paul Goyette       | PGP Key fingerprint:     | E-mail addresses:     |
| (Retired)          | FA29 0E3B 35AF E8AE 6651 | p...@whooppee.com     |
| Software Developer | 0786 F758 55DE 53BA 7731 | pgoye...@netbsd.org   |
+--------------------+--------------------------+-----------------------+
