https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=267028
--- Comment #309 from Mark Millard <[email protected]> ---
(In reply to Andriy Gapon from comment #308)

How many hardware watchpoints can be active at once?

Which node gets the corruption varies . . .

Failures based on the same kernel and kernel.debug:

*(modlist_t) 0xfffff800045fd2c0 showed (vmcore.9):

{link = {tqe_next = 0xfffff80000000007, tqe_prev = 0xfffff800035b6f00},
 container = 0xfffff8000359f300,
 name = 0xffffffff82e1e010 "amdgpu_raven_mec_bin_fw", version = 1}

*(modlist_t) 0xfffff800036f6300 showed (vmcore.0):

{link = {tqe_next = 0xfffff80000000007, tqe_prev = 0xfffff800047571c0},
 container = 0xfffff80004bfad80,
 name = 0xffffffff829ef000 "amdgpu_raven_me_bin_fw", version = 1}

*(modlist_t) 0xfffff800035a0200 showed (vmcore.1):

{link = {tqe_next = 0xfffff80000000007, tqe_prev = 0xfffff80003967980},
 container = 0xfffff800039fb300,
 name = 0xffffffff82e62026 "amdgpu_raven_mec2_bin_fw", version = 1}

But even when it is the same node by name (and the same number of nodes
down the list), the address varies:

*(modlist_t) 0xfffff800047c1180 showed (vmcore.4):

{link = {tqe_next = 0xfffff80000000007, tqe_prev = 0xfffff800047c11c0},
 container = 0xfffff80004861a80,
 name = 0xffffffff82e62026 "amdgpu_raven_mec2_bin_fw", version = 1}

Overall, those suggest some sort of racy context in the system at the
time, rather than strictly sequential processing of the list.

For reference: vmcore.[23] were from attempts at capturing a successful
boot. vmcore.2 involved a "shutdown now" that unloaded the modules of
interest, making it of no use. vmcore.3 was a good capture of a
successful boot.
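For reference, the node being printed above is the module bookkeeping
structure from sys/kern/kern_linker.c; the sketch below paraphrases its
layout (the pointer typedef is inferred from the *(modlist_t) casts in
the dumps, so treat it as an assumption):

struct modlist {
        TAILQ_ENTRY(modlist) link;       /* the tqe_next/tqe_prev pair above */
        linker_file_t        container;  /* linker file supplying the module */
        const char          *name;       /* e.g. "amdgpu_raven_mec2_bin_fw" */
        int                  version;
};
typedef struct modlist *modlist_t;       /* inferred: the dumps cast and dereference it */

In every dump it is link.tqe_next that is damaged, and the stray value
0xfffff80000000007 is the amd64 direct-map base plus 7, i.e. it looks
like a small integer was written where a pointer belongs.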
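As for how many hardware watchpoints can be active at once: on amd64
the CPU provides four debug address registers (DR0-DR3), each able to
cover a 1-, 2-, 4-, or 8-byte region, so at most four at a time. A
minimal sketch of arming one from ddb, using ddb(4)'s watch/dwatch
syntax, on the first 8 bytes of the vmcore.4 node (which hold
link.tqe_next, since link is the first member):

db> watch 0xfffff800047c1180,8
db> dwatch 0xfffff800047c1180,8

The catch, per the dumps above, is that the node's address varies from
boot to boot, so it would have to be looked up again each boot before
the watchpoint is armed.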
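Another way to catch the trashing earlier would be an INVARIANTS-style
pass over the list. The sketch below is hypothetical
(modlist_sanity_check() is not an existing routine), but found_modules
and the field names follow kern_linker.c, and the linkage test is the
same one the QMD debug checks in sys/queue.h use:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/queue.h>

/* Walk found_modules and panic on the first inconsistent node. */
static void
modlist_sanity_check(void)
{
        struct modlist *mod;

        TAILQ_FOREACH(mod, &found_modules, link) {
                /* TAILQ invariant: *tqe_prev must point back at this node. */
                if (*mod->link.tqe_prev != mod)
                        panic("modlist %p: bad tqe_prev linkage", mod);
                /* tqe_next must be NULL or pointer-aligned; 0x...0007 is neither. */
                if (mod->link.tqe_next != NULL &&
                    ((uintptr_t)mod->link.tqe_next & (sizeof(void *) - 1)) != 0)
                        panic("modlist %p: trashed tqe_next %p",
                            mod, mod->link.tqe_next);
        }
}

Called at a few points early in boot, something like this would narrow
down when the damage first appears, without needing a stable address.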
