Control: tags -1 + moreinfo

Hi,
On Sun, Mar 08, 2026 at 12:12:56AM +0800, Vince wrote:
> Package: src:linux
> Version: 6.12.73-1
> Severity: important
> File: linux
>
> Dear Maintainer,
>
> I am reporting repeated kernel Oops/page faults on Debian 13 with many
> Docker containers running. The crashes happen in VFS dentry lookup paths
> (__d_lookup / __d_lookup_rcu), and the machine becomes unstable.
>
> -- Package-specific info:
> ** Version:
> Linux version 6.12.73+deb13-amd64 ([email protected])
> (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44)
> #1 SMP PREEMPT_DYNAMIC Debian 6.12.73-1 (2026-02-17)
>
> ** Command line:
> BOOT_IMAGE=/boot/vmlinuz-6.12.73+deb13-amd64 root=UUID=5bdb0a4f-1e62-4abb-abd8-2d791699a565 ro quiet
>
> ** Not tainted
>
> ** Reproduction notes
> Not a minimal deterministic reproducer yet, but the issue appears under
> sustained container workload:
> - 50+ containers active
> - health checks and frequent process/file operations in containers
> - Oops first observed in `runc` context, later also `python`
> I can help test patches/kernels and provide additional logs if needed.
>
> ** Actual result
> Kernel hits repeated Oops/page faults and the system becomes unstable/crashes.
>
> ** Kernel log:
> Mar 07 08:03:37 lgh kernel: vethd9cef6e: entered allmulticast mode
> Mar 07 08:03:37 lgh kernel: vethd9cef6e: entered promiscuous mode
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.033035455+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.033325698+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.033335978+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.033396328+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered blocking state
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered disabled state
> Mar 07 08:03:38 lgh kernel: veth4a31b07: entered allmulticast mode
> Mar 07 08:03:38 lgh kernel: veth4a31b07: entered promiscuous mode
> Mar 07 08:03:38 lgh systemd[1]: Started docker-e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af.scope - libcontainer container e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af.
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.094640555+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.094961588+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.094978588+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.095078418+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:38 lgh systemd[1]: Started docker-6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727.scope - libcontainer container 6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727.
> Mar 07 08:03:38 lgh kernel: eth0: renamed from vethbd00720
> Mar 07 08:03:38 lgh kernel: br-50367f1966be: port 1(vethd9cef6e) entered blocking state
> Mar 07 08:03:38 lgh kernel: br-50367f1966be: port 1(vethd9cef6e) entered forwarding state
> Mar 07 08:03:38 lgh kernel: eth0: renamed from veth4bc5ec6
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered blocking state
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered forwarding state
> Mar 07 08:03:38 lgh systemd[1]: docker-6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727.scope: Deactivated successfully.
> Mar 07 08:03:38 lgh systemd[1]: docker-6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727.scope: Consumed 42ms CPU time, 6.8M memory peak, 3.1M read from disk, 52K written to disk.
> Mar 07 08:03:38 lgh systemd[1]: docker-e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af.scope: Deactivated successfully.
> Mar 07 08:03:38 lgh systemd[1]: docker-e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af.scope: Consumed 42ms CPU time, 9.8M memory peak, 5.8M read from disk, 52K written to disk.
> Mar 07 08:03:38 lgh dockerd[847]: time="2026-03-07T08:03:38.399875008+08:00" level=info msg="ignoring event" container=e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af module=libcontainerd namespace>
> Mar 07 08:03:38 lgh dockerd[847]: time="2026-03-07T08:03:38.400015130+08:00" level=info msg="ignoring event" container=6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727 module=libcontainerd namespace>
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.400143101+08:00" level=info msg="shim disconnected" id=6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727 namespace=moby
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.400166391+08:00" level=warning msg="cleaning up after shim disconnected" id=6208864a52f743d0b18032575016d5bc95e78fc49a6078e4667cf605fce3f727 namespace=>
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.400172071+08:00" level=info msg="cleaning up dead shim" namespace=moby
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.400164881+08:00" level=info msg="shim disconnected" id=e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af namespace=moby
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.400329962+08:00" level=warning msg="cleaning up after shim disconnected" id=e79503b98d795a1aba9a039e123fbed213f276d1dae5e00ca43afa02c87358af namespace=>
> Mar 07 08:03:38 lgh containerd[812]: time="2026-03-07T08:03:38.400341992+08:00" level=info msg="cleaning up dead shim" namespace=moby
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered disabled state
> Mar 07 08:03:38 lgh kernel: veth4bc5ec6: renamed from eth0
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered disabled state
> Mar 07 08:03:38 lgh kernel: veth4a31b07 (unregistering): left allmulticast mode
> Mar 07 08:03:38 lgh kernel: veth4a31b07 (unregistering): left promiscuous mode
> Mar 07 08:03:38 lgh kernel: br-1d204413e854: port 1(veth4a31b07) entered disabled state
> Mar 07 08:03:38 lgh kernel: br-50367f1966be: port 1(vethd9cef6e) entered disabled state
> Mar 07 08:03:38 lgh kernel: vethbd00720: renamed from eth0
> Mar 07 08:03:38 lgh kernel: br-50367f1966be: port 1(vethd9cef6e) entered disabled state
> Mar 07 08:03:38 lgh kernel: vethd9cef6e (unregistering): left allmulticast mode
> Mar 07 08:03:38 lgh kernel: vethd9cef6e (unregistering): left promiscuous mode
> Mar 07 08:03:38 lgh kernel: br-50367f1966be: port 1(vethd9cef6e) entered disabled state
> Mar 07 08:03:38 lgh systemd[1]: var-lib-docker-overlay2-bda3a39dff72e4145073e3adcf5916cdc6487af1c63a9816d4a76cc34a6d3d93-merged.mount: Deactivated successfully.
> Mar 07 08:03:42 lgh systemd[1]: var-lib-docker-overlay2-b96e4a85f094d8349a3ab634bcbf7e0ddc268b5b44207ad3c8488b300047f953\x2dinit-merged.mount: Deactivated successfully.
> Mar 07 08:03:42 lgh systemd[1]: var-lib-docker-overlay2-b96e4a85f094d8349a3ab634bcbf7e0ddc268b5b44207ad3c8488b300047f953-merged.mount: Deactivated successfully.
> Mar 07 08:03:42 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered blocking state
> Mar 07 08:03:42 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered disabled state
> Mar 07 08:03:42 lgh kernel: veth8feaa35: entered allmulticast mode
> Mar 07 08:03:42 lgh kernel: veth8feaa35: entered promiscuous mode
> Mar 07 08:03:42 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered blocking state
> Mar 07 08:03:42 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered disabled state
> Mar 07 08:03:42 lgh kernel: veth1bffc5d: entered allmulticast mode
> Mar 07 08:03:42 lgh kernel: veth1bffc5d: entered promiscuous mode
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.334423742+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.334474402+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.334482942+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.334586094+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.361414168+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.361496249+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.361520129+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:42 lgh containerd[812]: time="2026-03-07T08:03:42.361609589+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:03:42 lgh systemd[1]: Started docker-d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206.scope - libcontainer container d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206.
> Mar 07 08:03:42 lgh systemd[1]: Started docker-e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4.scope - libcontainer container e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4.
> Mar 07 08:03:42 lgh kernel: eth0: renamed from veth49b97fa
> Mar 07 08:03:42 lgh kernel: eth0: renamed from veth29052fa
> Mar 07 08:03:42 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered blocking state
> Mar 07 08:03:42 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered forwarding state
> Mar 07 08:03:42 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered blocking state
> Mar 07 08:03:42 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered forwarding state
> Mar 07 08:03:43 lgh systemd[1]: var-lib-docker-overlay2-bcef57bf74391152625c25532b0866f03369250689dd57ab99e74d3b0c62aa88\x2dinit-merged.mount: Deactivated successfully.
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered blocking state
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered disabled state
> Mar 07 08:04:03 lgh kernel: veth74bf730: entered allmulticast mode
> Mar 07 08:04:03 lgh kernel: veth74bf730: entered promiscuous mode
> Mar 07 08:04:03 lgh kernel: eth0: renamed from veth8d3a017
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered blocking state
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered forwarding state
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered disabled state
> Mar 07 08:04:03 lgh kernel: veth8d3a017: renamed from eth0
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered disabled state
> Mar 07 08:04:03 lgh kernel: veth74bf730 (unregistering): left allmulticast mode
> Mar 07 08:04:03 lgh kernel: veth74bf730 (unregistering): left promiscuous mode
> Mar 07 08:04:03 lgh kernel: docker0: port 1(veth74bf730) entered disabled state
> Mar 07 08:04:04 lgh kernel: docker0: port 1(vethc683f48) entered blocking state
> Mar 07 08:04:04 lgh kernel: docker0: port 1(vethc683f48) entered disabled state
> Mar 07 08:04:04 lgh kernel: vethc683f48: entered allmulticast mode
> Mar 07 08:04:04 lgh kernel: vethc683f48: entered promiscuous mode
> Mar 07 08:04:04 lgh kernel: eth0: renamed from veth2c80864
> Mar 07 08:04:04 lgh kernel: docker0: port 1(vethc683f48) entered blocking state
> Mar 07 08:04:04 lgh kernel: docker0: port 1(vethc683f48) entered forwarding state
> Mar 07 08:04:14 lgh kernel: docker0: port 1(vethc683f48) entered disabled state
> Mar 07 08:04:14 lgh kernel: veth2c80864: renamed from eth0
> Mar 07 08:04:14 lgh kernel: docker0: port 1(vethc683f48) entered disabled state
> Mar 07 08:04:14 lgh kernel: vethc683f48 (unregistering): left allmulticast mode
> Mar 07 08:04:14 lgh kernel: vethc683f48 (unregistering): left promiscuous mode
> Mar 07 08:04:14 lgh kernel: docker0: port 1(vethc683f48) entered disabled state
> Mar 07 08:04:22 lgh systemd[1]: docker-e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4.scope: Deactivated successfully.
> Mar 07 08:04:22 lgh systemd[1]: docker-e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4.scope: Consumed 16.111s CPU time, 430.7M memory peak, 77.8M read from disk, 260.7M written to disk.
> Mar 07 08:04:22 lgh dockerd[847]: time="2026-03-07T08:04:22.320826261+08:00" level=info msg="ignoring event" container=e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4 module=libcontainerd namespace>
> Mar 07 08:04:22 lgh containerd[812]: time="2026-03-07T08:04:22.320906731+08:00" level=info msg="shim disconnected" id=e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4 namespace=moby
> Mar 07 08:04:22 lgh containerd[812]: time="2026-03-07T08:04:22.320929981+08:00" level=warning msg="cleaning up after shim disconnected" id=e8a2f79201bae5dad8ff17af80c071199295bcb7eae0c89ff41356a41fd49fd4 namespace=>
> Mar 07 08:04:22 lgh containerd[812]: time="2026-03-07T08:04:22.320935121+08:00" level=info msg="cleaning up dead shim" namespace=moby
> Mar 07 08:04:22 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered disabled state
> Mar 07 08:04:22 lgh kernel: veth29052fa: renamed from eth0
> Mar 07 08:04:22 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered disabled state
> Mar 07 08:04:22 lgh kernel: veth1bffc5d (unregistering): left allmulticast mode
> Mar 07 08:04:22 lgh kernel: veth1bffc5d (unregistering): left promiscuous mode
> Mar 07 08:04:22 lgh kernel: br-1d204413e854: port 1(veth1bffc5d) entered disabled state
> Mar 07 08:04:22 lgh systemd[1]: run-docker-netns-05669e4ada22.mount: Deactivated successfully.
> Mar 07 08:04:22 lgh systemd[1]: var-lib-docker-overlay2-bcef57bf74391152625c25532b0866f03369250689dd57ab99e74d3b0c62aa88-merged.mount: Deactivated successfully.
> Mar 07 08:04:26 lgh systemd[1]: var-lib-docker-overlay2-7958574ebca4a8b4c4ca705d08dfe123af03e1e8abb4315b54400add85a06e01\x2dinit-merged.mount: Deactivated successfully.
> Mar 07 08:04:26 lgh systemd[1]: var-lib-docker-overlay2-7958574ebca4a8b4c4ca705d08dfe123af03e1e8abb4315b54400add85a06e01-merged.mount: Deactivated successfully.
> Mar 07 08:04:26 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered blocking state
> Mar 07 08:04:26 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered disabled state
> Mar 07 08:04:26 lgh kernel: veth32539fc: entered allmulticast mode
> Mar 07 08:04:26 lgh kernel: veth32539fc: entered promiscuous mode
> Mar 07 08:04:26 lgh containerd[812]: time="2026-03-07T08:04:26.465039605+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
> Mar 07 08:04:26 lgh containerd[812]: time="2026-03-07T08:04:26.465105145+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
> Mar 07 08:04:26 lgh containerd[812]: time="2026-03-07T08:04:26.465117485+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:04:26 lgh containerd[812]: time="2026-03-07T08:04:26.465210096+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:04:26 lgh systemd[1]: Started docker-c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97.scope - libcontainer container c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97.
> Mar 07 08:04:26 lgh kernel: eth0: renamed from vethef0ee15
> Mar 07 08:04:26 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered blocking state
> Mar 07 08:04:26 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered forwarding state
> Mar 07 08:04:30 lgh kernel: docker0: port 1(vethc85453b) entered blocking state
> Mar 07 08:04:30 lgh kernel: docker0: port 1(vethc85453b) entered disabled state
> Mar 07 08:04:30 lgh kernel: vethc85453b: entered allmulticast mode
> Mar 07 08:04:30 lgh kernel: vethc85453b: entered promiscuous mode
> Mar 07 08:04:31 lgh kernel: eth0: renamed from vethc50e28d
> Mar 07 08:04:31 lgh kernel: docker0: port 1(vethc85453b) entered blocking state
> Mar 07 08:04:31 lgh kernel: docker0: port 1(vethc85453b) entered forwarding state
> Mar 07 08:04:33 lgh kernel: docker0: port 1(vethc85453b) entered disabled state
> Mar 07 08:04:33 lgh kernel: vethc50e28d: renamed from eth0
> Mar 07 08:04:34 lgh kernel: docker0: port 1(vethc85453b) entered disabled state
> Mar 07 08:04:34 lgh kernel: vethc85453b (unregistering): left allmulticast mode
> Mar 07 08:04:34 lgh kernel: vethc85453b (unregistering): left promiscuous mode
> Mar 07 08:04:34 lgh kernel: docker0: port 1(vethc85453b) entered disabled state
> Mar 07 08:04:36 lgh systemd[1]: docker-c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97.scope: Deactivated successfully.
> Mar 07 08:04:36 lgh systemd[1]: docker-c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97.scope: Consumed 5.858s CPU time, 362.4M memory peak, 4.3M read from disk, 173.4M written to disk.
> Mar 07 08:04:36 lgh dockerd[847]: time="2026-03-07T08:04:36.570293010+08:00" level=info msg="ignoring event" container=c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97 module=libcontainerd namespace>
> Mar 07 08:04:36 lgh containerd[812]: time="2026-03-07T08:04:36.570475301+08:00" level=info msg="shim disconnected" id=c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97 namespace=moby
> Mar 07 08:04:36 lgh containerd[812]: time="2026-03-07T08:04:36.570517432+08:00" level=warning msg="cleaning up after shim disconnected" id=c72add1433f3c337a1cb8449bdad3963cbed121314230e4f7f24b655985c6b97 namespace=>
> Mar 07 08:04:36 lgh containerd[812]: time="2026-03-07T08:04:36.570523002+08:00" level=info msg="cleaning up dead shim" namespace=moby
> Mar 07 08:04:36 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered disabled state
> Mar 07 08:04:36 lgh kernel: vethef0ee15: renamed from eth0
> Mar 07 08:04:36 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered disabled state
> Mar 07 08:04:36 lgh kernel: veth32539fc (unregistering): left allmulticast mode
> Mar 07 08:04:36 lgh kernel: veth32539fc (unregistering): left promiscuous mode
> Mar 07 08:04:36 lgh kernel: br-1d204413e854: port 1(veth32539fc) entered disabled state
> Mar 07 08:04:36 lgh systemd[1]: run-docker-netns-09a287225d06.mount: Deactivated successfully.
> Mar 07 08:04:36 lgh systemd[1]: var-lib-docker-overlay2-7958574ebca4a8b4c4ca705d08dfe123af03e1e8abb4315b54400add85a06e01-merged.mount: Deactivated successfully.
> Mar 07 08:04:47 lgh systemd[1]: var-lib-docker-overlay2-31e5fabdc528958d7ce2bd70c1bf390abbca3db03ef0fe7ce2f924767c5c76e1\x2dinit-merged.mount: Deactivated successfully.
> Mar 07 08:04:47 lgh systemd[1]: var-lib-docker-overlay2-31e5fabdc528958d7ce2bd70c1bf390abbca3db03ef0fe7ce2f924767c5c76e1-merged.mount: Deactivated successfully.
> Mar 07 08:04:47 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered blocking state
> Mar 07 08:04:47 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered disabled state
> Mar 07 08:04:47 lgh kernel: veth4b80fd5: entered allmulticast mode
> Mar 07 08:04:47 lgh kernel: veth4b80fd5: entered promiscuous mode
> Mar 07 08:04:47 lgh kernel: docker0: port 1(vethc143282) entered blocking state
> Mar 07 08:04:47 lgh kernel: docker0: port 1(vethc143282) entered disabled state
> Mar 07 08:04:47 lgh kernel: vethc143282: entered allmulticast mode
> Mar 07 08:04:47 lgh kernel: vethc143282: entered promiscuous mode
> Mar 07 08:04:47 lgh containerd[812]: time="2026-03-07T08:04:47.704486927+08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
> Mar 07 08:04:47 lgh containerd[812]: time="2026-03-07T08:04:47.704560457+08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
> Mar 07 08:04:47 lgh containerd[812]: time="2026-03-07T08:04:47.704577248+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:04:47 lgh containerd[812]: time="2026-03-07T08:04:47.704650218+08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
> Mar 07 08:04:47 lgh systemd[1]: Started docker-6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc.scope - libcontainer container 6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc.
> Mar 07 08:04:47 lgh kernel: eth0: renamed from veth60404f5
> Mar 07 08:04:47 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered blocking state
> Mar 07 08:04:47 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered forwarding state
> Mar 07 08:04:47 lgh kernel: eth0: renamed from veth8197f76
> Mar 07 08:04:47 lgh kernel: docker0: port 1(vethc143282) entered blocking state
> Mar 07 08:04:47 lgh kernel: docker0: port 1(vethc143282) entered forwarding state
> Mar 07 08:04:51 lgh kernel: docker0: port 1(vethaa7ff32) entered blocking state
> Mar 07 08:04:51 lgh kernel: docker0: port 1(vethaa7ff32) entered disabled state
> Mar 07 08:04:51 lgh kernel: vethaa7ff32: entered allmulticast mode
> Mar 07 08:04:51 lgh kernel: vethaa7ff32: entered promiscuous mode
> Mar 07 08:04:52 lgh kernel: eth0: renamed from veth7b20f01
> Mar 07 08:04:52 lgh kernel: docker0: port 1(vethaa7ff32) entered blocking state
> Mar 07 08:04:52 lgh kernel: docker0: port 1(vethaa7ff32) entered forwarding state
> Mar 07 08:05:00 lgh kernel: docker0: port 1(vethaa7ff32) entered disabled state
> Mar 07 08:05:00 lgh kernel: veth7b20f01: renamed from eth0
> Mar 07 08:05:00 lgh kernel: docker0: port 1(vethaa7ff32) entered disabled state
> Mar 07 08:05:00 lgh kernel: vethaa7ff32 (unregistering): left allmulticast mode
> Mar 07 08:05:00 lgh kernel: vethaa7ff32 (unregistering): left promiscuous mode
> Mar 07 08:05:00 lgh kernel: docker0: port 1(vethaa7ff32) entered disabled state
> Mar 07 08:05:01 lgh CRON[2425498]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:05:01 lgh CRON[2425500]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 08:05:01 lgh CRON[2425498]: pam_unix(cron:session): session closed for user root
> Mar 07 08:05:03 lgh systemd[1]: docker-6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc.scope: Deactivated successfully.
> Mar 07 08:05:03 lgh systemd[1]: docker-6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc.scope: Consumed 8.514s CPU time, 398.5M memory peak, 19.1M read from disk, 220.8M written to disk.
> Mar 07 08:05:03 lgh containerd[812]: time="2026-03-07T08:05:03.484811679+08:00" level=info msg="shim disconnected" id=6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc namespace=moby
> Mar 07 08:05:03 lgh containerd[812]: time="2026-03-07T08:05:03.484834139+08:00" level=warning msg="cleaning up after shim disconnected" id=6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc namespace=>
> Mar 07 08:05:03 lgh containerd[812]: time="2026-03-07T08:05:03.484839509+08:00" level=info msg="cleaning up dead shim" namespace=moby
> Mar 07 08:05:03 lgh dockerd[847]: time="2026-03-07T08:05:03.485011431+08:00" level=info msg="ignoring event" container=6c5cfba0b51e0e80bfc7a2d7d34bc92b2dfd44068a2dc17496800b263d55c6cc module=libcontainerd namespace>
> Mar 07 08:05:03 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered disabled state
> Mar 07 08:05:03 lgh kernel: veth60404f5: renamed from eth0
> Mar 07 08:05:03 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered disabled state
> Mar 07 08:05:03 lgh kernel: veth4b80fd5 (unregistering): left allmulticast mode
> Mar 07 08:05:03 lgh kernel: veth4b80fd5 (unregistering): left promiscuous mode
> Mar 07 08:05:03 lgh kernel: br-1d204413e854: port 1(veth4b80fd5) entered disabled state
> Mar 07 08:05:03 lgh systemd[1]: run-docker-netns-c5c02fde1ccb.mount: Deactivated successfully.
> Mar 07 08:05:03 lgh systemd[1]: var-lib-docker-overlay2-31e5fabdc528958d7ce2bd70c1bf390abbca3db03ef0fe7ce2f924767c5c76e1-merged.mount: Deactivated successfully.
> Mar 07 08:05:38 lgh kernel: docker0: port 1(vethc143282) entered disabled state
> Mar 07 08:05:38 lgh kernel: veth8197f76: renamed from eth0
> Mar 07 08:05:38 lgh kernel: docker0: port 1(vethc143282) entered disabled state
> Mar 07 08:05:38 lgh kernel: vethc143282 (unregistering): left allmulticast mode
> Mar 07 08:05:38 lgh kernel: vethc143282 (unregistering): left promiscuous mode
> Mar 07 08:05:38 lgh kernel: docker0: port 1(vethc143282) entered disabled state
> Mar 07 08:05:39 lgh kernel: docker0: port 1(vethc832469) entered blocking state
> Mar 07 08:05:39 lgh kernel: docker0: port 1(vethc832469) entered disabled state
> Mar 07 08:05:39 lgh kernel: vethc832469: entered allmulticast mode
> Mar 07 08:05:39 lgh kernel: vethc832469: entered promiscuous mode
> Mar 07 08:05:40 lgh kernel: eth0: renamed from veth445b7ad
> Mar 07 08:05:40 lgh kernel: docker0: port 1(vethc832469) entered blocking state
> Mar 07 08:05:40 lgh kernel: docker0: port 1(vethc832469) entered forwarding state
> Mar 07 08:06:09 lgh kernel: docker0: port 1(vethc832469) entered disabled state
> Mar 07 08:06:09 lgh kernel: veth445b7ad: renamed from eth0
> Mar 07 08:06:09 lgh kernel: docker0: port 1(vethc832469) entered disabled state
> Mar 07 08:06:09 lgh kernel: vethc832469 (unregistering): left allmulticast mode
> Mar 07 08:06:09 lgh kernel: vethc832469 (unregistering): left promiscuous mode
> Mar 07 08:06:09 lgh kernel: docker0: port 1(vethc832469) entered disabled state
> Mar 07 08:06:10 lgh kernel: docker0: port 1(veth044ba83) entered blocking state
> Mar 07 08:06:10 lgh kernel: docker0: port 1(veth044ba83) entered disabled state
> Mar 07 08:06:10 lgh kernel: veth044ba83: entered allmulticast mode
> Mar 07 08:06:10 lgh kernel: veth044ba83: entered promiscuous mode
> Mar 07 08:06:10 lgh kernel: eth0: renamed from veth069b010
> Mar 07 08:06:10 lgh kernel: docker0: port 1(veth044ba83) entered blocking state
> Mar 07 08:06:10 lgh kernel: docker0: port 1(veth044ba83) entered forwarding state
> Mar 07 08:06:39 lgh kernel: docker0: port 1(veth044ba83) entered disabled state
> Mar 07 08:06:39 lgh kernel: veth069b010: renamed from eth0
> Mar 07 08:06:39 lgh kernel: docker0: port 1(veth044ba83) entered disabled state
> Mar 07 08:06:39 lgh kernel: veth044ba83 (unregistering): left allmulticast mode
> Mar 07 08:06:39 lgh kernel: veth044ba83 (unregistering): left promiscuous mode
> Mar 07 08:06:39 lgh kernel: docker0: port 1(veth044ba83) entered disabled state
> Mar 07 08:06:42 lgh kernel: docker0: port 1(veth6b8550c) entered blocking state
> Mar 07 08:06:42 lgh kernel: docker0: port 1(veth6b8550c) entered disabled state
> Mar 07 08:06:42 lgh kernel: veth6b8550c: entered allmulticast mode
> Mar 07 08:06:42 lgh kernel: veth6b8550c: entered promiscuous mode
> Mar 07 08:06:43 lgh kernel: eth0: renamed from veth945fd72
> Mar 07 08:06:43 lgh kernel: docker0: port 1(veth6b8550c) entered blocking state
> Mar 07 08:06:43 lgh kernel: docker0: port 1(veth6b8550c) entered forwarding state
> Mar 07 08:06:43 lgh kernel: docker0: port 1(veth6b8550c) entered disabled state
> Mar 07 08:06:43 lgh kernel: veth945fd72: renamed from eth0
> Mar 07 08:06:43 lgh kernel: docker0: port 1(veth6b8550c) entered disabled state
> Mar 07 08:06:43 lgh kernel: veth6b8550c (unregistering): left allmulticast mode
> Mar 07 08:06:43 lgh kernel: veth6b8550c (unregistering): left promiscuous mode
> Mar 07 08:06:43 lgh kernel: docker0: port 1(veth6b8550c) entered disabled state
> Mar 07 08:06:55 lgh systemd[1]: docker-d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206.scope: Deactivated successfully.
> Mar 07 08:06:55 lgh systemd[1]: docker-d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206.scope: Consumed 1min 29.281s CPU time, 1.5G memory peak, 175.7M read from disk, 1.8G written to disk.
> Mar 07 08:06:55 lgh containerd[812]: time="2026-03-07T08:06:55.152873081+08:00" level=info msg="shim disconnected" id=d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206 namespace=moby
> Mar 07 08:06:55 lgh containerd[812]: time="2026-03-07T08:06:55.152900101+08:00" level=warning msg="cleaning up after shim disconnected" id=d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206 namespace=>
> Mar 07 08:06:55 lgh containerd[812]: time="2026-03-07T08:06:55.152905051+08:00" level=info msg="cleaning up dead shim" namespace=moby
> Mar 07 08:06:55 lgh dockerd[847]: time="2026-03-07T08:06:55.152962002+08:00" level=info msg="ignoring event" container=d571f0b2076ea9118716214beebdb9557a51c5834412b27844f7c766f3e19206 module=libcontainerd namespace>
> Mar 07 08:06:55 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered disabled state
> Mar 07 08:06:55 lgh kernel: veth49b97fa: renamed from eth0
> Mar 07 08:06:55 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered disabled state
> Mar 07 08:06:55 lgh kernel: veth8feaa35 (unregistering): left allmulticast mode
> Mar 07 08:06:55 lgh kernel: veth8feaa35 (unregistering): left promiscuous mode
> Mar 07 08:06:55 lgh kernel: br-50367f1966be: port 1(veth8feaa35) entered disabled state
> Mar 07 08:06:55 lgh systemd[1]: run-docker-netns-945200d394ae.mount: Deactivated successfully.
> Mar 07 08:06:55 lgh systemd[1]: var-lib-docker-overlay2-b96e4a85f094d8349a3ab634bcbf7e0ddc268b5b44207ad3c8488b300047f953-merged.mount: Deactivated successfully.
> Mar 07 08:15:01 lgh CRON[2438257]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:15:01 lgh CRON[2438259]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 08:15:01 lgh CRON[2438257]: pam_unix(cron:session): session closed for user root
> Mar 07 08:17:01 lgh CRON[2440499]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:17:01 lgh CRON[2440501]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
> Mar 07 08:17:01 lgh CRON[2440499]: pam_unix(cron:session): session closed for user root
> Mar 07 08:25:01 lgh CRON[2449699]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:25:01 lgh CRON[2449701]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 08:25:01 lgh CRON[2449699]: pam_unix(cron:session): session closed for user root
> Mar 07 08:35:01 lgh CRON[2461133]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:35:01 lgh CRON[2461135]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 08:35:01 lgh CRON[2461133]: pam_unix(cron:session): session closed for user root
> Mar 07 08:45:01 lgh CRON[2472520]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:45:01 lgh CRON[2472522]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 08:45:01 lgh CRON[2472520]: pam_unix(cron:session): session closed for user root
> Mar 07 08:55:01 lgh CRON[2483956]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 08:55:01 lgh CRON[2483958]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 08:55:01 lgh CRON[2483956]: pam_unix(cron:session): session closed for user root
> Mar 07 09:05:01 lgh CRON[2495331]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:05:01 lgh CRON[2495333]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 09:05:01 lgh CRON[2495331]: pam_unix(cron:session): session closed for user root
> Mar 07 09:15:01 lgh CRON[2506727]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:15:01 lgh CRON[2506729]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 09:15:01 lgh CRON[2506727]: pam_unix(cron:session): session closed for user root
> Mar 07 09:17:01 lgh CRON[2508992]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:17:01 lgh CRON[2508994]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
> Mar 07 09:17:01 lgh CRON[2508992]: pam_unix(cron:session): session closed for user root
> Mar 07 09:25:01 lgh CRON[2518160]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:25:01 lgh CRON[2518162]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 09:25:01 lgh CRON[2518160]: pam_unix(cron:session): session closed for user root
> Mar 07 09:35:01 lgh CRON[2529606]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:35:01 lgh CRON[2529608]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 09:35:01 lgh CRON[2529606]: pam_unix(cron:session): session closed for user root
> Mar 07 09:45:01 lgh CRON[2540972]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:45:01 lgh CRON[2540974]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 09:45:01 lgh CRON[2540972]: pam_unix(cron:session): session closed for user root
> Mar 07 09:55:01 lgh CRON[2552380]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 09:55:01 lgh CRON[2552382]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 09:55:01 lgh CRON[2552380]: pam_unix(cron:session): session closed for user root
> Mar 07 10:05:01 lgh CRON[2563829]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 10:05:01 lgh CRON[2563831]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 10:05:01 lgh CRON[2563829]: pam_unix(cron:session): session closed for user root
> Mar 07 10:15:01 lgh CRON[2575225]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 10:15:01 lgh CRON[2575227]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 10:15:01 lgh CRON[2575225]: pam_unix(cron:session): session closed for user root
> Mar 07 10:17:01 lgh CRON[2577489]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 10:17:01 lgh CRON[2577491]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
> Mar 07 10:17:01 lgh CRON[2577489]: pam_unix(cron:session): session closed for user root
> Mar 07 10:25:01 lgh CRON[2586612]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 10:25:01 lgh CRON[2586614]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 10:25:01 lgh CRON[2586612]: pam_unix(cron:session): session closed for user root
> Mar 07 10:35:01 lgh CRON[2598010]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 10:35:01 lgh CRON[2598012]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 10:35:01 lgh CRON[2598010]: pam_unix(cron:session): session closed for user root
> Mar 07 10:45:01 lgh CRON[2609389]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
> Mar 07 10:45:01 lgh CRON[2609391]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
> Mar 07 10:45:01 lgh CRON[2609389]: pam_unix(cron:session): session closed for user root
> Mar 07 10:47:38 lgh kernel: BUG: unable to handle page fault for address: 0000000000080018
> Mar 07 10:47:38 lgh 
kernel: #PF: supervisor read access in kernel mode > Mar 07 10:47:38 lgh kernel: #PF: error_code(0x0000) - not-present page > Mar 07 10:47:38 lgh kernel: PGD 110c28067 P4D 110c28067 PUD 1f2316067 PMD 0 > Mar 07 10:47:38 lgh kernel: Oops: Oops: 0000 [#7] PREEMPT SMP NOPTI > Mar 07 10:47:38 lgh kernel: CPU: 0 UID: 0 PID: 2612371 Comm: runc Tainted: G > D 6.12.73+deb13-amd64 #1 Debian 6.12.73-1 > Mar 07 10:47:38 lgh kernel: Tainted: [D]=DIE > Mar 07 10:47:38 lgh kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, > 1996), BIOS 4.2025.05-2 11/13/2025 > Mar 07 10:47:38 lgh kernel: RIP: 0010:__d_lookup+0x58/0xd0 > Mar 07 10:47:38 lgh kernel: Code: c2 e8 4c f5 d0 ff 48 8b 03 48 89 c3 48 83 > e3 fe 48 83 f8 01 77 14 eb 39 66 2e 0f 1f 84 00 00 00 00 00 48 8b 1b 48 85 db > 74 27 <39> 6b 18 75 f3 4c 8d 63 78 4c 89 e7 e8 d7 eb 89 00 4> > Mar 07 10:47:38 lgh kernel: RSP: 0018:ffffd23c4c6779c8 EFLAGS: 00010216 > Mar 07 10:47:38 lgh kernel: RAX: 0000000000080000 RBX: 0000000000080000 RCX: > 61c8864680b583eb > Mar 07 10:47:38 lgh kernel: RDX: ffff8dc498f93080 RSI: ffffd23c4c677a30 RDI: > ffff8dc66ea93900 > Mar 07 10:47:38 lgh kernel: RBP: 00000000ae232ad0 R08: 0000000000000000 R09: > ffff8dc5d9df3080 > Mar 07 10:47:38 lgh kernel: R10: 0000000000000000 R11: 0000000000000000 R12: > ffffd23c4c677a30 > Mar 07 10:47:38 lgh kernel: R13: ffff8dc66ea93900 R14: ffffd23c4c677a30 R15: > ffffd23c4c677a30 > Mar 07 10:47:38 lgh kernel: FS: 00007f12e63be6c0(0000) > GS:ffff8dc6b7c00000(0000) knlGS:0000000000000000 > Mar 07 10:47:38 lgh kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > Mar 07 10:47:38 lgh kernel: CR2: 0000000000080018 CR3: 0000000159d88000 CR4: > 00000000000006f0 > Mar 07 10:47:38 lgh kernel: Call Trace: > Mar 07 10:47:38 lgh kernel: <TASK> > Mar 07 10:47:38 lgh kernel: ? 
__pfx_proc_fd_instantiate+0x10/0x10 > Mar 07 10:47:38 lgh kernel: d_hash_and_lookup+0x5a/0x80 > Mar 07 10:47:38 lgh kernel: proc_fill_cache+0x64/0x170 > Mar 07 10:47:38 lgh kernel: proc_readfd_common+0xca/0x210 > Mar 07 10:47:38 lgh kernel: ? __pfx_proc_fd_instantiate+0x10/0x10 > Mar 07 10:47:38 lgh kernel: iterate_dir+0x111/0x200 > Mar 07 10:47:38 lgh kernel: __x64_sys_getdents64+0x86/0x130 > Mar 07 10:47:38 lgh kernel: ? __pfx_filldir64+0x10/0x10 > Mar 07 10:47:38 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 10:47:38 lgh kernel: do_syscall_64+0x82/0x190 > Mar 07 10:47:38 lgh kernel: ? __x64_sys_fcntl+0x87/0xe0 > Mar 07 10:47:38 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 10:47:38 lgh kernel: ? syscall_exit_to_user_mode+0x37/0x1b0 > Mar 07 10:47:38 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 10:47:38 lgh kernel: ? futex_wake+0x187/0x1b0 > Mar 07 10:47:38 lgh kernel: ? _copy_to_user+0x36/0x50 > Mar 07 10:47:38 lgh kernel: ? do_statfs_native+0xaf/0xe0 > Mar 07 10:47:38 lgh kernel: ? __do_sys_fstatfs+0x5c/0x70 > Mar 07 10:47:38 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 10:47:38 lgh kernel: ? syscall_exit_to_user_mode+0x37/0x1b0 > Mar 07 10:47:38 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 10:47:38 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 10:47:38 lgh kernel: ? syscall_exit_to_user_mode+0x37/0x1b0 > Mar 07 10:47:38 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 10:47:38 lgh kernel: ? __handle_mm_fault+0x7c2/0xf70 > Mar 07 10:47:38 lgh kernel: ? __x64_sys_fcntl+0x87/0xe0 > Mar 07 10:47:38 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 10:47:38 lgh kernel: ? syscall_exit_to_user_mode+0x37/0x1b0 > Mar 07 10:47:38 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 10:47:38 lgh kernel: ? __count_memcg_events+0x53/0xf0 > Mar 07 10:47:38 lgh kernel: ? count_memcg_events.constprop.0+0x1a/0x30 > Mar 07 10:47:38 lgh kernel: ? 
handle_mm_fault+0x1bb/0x2c0 > Mar 07 10:47:38 lgh kernel: ? do_user_addr_fault+0x36c/0x620 > Mar 07 10:47:38 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 10:47:38 lgh kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e > Mar 07 10:47:38 lgh kernel: RIP: 0033:0x41036e > Mar 07 10:47:38 lgh kernel: Code: 24 28 44 8b 44 24 2c e9 70 ff ff ff cc cc > cc cc cc cc cc cc cc cc cc cc cc cc cc cc 49 89 f2 48 89 fa 48 89 ce 48 89 df > 0f 05 <48> 3d 01 f0 ff ff 76 15 48 f7 d8 48 89 c1 48 c7 c0 f> > Mar 07 10:47:38 lgh kernel: RSP: 002b:000000c000162798 EFLAGS: 00000206 > ORIG_RAX: 00000000000000d9 > Mar 07 10:47:38 lgh kernel: RAX: ffffffffffffffda RBX: 000000000000000b RCX: > 000000000041036e > Mar 07 10:47:38 lgh kernel: RDX: 0000000000002000 RSI: 000000c00023e000 RDI: > 000000000000000b > Mar 07 10:47:38 lgh kernel: RBP: 000000c0001627d8 R08: 0000000000000000 R09: > 0000000000000000 > Mar 07 10:47:38 lgh kernel: R10: 0000000000000000 R11: 0000000000000206 R12: > 000000c000162908 > Mar 07 10:47:38 lgh kernel: R13: 0000000000000040 R14: 000000c000002380 R15: > 000000c00023c000 > Mar 07 10:47:38 lgh kernel: </TASK> > Mar 07 10:47:38 lgh kernel: Modules linked in: iptable_filter iptable_nat > wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 > libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel udp_> > Mar 07 10:47:38 lgh kernel: scsi_transport_spi usbcore drm psmouse scsi_mod > virtio_net net_failover virtio_blk serio_raw i2c_piix4 scsi_common i2c_smbus > usb_common failover floppy > Mar 07 10:47:38 lgh kernel: CR2: 0000000000080018 > Mar 07 10:47:38 lgh kernel: ---[ end trace 0000000000000000 ]--- > Mar 07 10:47:39 lgh kernel: RIP: 0010:__d_lookup_rcu+0x51/0xe0 > Mar 07 10:47:39 lgh kernel: Code: 48 8d 04 c2 f6 07 02 0f 85 a0 00 00 00 48 > 8b 10 48 89 d0 48 83 e0 fe 48 83 fa 01 77 0d e9 80 00 00 00 48 8b 00 48 85 c0 > 74 78 <44> 8b 58 fc 48 39 78 10 75 ee 48 83 78 08 00 74 e7 4> > Mar 07 10:47:39 lgh kernel: RSP: 
0018:ffffd23c4af87be0 EFLAGS: 00010216 > Mar 07 10:47:39 lgh kernel: RAX: 0000000000400000 RBX: 00000007ab2228b5 RCX: > 0000000000000000 > Mar 07 10:47:39 lgh kernel: RDX: 0000000000400000 RSI: ffffd23c4af87cd0 RDI: > ffff8dc5ecb7b480 > Mar 07 10:47:39 lgh kernel: RBP: ffffd23c4af87d04 R08: 0000000000000051 R09: > ff9a9196939b929c > Mar 07 10:47:39 lgh kernel: R10: ffff8dc5ecb7b480 R11: 0000000000000002 R12: > ffff8dc5829ee02b > Mar 07 10:47:39 lgh kernel: R13: ffffd23c4af87cc0 R14: 0000000000000000 R15: > ffffd23c4af87dfc > Mar 07 10:47:39 lgh kernel: FS: 00007f12e63be6c0(0000) > GS:ffff8dc6b7c00000(0000) knlGS:0000000000000000 > Mar 07 10:47:39 lgh kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > Mar 07 10:47:39 lgh kernel: CR2: 0000000000080018 CR3: 0000000159d88000 CR4: > 00000000000006f0 > Mar 07 10:48:08 lgh dockerd[847]: time="2026-03-07T10:48:08.627132591+08:00" > level=warning msg="Health check for container > 7159239adac9db43b653046fd10ad053bf151550baa3d4e916ed0d56fda9d966 error: timed > out starting > > Mar 07 10:48:08 lgh dockerd[847]: time="2026-03-07T10:48:08.627694810+08:00" > level=error msg="stream copy error: reading from a closed fifo" > Mar 07 10:48:08 lgh dockerd[847]: time="2026-03-07T10:48:08.627813150+08:00" > level=error msg="stream copy error: reading from a closed fifo" > Mar 07 10:55:01 lgh CRON[2620812]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 10:55:01 lgh CRON[2620814]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 10:55:01 lgh CRON[2620812]: pam_unix(cron:session): session closed for > user root > Mar 07 11:05:01 lgh CRON[2632257]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:05:01 lgh CRON[2632259]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 11:05:01 lgh CRON[2632257]: pam_unix(cron:session): session closed for > user root > Mar 07 11:13:16 lgh kernel: BUG: unable to handle 
page fault for address: > 0000000000080018 > Mar 07 11:13:16 lgh kernel: #PF: supervisor read access in kernel mode > Mar 07 11:13:16 lgh kernel: #PF: error_code(0x0000) - not-present page > Mar 07 11:13:16 lgh kernel: PGD 12a3c0067 P4D 12a3c0067 PUD 1b16f6067 PMD 0 > Mar 07 11:13:16 lgh kernel: Oops: Oops: 0000 [#8] PREEMPT SMP NOPTI > Mar 07 11:13:16 lgh kernel: CPU: 2 UID: 0 PID: 2641638 Comm: runc Tainted: G > D 6.12.73+deb13-amd64 #1 Debian 6.12.73-1 > Mar 07 11:13:16 lgh kernel: Tainted: [D]=DIE > Mar 07 11:13:16 lgh kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, > 1996), BIOS 4.2025.05-2 11/13/2025 > Mar 07 11:13:16 lgh kernel: RIP: 0010:__d_lookup+0x58/0xd0 > Mar 07 11:13:16 lgh kernel: Code: c2 e8 4c f5 d0 ff 48 8b 03 48 89 c3 48 83 > e3 fe 48 83 f8 01 77 14 eb 39 66 2e 0f 1f 84 00 00 00 00 00 48 8b 1b 48 85 db > 74 27 <39> 6b 18 75 f3 4c 8d 63 78 4c 89 e7 e8 d7 eb 89 00 4> > Mar 07 11:13:16 lgh kernel: RSP: 0018:ffffd23c49e77c78 EFLAGS: 00010216 > Mar 07 11:13:16 lgh kernel: RAX: 0000000000080000 RBX: 0000000000080000 RCX: > 61c8864680b583eb > Mar 07 11:13:16 lgh kernel: RDX: ffff8dc5d9df6100 RSI: ffffd23c49e77ce0 RDI: > ffff8dc58b198780 > Mar 07 11:13:16 lgh kernel: RBP: 00000000ae232714 R08: 0000000000000000 R09: > ffff8dc487c53080 > Mar 07 11:13:16 lgh kernel: R10: 0000000000000001 R11: 0000000000000000 R12: > ffffd23c49e77ce0 > Mar 07 11:13:16 lgh kernel: R13: ffff8dc58b198780 R14: ffffd23c49e77ce0 R15: > ffffd23c49e77ce0 > Mar 07 11:13:16 lgh kernel: FS: 00007fc609e5b6c0(0000) > GS:ffff8dc6b7d00000(0000) knlGS:0000000000000000 > Mar 07 11:13:16 lgh kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > Mar 07 11:13:16 lgh kernel: CR2: 0000000000080018 CR3: 0000000167fac000 CR4: > 00000000000006f0 > Mar 07 11:13:16 lgh kernel: Call Trace: > Mar 07 11:13:16 lgh kernel: <TASK> > Mar 07 11:13:16 lgh kernel: ? 
__pfx_proc_fd_instantiate+0x10/0x10 > Mar 07 11:13:16 lgh kernel: d_hash_and_lookup+0x5a/0x80 > Mar 07 11:13:16 lgh kernel: proc_fill_cache+0x64/0x170 > Mar 07 11:13:16 lgh kernel: proc_readfd_common+0xca/0x210 > Mar 07 11:13:16 lgh kernel: ? __pfx_proc_fd_instantiate+0x10/0x10 > Mar 07 11:13:16 lgh kernel: iterate_dir+0x111/0x200 > Mar 07 11:13:16 lgh kernel: __x64_sys_getdents64+0x86/0x130 > Mar 07 11:13:16 lgh kernel: ? __pfx_filldir64+0x10/0x10 > Mar 07 11:13:16 lgh kernel: do_syscall_64+0x82/0x190 > Mar 07 11:13:16 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 11:13:16 lgh kernel: ? syscall_exit_to_user_mode+0x37/0x1b0 > Mar 07 11:13:16 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 11:13:16 lgh kernel: ? do_syscall_64+0x8e/0x190 > Mar 07 11:13:16 lgh kernel: ? do_user_addr_fault+0x36c/0x620 > Mar 07 11:13:16 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 11:13:16 lgh kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e > Mar 07 11:13:16 lgh kernel: RIP: 0033:0x41036e > Mar 07 11:13:16 lgh kernel: Code: 24 28 44 8b 44 24 2c e9 70 ff ff ff cc cc > cc cc cc cc cc cc cc cc cc cc cc cc cc cc 49 89 f2 48 89 fa 48 89 ce 48 89 df > 0f 05 <48> 3d 01 f0 ff ff 76 15 48 f7 d8 48 89 c1 48 c7 c0 f> > Mar 07 11:13:16 lgh kernel: RSP: 002b:000000c0001ac798 EFLAGS: 00000206 > ORIG_RAX: 00000000000000d9 > Mar 07 11:13:16 lgh kernel: RAX: ffffffffffffffda RBX: 000000000000000c RCX: > 000000000041036e > Mar 07 11:13:16 lgh kernel: RDX: 0000000000002000 RSI: 000000c0000d6000 RDI: > 000000000000000c > Mar 07 11:13:16 lgh kernel: RBP: 000000c0001ac7d8 R08: 0000000000000000 R09: > 0000000000000000 > Mar 07 11:13:16 lgh kernel: R10: 0000000000000000 R11: 0000000000000206 R12: > 000000c0001ac908 > Mar 07 11:13:16 lgh kernel: R13: 0000000000000040 R14: 000000c000002380 R15: > 000000c0000d4000 > Mar 07 11:13:16 lgh kernel: </TASK> > Mar 07 11:13:16 lgh kernel: Modules linked in: iptable_filter iptable_nat > wireguard libchacha20poly1305 
chacha_x86_64 poly1305_x86_64 curve25519_x86_64 > libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel udp_> > Mar 07 11:13:16 lgh kernel: scsi_transport_spi usbcore drm psmouse scsi_mod > virtio_net net_failover virtio_blk serio_raw i2c_piix4 scsi_common i2c_smbus > usb_common failover floppy > Mar 07 11:13:16 lgh kernel: CR2: 0000000000080018 > Mar 07 11:13:16 lgh kernel: ---[ end trace 0000000000000000 ]--- > Mar 07 11:13:17 lgh kernel: RIP: 0010:__d_lookup_rcu+0x51/0xe0 > Mar 07 11:13:17 lgh kernel: Code: 48 8d 04 c2 f6 07 02 0f 85 a0 00 00 00 48 > 8b 10 48 89 d0 48 83 e0 fe 48 83 fa 01 77 0d e9 80 00 00 00 48 8b 00 48 85 c0 > 74 78 <44> 8b 58 fc 48 39 78 10 75 ee 48 83 78 08 00 74 e7 4> > Mar 07 11:13:17 lgh kernel: RSP: 0018:ffffd23c4af87be0 EFLAGS: 00010216 > Mar 07 11:13:17 lgh kernel: RAX: 0000000000400000 RBX: 00000007ab2228b5 RCX: > 0000000000000000 > Mar 07 11:13:17 lgh kernel: RDX: 0000000000400000 RSI: ffffd23c4af87cd0 RDI: > ffff8dc5ecb7b480 > Mar 07 11:13:17 lgh kernel: RBP: ffffd23c4af87d04 R08: 0000000000000051 R09: > ff9a9196939b929c > Mar 07 11:13:17 lgh kernel: R10: ffff8dc5ecb7b480 R11: 0000000000000002 R12: > ffff8dc5829ee02b > Mar 07 11:13:17 lgh kernel: R13: ffffd23c4af87cc0 R14: 0000000000000000 R15: > ffffd23c4af87dfc > Mar 07 11:13:17 lgh kernel: FS: 00007fc609e5b6c0(0000) > GS:ffff8dc6b7d00000(0000) knlGS:0000000000000000 > Mar 07 11:13:17 lgh kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > Mar 07 11:13:17 lgh kernel: CR2: 0000000000080018 CR3: 0000000167fac000 CR4: > 00000000000006f0 > Mar 07 11:13:46 lgh dockerd[847]: time="2026-03-07T11:13:46.691292310+08:00" > level=warning msg="Health check for container > d09e9b18e546576c74b4a64e3bf150cd29e3027bfded77b81aee015352986af8 error: timed > out starting > > Mar 07 11:13:46 lgh dockerd[847]: time="2026-03-07T11:13:46.691414690+08:00" > level=error msg="stream copy error: reading from a closed fifo" > Mar 07 11:13:46 lgh dockerd[847]: 
time="2026-03-07T11:13:46.691424870+08:00" > level=error msg="stream copy error: reading from a closed fifo" > Mar 07 11:15:02 lgh CRON[2643616]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:15:02 lgh CRON[2643619]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 11:15:02 lgh CRON[2643616]: pam_unix(cron:session): session closed for > user root > Mar 07 11:17:02 lgh CRON[2645864]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:17:02 lgh CRON[2645867]: (root) CMD (cd / && run-parts --report > /etc/cron.hourly) > Mar 07 11:17:02 lgh CRON[2645864]: pam_unix(cron:session): session closed for > user root > Mar 07 11:25:01 lgh CRON[2655002]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:25:01 lgh CRON[2655004]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 11:25:01 lgh CRON[2655002]: pam_unix(cron:session): session closed for > user root > Mar 07 11:35:01 lgh CRON[2666376]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:35:01 lgh CRON[2666378]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 11:35:01 lgh CRON[2666376]: pam_unix(cron:session): session closed for > user root > Mar 07 11:45:01 lgh CRON[2677748]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:45:01 lgh CRON[2677750]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 11:45:01 lgh CRON[2677748]: pam_unix(cron:session): session closed for > user root > Mar 07 11:55:01 lgh CRON[2689196]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 11:55:01 lgh CRON[2689198]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 11:55:01 lgh CRON[2689196]: pam_unix(cron:session): session closed for > user root > Mar 07 12:00:01 lgh CRON[2694922]: 
pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:00:01 lgh CRON[2694925]: (root) CMD (test -x /usr/bin/certbot -a \! > -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q > renew --no-random-sleep-on-renew) > Mar 07 12:00:01 lgh CRON[2694922]: pam_unix(cron:session): session closed for > user root > Mar 07 12:05:01 lgh CRON[2700682]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:05:01 lgh CRON[2700685]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 12:05:01 lgh CRON[2700682]: pam_unix(cron:session): session closed for > user root > Mar 07 12:15:01 lgh CRON[2712070]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:15:01 lgh CRON[2712072]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 12:15:01 lgh CRON[2712070]: pam_unix(cron:session): session closed for > user root > Mar 07 12:17:01 lgh CRON[2714341]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:17:01 lgh CRON[2714343]: (root) CMD (cd / && run-parts --report > /etc/cron.hourly) > Mar 07 12:17:01 lgh CRON[2714341]: pam_unix(cron:session): session closed for > user root > Mar 07 12:25:01 lgh CRON[2723499]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:25:01 lgh CRON[2723501]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 12:25:01 lgh CRON[2723499]: pam_unix(cron:session): session closed for > user root > Mar 07 12:35:01 lgh CRON[2734912]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:35:01 lgh CRON[2734914]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 12:35:01 lgh CRON[2734912]: pam_unix(cron:session): session closed for > user root > Mar 07 12:37:57 lgh kernel: BUG: unable to handle page fault for address: > 000000000003fffc > 
Mar 07 12:37:57 lgh kernel: #PF: supervisor read access in kernel mode > Mar 07 12:37:57 lgh kernel: #PF: error_code(0x0000) - not-present page > Mar 07 12:37:57 lgh kernel: PGD 0 P4D 0 > Mar 07 12:37:57 lgh kernel: Oops: Oops: 0000 [#9] PREEMPT SMP NOPTI > Mar 07 12:37:57 lgh kernel: CPU: 1 UID: 0 PID: 2738244 Comm: python Tainted: > G D 6.12.73+deb13-amd64 #1 Debian 6.12.73-1 > Mar 07 12:37:57 lgh kernel: Tainted: [D]=DIE > Mar 07 12:37:57 lgh kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, > 1996), BIOS 4.2025.05-2 11/13/2025 > Mar 07 12:37:57 lgh kernel: RIP: 0010:__d_lookup_rcu+0x51/0xe0 > Mar 07 12:37:57 lgh kernel: Code: 48 8d 04 c2 f6 07 02 0f 85 a0 00 00 00 48 > 8b 10 48 89 d0 48 83 e0 fe 48 83 fa 01 77 0d e9 80 00 00 00 48 8b 00 48 85 c0 > 74 78 <44> 8b 58 fc 48 39 78 10 75 ee 48 83 78 08 00 74 e7 4> > Mar 07 12:37:57 lgh kernel: RSP: 0018:ffffd23c46297c70 EFLAGS: 00010216 > Mar 07 12:37:57 lgh kernel: RAX: 0000000000040000 RBX: 000000053d862fd6 RCX: > 000000003d862fd6 > Mar 07 12:37:57 lgh kernel: RDX: 0000000000040000 RSI: ffffd23c46297d70 RDI: > ffff8dc5807df600 > Mar 07 12:37:57 lgh kernel: RBP: ffffd23c46297da4 R08: 0000000000000000 R09: > ffffffcaccceccc7 > Mar 07 12:37:57 lgh kernel: R10: ffff8dc5807df600 R11: 0000000000000000 R12: > ffff8dc50943ca20 > Mar 07 12:37:57 lgh kernel: R13: fefefefefefefeff R14: ffff8dc580ded02b R15: > d0d0d0d0d0d0d0d0 > Mar 07 12:37:57 lgh kernel: FS: 0000000000000000(0000) > GS:ffff8dc6b7c80000(0000) knlGS:0000000000000000 > Mar 07 12:37:57 lgh kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > Mar 07 12:37:57 lgh kernel: CR2: 000000000003fffc CR3: 0000000105748000 CR4: > 00000000000006f0 > Mar 07 12:37:57 lgh kernel: Call Trace: > Mar 07 12:37:57 lgh kernel: <TASK> > Mar 07 12:37:57 lgh kernel: lookup_fast+0x26/0xf0 > Mar 07 12:37:57 lgh kernel: walk_component+0x1f/0x150 > Mar 07 12:37:57 lgh kernel: link_path_walk.part.0.constprop.0+0x1c8/0x390 > Mar 07 12:37:57 lgh kernel: path_lookupat+0x3e/0x1a0 > 
Mar 07 12:37:57 lgh kernel: filename_lookup+0xde/0x1d0 > Mar 07 12:37:57 lgh kernel: ? __pfx_kfree_link+0x10/0x10 > Mar 07 12:37:57 lgh kernel: do_readlinkat+0x7e/0x180 > Mar 07 12:37:57 lgh kernel: __x64_sys_readlinkat+0x1c/0x30 > Mar 07 12:37:57 lgh kernel: do_syscall_64+0x82/0x190 > Mar 07 12:37:57 lgh kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x16/0xa0 > Mar 07 12:37:57 lgh kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e > Mar 07 12:37:58 lgh kernel: RIP: 0033:0x7f70a76e7bc7 > Mar 07 12:37:58 lgh kernel: Code: 00 00 00 41 54 41 ba 00 10 00 00 bf 9c ff > ff ff 48 8d 35 9c e4 01 00 55 b8 0b 01 00 00 53 48 81 ec 00 10 00 00 48 89 e2 > 0f 05 <85> c0 7e 7d 0f b6 14 24 80 fa 5b 74 74 80 fa 2f 0f 8> > Mar 07 12:37:58 lgh kernel: RSP: 002b:00007ffce61b1f10 EFLAGS: 00000202 > ORIG_RAX: 000000000000010b > Mar 07 12:37:58 lgh kernel: RAX: ffffffffffffffda RBX: 00007f70a76d3140 RCX: > 00007f70a76e7bc7 > Mar 07 12:37:58 lgh kernel: RDX: 00007ffce61b1f10 RSI: 00007f70a7706050 RDI: > 00000000ffffff9c > Mar 07 12:37:58 lgh kernel: RBP: 0000000000000001 R08: 00007f70a76d3140 R09: > 00007f70a7712310 > Mar 07 12:37:58 lgh kernel: R10: 0000000000001000 R11: 0000000000000202 R12: > 00007f70a7712310 > Mar 07 12:37:58 lgh kernel: R13: 000000000000000e R14: 00007f70a76d3140 R15: > 00007f70a76d3150 > Mar 07 12:37:58 lgh kernel: </TASK> > Mar 07 12:37:58 lgh kernel: Modules linked in: iptable_filter iptable_nat > wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 > libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel udp_> > Mar 07 12:37:58 lgh kernel: scsi_transport_spi usbcore drm psmouse scsi_mod > virtio_net net_failover virtio_blk serio_raw i2c_piix4 scsi_common i2c_smbus > usb_common failover floppy > Mar 07 12:37:58 lgh kernel: CR2: 000000000003fffc > Mar 07 12:37:58 lgh kernel: ---[ end trace 0000000000000000 ]--- > Mar 07 12:37:58 lgh kernel: RIP: 0010:__d_lookup_rcu+0x51/0xe0 > Mar 07 12:37:58 lgh kernel: Code: 48 8d 04 c2 f6 07 02 0f 85 
a0 00 00 00 48 > 8b 10 48 89 d0 48 83 e0 fe 48 83 fa 01 77 0d e9 80 00 00 00 48 8b 00 48 85 c0 > 74 78 <44> 8b 58 fc 48 39 78 10 75 ee 48 83 78 08 00 74 e7 4> > Mar 07 12:37:58 lgh kernel: RSP: 0018:ffffd23c4af87be0 EFLAGS: 00010216 > Mar 07 12:37:58 lgh kernel: RAX: 0000000000400000 RBX: 00000007ab2228b5 RCX: > 0000000000000000 > Mar 07 12:37:58 lgh kernel: RDX: 0000000000400000 RSI: ffffd23c4af87cd0 RDI: > ffff8dc5ecb7b480 > Mar 07 12:37:58 lgh kernel: RBP: ffffd23c4af87d04 R08: 0000000000000051 R09: > ff9a9196939b929c > Mar 07 12:37:58 lgh kernel: R10: ffff8dc5ecb7b480 R11: 0000000000000002 R12: > ffff8dc5829ee02b > Mar 07 12:37:58 lgh kernel: R13: ffffd23c4af87cc0 R14: 0000000000000000 R15: > ffffd23c4af87dfc > Mar 07 12:37:58 lgh kernel: FS: 0000000000000000(0000) > GS:ffff8dc6b7c80000(0000) knlGS:0000000000000000 > Mar 07 12:37:58 lgh kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > Mar 07 12:37:58 lgh kernel: CR2: 000000000003fffc CR3: 0000000105748000 CR4: > 00000000000006f0 > Mar 07 12:45:01 lgh CRON[2746306]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:45:01 lgh CRON[2746308]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 12:45:01 lgh CRON[2746306]: pam_unix(cron:session): session closed for > user root > Mar 07 12:55:01 lgh CRON[2757726]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 12:55:01 lgh CRON[2757728]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 12:55:01 lgh CRON[2757726]: pam_unix(cron:session): session closed for > user root > Mar 07 13:05:01 lgh CRON[2769148]: pam_unix(cron:session): session opened for > user root(uid=0) by root(uid=0) > Mar 07 13:05:01 lgh CRON[2769150]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 13:05:01 lgh CRON[2769148]: pam_unix(cron:session): session closed for > user root > Mar 07 13:15:01 lgh CRON[2780598]: pam_unix(cron:session): 
session opened for > user root(uid=0) by root(uid=0) > Mar 07 13:15:01 lgh CRON[2780600]: (root) CMD (command -v debian-sa1 > > /dev/null && debian-sa1 1 1) > Mar 07 13:15:01 lgh CRON[2780598]: pam_unix(cron:session): session closed for > user root > Mar 07 13:16:57 lgh kernel: list_del corruption. next->prev should be > fffff7dd020aa648, but was fffff7dd0202a648. (next=fffff7dd01c14b08) > Mar 07 13:16:57 lgh kernel: ------------[ cut here ]------------ > Mar 07 13:16:57 lgh kernel: kernel BUG at lib/list_debug.c:65! > Mar 07 13:16:57 lgh kernel: Oops: invalid opcode: 0000 [#10] PREEMPT SMP NOPTI

Thanks for your report! A couple of initial questions:

Is this a regression from a previously running trixie kernel? If so, which
was the latest version that worked? If it is a regression, then even though
you do not have a minimal reproduction case yet, might you be able to bisect
between the last known-good version and 6.12.73 to identify the breaking
commit?

Is the problem also present with 6.19.6-1 as uploaded to unstable?

Regards,
Salvatore

