On Thu, Feb 12, 2026 at 4:36 PM Claudio Jeker <[email protected]> wrote:
> > Another ddb session, this time with show all pools included.
> >
> > ddb{1}> show panic
> > panic: malloc: out of space in kmem_map
> > Stopped at db_enter+0x14: popq %rbp
> > TID PID UID PRFLAGS PFLAGS CPU COMMAND
> > 256720 41745 76 0x1000010 0 0 p0f3
> > *475540 93944 0 0x14000 0x200 1 systq
> >
> > ddb{1}> tr
> > db_enter() at db_enter+0x14
> > panic(ffffffff8257309f) at panic+0xd5
> > malloc(2a39,2,9) at malloc+0x823
> > vmt_nicinfo_task(ffff8000000f8800) at vmt_nicinfo_task+0xec
> > taskq_thread(ffffffff82a35098) at taskq_thread+0x129
> > end trace frame: 0x0, count: -5
>
> What is this box doing and is there actually enough resources for that
> task assigned to the VM?
It runs about 10 network daemons serving TCP clients, each with roughly
64-128 open sockets at any given time. Not much traffic, but around 4k
pf states.
The resources:
hw.model=Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
hw.vendor=VMware, Inc.
hw.physmem=4277600256
hw.ncpuonline=2
> > ddb{1}> show all pools
> > Name     Size   Requests   Fail  Releases  Pgreq  Pgrel  Npage  Hiwat  Minpg  Maxpg  Idle
> > tcpcb     736    4309640    103   4091353  20236    382  19854  19854      0      8     0
> > inpcb     328    5892193      0   5673850  18519    314  18205  18205      0      8     0
> > sockpl    552    7621733      0   7403339  15972    364  15608  15608      0      8     0
> > mbufpl    256     286232      0         0  13640      5  13635  13635      0      8     0
>
> If I read this correctly the box has 20k+ TCP sockets open. Which results
> in high resource usage of tcpcb, inpcb, sockpl and for the TCP template
> mbufs.
What I see now, using systat pool, sorted by Npage:
NAME     SIZE  REQUESTS  FAIL  INUSE  PGREQ  PGREL  NPAGE  HIWAT  MINPG  MAXPG
tcpcb     736   1670128     0  40124   4548    308   4240   4297      0      8
inpcb     328   2438004     0  40182   4150    247   3903   3944      0      8
sockpl    552   3299804     0  40236   3621    255   3366   3385      0      8
mbufpl    256  49530665     0  39963   2949      5   2944   2944      0      8
> At least the tcpcb and sockpl use the kmem_map.
> Which is (19854 + 15608) * 4k or 141848K. Your kmem_map has a limit of 186616K
> so there is just not enough space. You may need to increase memory or you
> can also tune NKMEMPAGES via config(8).
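Your numbers check out; a quick sketch of the same arithmetic (page size
and the kmem_map limit are taken from this thread, not queried from the
kernel):

```python
# kmem_map pressure from the two pools that allocate out of it, using
# the Npage values from the panic-time "show all pools" output.
PAGE_KB = 4             # amd64 page size in KB
KMEM_LIMIT_KB = 186616  # kmem_map limit quoted above

npages = 19854 + 15608  # tcpcb + sockpl pages at panic time
used_kb = npages * PAGE_KB

print(f"{used_kb}K of {KMEM_LIMIT_KB}K "
      f"({100 * used_kb / KMEM_LIMIT_KB:.0f}%) used by two pools alone")
```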
I see. It is odd, though, that we have similar machines (both VMs and
bare metal, with similar resources) and the only one that panics is this
one, running under VMware.
>
> > pfstate   384  16598777  5933960  16587284  239196  237883  1313  10001  0  8  0
>
> There seems to be some strange bursts on the pfstate pool as well.
>
> --
> :wq Claudio