2010/9/13 Claudio Jeker <[email protected]>:
> When running with that many sockets, a prominent warning about increasing
> kern.maxclusters shows up. This is not just dmesg spam; running out
> of mbuf clusters will stop your network stack.
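(For reference, the usual way to check for this on OpenBSD is something like
the sketch below; the exact wording of the warning varies by release.)

  # mbuf and cluster statistics, including denied/delayed allocations
  netstat -m
  # look for the kern.maxclusters warning in the kernel message buffer
  dmesg | grep -i maxclusters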
I haven't seen any such message, either on the console or in the logs.
I tried setting kern.maxclusters to 100000, with no success; the same "freeze" occurred.
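(The limit was raised roughly like this; only the 100000 value is taken from
the report above, the exact invocation is a sketch. The /etc/sysctl.conf line
is only needed to keep the setting across reboots.)

  # raise the cluster limit on the running kernel
  sysctl kern.maxclusters=100000
  # keep the setting across reboots
  echo 'kern.maxclusters=100000' >> /etc/sysctl.conf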
Here is the ddb output.
ddb> trace
Debugger(0,3f8,0,d0a10c40,0) at Debugger+0x4
comintr(d1571000) at comintr+0x287
Xrecurse_legacy4() at Xrecurse_legacy4+0xb3
--- interrupt ---
m_cldrop(0,1,d1526054,800,d03e1aeb) at m_cldrop
re_newbuf(d1526000,10,d9a237ac,d02b30cc,d1526000) at re_newbuf+0x35
re_rx_list_fill(d1526000,20,60,58,10) at re_rx_list_fill+0x21
re_rxeof(d1526000,d9799800,3e,10,10) at re_rxeof+0x37c
re_intr(d1526000) at re_intr+0x12a
Xrecurse_legacy11() at Xrecurse_legacy11+0xb7
--- interrupt ---
m_gethdr(1,2,0,d9a23904,2) at m_gethdr+0x78
tcp_output(d9b8ac88,d9b89550,14,d9a23a70,1) at tcp_output+0x754
tcp_input(d55c4e00,14,0,0,6) at tcp_input+0x2711
ipv4_input(d55c4e00,0,d9a23b34,d0202089,d5b40058) at ipv4_input+0x42a
ipintr(d5b40058,d9a20010,d9a20010,d0510010,3) at ipintr+0x49
Bad frame pointer: 0xd9a23b34
ddb> show registers
ds 0xd9a20010 end+0x8f5802c
es 0x10
fs 0xd9a20058 end+0x8f58074
gs 0xd1310010 end+0x84802c
edi 0xd150c960 end+0xa4497c
esi 0xd15750ac end+0xaad0c8
ebp 0xd9a23640 end+0x8f5b65c
ebx 0xf9
edx 0x3f8
ecx 0xd1571000 end+0xaa901c
eax 0x1
eip 0xd05670b4 Debugger+0x4
cs 0x50
eflags 0x202
esp 0xd9a23640 end+0x8f5b65c
ss 0xd9a20010 end+0x8f5802c
Debugger+0x4: popl %ebp
ddb> show uvmexp
Current UVM status:
pagesize=4096 (0x1000), pagemask=0xfff, pageshift=12
126367 VM pages: 6421 active, 1015 inactive, 0 wired, 110020 free
min 10% (25) anon, 10% (25) vnode, 5% (12) vtext
pages 0 anon, 0 vnode, 0 vtext
freemin=4212, free-target=5616, inactive-target=0, wired-max=42122
faults=48741, traps=55367, intrs=230694, ctxswitch=32618 fpuswitch=183
softint=67847, syscalls=372383, swapins=0, swapouts=0, kmapent=19
fault counts:
noram=0, noanon=0, pgwait=0, pgrele=0
ok relocks(total)=2251(2251), anget(retries)=27555(0), amapcopy=12411
neighbor anon/obj pg=1481/24619, gets(lock/unlock)=10220/2251
cases: anon=23933, anoncow=3622, obj=9207, prcopy=1013, przero=10965
daemon and swap counts:
woke=0, revs=0, scans=0, obscans=0, anscans=0
busy=0, freed=0, reactivate=0, deactivate=0
pageouts=0, pending=0, nswget=0
nswapdev=1, nanon=0, nanonneeded=0 nfreeanon=0
swpages=66267, swpginuse=0, swpgonly=0 paging=0
kernel pointers:
objs(kern)=0xd09e7280
ddb> show all pools
Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
inpcbpl 228 10374 0 9343 61 0 61 61 0 8 0
plimitpl 148 17 0 5 1 0 1 1 0 8 0
synpl 192 3 0 3 1 0 1 1 0 8 1
tcpqepl 16 288 0 69 1 0 1 1 0 13 0
tcpcbpl 400 10049 0 9027 108 4 104 104 0 8 0
rtentpl 116 35 0 0 2 0 2 2 0 8 0
pfosfp 28 814 0 407 3 0 3 3 0 8 0
pfosfpen 108 1392 0 696 30 11 19 19 0 8 0
pfstateitempl 12 10262 0 2024 25 0 25 25 0 8 0
pfstatekeypl 72 10262 0 2024 148 0 148 148 0 8 0
pfstatepl 212 10262 0 1870 459 0 459 459 0 527 16
pfrulepl 1148 13 0 11 5 0 5 5 0 8 3
dirhash 1024 29 0 0 8 0 8 8 0 128 0
dino1pl 128 1729 0 9 56 0 56 56 0 8 0
ffsino 184 1729 0 9 79 0 79 79 0 8 0
nchpl 88 2838 0 29 62 0 62 62 0 8 0
vnodes 156 1740 0 0 70 0 70 70 0 8 0
namei 1024 6616 0 6616 3 0 3 3 0 8 3
wdcspl 96 1440 0 1440 1 0 1 1 0 8 1
sigapl 324 236 0 207 3 0 3 3 0 8 0
knotepl 64 20042 0 18042 32 0 32 32 0 8 0
kqueuepl 192 5 0 4 1 0 1 1 0 8 0
fdescpl 300 237 0 207 3 0 3 3 0 8 0
filepl 88 12839 0 11728 25 0 25 25 0 8 0
lockfpl 56 4 0 2 1 0 1 1 0 8 0
pcredpl 20 250 0 207 1 0 1 1 0 8 0
sessionpl 48 23 0 3 1 0 1 1 0 8 0
pgrppl 24 53 0 30 1 0 1 1 0 8 0
ucredpl 80 43 0 31 1 0 1 1 0 8 0
zombiepl 72 207 0 207 1 0 1 1 0 8 1
processpl 64 250 0 207 1 0 1 1 0 8 0
procpl 316 250 0 207 4 0 4 4 0 8 0
sockpl 212 10450 0 9397 56 0 56 56 0 8 0
mcl2k 2048 212376 26652652 210342 1017 0 1017 1017 4 50000 0
mbpl 256 27111063 0 27108014 192 0 192 192 1 6250 1
bufpl 172 1418 0 300 49 0 49 49 0 8 0
anonpl 12 15600 0 10413 16 0 16 16 0 24 0
amappl 44 9780 0 7995 20 0 20 20 0 45 0
aobjpl 44 1 0 0 1 0 1 1 0 8 0
vmmpekpl 88 596 0 545 2 0 2 2 0 8 0
vmmpepl 88 18307 0 15930 52 0 52 52 0 179 0
vmsppl 180 236 0 207 2 0 2 2 0 8 0
pmappl 72 236 0 207 1 0 1 1 0 8 0
extentpl 20 258 0 207 1 0 1 1 0 8 0
phpool 48 1391 0 4 17 0 17 17 0 8 0
BTW, when I tried 'boot sync' I got these lines scrolling continuously:
...
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_unmap_remove: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_unmap_remove: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_unmap_remove: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: pool_get: want -1 have 2
splassert: uvm_unmap_remove: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
splassert: uvm_mapent_free: want -1 have 2
...
and then, after some time:
...
splassert: sched_idle: want -1 have 2
splassert: sched_idle: want -1 have 2
...
--
antonvm