"show malloc" output was there:

https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AADreVye4gK770lEL3gxO6Tca/Screen%20Shot%202015-04-02%20at%2011.38.04.png?dl=0
https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AAB9Rm1bYABpSWh6Wt6YQLF5a/Screen%20Shot%202015-04-02%20at%2011.38.10.png?dl=0
https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AABwQjNYtKxG_dcEjehFTKo6a/Screen%20Shot%202015-04-02%20at%2011.38.16.png?dl=0
Now I've restarted the server and started haproxy again. Maybe haproxy
caused this panic?

Current stats, systat screenshots:

https://www.dropbox.com/s/roccdsudkgjv99c/Screenshot%202015-04-04%2018.27.20.png?dl=0
https://www.dropbox.com/s/93bhu3h183ifcsu/Screenshot%202015-04-04%2018.28.46.png?dl=0
https://www.dropbox.com/s/7tuexvt6wryzyqz/Screenshot%202015-04-04%2018.29.19.png?dl=0
https://www.dropbox.com/s/y4kyeu6comx3dku/Screenshot%202015-04-04%2018.31.09.png?dl=0
https://www.dropbox.com/s/9fmmnmogcu9mmte/Screenshot%202015-04-04%2018.31.36.png?dl=0
https://www.dropbox.com/s/37h33vw5s6dy4nw/Screenshot%202015-04-04%2018.38.13.png?dl=0
https://www.dropbox.com/s/pp271fborqnnyye/Screenshot%202015-04-04%2018.38.31.png?dl=0
https://www.dropbox.com/s/6duy0lkh2mw505s/Screenshot%202015-04-04%2018.38.53.png?dl=0
https://www.dropbox.com/s/dqv8diai3peaph6/Screenshot%202015-04-04%2018.39.29.png?dl=0

-bash-4.3# vmstat -m
Memory statistics by bucket size
Size In Use Free Requests HighWater Couldfree
16 120429 19603 7925875 1280 500
32 555280 10480 4798443 640 0
64 80629 53131 13455781 320 13862
128 151969 8287 5958086 160 3752
256 36115 52925 56325614 80 973910
512 357 219 405133 40 50212
1024 321 23 237572 20 3575
2048 47 21 79446 10 36155
4096 5156 13 156609 5 118649
8192 25 11 15412 5 13639
16384 15 0 47490 5 0
32768 13 0 57 5 0
65536 3 0 750716 5 0
131072 0 0 2 5 0
262144 1 0 15 5 0
524288 5 0 7 5 0

Memory usage type by bucket size
Size Type(s)
16 devbuf, pcb, routetbl, dirhash, ACPI, file desc, exec, pfkey data, xform_data, UVM amap, UVM aobj, USB, USB device, temp
32 devbuf, pcb, routetbl, ifaddr, vnodes, sem, dirhash, ACPI, file desc, in_multi, exec, pfkey data, UVM amap, USB, temp
64 devbuf, pcb, routetbl, ifaddr, sysctl, vnodes, dirhash, ACPI, proc, VFS cluster, in_multi, ether_multi, VM swap, UVM amap, USB, USB device, NDP, temp
128 devbuf, pcb, routetbl, ifaddr, UFS mount, sem, dirhash, ACPI, NFS srvsock, ip_moptions, in_multi, ttys, pfkey data, VM swap, UVM amap, USB, USB device, NDP, temp
256 devbuf, routetbl, ifaddr, sysctl, ioctlops, iov, vnodes, UFS mount, shm, VM map, dirhash, ACPI, file desc, exec, pfkey data, tdb, xform_data, UVM amap, USB, USB device, ip6_options, NDP, temp
512 devbuf, ifaddr, ioctlops, iov, UFS mount, dirhash, ACPI, file desc, ttys, xform_data, newblk, UVM amap, USB device, temp
1024 devbuf, pcb, sysctl, ioctlops, iov, mount, shm, file desc, proc, ttys, exec, pfkey data, tdb, UVM amap, crypto data, temp
2048 devbuf, pcb, ioctlops, iov, UFS mount, ACPI, file desc, VM swap, UVM amap, UVM aobj, temp
4096 devbuf, pcb, ifaddr, ioctlops, iov, UFS mount, file desc, proc, UVM amap, memdesc, temp
8192 devbuf, pcb, iov, UFS mount, file desc, ttys, pagedep, UVM amap, USB, temp
16384 devbuf, pcb, iov, file desc, NFS daemon, MSDOSFS mount, temp
32768 devbuf, pcb, UFS quota, UFS mount, file desc, ISOFS mount, inodedep, temp
65536 devbuf, pcb, file desc, temp
131072 pcb
262144 pcb, UVM amap
524288 devbuf, pcb, VM swap

Memory statistics by type                          Type  Kern
Type InUse MemUse HighUse Limit Requests Limit Limit Size(s)
devbuf 8636 22344K 22408K 78644K 12790 0 0 16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,524288
pcb 114 2067K 3091K 78644K 13292 0 0 16,32,64,128,1024,2048,4096,8192,16384,32768,65536,131072,262144,524288
routetbl 552563 17964K 18001K 78644K 4628729 0 0 16,32,64,128,256
ifaddr 220 44K 44K 78644K 223 0 0 32,64,128,256,512,4096
sysctl 3 2K 2K 78644K 3 0 0 64,256,1024
ioctlops 0 0K 4K 78644K 16831 0 0 256,512,1024,2048,4096
iov 0 0K 16K 78644K 151484 0 0 256,512,1024,2048,4096,8192,16384
mount 3 3K 3K 78644K 3 0 0 1024
vnodes 34 3K 77K 78644K 7233 0 0 32,64,256
UFS quota 1 32K 32K 78644K 1 0 0 32768
UFS mount 13 57K 57K 78644K 13 0 0 128,256,512,2048,4096,8192,32768
shm 2 2K 2K 78644K 2 0 0 256,1024
VM map 2 1K 1K 78644K 2 0 0 256
sem 2 1K 1K 78644K 2 0 0 32,128
dirhash 129 26K 116K 78644K 4266 0 0 16,32,64,128,256,512
ACPI 175742 18750K 18796K 78644K 572114 0 0 16,32,64,128,256,512,2048
file desc 19 143K 248K 78644K 34106 0 0 16,32,256,512,1024,2048,4096,8192,16384,32768,65536
proc 16 10K 10K 78644K 16 0 0 64,1024,4096
VFS cluster 0 0K 1K 78644K 2249 0 0 64
NFS srvsock 1 1K 1K 78644K 1 0 0 128
NFS daemon 1 16K 16K 78644K 1 0 0 16384
ip_moptions 6 1K 1K 78644K 6 0 0 128
in_multi 123 9K 9K 78644K 124 0 0 32,64,128
ether_multi 71 5K 5K 78644K 72 0 0 64
ISOFS mount 1 32K 32K 78644K 1 0 0 32768
MSDOSFS mount 1 16K 16K 78644K 1 0 0 16384
ttys 420 308K 308K 78644K 420 0 0 128,512,1024,8192
exec 0 0K 4K 78644K 136991 0 0 16,32,256,1024
pfkey data 2 1K 2K 78644K 171 0 0 16,32,128,256,1024
tdb 15 13K 14K 78644K 51 0 0 256,1024
xform_data 24 1K 2K 78644K 84 0 0 16,256,512
pagedep 1 8K 8K 78644K 1 0 0 8192
inodedep 1 32K 32K 78644K 1 0 0 32768
newblk 1 1K 1K 78644K 1 0 0 512
VM swap 7 1155K 1155K 78644K 7 0 0 64,128,2048,524288
UVM amap 211921 15406K 33581K 78644K 46948441 0 0 16,32,64,128,256,512,1024,2048,4096,8192,262144
UVM aobj 2 3K 3K 78644K 2 0 0 16,2048
USB 62 23K 23K 78644K 73 0 0 16,32,64,128,256,8192
USB device 21 2K 2K 78644K 21 0 0 16,64,128,256,512
memdesc 1 4K 4K 78644K 1 0 0 4096
crypto data 1 1K 1K 78644K 1 0 0 1024
ip6_options 1 1K 1K 78644K 17 0 0 256
NDP 34 5K 6K 78644K 40 0 0 64,128,256
temp 153 97K 223K 78644K 37626375 0 0 16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536

Memory Totals:  In Use    Free    Requests
                78573K  18537K    90156263

Memory resource pool statistics
Name Size Requests Fail InUse Pgreq Pgrel Npage Hiwat Minpg Maxpg Idle
extentpl 40 139 0 56 1 0 0 1 0 8 0
phpool 104 832852 0 64828 2040 207 0 1833 0 8 0
pmappl 152 118310 0 32 3 0 0 3 0 8 1
pvpl 32 177062965 0 299230 4403 1837 0 2932 0 265 129
pdppl 4096 118310 0 32 1478 1439 0 62 0 8 7
vmsppl 224 118310 0 32 4 0 0 4 0 8 1
vmmpepl 168 71663893 0 174331 15602 7987 0 10438 0 357 29
vmmpekpl 168 1196905 0 17 3 0 0 3 0 8 1
uaddr 24 118311 0 33 1 0 0 1 0 8 0
uaddrbest 32 2 0 2 1 0 0 1 0 8 0
uaddrrnd 40 118311 0 33 1 0 0 1 0 8 0
aobjpl 64 1 0 1 1 0 0 1 0 8 0
dma16 16 4 0 0 1 0 0 1 0 8 1
dma32 32 13 0 0 1 0 0 1 0 8 1
dma64 64 4 0 0 1 0 0 1 0 8 1
dma256 256 15 0 0 1 0 0 1 0 8 1
dma512 512 8 0 2 1 0 0 1 0 8 0
dma4096 4096 2 0 0 1 0 0 1 0 8 1
amappl 72 32717841 0 171508 6345 3221 0 4274 0 75 2
anonpl 16 129266099 0 257266 1284 0 0 1284 0 1022 157
bufpl 264 1723113 0 25552 1840 134 0 1706 0 8 1
mbpl 256 24336267784 0 101210 7554 0 0 7554 1 125000 635
mcl2k 2048 8402106079 0 10561 9932 0 0 9932 4 1000000 4550
sockpl 472 432772049 0 79876 85948 75577 0 11152 0 8 8
procpl 568 118328 0 50 12 0 0 12 0 8 3
processpl 616 118328 0 50 14 0 0 14 0 8 4
zombiepl 144 118278 0 0 1 0 0 1 0 8 1
ucredpl 96 3946 0 73 3 0 0 3 0 8 0
pgrppl 40 12357 0 25 1 0 0 1 0 8 0
sessionpl 64 369 0 22 1 0 0 1 0 8 0
lockfpl 88 8775 0 3 1 0 0 1 0 8 0
filepl 120 437419685 0 3770 21640 21502 0 246 0 8 8
fdescpl 440 118311 0 33 7 0 0 7 0 8 2
pipepl 120 147646 0 14 2 0 0 2 0 8 0
kqueuepl 320 1094 0 10 2 0 0 2 0 8 0
knotepl 112 648140305 0 3686 13925 13800 0 214 0 8 4
sigapl 432 118310 0 32 7 0 0 7 0 8 2
wqtasks 40 477 0 0 1 0 0 1 0 8 1
ifaddritem 64 39 0 39 1 0 0 1 0 8 0
scxspl 192 2626771 0 0 2 0 0 2 0 8 2
pfiaddrpl 120 8 0 4 1 0 0 1 0 8 0
ehcixfer 280 129 0 6 1 0 0 1 0 8 0
namei 1024 8399090 0 0 3 0 0 3 0 8 3
vnodes 264 25913 0 25913 1728 0 0 1728 0 8 0
nchpl 144 3170542 0 5922 220 0 0 220 0 8 0
ffsino 240 2304497 0 25905 1650 30 0 1620 0 8 0
dino1pl 128 2304497 0 25905 836 0 0 836 0 8 0
dirhash 1024 5599 0 255 864 800 0 219 0 8 0
pfrule 1336 108 0 58 31 0 0 31 0 8 7
pfstate 312 435832082 0 147392 564536 550614 0 16982 0 8 0
pfstkey 104 435957663 0 123762 144801 141211 0 4433 0 8 0
pfstitem 24 435787431 0 123762 15357 14515 0 1015 0 8 8
pfruleitem 16 151715251 0 64225 2162 1819 0 413 0 8 8
pfrktable 1344 28 0 10 7 0 0 7 0 8 0
pfrke_plain 160 73 0 49 4 0 0 4 0 8 0
pfosfpen 112 2840 0 710 60 39 0 21 0 8 0
pfosfp 40 1680 0 420 5 0 0 5 0 8 0
pffrent 40 344 0 0 1 0 0 1 0 8 1
pffrag 112 175 0 0 1 0 0 1 0 22 1
strprocpl 2448 60 0 0 2 0 0 2 0 8 2
strpolpl 48 30 0 0 1 0 0 1 0 8 1
rtentpl 192 2251966 0 552430 27691 41 0 27683 0 8 8
rtmask 32 589106 0 50251 408 0 0 408 0 8 0
rttmrpl 72 2335970 0 1500 117 82 0 48 0 8 2
tcpcbpl 560 432751719 0 79760 107930 96118 0 12736 0 8 8
tcpqepl 32 100242953 0 406 9 0 0 9 0 8 4
sackhlpl 24 15385314 0 108 4 0 0 4 0 8 2
synpl 248 145920650 0 940 2624 2545 0 111 0 8 8
plimitpl 152 3066 0 18 2 0 0 2 0 8 1
inpcbpl 360 432758797 0 79763 50904 43342 0 8102 0 8 8
pfsync 72 46622 0 0 1 0 0 1 0 8 1

In use 407355K, total allocated 0K; utilization inf%

Installed NICs: em0-em7 are Intel I350-T4 (PCI-E), em8-em9 are on-board I350.

CPU:
cpu0: Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz, 2500.40 MHz
cpu1: Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz, 2500.00 MHz
cpu2: Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz, 2500.00 MHz
cpu3: Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz, 2500.00 MHz

--
Evgeniy

On Sat, Apr 4, 2015 at 12:43 PM, Mark Kettenis <[email protected]> wrote:
>> Date: Sat, 4 Apr 2015 11:30:24 +0200
>> From: Evgeniy Sudyr <[email protected]>
>>
>> Sorry for the delayed answer, I did both before:
>>
>> show uvmexp:
>>
>> https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AABTdTS98GLF2vRN56mn6knpa/Screen%20Shot%202015-04-02%20at%2011.37.20.png?dl=0
>> https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AAAPsq3yPHI3w5u_-ViB-Elva/Screen%20Shot%202015-04-02%20at%2011.37.27.png?dl=0
>>
>> show all pools:
>>
>> https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AAAbYVDgpH89Rh_g4Jl5ONOga/Screen%20Shot%202015-04-02%20at%2011.47.08.png?dl=0
>> https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AACi9B7jT2gbmr1YQWKQeYRpa/Screen%20Shot%202015-04-02%20at%2011.47.14.png?dl=0
>> https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AAA62XYFUPFAZ37vVdiDpsm-a/Screen%20Shot%202015-04-02%20at%2011.47.20.png?dl=0
>> https://www.dropbox.com/sh/dwmjt7wwunhk5gb/AAD2K3pLsAqORiknjPaFHz3sa/Screen%20Shot%202015-04-02%20at%2011.47.52.png?dl=0
>>
>> I had to restart the server because I'm afraid that the second one (same hw)
>> can do the same panic.
>
> Ah. Too bad, because I just realized I should have asked for "show
> malloc" as well.
>
> My recommendation would be to periodically run "vmstat -m" and see if
> one of the memory types keeps growing.

--
With regards,
Eugene Sudyr
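[A minimal sketch of the periodic "vmstat -m" logging suggested above. It assumes cron is available and uses a hypothetical script name and log path (/root/log-vmstat-m.sh, /var/log/vmstat-m.log); adjust as needed.]

#!/bin/sh
# log-vmstat-m.sh - append a timestamped "vmstat -m" snapshot to a log file.
# Run from root's crontab, e.g. every 10 minutes:
#   */10 * * * * /root/log-vmstat-m.sh
LOG=/var/log/vmstat-m.log
{
        echo "==== $(date) ===="
        vmstat -m
        echo ""
} >> "$LOG"

Successive snapshots can then be compared (for example,
grep routetbl /var/log/vmstat-m.log) to see whether the InUse/MemUse
figures for a particular memory type keep climbing between samples.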

