Hello,

I've been running spamd with greylisting for a few weeks. Today I
started getting 'pfctl: Cannot allocate memory' notifications.
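
In case it helps: 'pfctl -s memory' prints pf's hard limits, and
counting the entries in the greylist tables shows how close they are
to the table-entries limit. (The table names <spamd> and <spamd-white>
below assume the stock spamd setup from spamd(8); substitute your own
table names if they differ.)

# pfctl -s memory
# pfctl -t spamd -T show | wc -l
# pfctl -t spamd-white -T show | wc -l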
----------------------------
OpenBSD 4.0 GENERIC#1107 i386
load averages: 0.18, 0.23, 0.24                                  08:04:24
62 processes: 61 idle, 1 on processor
CPU states:  0.3% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.7% idle
Memory: Real: 23M/69M act/tot  Free: 168M  Swap: 0K/102M used/tot
# vmstat -m
Memory statistics by bucket size
    Size   In Use    Free     Requests  HighWater  Couldfree
      16     4988   10372      2584452       1280        792
      32      290    1118      1672745        640         93
      64     1211     133       148714        320          0
     128      757      75        69589        160          0
     256      186      38        48794         80          0
     512      161      31        48241         40          1
    1024      628      60        65101         20      15049
    2048       78      12       560411         10     323719
    4096       31       4          314          5          0
    8192        7       0            7          5          0
   16384        2       0            2          5          0
   32768        4       0            4          5          0
Memory usage type by bucket size
    Size  Type(s)
      16  devbuf, pcb, routetbl, ifaddr, sysctl, vnodes, UFS mount, sem, dirhash, in_multi, exec, xform_data, VM swap, UVM amap, UVM aobj, USB, temp
      32  devbuf, pcb, routetbl, ifaddr, vnodes, UFS mount, sem, dirhash, proc, VFS cluster, ether_multi, xform_data, VM swap, UVM amap, USB, packet tags, temp
      64  devbuf, pcb, routetbl, ifaddr, sem, dirhash, in_multi, pfkey data, UVM amap, USB, NDP, temp
     128  devbuf, routetbl, ifaddr, sysctl, vnodes, dirhash, ttys, exec, UVM amap, USB, USB device, NDP
     256  devbuf, routetbl, ifaddr, ioctlops, vnodes, shm, VM map, dirhash, file desc, proc, NFS srvsock, NFS daemon, newblk, UVM amap, USB, temp
     512  devbuf, pcb, ifaddr, ioctlops, mount, UFS mount, shm, dirhash, ttys, exec, UVM amap, USB device, temp
    1024  devbuf, ioctlops, namecache, proc, ttys, exec, UVM amap, UVM aobj, crypto data, temp
    2048  devbuf, ifaddr, ioctlops, UFS mount, pagedep, VM swap, UVM amap, temp
    4096  devbuf, ioctlops, UFS mount, MSDOSFS mount, VM swap, UVM amap, temp
    8192  devbuf, NFS node, namecache, UFS quota, UFS mount, ISOFS mount, inodedep
   16384  devbuf, namecache
   32768  devbuf
Memory statistics by type                            Type  Kern
          Type InUse MemUse HighUse  Limit Requests Limit Limit Size(s)
        devbuf  1016   691K    703K 38031K   554212     0     0  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768
           pcb    79     7K      7K 38031K    14608     0     0  16,32,64,512
      routetbl   804    19K     20K 38031K    10554     0     0  16,32,64,128,256
        ifaddr    74    14K     14K 38031K       76     0     0  16,32,64,128,256,512,2048
        sysctl     2     1K      1K 38031K        2     0     0  16,128
      ioctlops     0     0K      4K 38031K    10006     0     0  256,512,1024,2048,4096
         mount     5     3K      3K 38031K        5     0     0  512
      NFS node     1     8K      8K 38031K        1     0     0  8192
        vnodes    82     8K     44K 38031K     9323     0     0  16,32,128,256
     namecache     3    25K     25K 38031K        3     0     0  1024,8192,16384
     UFS quota     1     8K      8K 38031K        1     0     0  8192
     UFS mount    21    41K     41K 38031K       21     0     0  16,32,512,2048,4096,8192
           shm     2     1K      1K 38031K        2     0     0  256,512
        VM map     3     1K      1K 38031K        3     0     0  256
           sem     3     1K      1K 38031K        3     0     0  16,32,64
       dirhash   105    20K     20K 38031K      384     0     0  16,32,64,128,256,512
     file desc     1     1K      1K 38031K        2     0     0  256
          proc    19     3K      3K 38031K       19     0     0  32,256,1024
   VFS cluster     0     0K      1K 38031K     4963     0     0  32
   NFS srvsock     2     1K      1K 38031K        2     0     0  256
    NFS daemon     1     1K      1K 38031K        1     0     0  256
      in_multi    22     1K      1K 38031K       22     0     0  16,64
   ether_multi     4     1K      1K 38031K        4     0     0  32
   ISOFS mount     1     8K      8K 38031K        1     0     0  8192
 MSDOSFS mount     1     4K      4K 38031K        1     0     0  4096
          ttys   420   263K    263K 38031K      420     0     0  128,512,1024
          exec     0     0K      6K 38031K    10176     0     0  16,128,512,1024
    pfkey data     1     1K      1K 38031K        2     0     0  64
    xform_data     0     0K      1K 38031K       45     0     0  16,32
       pagedep     1     2K      2K 38031K        1     0     0  2048
      inodedep     1     8K      8K 38031K        1     0     0  8192
        newblk     1     1K      1K 38031K        1     0     0  256
       VM swap     7    11K     11K 38031K        7     0     0  16,32,2048,4096
      UVM amap  5554   351K    546K 38031K  2965999     0     0  16,32,64,128,256,512,1024,2048,4096
      UVM aobj     2     2K      2K 38031K        2     0     0  16,1024
           USB    29     3K      3K 38031K       29     0     0  16,32,64,128,256
    USB device     8     4K      4K 38031K        8     0     0  128,512
   crypto data     1     1K      1K 38031K        1     0     0  1024
   packet tags     3     1K      3K 38031K  1128342     0     0  32
           NDP    10     1K      1K 38031K       11     0     0  64,128
          temp    59     8K     12K 38031K   489182     0     0  16,32,64,256,512,1024,2048,4096

Memory Totals:  In Use    Free    Requests
                 1509K    340K     5198446
Memory resource pool statistics
Name          Size  Requests  Fail  Releases  Pgreq  Pgrel  Npage  Hiwat  Minpg  Maxpg  Idle
extentpl        20       221     0       195      1      0      1      1      0      8     0
phpool          40       319     0         2      4      0      4      4      0      8     0
pmappl         128      9772     0      9710      3      0      3      3      0      8     1
vmsppl         220      9772     0      9710      5      0      5      5      0      8     1
vmmpepl         88   2094609     0   2091559  15902  15811     91    217      0      8     8
vmmpekpl        88     32925     0     32880      2      0      2      2      0      8     0
aobjpl          52         1     0         0      1      0      1      1      0      8     0
amappl          40    966115     0    964312   1716   1682     34     52      0      8     8
bufpl          116        12     0        12      1      0      1      1      0      8     1
mbpl           256   1912743     0   1912455     26      0     26     26      1      8     7
mclpl         2048    645486     0    645214    184      0    184    184      4   3072    47
sockpl         204  10515328     0  10515236      7      0      7      7      0      8     1
procpl         360      9785     0      9710      9      0      9      9      0      8     2
zombiepl        72      9710     0      9710      1      0      1      1      0      8     1
ucredpl         80      1301     0      1272      1      0      1      1      0      8     0
pgrppl          24       445     0       429      1      0      1      1      0      8     0
sessionpl       48       231     0       216      1      0      1      1      0      8     0
pcredpl         24      9785     0      9710      1      0      1      1      0      8     0
lockfpl         52     10212     0     10189      1      0      1      1      0      8     0
filepl          88  10642425     0  10642199      6      0      6      6      0      8     1
fdescpl        296      9786     0      9710      7      0      7      7      0      8     1
pipepl          72     13524     0     13460      2      0      2      2      0      8     0
sigapl         316      9772     0      9710      7      0      7      7      0      8     1
wdcspl          96     77772     0     77772      1      0      1      1      0      8     1
pfiaddrpl      100        12     0         6      1      0      1      1      0      8     0
namei         1024    519206     0    519206     10      2      8     10      0      8     8
vnodes         156      2442     0         0     94      0     94     94      0      8     0
nchpl           72      1310     0         0     24      0     24     24      0      8     0
ffsino         168     98882     0     96548    102      0    102    102      0      8     0
dino1pl        128     98882     0     96548     79      0     79     79      0      8     0
dirhash       1024       625     0       461     47      0     47     47      0    128     6
semapl          68         1     0         0      1      0      1      1      0      8     0
semupl         100         2     0         2      1      0      1      1      0      8     1
pfrulepl       632        33     0        23      4      0      4      4      0      8     1
pfstatepl      284     22334     0     22319     15      0     15     15      0    715    12
pfpooladdrpl    68         3     0         2      1      0      1      1      0      8     0
pfrktable     1240      2813     0      2809      3      0      3      3      0    334     1
pfrkentry      156   2197269    61   2131675   3847      0   3847   3847      0   3847   328
pfosfpen       108      1044     0       696     14      4     10     10      0      8     0
pfosfp          28       624     0       416      2      0      2      2      0      8     0
pffrent         16         1     0         1      1      0      1      1      0     20     1
pffrag          48         1     0         1      1      0      1      1      0     12     1
rtentpl        108      1286     0      1258      1      0      1      1      0      8     0
rttmrpl         32      1975     0      1974      1      0      1      1      0      8     0
tcpcbpl        400      9302     0      9282      5      0      5      5      0      8     2
tcpqepl         16      3343     0      3343      1      0      1      1      0     13     1
sackhlpl        20        44     0        44      1      0      1      1      0    163     1
synpl          184      8051     0      8051      1      0      1      1      0      8     1
plimitpl       152       295     0       281      1      0      1      1      0      8     0
inpcbpl        216  10500728     0  10500706      3      0      3      3      0      8     1

In use 12473K, total allocated 20184K; utilization 61.8%
#
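
One note on the numbers above: pfrkentry (pf table entries) is the
only pool reporting failures -- Fail is 61 and Npage has hit Maxpg
(3847). At roughly 26 entries per 4K page that works out to about
100,000 entries, which looks like pf's default table-entries limit,
so the growing greylist table seems to be exhausting pf table entries
rather than the box running out of RAM. If that diagnosis is right,
raising the limit in pf.conf and reloading should stop the errors;
the value below is illustrative, not a recommendation:

# in /etc/pf.conf
set limit table-entries 200000

then reload the ruleset:

# pfctl -f /etc/pf.conf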