Obviously, the snapshot above was taken after the loop (for some reason the forum
ate up my indentation). Anyway, I increased swap to 8G; here's some more
info after the backup taken last night (which went just fine):
***********************
BEFORE the backup:
***********************
(from "top -b -o size")
42 processes: 41 sleeping, 1 on cpu
CPU states: 99.5% idle, 0.0% user, 0.5% kernel, 0.0% iowait, 0.0% swap
Kernel: 197 ctxsw, 9 trap, 423 intr, 339 syscall, 9 flt
Memory: 3551M phys mem, 2491M free mem, 8192M total swap, 8192M free swap
***********************
AFTER the backup:
***********************
(from "top -b -o size")
last pid: 11155; load avg: 0.00, 0.02, 0.06; up 0+22:25:19 12:59:40
42 processes: 41 sleeping, 1 on cpu
CPU states: 99.5% idle, 0.0% user, 0.5% kernel, 0.0% iowait, 0.0% swap
Kernel: 187 ctxsw, 9 trap, 428 intr, 327 syscall, 9 flt
Memory: 3551M phys mem, 491M free mem, 8192M total swap, 8192M free swap
   PID USERNAME  NLWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
   433 root        19  59    0   23M   12M sleep    0:03  0.00% fmd
   220 root        23  59    0   23M 8640K sleep    0:06  0.00% nscd
   360 daemon       6  59    0   20M 8980K sleep    0:09  0.00% idmapd
     7 root        12  59    0   18M   10M sleep    0:07  0.00% svc.startd
...
and from mdb (echo ::memstat | mdb -k)
Page Summary                 Pages                MB  %Tot
------------      ----------------  ----------------  ----
Kernel                      240134               938   26%
ZFS File Data               517533              2021   57%
Anon                         20378                79    2%
Exec and libs                 1314                 5    0%
Page cache                    5248                20    1%
Free (cachelist)              8561                33    1%
Free (freelist)             113886               444   13%
Total                       907054              3543
Physical                    907053              3543
This makes me worry some more, since ZFS file data has already "eaten up" 57%
of memory.
I'll keep monitoring every day, and report back if/when disaster strikes
again...
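For the daily monitoring, a small sketch of what I have in mind (the function
name zfs_pct and the log path are my own, not any standard tool): it pulls the
"ZFS File Data" %Tot column out of ::memstat output so the trend can be logged
over time.

```shell
# zfs_pct: read ::memstat output on stdin, print the "ZFS File Data"
# %Tot value (without the % sign). Hypothetical helper, not a system tool.
zfs_pct() {
  awk '/^ZFS File Data/ { sub(/%$/, "", $NF); print $NF }'
}

# Run daily from cron (mdb -k needs root):
#   echo ::memstat | mdb -k | tee -a /var/tmp/memstat.log | zfs_pct
```

On last night's numbers that would print 57, and comparing the logged values
day to day should show whether the ARC keeps growing or levels off.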
--
This message posted from opensolaris.org
_______________________________________________
opensolaris-help mailing list
[email protected]