Tomas Ögren wrote on 11/09/06 13:47:
On 09 November, 2006 - Neil Perrin sent me these 1.6K bytes:
Tomas Ögren wrote on 11/09/06 09:59:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes, and large portions of it are
frequently scanned with rsync (over NFS). mdb said ncsize was about
68k and vmstat -s said we had a hit rate of ~30%, so I set ncsize to
600k and rebooted. That didn't seem to change much: hit rates are still
about the same, and a manual find(1) doesn't seem to get cached much
(according to vmstat and dnlcsnoop.d).
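(For reference, the usual way to make an ncsize change like the one above persist across reboots on Solaris is an /etc/system entry; a sketch, with the value taken from the message above:)

```
* /etc/system fragment: raise the DNLC size cap (read at boot)
set ncsize=600000
```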
When booting, the following messages came up; not sure if they matter or not:
NOTICE: setting nrnode to max value of 351642
NOTICE: setting nrnode to max value of 235577
Is there a separate ZFS-DNLC knob to adjust for this? My wild guess is
that it has its own implementation, integrated with the rest of the ZFS
cache, which throws out metadata cache in favour of data cache.. or
something..
A more complete and useful set of DNLC statistics can be obtained via
"kstat -n dnlcstats". As well as the soft limit on DNLC entries (ncsize),
the current number of cached entries (dnlc_nentries) is also useful:
This is after ~28h uptime:
module: unix                            instance: 0
name:   dnlcstats                       class:    misc
        crtime                          47.5600948
        dir_add_abort                   0
        dir_add_max                     0
        dir_add_no_memory               0
        dir_cached_current              4
        dir_cached_total                107
        dir_entries_cached_current      4321
        dir_fini_purge                  0
        dir_hits                        11000
        dir_misses                      172814
        dir_reclaim_any                 25
        dir_reclaim_last                16
        dir_remove_entry_fail           0
        dir_remove_space_fail           0
        dir_start_no_memory             0
        dir_update_fail                 0
        double_enters                   234918
        enters                          59193543
        hits                            36690843
        misses                          59384436
        negative_cache_hits             1366345
        pick_free                       0
        pick_heuristic                  57069023
        pick_last                       2035111
        purge_all                       1
        purge_fs1                       0
        purge_total_entries             3748
        purge_vfs                       187
        purge_vp                        95
        snaptime                        99177.711093
# vmstat -s
96080561 total name lookups (cache hits 38%)
# echo ncsize/D | mdb -k
ncsize: 600000
# echo dnlc_nentries/D | mdb -k
dnlc_nentries: 19230
Not quite the same..
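As a sanity check, the aggregate hit rate computed from the hits/misses counters in the kstat output above works out to the same ~38% that vmstat -s reports:

```shell
# Hit rate = hits / (hits + misses), figures taken from the kstat output above
hits=36690843
misses=59384436
awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "hit rate: %.0f%%\n", 100 * h / (h + m) }'
# prints: hit rate: 38%
```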
Having said that, I actually think your problem is lack of memory.
Each ZFS vnode held by the DNLC uses a *lot* more memory than, say,
a UFS vnode. Consequently the DNLC has to purge entries, and I suspect
that with only 1GB the ZFS ARC doesn't allow many DNLC entries.
I don't know if that number is maintained anywhere for you to check.
Mark?
Current memory usage (for some values of usage ;):
# echo ::memstat|mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      95584               746   75%
Anon                        20868               163   16%
Exec and libs                1703                13    1%
Page cache                   1007                 7    1%
Free (cachelist)               97                 0    0%
Free (freelist)              7745                60    6%
Total                      127004               992
Physical                   125192               978
/Tomas
This memory usage shows nearly all of memory consumed by the kernel,
probably largely by ZFS. Due to lack of memory, ZFS can't add any more
DNLC entries without purging others. This can be seen from
dnlc_nentries being way below ncsize.
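For what it's worth, the MB column in the ::memstat output above is consistent with an 8 KB page size (presumably a SPARC box; the page size is an assumption here). A quick check of the Kernel row:

```shell
# 95584 kernel pages at an assumed 8 KB/page should match the ~746 MB reported
pages=95584
awk -v p="$pages" 'BEGIN { printf "%d MB\n", p * 8192 / 1048576 }'
# prints: 746 MB
```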
I don't know if there's a DMU or ARC bug filed to reduce the memory
footprint of their internal structures for situations like this, but
we are aware of the issue.
Neil.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss