On Mon, 6 Dec 2004 11:14:03 -0500, Sonny Rao wrote:

>Yes, this is a consequence of the way memory is partitioned on IA32
>machines (which I'm assuming you're using). 

Correct - Intel Xeons.  

>If you look at the amount of memory being used by the kernel slab cache, 
>I'd bet it's using much of that 1GB for kernel data structures (inodes, 
>dentrys, etc) and whenever the kernel needs to allocate some more memory 
>it has to evict some of those structures which is a very expensive process.
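
(On a 2.4 highmem kernel that low/high split shows up directly in
/proc/meminfo as LowTotal/LowFree vs. HighTotal/HighFree, so presumably
something like

  grep -E '^(Low|High)' /proc/meminfo

would show how much of that first gigabyte is actually left.)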

thorium:/srv/www/vhosts/spamchek/htdocs/admin # cat /proc/slabinfo
slabinfo - version: 1.1 (SMP)
kmem_cache           116    116    132    4    4    1 :  252  126
ip_conntrack        2688   6162    288  474  474    1 :  124   62
tcp_tw_bucket        480    480     96   12   12    1 :  252  126
tcp_bind_bucket      370    678     32    6    6    1 :  252  126
tcp_open_request     124    177     64    3    3    1 :  252  126
inet_peer_cache       17    354     64    6    6    1 :  252  126
ip_fib_hash           18    339     32    3    3    1 :  252  126
ip_dst_cache        1991   2064    160   86   86    1 :  252  126
arp_cache             11     90    128    3    3    1 :  252  126
blkdev_requests    11264  11280     96  282  282    1 :  252  126
jfs_mp               336    336     80    7    7    1 :  252  126
jfs_ip            338276 359975    524 51425 51425    1 :  124   62
dnotify_cache          0      0     20    0    0    1 :  252  126
file_lock_cache      551    600     96   15   15    1 :  252  126
fasync_cache           0      0     16    0    0    1 :  252  126
uid_cache             11    226     32    2    2    1 :  252  126
skbuff_head_cache   1099   2040    160   85   85    1 :  252  126
sock                1222   1530    864  170  170    2 :  124   62
sigqueue             261    261    132    9    9    1 :  252  126
kiobuf                 0      0     64    0    0    1 :  252  126
cdev_cache            16    531     64    9    9    1 :  252  126
bdev_cache            17    177     64    3    3    1 :  252  126
mnt_cache             13    236     64    4    4    1 :  252  126
inode_cache       339288 363713    512 51959 51959    1 :  124   62
dentry_cache        1860   5730    128  191  191    1 :  252  126
filp                8396   8430    128  281  281    1 :  252  126
names_cache            8      8   4096    8    8    1 :   60   30
buffer_head       237627 279520     96 6988 6988    1 :  252  126
mm_struct            432    648    160   27   27    1 :  252  126
vm_area_struct      8956  10040     96  251  251    1 :  252  126
fs_cache             434    826     64   14   14    1 :  252  126
files_cache          371    405    416   45   45    1 :  124   62
signal_act           358    363   1312  121  121    1 :   60   30
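
A quick sanity check: with the 2.4 slabinfo layout above, total-slabs
times pages-per-slab times the 4K page size should give each cache's
footprint.  A rough sketch (field numbers assume the format shown):

  awk 'NR>1 { printf "%-18s %7.1f MB\n", $1, $6*$7*4096/(1024*1024) }' \
      /proc/slabinfo | sort -rn -k2 | head -5

By that arithmetic, inode_cache (51959 slabs) and jfs_ip (51425 slabs)
come to roughly 200MB each, i.e. some 400MB of low memory between them.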


OK, so inode_cache and jfs_ip between them are eating a good chunk of that
low memory.  Apart from using a multi-level subdir structure for my
500,000 files, is there anything else I can tweak to ease the pressure?
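
(For what it's worth, the subdir scheme I have in mind would bucket each
file on the first two hex digits of its MD5 - just a sketch, paths made up:

  f=message-12345
  d=$(printf '%s' "$f" | md5sum | cut -c1-2)
  mkdir -p spool/$d && mv "$f" "spool/$d/"

which caps any single directory at 1/256th of the files; a second level
would make that 1/65536th.)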

Many thanks for the explanation, Sonny - much appreciated! 


cheers,
Per Jessen

-- 
regards,
Per Jessen, Zurich
http://www.spamchek.com - let your spam stop here!

