On Mon, Dec 06, 2004 at 07:01:36PM +0100, Per Jessen wrote:
> On Mon, 6 Dec 2004 11:14:03 -0500, Sonny Rao wrote:
> 
> >Yes, this is a consequence of the way memory is partitioned on IA32
> >machines (which I'm assuming you're using). 
> 
> Correct - Intel Xeons.  
> 
> >If you look at the amount of memory being used by the kernel slab cache, 
> >I'd bet it's using much of that 1GB for kernel data structures (inodes, 
> >dentries, etc.), and whenever the kernel needs to allocate some more memory
> >it has to evict some of those structures, which is a very expensive process.
> 

<snip>
> jfs_ip            338276 359975    524 51425 51425    1 :  124   62
<snip>
> inode_cache       339288 363713    512 51959 51959    1 :  124   62


> 
> OK, I can tell inode_cache is using up a lot here.  Apart from using
> a multi-level subdir structure for my 500.000 files, is there anything
> else I can tweak to assist the process?  
> 
> Many thanks for the explanation, Sonny - much appreciated! 

Right, so there's really only one thing I can think of, and it's not
much of a solution.  You can change the memory split so that the kernel
gets 2GB of address space to work with instead of 1GB.  I know there are
some patches floating around to convert the default 3GB/1GB split into a
2GB/2GB split, or you can use one of the so-called 4GB/4GB kernels, which
keep the kernel in a completely separate address space.  I believe the
Red Hat enterprise kernel does it this way.
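
Rough arithmetic from the slabinfo lines above (reading the columns as
the 2.4 layout: active objs, total objs, object size, active slabs,
total slabs, pages per slab):

  jfs_ip:       51425 slabs * 1 page/slab * 4KB  ~= 201MB
  inode_cache:  51959 slabs * 1 page/slab * 4KB  ~= 203MB
                                          total  ~= 404MB

so those two caches alone are already eating close to half of the
roughly 900MB of lowmem a stock 3GB/1GB kernel has to play with.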

It's really only a solution if all of the inodes in your working set
fit into 2GB; otherwise you're just delaying the inevitable.
Ultimately, this is what 64-bit machines (with a lot of RAM) are good
for :-)
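
If you want to keep an eye on how much low memory the slab caches are
holding, a rough sketch along these lines should do it.  It assumes the
2.4-style /proc/slabinfo columns from your paste and a 4KB page size, so
adjust it if your kernel lays the file out differently (2.6 does):

#!/usr/bin/env python
# Rough sketch: total up slab cache memory from a 2.4-style /proc/slabinfo.
# Assumed columns: name, active objs, total objs, object size,
#                  active slabs, total slabs, pages per slab.
PAGE_SIZE = 4096            # assumption: 4KB pages (i386 default)

total = 0
caches = []
for line in open('/proc/slabinfo'):
    fields = line.split()
    # skip the version header and anything that isn't a cache entry
    if len(fields) < 7 or not fields[1].isdigit():
        continue
    nbytes = int(fields[5]) * int(fields[6]) * PAGE_SIZE
    total = total + nbytes
    caches.append((nbytes, fields[0]))

caches.sort()
caches.reverse()
for nbytes, name in caches[:10]:
    print('%-18s %8.1f MB' % (name, nbytes / 1048576.0))
print('%-18s %8.1f MB' % ('total slab', total / 1048576.0))

Watching the top few entries over time should tell you whether
inode_cache and jfs_ip keep growing until they run lowmem dry.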

Sonny
