On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:

> Tomas,
> 
> comments inline...
> 
> 
> >>arc::print struct arc               
> >>   
> >>
> >{
> >   anon = ARC_anon
> >   mru = ARC_mru
> >   mru_ghost = ARC_mru_ghost
> >   mfu = ARC_mfu
> >   mfu_ghost = ARC_mfu_ghost
> >   size = 0x6f7a400
> >   p = 0x5d9bd5a
> >   c = 0x5f6375a
> >   c_min = 0x4000000
> >   c_max = 0x2e82a000
> >   hits = 0x40e0a15
> >   misses = 0x1cec4a4
> >   deleted = 0x1b0ba0d
> >   skipped = 0x24ea64e13
> >   hash_elements = 0x179d
> >   hash_elements_max = 0x60bb
> >   hash_collisions = 0x8dca3a
> >   hash_chains = 0x391
> >   hash_chain_max = 0x8
> >   no_grow = 0x1
> >}
> >
> >So, about 100MB and a memory crunch..
> > 
> >
> Interesting! So, it is not the ARC which is consuming too much memory...
> It is some other piece (not sure whether it belongs to ZFS) which is
> causing the crunch...
> 
> Or the other possibility is that the ARC ate up too much memory and caused
> a near-crunch situation, and the kmem subsystem hit back and caused the ARC
> to free up its buffers (hence the no_grow flag being set). So, the ARC
> could be oscillating between heavy caching and purging its caches.
> 
> You might want to keep track of these values (ARC size and no_grow flag) 
> and see how they
> change over a period of time. This would help us understand the pattern.

I would guess it grows after boot until it hits some max and then stays
there.. but I can check it out..
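
For reference, the size printed above, 0x6f7a400 bytes, is roughly 111 MiB,
which matches the "about 100MB" estimate. A minimal sketch of how those two
fields could be watched over time, assuming a root shell and that the arc
symbol is visible to mdb -k as in the output above (the 60-second interval
is arbitrary):

  #!/bin/sh
  # Poll the ARC size, soft limit (c) and no_grow flag once a minute.
  # Illustrative only; adjust the interval and fields to taste.
  while true; do
          date
          echo "arc::print struct arc size c no_grow" | mdb -k
          sleep 60
  done

Logging that across a busy period should show whether the ARC grows to a
plateau or keeps cycling between growing and purging as described above.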

> And if we know it is the ARC which is causing the crunch, we could manually
> change the value of c_max to something comfortable, and that would limit
> the size of the ARC.

But in the ZFS world, DNLC is part of the ARC, right?
My original question was how to get rid of "data cache", but keep
"metadata cache" (such as DNLC)...

> However, I would suggest
> that you try it out on a non-production machine first.
> 
> By default, c_max is set to 75% of physmem and that is the hard limit.
> "c" is the soft limit, and the ARC will try to grow up to "c". The value
> of "c" is adjusted when there is a need to cache more, but it will never
> exceed "c_max".
> 
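
Not the definitive procedure, but a rough sketch of how c_max could be
capped on a live system with mdb, assuming write access to the kernel
(mdb -kw) and the same arc symbol as above; the 0x20000000 (512MB) value is
only an example, and as suggested above this belongs on a non-production
machine first:

  # Find the kernel address of arc.c_max (-a makes ::print show member
  # addresses).
  echo "arc::print -a struct arc c_max" | mdb -k

  # Write a new 64-bit value at that address; replace <addr> with the
  # address printed by the previous command.
  echo "<addr>/Z 0x20000000" | mdb -kw

If the current soft limit "c" is already above the new cap, it may need the
same treatment, since lowering c_max by hand does not shrink the cache by
itself.
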
> Regarding the huge number of reads, I am sure you have already tried 
> disabling the VDEV prefetch.
> If not, it is worth a try.

That was part of my original question, how? :)
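
Not an authoritative answer, but the knob usually pointed at for this is
zfs_vdev_cache_size (the vdev-level software prefetch cache). A sketch,
assuming that tunable exists in this build; whether a live write fully
disables the prefetching here is worth verifying against the source:

  # Zero the vdev cache size on the running kernel.
  # zfs_vdev_cache_size is an int, so /W (a 4-byte write) is used.
  echo "zfs_vdev_cache_size/W 0" | mdb -kw

The persistent equivalent would be "set zfs:zfs_vdev_cache_size = 0" in
/etc/system, picked up at the next boot.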

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
