Something else to consider: depending on how you set arc_c_max, you
may want to set arc_c and arc_p at the same time.  If you set
arc_c_max, then set arc_c to arc_c_max, and then set arc_p to
arc_c / 2, do you still see this problem?
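
Something like this, for example (a sketch only; the addresses are the
ones from your arc::print -tad output later in this thread, so they
will differ on your system, the 1GB / 512MB values are just examples,
and the notes in parentheses are not part of the mdb input):

  # mdb -kw
  > arc::print -tad                  (note the addresses of c_max, c, and p)
  > ffffffffc02e2a08/Z 0x40000000    (arc.c_max = 1GB)
  > ffffffffc02e29f8/Z 0x40000000    (arc.c = arc.c_max)
  > ffffffffc02e29f0/Z 0x20000000    (arc.p = arc.c / 2)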

-j

On Thu, Mar 15, 2007 at 05:18:12PM -0700, [EMAIL PROTECTED] wrote:
> Gar.  This isn't what I was hoping to see.  Buffers that aren't
> available for eviction aren't included in the lsize count.  It looks like
> the MRU has grown to 10GB, and most of it could be successfully
> evicted.
> 
> The calculation for determining if we evict from the MRU is in
> arc_adjust() and looks something like:
> 
> top_sz = ARC_anon.size + ARC_mru.size
> 
> Then if top_sz > arc.p and ARC_mru.lsize > 0, we evict the smaller of
> ARC_mru.lsize and top_sz - arc.p.
> 
> In your previous message it looks like arc.p is > (ARC_mru.size +
> ARC_anon.size).  It might make sense to double-check these numbers
> together, so when you check the size and lsize again, also check arc.p.
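> 
> Plugging in the numbers from your two outputs earlier in the thread
> (different snapshots, so only a ballpark check):
> 
>   top_sz = ARC_anon.size + ARC_mru.size = 0 + 10224433152  (~10.2GB)
>   arc.p                                 =     16381819904  (~16.4GB)
> 
> Since top_sz < arc.p, the eviction test above never fires, so nothing
> gets evicted from the MRU.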
> 
> How/when did you configure arc_c_max?  arc.p is supposed to be
> initialized to half of arc.c.  Also, I assume that there's a reliable
> test case for reproducing this problem?
> 
> Thanks,
> 
> -j
> 
> On Thu, Mar 15, 2007 at 06:57:12PM -0400, Jim Mauro wrote:
> > 
> > 
> > > ARC_mru::print -d size lsize
> > size = 0t10224433152
> > lsize = 0t10218960896
> > > ARC_mfu::print -d size lsize
> > size = 0t303450112
> > lsize = 0t289998848
> > > ARC_anon::print -d size
> > size = 0
> > >
> > 
> > So it looks like the MRU is running at 10GB...
> > 
> > What does this tell us?
> > 
> > Thanks,
> > /jim
> > 
> > 
> > 
> > [EMAIL PROTECTED] wrote:
> > >This seems a bit strange.  What's the workload, and also, what's the
> > >output for:
> > >
> > >>ARC_mru::print size lsize
> > >>ARC_mfu::print size lsize
> > >
> > >and
> > >
> > >>ARC_anon::print size
> > >
> > >For obvious reasons, the ARC can't evict buffers that are in use.
> > >Buffers that are available to be evicted should be on the mru or mfu
> > >list, so this output should be instructive.
> > >
> > >-j
> > >
> > >On Thu, Mar 15, 2007 at 02:08:37PM -0400, Jim Mauro wrote:
> > >  
> > >>FYI - After a few more runs, ARC size hit 10GB, which is now 10X c_max:
> > >>
> > >>
> > >>>arc::print -tad
> > >>{
> > >>. . .
> > >>   ffffffffc02e29e8 uint64_t size = 0t10527883264
> > >>   ffffffffc02e29f0 uint64_t p = 0t16381819904
> > >>   ffffffffc02e29f8 uint64_t c = 0t1070318720
> > >>   ffffffffc02e2a00 uint64_t c_min = 0t1070318720
> > >>   ffffffffc02e2a08 uint64_t c_max = 0t1070318720
> > >>. . .
> > >>
> > >>Perhaps c_max does not do what I think it does?
> > >>
> > >>Thanks,
> > >>/jim
> > >>
> > >>
> > >>Jim Mauro wrote:
> > >>    
> > >>>Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
> > >>>(update 3). All file I/O is mmap(file), read the mapped segment, unmap, close.
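> > >>>
> > >>>A minimal sketch of that access pattern (hypothetical; this is not the
> > >>>actual test program, and the function name, page stride, and error
> > >>>handling are simplified):
> > >>>
> > >>>    #include <sys/mman.h>
> > >>>    #include <sys/stat.h>
> > >>>    #include <fcntl.h>
> > >>>    #include <unistd.h>
> > >>>
> > >>>    /* mmap the file, read through the mapping, then unmap and close */
> > >>>    static void
> > >>>    scan_file(const char *path)
> > >>>    {
> > >>>        struct stat st;
> > >>>        int fd = open(path, O_RDONLY);
> > >>>
> > >>>        if (fd == -1)
> > >>>            return;
> > >>>        if (fstat(fd, &st) == 0 && st.st_size > 0) {
> > >>>            char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
> > >>>            if (p != MAP_FAILED) {
> > >>>                volatile char sum = 0;
> > >>>                for (off_t off = 0; off < st.st_size; off += 4096)
> > >>>                    sum += p[off];    /* touch each page of the segment */
> > >>>                (void) munmap(p, st.st_size);
> > >>>            }
> > >>>        }
> > >>>        (void) close(fd);
> > >>>    }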
> > >>>
> > >>>Tweaked the ARC size down via mdb to 1GB. I used that value because
> > >>>c_min was also 1GB, and I was not sure whether c_max is allowed to be
> > >>>smaller than c_min... Anyway, I set c_max to 1GB.
> > >>>
> > >>>After a workload run:
> > >>>>arc::print -tad
> > >>>{
> > >>>. . .
> > >>> ffffffffc02e29e8 uint64_t size = 0t3099832832
> > >>> ffffffffc02e29f0 uint64_t p = 0t16540761088
> > >>> ffffffffc02e29f8 uint64_t c = 0t1070318720
> > >>> ffffffffc02e2a00 uint64_t c_min = 0t1070318720
> > >>> ffffffffc02e2a08 uint64_t c_max = 0t1070318720
> > >>>. . .
> > >>>
> > >>>"size" is at 3GB, with c_max at 1GB.
> > >>>
> > >>>What gives? I'm looking at the code now, but I was under the impression
> > >>>that c_max would limit ARC growth. Granted, it's not a factor of 10, and
> > >>>it's certainly much better than the out-of-the-box growth to 24GB
> > >>>(this is a 32GB X4500), so ARC growth is clearly being limited, but it
> > >>>still grew to 3X c_max.
> > >>>
> > >>>Thanks,
> > >>>/jim

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
