On Mar 20, 2008, at 2:00 PM, Bob Friesenhahn wrote:
> On Thu, 20 Mar 2008, Jonathan Edwards wrote:
>>
>> in that case .. try fixing the ARC size .. the dynamic resizing on
>> the ARC can be less than optimal IMHO
>
> Is a 16GB ARC size not considered to be enough? ;-)
>
> I was only describing the behavior that I observed.  It seems to me
> that when large files are written very quickly, once the file grows
> bigger than the ARC, what the ARC contains is mostly stale and no
> longer helps much.  If the file is smaller than the ARC, the caching
> is likely to be more useful.

Sure, I got that - it's not the size of the ARC in this case, since
caching is going to be a lost cause .. but explicitly setting
zfs_arc_max should result in fewer calls to arc_shrink() when you hit
memory pressure from the application's page buffer competing with
the ARC
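
For reference, on OpenSolaris that cap is set via /etc/system and takes
effect at the next reboot .. the 8GB value below is just an
illustration (half of the 16GB mentioned above), not a recommendation:

```
* /etc/system -- pin the ARC's maximum size at 8 GiB (0x200000000 bytes)
* so the ARC stops growing/shrinking dynamically under memory pressure
set zfs:zfs_arc_max = 0x200000000
```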

In other words, as soon as the ARC is 50% full of dirty pages (8GB),
it'll start evicting pages .. you can't avoid that .. but what you can
avoid is the additional overhead of constantly growing and shrinking
the cache as it tries to keep up with the constantly changing blocks
in a large file

---
.je
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss