Hi,

Here is the output from the arc_summary.pl script:

System Memory:
         Physical RAM:  12279 MB
         Free Memory :  3106 MB
         LotsFree:      191 MB

ZFS Tunables (/etc/system):

ARC Size:
         Current Size:             3049 MB (arcsize)
         Target Size (Adaptive):   4628 MB (c)
         Min Size (Hard Limit):    1406 MB (zfs_arc_min)
         Max Size (Hard Limit):    11255 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:          88%    4078 MB (p)
         Most Frequently Used Cache Size:        11%    550 MB (c-p)

ARC Efficiency:
         Cache Access Total:             174248528
         Cache Hit Ratio:      68%       119190438      [Defined State for Buffer]
         Cache Miss Ratio:     31%       55058090       [Undefined State for Buffer]
         REAL Hit Ratio:       57%       100978493      [MRU/MFU Hits Only]

         Data Demand   Efficiency:    52%
         Data Prefetch Efficiency:    58%

        CACHE HITS BY CACHE LIST:
          Anon:                       10%        12368838               [ New Customer, First Cache Hit ]
          Most Recently Used:         30%        35806906 (mru)         [ Return Customer ]
          Most Frequently Used:       54%        65171587 (mfu)         [ Frequent Customer ]
          Most Recently Used Ghost:    1%        1643153 (mru_ghost)    [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  3%        4199954 (mfu_ghost)    [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                32%        38700071
          Prefetch Data:              14%        17637720
          Demand Metadata:            38%        45514276
          Prefetch Metadata:          14%        17338371
        CACHE MISSES BY DATA TYPE:
          Demand Data:                63%        34781822
          Prefetch Data:              23%        12708139
          Demand Metadata:            12%        6915559
          Prefetch Metadata:           1%        652570


So now I'm even more confused... in the ARC Efficiency section we see:

Data Demand   Efficiency:    52%
Data Prefetch Efficiency:    58%
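
If I'm reading the rest of the output right, these two percentages seem to
be just hits / (hits + misses) for each data type:

    Demand Data:    38700071 / (38700071 + 34781822) ~= 52%
    Prefetch Data:  17637720 / (17637720 + 12708139) ~= 58%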

Is this a good thing or a bad thing?

Thank you,
Bruno

On Wed, 22 Apr 2009 21:08:00 -0700, Ben Rockwood <[email protected]>
wrote:
> bsousa wrote:
>> Hi Ben,
>>
>> First of all, thank you for the quick feedback, and some more questions
>> pop into my mind :)
>>   
> 
> 
> Bob's feedback is great, I'll supplement.
>>     * will the SSD drive only help in write I/Os? I ask this because we
>> will need to back up this data to another host running Solaris, attached
>> with fibre to a tape library. Our current setup takes around 10 hours
>> for a full backup (!!), and my intention is to decrease this backup
>> time window
>>   
> 
> SSD's can be used in 3 ways:
>  1) As pool members, like an ordinary disk.  (This is a waste)
>  2) As a write cache, ZIL offload
>  3) As an extended read cache to supplement the in-memory ARC.  Rarely
> is this needed.
> 
> The latter two are what become known as a "hybrid pool", a caching layer
> between DRAM and Disk.  For NFS or mail this is really helpful because
> they tend to fsync or do direct writes which can't be properly and
> cleanly queued for async write.  ZIL Offload SSD combined with 7,200RPM
> SATA is like putting a jet engine on a pinto... cheap but kick ass.
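> 
> As a rough sketch (the pool name 'tank' and the device names here are
> made up, adjust for your setup), attaching one SSD as a ZIL log device
> and another as an L2ARC cache device looks like:
> 
>   zpool add tank log c1t5d0
>   zpool add tank cache c1t6d0
> 
> The log device soaks up the synchronous writes; the cache device only
> really pays off once the working set no longer fits in RAM.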
> 
> Use cuddletech.com/arc_summary to better understand your ARC usage and
> prefetch.
> 
>>     * will setting atime to off help to maximize performance?
>>   
> 
> Yes, disabling atime is the first thing you should do after creating the
> pool.
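> 
> For example, assuming the pool is called 'tank' (name made up):
> 
>   zfs set atime=off tank
>   zfs get atime tank
> 
> Child datasets inherit the setting unless you explicitly override it.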
> 
>>     * is there any rule of thumb to set up the recordsize, or any
>> real-world experience, something like ZFS in the Trenches? ;)
>>   
> 
> ZFS uses a variable block size.  The "record size" sets the upper
> limit.  The default is 128K.  So, for instance, if you write 8K of data,
> you get an 8K record.  If you write 64K of data you get a 64K record. 
> If you write 256K of data you get two 128K records.  The default block
> size tends to work out really nicely.
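> 
> If a workload really does do fixed-size small I/O (a database, for
> instance), you can lower the property on just that dataset; a made-up
> example:
> 
>   zfs set recordsize=8k tank/db
> 
> Keep in mind it only applies to files written after the change.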
> 
>>     * more "exotic" features of ZFS like file prefetch, device
>> prefetch... any advice on those?
>>   
> 
> Use tools like arc_summary to get a feel for prefetch.  The only case in
> which I really had a problem with prefetch was in iSCSI environments
> where I/O is fairly costly.  If you /can/ tolerate the extra I/O, it's
> worth it to have your ARC primed.
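> 
> If prefetch does turn out to hurt (iSCSI again being the usual suspect),
> the file-level prefetcher can be switched off via a tunable in
> /etc/system (reboot required); treat it as a last resort rather than a
> default:
> 
>   set zfs:zfs_prefetch_disable = 1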
> 
> 
> benr.
> 
>

