Re: [zfs-discuss] high read iops - more memory for arc?

2009-12-25 Thread Richard L. Hamilton
FYI, the arc and arc-discuss lists/forums are not appropriate for this.  There are
two "arc" acronyms:

* the Architecture Review Committee (the arc list is for cases under consideration;
arc-discuss is for other ARC discussion.  Non-committee business is most unwelcome
on the arc list.)

* the ZFS Adaptive Replacement Cache, which is what you are talking about.

The zfs-discuss list is appropriate for that subject; storage-discuss and
database-discuss _may_ relate, but rather than sending to every list that _might_
relate, I'd suggest starting with the most appropriate one first, and reading
enough of the existing posts on a list to get some idea of what is and isn't
appropriate there, before adding it as an additional CC in the hope that someone
might answer.

Very few people are likely to be responding here at this time, since most of those
who might are probably observing (at least socially) the Christmas holiday right
now (their families might not appreciate them being distracted by anything else!),
and many of the rest aren't interacting much because of how few people are around.
Don't expect too much until the first Monday after 1 January.  And anyway,
discussion lists are not a place where anyone is _obligated_ to answer; those with
support contracts presumably have other ways of getting help.

Now...I probably couldn't answer your question even if I had all the information
you left out, but maybe someone could, eventually.  Some of the information they
might need:

* what are you running (uname -a will do)?  ZFS is constantly being improved; 
problems
get fixed (and sometimes introduced) in just about every build

* what system, how is it configured, exactly what disk models, etc?
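To save a round trip, that information can be gathered in one pass.  This is just
a sketch, not an exact recipe for your release; the Solaris-only commands are
guarded so it's harmless to paste anywhere:

```shell
# Gather the basics before posting.  prtconf and zpool are assumed to be
# the OpenSolaris-era tools; the guards skip them where they don't exist.
uname -a                                  # OS release and build number
if command -v prtconf >/dev/null 2>&1; then
  prtconf | head -3                       # system model and installed RAM
fi
if command -v zpool >/dev/null 2>&1; then
  zpool status                            # pool layout, disks, errors
  zpool list                              # capacity and usage
fi
```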

Free memory is _supposed_ to be low.  Free memory is wasted memory, except that
a little is kept free to respond quickly to requests for more.  Most memory not
otherwise being used for mappings, kernel data structures, etc., is used either
as additional VM page cache for pages that might be used again, or by the ZFS
ARC.  The tools that report on just how memory is used behave differently on
Solaris (and even on different versions of it) than they do on other OSes,
because Solaris tries very hard to make the best use of all RAM.  The uname -a
information would also help someone (more knowledgeable than I, although I might
be able to look it up) suggest which tools would best help to understand your
situation.
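As a starting point, something like the following shows where the RAM is going on
a Solaris box (a sketch; both tools are Solaris-specific, hence the guards):

```shell
# arcstats answers the ZFS side of the question:
#   size = current ARC size, c = target size, c_max = upper limit.
if command -v kstat >/dev/null 2>&1; then
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
fi
# ::memstat (needs kernel privileges) breaks RAM down by consumer --
# kernel, anon, page cache, free -- so the "low free memory" claim can
# be checked directly.
if command -v mdb >/dev/null 2>&1; then
  echo ::memstat | mdb -k 2>/dev/null || true
fi
```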

So while free memory alone doesn't tell you much, there's a good chance that more
would help, unless some specific problem is involved.  There's also a good chance
that your problem is known, recognizable, and probably has a fix in a newer
version, or a workaround, if you provide enough information to help someone find
that for you.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] high read iops - more memory for arc?

2009-12-24 Thread Brad
I'm running into an issue where there seems to be a high number of read iops
hitting the disks, and physical free memory is fluctuating between 200MB and
450MB out of 16GB total.  We have the L2ARC configured on a 32GB Intel X25-E SSD
and the slog on another 32GB X25-E SSD.

According to our tester, Oracle writes are extremely slow (high latency).   

Below is a snippet of iostat:

    r/s    w/s   Mr/s   Mw/s wait  actv wsvc_t asvc_t  %w   %b device
    0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0
    0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0t0d0
 4898.3   34.2   23.2    1.4  0.1 385.3    0.0   78.1   0 1246 c1
    0.0    0.8    0.0    0.0  0.0   0.0    0.0   16.0   0    1 c1t0d0
  401.7    0.0    1.9    0.0  0.0  31.5    0.0   78.5   1  100 c1t1d0
  421.2    0.0    2.0    0.0  0.0  30.4    0.0   72.3   1   98 c1t2d0
  403.9    0.0    1.9    0.0  0.0  32.0    0.0   79.2   1  100 c1t3d0
  406.7    0.0    2.0    0.0  0.0  33.0    0.0   81.3   1  100 c1t4d0
  414.2    0.0    1.9    0.0  0.0  28.6    0.0   69.1   1   98 c1t5d0
  406.3    0.0    1.8    0.0  0.0  32.1    0.0   79.0   1  100 c1t6d0
  404.3    0.0    1.9    0.0  0.0  31.9    0.0   78.8   1  100 c1t7d0
  404.1    0.0    1.9    0.0  0.0  34.0    0.0   84.1   1  100 c1t8d0
  407.1    0.0    1.9    0.0  0.0  31.2    0.0   76.6   1  100 c1t9d0
  407.5    0.0    2.0    0.0  0.0  33.2    0.0   81.4   1  100 c1t10d0
  402.8    0.0    2.0    0.0  0.0  33.5    0.0   83.2   1  100 c1t11d0
  408.9    0.0    2.0    0.0  0.0  32.8    0.0   80.3   1  100 c1t12d0
    9.6   10.8    0.1    0.9  0.0   0.4    0.0   20.1   0   17 c1t13d0
    0.0   22.7    0.0    0.5  0.0   0.5    0.0   22.8   0   33 c1t14d0

Is this an indicator that we need more physical memory?  From
http://blogs.sun.com/brendan/entry/test, the order in which a read request is
satisfied is:

1) ARC
2) vdev cache of L2ARC devices
3) L2ARC devices
4) vdev cache of disks
5) disks

Using arc_summary.pl, we determined that prefetch was not helping much, so we
disabled it.

CACHE HITS BY DATA TYPE:
        Demand Data:             22%    158853174
        Prefetch Data:           17%    123009991   ---not helping???
        Demand Metadata:         60%    437439104
        Prefetch Metadata:        0%    2446824
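For reference, the usual way to disable file-level prefetch on OpenSolaris-era
systems (not necessarily exactly what was done here, so treat it as a sketch) is
the zfs_prefetch_disable kernel tunable:

```shell
# Persistent: add this line to /etc/system and reboot:
#   set zfs:zfs_prefetch_disable = 1
# Live, without a reboot (needs kernel privileges; guarded so this
# sketch does nothing on a box without mdb):
if command -v mdb >/dev/null 2>&1; then
  echo 'zfs_prefetch_disable/W0t1' | mdb -kw 2>/dev/null || true
fi
```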

The write iops started to kick in more, and latency dropped on the spinning disks:
    r/s    w/s   Mr/s   Mw/s wait  actv wsvc_t asvc_t  %w   %b device
    0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0
    0.0    0.0    0.0    0.0  0.0   0.0    0.0    0.0   0    0 c0t0d0
 1629.0  968.0   17.4    7.3  0.0  35.9    0.0   13.8   0 1088 c1
    0.0    1.9    0.0    0.0  0.0   0.0    0.0    1.7   0    0 c1t0d0
  126.7   67.3    1.4    0.2  0.0   2.9    0.0   14.8   0   90 c1t1d0
  129.7   76.1    1.4    0.2  0.0   2.8    0.0   13.7   0   90 c1t2d0
  128.0   73.9    1.4    0.2  0.0   3.2    0.0   16.0   0   91 c1t3d0
  128.3   79.1    1.3    0.2  0.0   3.6    0.0   17.2   0   92 c1t4d0
  125.8   69.7    1.3    0.2  0.0   2.9    0.0   14.9   0   89 c1t5d0
  128.3   81.9    1.4    0.2  0.0   2.8    0.0   13.1   0   89 c1t6d0
  128.1   69.2    1.4    0.2  0.0   3.1    0.0   15.7   0   93 c1t7d0
  128.3   80.3    1.4    0.2  0.0   3.1    0.0   14.7   0   91 c1t8d0
  129.2   69.3    1.4    0.2  0.0   3.0    0.0   15.2   0   90 c1t9d0
  130.1   80.0    1.4    0.2  0.0   2.9    0.0   13.6   0   89 c1t10d0
  126.2   72.6    1.3    0.2  0.0   2.8    0.0   14.2   0   89 c1t11d0
  129.7   81.0    1.4    0.2  0.0   2.7    0.0   12.9   0   88 c1t12d0
   90.4   41.3    1.0    4.0  0.0   0.2    0.0    1.2   0    6 c1t13d0
    0.0   24.3    0.0    1.2  0.0   0.0    0.0    0.2   0    0 c1t14d0


Is it true that if your MFU stats go over 50%, then more memory is needed?
CACHE HITS BY CACHE LIST:
        Anon:                           10%    74845266               [ New Customer, First Cache Hit ]
        Most Recently Used:             19%    140478087 (mru)        [ Return Customer ]
        Most Frequently Used:           65%    475719362 (mfu)        [ Frequent Customer ]
        Most Recently Used Ghost:        2%    20785604 (mru_ghost)   [ Return Customer Evicted, Now Back ]
        Most Frequently Used Ghost:      1%    9920089 (mfu_ghost)    [ Frequent Customer Evicted, Now Back ]
CACHE HITS BY DATA TYPE:
        Demand Data:                    22%    158852935
        Prefetch Data:                  17%    123009991
        Demand Metadata:                60%    437438658
        Prefetch Metadata:               0%    2446824
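One way to read these numbers: the ghost lists track blocks the ARC evicted and
was later asked for again, so their share of hits (rather than the MFU percentage
by itself) is the better "would more RAM have helped?" signal.  Plugging in the
counts above:

```shell
# Ghost hits as a fraction of real (mru + mfu) hits, using the
# arc_summary.pl counters quoted above.
awk 'BEGIN {
  mru = 140478087; mfu = 475719362            # hits served from the ARC
  mru_ghost = 20785604; mfu_ghost = 9920089   # hits on evicted entries
  printf "ghost hits as %% of real hits: %.1f%%\n",
         100 * (mru_ghost + mfu_ghost) / (mru + mfu)
}'
# -> ghost hits as % of real hits: 5.0%
```

Roughly 5% here: a real but modest eviction penalty, suggesting a larger ARC
would help some, though the percentage alone doesn't prove it's the main problem.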

My theory is that since there's not enough memory for the ARC to cache data, it
hits the L2ARC, where it can't find the data, and has to go to the disks for the
request.  This causes contention between reads and writes, which inflates the
service times.
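That theory is checkable: if ARC misses are mostly falling through the L2ARC to
disk, the l2_misses counter in arcstats should dominate l2_hits.  The kstat call
is Solaris-specific, so it's guarded, and the numbers in the arithmetic are made
up purely to show the calculation:

```shell
# Raw L2ARC counters (cumulative since boot):
if command -v kstat >/dev/null 2>&1; then
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses zfs:0:arcstats:l2_size
fi
# Hypothetical counter values, just to show the hit-rate arithmetic;
# a low rate means most ARC misses go all the way to the spindles.
awk 'BEGIN { l2_hits = 4.2e6; l2_misses = 9.8e6
             printf "l2arc hit rate: %.0f%%\n",
                    100 * l2_hits / (l2_hits + l2_misses) }'
# -> l2arc hit rate: 30%
```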

Thoughts?
-- 
This message posted from opensolaris.org