Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-18 Thread Rob Logan

> correct ratio of arc to l2arc?

from http://blogs.sun.com/brendan/entry/l2arc_screenshots

It costs some DRAM to reference the L2ARC, at a rate proportional to
record size. For example, it currently takes about 15 Gbytes of DRAM
to reference 600 Gbytes of L2ARC - at an 8 Kbyte ZFS record size. If
you use a 16 Kbyte record size, that cost would be halved - 7.5
Gbytes. This means you shouldn't, for example, configure a system with
only 8 Gbytes of DRAM, 600 Gbytes of L2ARC, and an 8 Kbyte record size
- if you did, the L2ARC would never fully populate.
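
Those figures work out to roughly 200 bytes of DRAM per L2ARC record. A
quick back-of-the-envelope check (a Python sketch; the per-record header
cost is inferred from the numbers above, not an exact constant):

# Rough sanity check of the DRAM cost quoted above.
HDR_BYTES = 200                      # assumed per-record header cost
l2arc = 600 * 2**30                  # 600 GBytes of L2ARC
for recsize in (8 * 1024, 16 * 1024):
    dram_gb = (l2arc // recsize) * HDR_BYTES / 2**30
    print("%2d KB records: ~%.1f GB of DRAM" % (recsize // 1024, dram_gb))

This prints ~14.6 GB for 8 KB records and ~7.3 GB for 16 KB, in line with
the "about 15" and "7.5" GByte figures quoted above.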




Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-18 Thread Ethan Erchinger
 
> >  correct ratio of arc to l2arc?
> 
> from http://blogs.sun.com/brendan/entry/l2arc_screenshots

Thanks Rob. Hmm...that ratio isn't awesome.


Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-18 Thread Richard Elling

Ethan Erchinger wrote:
> > >  correct ratio of arc to l2arc?
> > 
> > from http://blogs.sun.com/brendan/entry/l2arc_screenshots
> 
> Thanks Rob. Hmm...that ratio isn't awesome.

TANSTAAFL

A good SWAG is about 200 bytes for the L2ARC directory in the ARC for
each record in the L2ARC.

So if your recordsize is 512 bytes (pathologically worst case), you'll need
200/512 * size of L2ARC for a minimum ARC size, so ARC needs to be
about 40% of the size of L2ARC.  For 8 kByte recordsize it will be about
200/8192 or 2.5%.  Neel liked using 16 kByte recordsize for InnoDB, so
figure about 1.2%.

In this case, if you have about 150 GBytes of L2ARC disk, and are using
8 kByte recordsize, you'll need at least 3.75 GBytes for the ARC, instead
of 2 GBytes.  Since this space competes with the regular ARC caches,
you'll want even more headroom, so maybe 5 GBytes would be a
reasonable minimum ARC cap?
-- richard
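
Richard's rule of thumb reduces to a one-line formula. A minimal sketch
(assuming the ~200-byte SWAG and, per the iostat output below, roughly
150 GBytes of cache devices):

# Minimum ARC needed just to index the L2ARC, per the ~200 B/record SWAG.
def min_arc_bytes(l2arc_bytes, recordsize, hdr_bytes=200):
    return (l2arc_bytes // recordsize) * hdr_bytes

l2arc = 150 * 2**30                  # two ~75 GByte cache devices
for rs in (512, 8192, 16384):
    print("recordsize %5d: >= %5.2f GB ARC (%.1f%% of L2ARC)"
          % (rs, min_arc_bytes(l2arc, rs) / 2**30, 100.0 * 200 / rs))

With the 2 GByte zfs_arc_max used later in this thread, the 8 KByte case
leaves almost nothing for the regular ARC, which is exactly Richard's point.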



[zfs-discuss] problems with l2arc in 2009.06

2009-06-17 Thread Ethan Erchinger
Hi all,

Since we've started running 2009.06 on a few servers we seem to be
hitting a problem with l2arc that causes it to stop receiving evicted
arc pages.  Has anyone else seen this kind of problem?

The filesystem contains about 130G of compressed (lzjb) data, and looks
like:
$ zpool status -v data
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t1d0p0   ONLINE       0     0     0
            c1t9d0p0   ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t2d0p0   ONLINE       0     0     0
            c1t10d0p0  ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t3d0p0   ONLINE       0     0     0
            c1t11d0p0  ONLINE       0     0     0
        logs           ONLINE       0     0     0
          c1t7d0p0     ONLINE       0     0     0
          c1t15d0p0    ONLINE       0     0     0
        cache
          c1t14d0p0    ONLINE       0     0     0
          c1t6d0p0     ONLINE       0     0     0

$ zpool iostat -v data
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
data          133G   275G    334    926  2.35M  8.62M
  mirror     44.4G  91.6G    111    257   799K  1.60M
    c1t1d0p0     -      -     55    145   979K  1.61M
    c1t9d0p0     -      -     54    145   970K  1.61M
  mirror     44.3G  91.7G    111    258   804K  1.61M
    c1t2d0p0     -      -     55    140   979K  1.61M
    c1t10d0p0    -      -     55    140   973K  1.61M
  mirror     44.4G  91.6G    111    258   801K  1.61M
    c1t3d0p0     -      -     55    145   982K  1.61M
    c1t11d0p0    -      -     55    145   975K  1.61M
  c1t7d0p0      12K  29.7G      0     76     71  1.90M
  c1t15d0p0    152K  29.7G      0     78     11  1.96M
cache             -      -      -      -      -      -
  c1t14d0p0   51.3G  23.2G     51     35   835K  4.07M
  c1t6d0p0    48.7G  25.9G     45     34   750K  3.86M
-----------  -----  -----  -----  -----  -----  -----

After adding quite a bit of data to l2arc, it quits getting new writes,
and read traffic is quite low, even though arc misses are quite high:
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
data          133G   275G    550    263  3.85M  1.57M
  mirror     44.4G  91.6G    180      0  1.18M      0
    c1t1d0p0     -      -     88      0  3.22M      0
    c1t9d0p0     -      -     91      0  3.36M      0
  mirror     44.3G  91.7G    196      0  1.29M      0
    c1t2d0p0     -      -     95      0  2.74M      0
    c1t10d0p0    -      -    100      0  3.60M      0
  mirror     44.4G  91.6G    174      0  1.38M      0
    c1t3d0p0     -      -     85      0  2.71M      0
    c1t11d0p0    -      -     88      0  3.34M      0
  c1t7d0p0       8K  29.7G      0    131      0   790K
  c1t15d0p0    156K  29.7G      0    131      0   816K
cache             -      -      -      -      -      -
  c1t14d0p0   51.3G  23.2G     16      0   271K      0
  c1t6d0p0    48.7G  25.9G     14      0   224K      0
-----------  -----  -----  -----  -----  -----  -----

$ perl arcstat.pl
    Time    read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
21:21:31     10M    5M     53    5M   53     0    0    2M   31   857M     1G
21:21:32     209    84     40    84   40     0    0    60   32   833M     1G
21:21:33     255    57     22    57   22     0    0     9    4   832M     1G
21:21:34     630   483     76   483   76     0    0   232   63   831M     1G

Arcstats output, just for completeness:
$ kstat -n arcstats
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        c                               1610325248
        c_max                           2147483648
        c_min                           1610325248
        crtime                          129.137246015
        data_size                       528762880
        deleted                         14452910
        demand_data_hits                589823
        demand_data_misses              3812972
        demand_metadata_hits            4477921
        demand_metadata_misses          2069450
        evict_skip                      5347558
        hash_chain_max                  13
        hash_chains                     521232
        hash_collisions                 9991276
        hash_elements                   1750708
        hash_elements_max               2627838
        hdr_size                        25463208
        hits                            5067744
        l2_abort_lowmem                 3225
        l2_cksum_bad                    0

Re: [zfs-discuss] problems with l2arc in 2009.06

2009-06-17 Thread Ethan Erchinger
 
> This is a MySQL database server, so if you are wondering about the
> smallish arc size, it's being artificially limited by set
> zfs:zfs_arc_max = 0x80000000 in /etc/system, so that the majority of
> RAM can be allocated to InnoDB.

I was told offline that it's likely because my arc size has been limited
to a point that it cannot utilize l2arc correctly.  Can anyone tell me
the correct ratio of arc to l2arc?

Thanks again,
Ethan
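
For reference, raising the cap to Richard's suggested ~5 GByte minimum is a
one-line change in /etc/system (the value below is illustrative, not from
the thread; 0x140000000 = 5 GBytes):

set zfs:zfs_arc_max = 0x140000000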