Yes, here it is. (The numbers come from VMware on a laptop, so apologies for the performance.)

How did I test?

1) My Disks: 

LUN ID      Device    Type         Size       Volume     Mounted Remov Attach
c0t0d0      sd4       cdrom        No Media              no      yes   ata
c1t0d0      sd0       disk         8GB        syspool    no      no    mpt
c1t1d0      sd1       disk         20GB       data       no      no    mpt
c1t2d0      sd2       disk         20GB       data       no      no    mpt
c1t3d0      sd3       disk         20GB       data       no      no    mpt
c1t4d0      sd8       disk         4GB                   no      no    mpt
syspo~/swap           zvol         768.0MB    syspool    no      no
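
(For reference: the listing above is Nexenta management-console output. On a
plain OpenSolaris shell a rough equivalent, just to enumerate the disks, is:

format </dev/null

which prints the disk list and exits without entering the interactive menu.)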

2) My Pools:
  
volume: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

volume: syspool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        syspool     ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

3) Add the cache device to syspool:
zpool add -f syspool cache c1t4d0s2
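
Note: the -f is needed because s2 is the traditional overlap slice covering
the whole disk, which zpool warns about. Unlike normal data vdevs, a cache
vdev can also be taken out again later:

zpool remove syspool c1t4d0s2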


r...@nexenta:/volumes# zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

  pool: syspool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        syspool     ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0
        cache
          c1t4d0s2  ONLINE       0     0     0

errors: No known data errors

4) Do I/O on the data volume and watch with "zpool iostat" whether the L2ARC
fills up:

cmd: 
cd /volumes/data
iozone -s 1G -i 0 -i 1 (test 0 = write/rewrite, test 1 = read/re-read)

Typically looks like this: 

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        1.47G  58.0G      0    131      0  9.47M
  raidz1    1.47G  58.0G      0    131      0  9.47M
    c1t1d0      -      -      0    100      0  8.45M
    c1t2d0      -      -      0     77      0  4.74M
    c1t3d0      -      -      0     77      0  5.48M
----------  -----  -----  -----  -----  -----  -----
syspool     1.87G  6.06G      2      0  23.8K      0
  c1t0d0s0  1.87G  6.06G      2      0  23.7K      0
cache           -      -      -      -      -      -
  c1t4d0s2  95.9M  3.89G      0      0      0   127K
----------  -----  -----  -----  -----  -----  -----
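
The table above is one sample taken from a second terminal; a minimal way to
watch it (the 5-second interval is just my choice here):

zpool iostat -v 5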

5) Do the same I/O on the syspool: 

cd /volumes
iozone -s 1G -i 0 -i 1 (test 0 = write/rewrite, test 1 = read/re-read)

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         407K  59.5G      0      0      0      0
  raidz1     407K  59.5G      0      0      0      0
    c1t1d0      -      -      0      0      0      0
    c1t2d0      -      -      0      0      0      0
    c1t3d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
syspool     2.35G  5.59G      0    167  6.25K  14.2M
  c1t0d0s0  2.35G  5.59G      0    167  6.25K  14.2M
cache           -      -      -      -      -      -
  c1t4d0s2   406M  3.59G      0     80      0  9.59M
----------  -----  -----  -----  -----  -----  -----
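
To cross-check how much the L2ARC really holds and whether it serves any
hits, the ARC kstats can be queried directly (assuming the usual arcstats
statistic names in this codebase):

kstat -p zfs:0:arcstats:l2_size
kstat -p zfs:0:arcstats:l2_hits
kstat -p zfs:0:arcstats:l2_misses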


6) You can see that the L2ARC in syspool is only filled when I/O goes to 
syspool itself; I/O on the data pool leaves it almost untouched.
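
If the data pool should get an L2ARC too, a cache device has to be added to
that pool itself, e.g. (c1t5d0 is a hypothetical spare SSD, not one of the
disks listed above):

zpool add data cache c1t5d0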

Release is build 104 with some patches.