I've got a system with 24 GB of RAM, and I'm running into some interesting 
issues playing with the ARC, L2ARC, and the DDT. I'll post a separate thread 
here shortly. I think even if you add more RAM, you'll run into what I'm 
noticing (and posting about).

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Frank Van Damme
Sent: Tuesday, May 03, 2011 4:33 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] gaining speed with l2arc

Hi, hello,

another dedup question. I just installed an SSD as L2ARC. This is a backup 
server with 6 GB of RAM (i.e. the same data is rarely read twice); basically it 
holds a large number of old backups, and they need to be deleted. Deletion 
speed seems to have improved, although the majority of reads still come from 
disk.

                 capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
backups       5.49T  1.58T  1.03K      6  3.13M  91.1K
  raidz1      5.49T  1.58T  1.03K      6  3.13M  91.1K
    c0t0d0s1      -      -    200      2  4.35M  20.8K
    c0t1d0s1      -      -    202      1  4.28M  24.7K
    c0t2d0s1      -      -    202      1  4.28M  24.9K
    c0t3d0s1      -      -    197      1  4.27M  13.1K
cache             -      -      -      -      -      -
  c1t5d0       112G  7.96M     63      2   337K  66.6K

The above output was taken while the machine is only deleting files (so I guess 
the goal is to have *all* metadata reads served from the cache). So the first 
riddle: how to explain the low number of writes to the L2ARC compared to the 
reads from disk?
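For what it's worth, the l2_* counters in the ARC kstats show where reads are actually landing. A diagnostic sketch (statistic names as found on OpenSolaris-derived builds; they may differ on yours):

```shell
# Dump the L2ARC counters from the ARC kstats (Solaris-family systems)
kstat -p zfs:0:arcstats | grep '^zfs:0:arcstats:l2_'
# Of interest here:
#   l2_hits / l2_misses   - reads served by vs. missed by the cache device
#   l2_size               - bytes currently held on the SSD
#   l2_write_bytes        - cumulative bytes written to the cache device
```

Comparing l2_hits against l2_misses over time shows whether the cache is warming up at all, or whether the working set is simply never revisited.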

Because reading bits of the DDT is supposed to be the biggest bottleneck, I 
reckoned it would be a good idea to try not to expire any part of my DDT from 
the L2ARC. Every L2ARC buffer needs a header kept in RAM, so they say, so 
perhaps there is also a method to reserve as much memory as possible for that.
Could one attain this by setting zfs_arc_meta_limit to a higher value?
I don't need much process memory on this machine (I use rsync and not much 
else).
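As a side note on sizing: a rough back-of-envelope calculation (every number below is an assumption, not a measurement: ~320 bytes per in-core DDT entry is the commonly quoted figure, and a 128K average block size is a guess) suggests a DDT for a pool this size cannot come close to fitting in 6 GB of RAM:

```python
# Back-of-envelope DDT footprint estimate. Assumed numbers: the real
# per-entry size depends on the ZFS build, and the average block size
# depends on the data; treat the result as an order of magnitude only.
allocated_bytes = 5.49 * 2**40   # ~5.49 TB allocated in the pool (see iostat above)
avg_block_size  = 128 * 2**10    # assume 128K average block size
ddt_entry_size  = 320            # assumed bytes per in-core DDT entry

n_entries = allocated_bytes / avg_block_size
ddt_bytes = n_entries * ddt_entry_size
print(f"~{n_entries / 1e6:.0f}M unique blocks -> ~{ddt_bytes / 2**30:.1f} GiB of DDT")
# -> ~46M unique blocks -> ~13.7 GiB of DDT
```

Under those assumptions the full table is roughly twice the machine's RAM, which is exactly why keeping it resident on the L2ARC matters.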

I was also wondering if setting secondarycache=metadata on that zpool would be 
a good idea (to make sure the L2ARC stays reserved for metadata, since the DDT 
is considered metadata).
Bad idea? Or would it even help to set primarycache=metadata as well, so RAM 
doesn't fill up with file data?
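For reference, both properties can be set on the pool's root dataset (and are inherited downward); a sketch, using the pool name from the iostat output above:

```shell
# Keep the SSD cache for metadata (which includes the DDT) only
zfs set secondarycache=metadata backups
# More aggressive: restrict the in-RAM ARC to metadata as well
zfs set primarycache=metadata backups
# Verify the current settings
zfs get primarycache,secondarycache backups
```

Note that primarycache=metadata also stops file data from being staged for the L2ARC at all, so the two settings interact; that may or may not be what you want on a box that also serves restores.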

P.S. The system is NexentaOS_134f (I'm also looking into newer OpenSolaris 
variants with bug fixes/better performance).

--
Frank Van Damme
No part of this copyright message may be reproduced, read or seen, dead or 
alive or by any means, including but not limited to telepathy without the 
benevolence of the author.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss