[zfs-discuss] 'primarycache' and 'secondarycache'

2010-09-16 Thread Jackie Cheng
My understanding of the read cache is that L2ARC has a read thread that reads the cache from ARC. Hence my question: if primarycache is set to 'metadata', will L2ARC get to cache user data? Similarly, what if primarycache is set to 'none'? Thanks, --Jackie -- This message posted from opensolaris.org
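Since the L2ARC is fed from blocks being evicted out of the ARC, anything excluded from the ARC by primarycache can never reach the L2ARC, regardless of the secondarycache setting. A minimal sketch (the dataset name tank/data is hypothetical):

```shell
# L2ARC is populated from blocks evicted out of the ARC, so anything
# excluded from the ARC never reaches the L2ARC.
zfs set primarycache=metadata tank/data   # ARC caches metadata only
zfs set secondarycache=all tank/data      # L2ARC would accept everything...
# ...but with primarycache=metadata, user data is never held in the
# ARC, so only metadata can spill into the L2ARC. With
# primarycache=none, nothing is cached in ARC, so the L2ARC stays empty.
zfs get primarycache,secondarycache tank/data
```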

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-16 Thread erik.ableson
On 15 sept. 2010, at 22:04, Mike Mackovitch wrote: On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote: any resolution to this issue? I'm experiencing the same annoying lockd thing with mac osx 10.6 clients. I am at pool ver 14, fs ver 3. Would somehow going back to the earlier 8/2

[zfs-discuss] Replacing a disk never completes

2010-09-16 Thread Ben Miller
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks (Seagate Constellation) and the pool seems sick now. The pool has four raidz2 vdevs (8+2) where the first set of 10 disks were replaced a few months ago. I replaced two disks in the second set (c2t0d0, c3t0d0) a
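For reference, the usual sequence for an in-place capacity upgrade like this is one `zpool replace` per disk, watching the resilver with `zpool status` (device and pool names below are hypothetical):

```shell
# Replace a 500GB disk with the new 2TB disk in the same slot;
# this kicks off a resilver onto the new device.
zpool replace tank c2t0d0

# Watch progress; a healthy replace shows "resilver in progress"
# and the old/new device pair under a "replacing" vdev until done.
zpool status -v tank
```

A replace that never completes, as described above, usually shows the replacing vdev stuck in `zpool status` output, often accompanied by read or checksum errors on one of the member disks.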

Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-16 Thread Wolfraider
We downloaded zilstat from http://www.richardelling.com/Home/scripts-and-programs-1 but we never could get the script to run. We are not really sure how to debug. :( ./zilstat.ksh dtrace: invalid probe specifier #pragma D option quiet inline int OPT_time = 0; inline int OPT_txg = 0; inline
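As a debugging sketch (not from the original thread): "dtrace: invalid probe specifier" frequently means DTrace itself cannot run, either for lack of privileges or because a probe the script uses is absent on that build, rather than a syntax problem in the script:

```shell
# Confirm DTrace works at all; listing probes requires the
# dtrace_kernel privilege (typically root).
dtrace -l | head

# Show the effective privileges of the current shell.
ppriv $$

# Run the script under ksh explicitly, in case the shebang or
# execute bit is the problem.
ksh ./zilstat.ksh
```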

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-16 Thread Rich Teer
On Thu, 16 Sep 2010, erik.ableson wrote: And for reference, I have a number of 10.6 clients using NFS for sharing Fusion virtual machines, iTunes library, iPhoto libraries etc. without any issues. Excellent; what OS is your NFS server running? -- Rich Teer, Publisher Vinylphile Magazine

Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-16 Thread Wolfraider
We have the following setup configured. The drives are running on a couple of PAC PS-5404s. Since these units do not support JBOD, we have configured each individual drive as a RAID0 and shared out all 48 RAID0s per box. This is connected to the Solaris box through a dual port 4G Emulex

Re: [zfs-discuss] resilver = defrag?

2010-09-16 Thread David Dyer-Bennet
On Wed, September 15, 2010 16:18, Edward Ned Harvey wrote: For example, if you start with an empty drive, and you write a large amount of data to it, you will have no fragmentation. (At least, no significant fragmentation; you may get a little bit based on random factors.) As life goes

[zfs-discuss] recordsize

2010-09-16 Thread Mike DeMarco
What are the ramifications of changing the recordsize of a ZFS filesystem that already has data on it? I want to tune down the recordsize to speed up very small reads, to a size that is more in line with the read size. Can I do this on a filesystem that already has data on it, and how does it

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-16 Thread Rich Teer
On Thu, 16 Sep 2010, Erik Ableson wrote: OpenSolaris snv129 Hmm, SXCE snv_130 here. Did you have to do any server-side tuning (e.g., allowing remote connections), or did it just work out of the box? I know that Sendmail needs some gentle persuasion to accept remote connections out of the box;

Re: [zfs-discuss] recordsize

2010-09-16 Thread Freddie Cash
On Thu, Sep 16, 2010 at 8:21 AM, Mike DeMarco mikej...@yahoo.com wrote: What are the ramifications to changing the recordsize of a zfs filesystem that already has data on it? I want to tune down the recordsize to speed up very small reads to a size that is more in line with the read size.
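The key point in this thread is that recordsize only governs blocks written after the property is changed; existing files keep their old block size until rewritten. A sketch, with a hypothetical dataset name:

```shell
# Only affects newly written blocks; existing data is untouched.
zfs set recordsize=8K tank/db
zfs get recordsize tank/db

# To apply the new recordsize to existing data, it must be rewritten,
# e.g. copied into new files or restored from a zfs send stream.
```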

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-16 Thread Mike Mackovitch
On Thu, Sep 16, 2010 at 08:15:53AM -0700, Rich Teer wrote: On Thu, 16 Sep 2010, Erik Ableson wrote: OpenSolaris snv129 Hmm, SXCE snv_130 here. Did you have to do any server-side tuning (e.g., allowing remote connections), or did it just work out of the box? I know that Sendmail needs

Re: [zfs-discuss] resilver = defrag?

2010-09-16 Thread Miles Nordin
dd == David Dyer-Bennet d...@dd-b.net writes: dd Sure, if only a single thread is ever writing to the disk dd store at a time. video warehousing is a reasonable use case that will have small numbers of sequential readers and writers to large files. virtual tape library is another

Re: [zfs-discuss] resilver = defrag?

2010-09-16 Thread Marty Scholes
David Dyer-Bennet wrote: Sure, if only a single thread is ever writing to the disk store at a time. This situation doesn't exist with any kind of enterprise disk appliance, though; there are always multiple users doing stuff. Ok, I'll bite. Your assertion seems to be that any kind of

Re: [zfs-discuss] Compression block sizes

2010-09-16 Thread Bob Friesenhahn
On Wed, 15 Sep 2010, Brandon High wrote: When using compression, are the on-disk record sizes determined before or after compression is applied? In other words, if record size is set to 128k, is that the amount of data fed into the compression engine, or is the output size trimmed to fit? I
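To the question above: the recordsize (e.g. 128K) is the logical size fed into the compression engine, and the on-disk allocation is the smaller compressed result. One way to observe this (dataset and file names are hypothetical):

```shell
# Enable compression; only blocks written afterwards are compressed.
zfs set compression=lzjb tank/data
zfs get compressratio tank/data      # logical-to-physical size ratio

# For an individual file, compare logical vs physical size:
ls -lh /tank/data/bigfile            # logical (uncompressed) length
du -h /tank/data/bigfile             # physical (compressed) blocks used
```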

[zfs-discuss] Best practice for Sol10U9 ZIL -- mirrored or not?

2010-09-16 Thread Ray Van Dolson
Best practice in Solaris 10 U8 and older was to use a mirrored ZIL. With the ability to remove slog devices in Solaris 10 U9, we're thinking we may get more bang for our buck to use two slog devices for improved IOPS performance instead of needing the redundancy so much. Any thoughts on this?
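The two configurations being weighed can be sketched as follows (device and pool names are hypothetical; log device removal arrived with pool version 19):

```shell
# Option 1: mirrored slog -- the pre-U9 best practice, since a lost
# unmirrored slog could previously take the pool with it.
zpool add tank log mirror c4t0d0 c4t1d0

# Option 2: two independent slogs -- ZIL writes are spread across
# both devices for more IOPS, at the cost of redundancy.
zpool add tank log c4t0d0 c4t1d0

# With U9, a (failed) log device can now be removed from the pool:
zpool remove tank c4t0d0
```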

Re: [zfs-discuss] Best practice for Sol10U9 ZIL -- mirrored or not?

2010-09-16 Thread Bryan Horstmann-Allen
On 2010-09-16 18:08:46, Ray Van Dolson wrote: Best practice in Solaris 10 U8 and older was to use a mirrored ZIL. With the ability to remove slog devices in Solaris 10 U9, we're thinking we may get more