[zfs-discuss] gaining speed with l2arc

2011-05-03 Thread Frank Van Damme
Hi, hello,

another dedup question. I just installed an SSD as L2ARC.  This
is a backup server with 6 GB RAM (i.e. I don't often read the same data
again); basically it has a large number of old backups on it that need
to be deleted. Deletion speed seems to have improved, although the
majority of reads are still coming from disk.

                 capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
backups        5.49T  1.58T  1.03K      6  3.13M  91.1K
  raidz1       5.49T  1.58T  1.03K      6  3.13M  91.1K
    c0t0d0s1       -      -    200      2  4.35M  20.8K
    c0t1d0s1       -      -    202      1  4.28M  24.7K
    c0t2d0s1       -      -    202      1  4.28M  24.9K
    c0t3d0s1       -      -    197      1  4.27M  13.1K
cache              -      -      -      -      -      -
  c1t5d0        112G  7.96M     63      2   337K  66.6K

The above output is from while the machine is only deleting files (so I
guess the goal is to have *all* metadata reads come from the cache). So
the first riddle: how does one explain the low number of writes to the
L2ARC compared to the number of reads from disk?

Because reading bits of the DDT is supposed to be the biggest
bottleneck, I reckoned it would be a good idea to try not to expire
any part of my DDT from the L2ARC. The L2ARC is memory mapped, so they
say, so perhaps there is also a way to reserve as much memory as
possible for that.
Could one attain this by setting zfs_arc_meta_limit to a higher value?
I don't need much process memory on this machine (I use rsync and not
much else).
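
For concreteness, what I have in mind is roughly the sketch below, assuming
the stock Solaris tunable name and a value picked purely as an example (I
haven't verified this on NexentaOS_134f):

  # /etc/system -- takes effect on the next reboot; 0x140000000 = 5 GB, example only
  set zfs:zfs_arc_meta_limit = 0x140000000

  # check the live ARC metadata numbers
  echo ::arc | mdb -k | grep arc_meta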

I was also wondering if setting secondarycache=metadata for that zpool
would be a good idea (to make sure the L2ARC stays reserved for metadata,
since the DDT is considered metadata).
Bad idea, or would it even help to set primarycache=metadata too, so that
RAM doesn't fill up with file data?
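
Concretely, something like this for my pool (property names as documented in
zfs(1M); 'backups' is the pool from the iostat output above):

  zfs set secondarycache=metadata backups   # L2ARC holds metadata (incl. DDT) only
  zfs set primarycache=metadata backups     # ...and the ARC too, if that helps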

P.S. the system is: NexentaOS_134f (I'm looking into newer OpenSolaris
variants with bugs fixed/better performance, too).

-- 
Frank Van Damme
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.


Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Joerg Schilling
Freddie Cash fjwc...@gmail.com wrote:

 On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton dan.shel...@oracle.com wrote:
  Is anyone aware of any freeware program that can speed up copying tons of
  data (2 TB) from UFS to ZFS on same server?

 rsync, with --whole-file --inplace (and other options), works well for
 the initial copy.

But this is most likely slower than star, and does rsync support sparse files?

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
        j...@cs.tu-berlin.de                (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/  ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Andrew Gabriel

Dan Shelton wrote:
Is anyone aware of any freeware program that can speed up copying tons 
of data (2 TB) from UFS to ZFS on same server?


I use 'ufsdump | ufsrestore'*. I would also suggest trying
'sync=disabled' during the operation and reverting it afterwards.
Certainly, fastfs (a similar although more dangerous option for ufs)
makes ufs-to-ufs copying significantly faster.


*ufsrestore works fine on ZFS filesystems (although I haven't tried it 
with any POSIX ACLs on the original ufs filesystem, which would probably 
simply get lost).
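
For example (a sketch only: the mount point and dataset name are placeholders,
and remember that sync=disabled means a crash during the copy can lose recent
writes):

  zfs set sync=disabled tank/data
  cd /tank/data && ufsdump 0f - /export/ufs | ufsrestore rf -
  zfs set sync=standard tank/data           # revert once the copy is done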


--
Andrew Gabriel


Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Joerg Schilling
Andrew Gabriel andrew.gabr...@oracle.com wrote:

 Dan Shelton wrote:
  Is anyone aware of any freeware program that can speed up copying tons 
  of data (2 TB) from UFS to ZFS on same server?

 I use 'ufsdump | ufsrestore'*. I would also suggest try setting 
 'sync=disabled' during the operation, and reverting it afterwards. 
 Certainly, fastfs (a similar although more dangerous option for ufs) 
 makes ufs to ufs copying significantly faster.

 *ufsrestore works fine on ZFS filesystems (although I haven't tried it 
 with any POSIX ACLs on the original ufs filesystem, which would probably 
 simply get lost).

star -copy -no-fsync  is typically 30% faster than ufsdump | ufsrestore.
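
A typical copy run looks roughly like this (a sketch from memory; check star(1)
for the exact options on your version, and the directories are placeholders):

  star -copy -p -acl -sparse -no-fsync -C /ufs-src . /zfs-dst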

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
        j...@cs.tu-berlin.de                (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/  ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Brandon High
On Tue, May 3, 2011 at 5:47 AM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
 But this is most likely slower than star and does rsync support sparse files?

'rsync -ASHXavP'

-A: ACLs
-S: Sparse files
-H: Hard links
-X: Xattrs
-a: archive mode; equals -rlptgoD (no -H,-A,-X)

You don't need to specify --whole-file; it's implied when copying on
the same system. --inplace can play badly with hard links and
shouldn't be used.

It will probably be slower than other options, but it may be more
accurate, especially with -H.
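
For example (paths are placeholders; the trailing slashes copy the contents of
the source directory rather than the directory itself):

  rsync -ASHXavP /ufs/export/home/ /tank/home/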

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Erik Trimble

On 5/3/2011 8:55 AM, Brandon High wrote:

On Tue, May 3, 2011 at 5:47 AM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de  wrote:

But this is most likely slower than star and does rsync support sparse files?

'rsync -ASHXavP'

-A: ACLs
-S: Sparse files
-H: Hard links
-X: Xattrs
-a: archive mode; equals -rlptgoD (no -H,-A,-X)

You don't need to specify --whole-file, it's implied when copying on
the same system. --inplace can play badly with hard links and
shouldn't be used.

It probably will be slower than other options but it may be more
accurate, especially with -H

-B


rsync is indeed slower than star; so far as I can tell, this is due 
almost exclusively to the fact that rsync needs to build an in-memory 
table of all work being done *before* it starts to copy. After that, it 
copies at about the same rate as star (my observations). I'd have to 
look at the code, but rsync appears to buffer a significant amount 
internally (due to its expected network use pattern), which helps for 
ZFS copying.  The one thing I'm not sure of is whether rsync uses a 
socket, pipe, or semaphore method when doing same-host copying. I 
presume a socket (which would slightly slow it down vs star).


That said, rsync is really the only solution if you have a partial or 
interrupted copy.  It's also really the best method to do verification.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Ian Collins

 On 05/ 4/11 01:35 AM, Joerg Schilling wrote:

Andrew Gabriel andrew.gabr...@oracle.com  wrote:


Dan Shelton wrote:

Is anyone aware of any freeware program that can speed up copying tons
of data (2 TB) from UFS to ZFS on same server?

I use 'ufsdump | ufsrestore'*. I would also suggest try setting
'sync=disabled' during the operation, and reverting it afterwards.
Certainly, fastfs (a similar although more dangerous option for ufs)
makes ufs to ufs copying significantly faster.

*ufsrestore works fine on ZFS filesystems (although I haven't tried it
with any POSIX ACLs on the original ufs filesystem, which would probably
simply get lost).

star -copy -no-fsync  is typically 30% faster that ufsdump | ufsrestore.


Does it preserve ACLs?

--
Ian.



Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Freddie Cash
On Tue, May 3, 2011 at 12:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
 On 5/3/2011 8:55 AM, Brandon High wrote:

 On Tue, May 3, 2011 at 5:47 AM, Joerg Schilling
 joerg.schill...@fokus.fraunhofer.de  wrote:

 But this is most likely slower than star and does rsync support sparse
 files?

 'rsync -ASHXavP'

 -A: ACLs
 -S: Sparse files
 -H: Hard links
 -X: Xattrs
 -a: archive mode; equals -rlptgoD (no -H,-A,-X)

 You don't need to specify --whole-file, it's implied when copying on
 the same system. --inplace can play badly with hard links and
 shouldn't be used.

 It probably will be slower than other options but it may be more
 accurate, especially with -H

 -B

 rsync is indeed slower than star; so far as I can tell, this is due almost
 exclusively to the fact that rsync needs to build an in-memory table of all
 work being done *before* it starts to copy.

rsync 2.x works that way, building a complete list of
files/directories to copy before starting the copy.

rsync 3.x doesn't.  3.x builds an initial file list for the first
directory and then starts copying files while continuing to build the
list of files, so there's only a small pause at the beginning.

-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-03 Thread Brandon High
On Tue, May 3, 2011 at 12:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
 rsync is indeed slower than star; so far as I can tell, this is due almost
 exclusively to the fact that rsync needs to build an in-memory table of all
 work being done *before* it starts to copy. After that, it copies at about

rsync 3.0+ will start copying almost immediately, so it's much better
in that respect than previous versions. It continues to scan and update
the list of files while sending data.

 network use pattern), which helps for ZFS copying.  The one thing I'm not
 sure of is whether rsync uses a socket, pipe, or semaphore method when doing
 same-host copying. I presume socket (which would slightly slow it down vs

It creates a socketpair() before clone()ing itself and uses the socket
for communications.

 That said, rsync is really the only solution if you have a partial or
 interrupted copy.  It's also really the best method to do verification.

For verification you should specify -c (checksums); otherwise it will
only look at the size, permissions, owner and date, and if they all
match it will not look at the file contents. It can take as long as
(or longer than) the original copy, since files on both sides need to
be read and checksummed.
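
For example, a checksum-only verification pass (paths are placeholders; -n is
--dry-run, so nothing is transferred, only differences are reported):

  rsync -ASHXavPc -n /ufs/export/home/ /tank/home/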

-B

-- 
Brandon High : bh...@freaks.com


[zfs-discuss] multipl disk failures cause zpool hang

2011-05-03 Thread TianHong Zhao
Hi,

There seem to be a few threads about zpool hangs. Do we have a
workaround to resolve the hang without rebooting?

In my case, I have a pool whose disks are external LUNs attached via a
fiber cable. When the cable is unplugged while there is I/O on the pool,
all zpool-related commands hang (zpool status, zpool list, etc.), and
plugging the cable back in does not solve the problem.

Eventually I cannot even open a new SSH session to the box; somehow
the system goes into a half-locked state.

Any clue about resolving this problem would be appreciated.

Thanks

The following is a truss trace of zpool list:

# truss zpool list

execve(/sbin/zpool, 0x08047CB4, 0x08047CC0)  argc = 2
sysinfo(SI_MACHINE, i86pc, 257) = 6
mmap(0x, 32, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFB
mmap(0x, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFA
mmap(0x, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEF9
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEF8
memcntl(0xFEFB7000, 32064, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
memcntl(0x0805, 15416, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
resolvepath(/sbin/zpool, /sbin/zpool, 1023) = 11
sysconfig(_CONFIG_PAGESIZE) = 4096
stat64(/sbin/zpool, 0x080478F8) = 0
open(/var/ld/ld.config, O_RDONLY) Err#2 ENOENT
stat64(/lib/libumem.so.1, 0x080470A8) = 0
resolvepath(/lib/libumem.so.1, /lib/libumem.so.1, 1023) = 17
open(/lib/libumem.so.1, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFEF80AE0, 0x08047114, 0x) = 0
close(3) = 0
memcntl(0xFE53, 29304, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEF7
stat64(/lib/libc.so.1, 0x080470A8) = 0
resolvepath(/lib/libc.so.1, /lib/libc.so.1, 1023) = 14
open(/lib/libc.so.1, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFEF70108, 0x08047114, 0x) = 0
close(3) = 0
memcntl(0xFEE2, 187236, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
stat64(/lib/libzfs.so.1, 0x08046DB8) = 0
resolvepath(/lib/libzfs.so.1, /lib/libzfs.so.1, 1023) = 16
open(/lib/libzfs.so.1, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFEF70958, 0x08046E24, 0x) = 0
close(3) = 0
memcntl(0xFED7, 35024, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFED6
mmap(0x0001, 24576, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON|MAP_ALIGN, -1, 0) = 0xFED5
getcontext(0x08047758)
getrlimit(RLIMIT_STACK, 0x08047750) = 0
getpid() = 1103 [1102]
lwp_private(0, 1, 0xFED52A40) = 0x01C3
setustack(0xFED52AA0)
sysconfig(_CONFIG_PAGESIZE) = 4096
stat64(/usr/lib/fm//libtopo.so, 0x08046B24) = 0
resolvepath(/usr/lib/fm//libtopo.so, /usr/lib/fm/libtopo.so.1, 1023) = 24
open(/usr/lib/fm//libtopo.so, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFED607C0, 0x08046B90, 0x) = 0
close(3) = 0
memcntl(0xFED0, 40688, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFECF
stat64(/lib/libxml2.so.2, 0x08046714) = 0
resolvepath(/lib/libxml2.so.2, /lib/libxml2.so.2, 1023) = 17
open(/lib/libxml2.so.2, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFECF01C8, 0x08046780, 0x) = 0
close(3) = 0
memcntl(0xFE2A, 181964, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
stat64(/lib/libpthread.so.1, 0x08046654) = 0
resolvepath(/lib/libpthread.so.1, /lib/libpthread.so.1, 1023) = 20
open(/lib/libpthread.so.1, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFECF0818, 0x080466C0, 0x) = 0
close(3) = 0
stat64(/lib/libz.so.1, 0x08046654) = 0
resolvepath(/lib/libz.so.1, /lib/libz.so.1, 1023) = 14
open(/lib/libz.so.1, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFECF0DB8, 0x080466C0, 0x) = 0
close(3) = 0
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFECB
memcntl(0xFECC, 6984, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
stat64(/lib/libm.so.2, 0x08046654) = 0
resolvepath(/lib/libm.so.2, /lib/libm.so.2, 1023) = 14
open(/lib/libm.so.2, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFECB0508, 0x080466C0, 0x) = 0
close(3) = 0
memcntl(0xFEB9, 39464, 

[zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Rich Teer
Hi all,

I'm playing around with nearline backups using zfs send | zfs recv.
A full backup made this way takes quite a lot of time, so I was
wondering: after the initial copy, would using an incremental send
(zfs send -i) make the process much quicker, because only the stuff
that changed between the previous snapshot and the current one would
be copied?  Is my understanding of incremental zfs send correct?
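
In other words, something like this (dataset and snapshot names invented for
the example):

  zfs snapshot tank/data@mon
  zfs send tank/data@mon | zfs recv backup/data                     # initial full copy
  zfs snapshot tank/data@tue
  zfs send -i tank/data@mon tank/data@tue | zfs recv backup/data    # just the changes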

Also related to this is a performance question.  My initial test involved
copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
to complete.  That strikes me as being a bit high for a mere 50 MB;
are my expectations realistic, or is it just because of my very budget
conscious set up?  If so, where's the bottleneck?

The source pool is on a pair of 146 GB 10K RPM disks on separate
busses in a D1000 (split bus arrangement) and the destination pool
is on an IOMega 1 GB USB-attached disk.  The machine to which both
pools are connected is a Sun Blade 1000 with a pair of 900 MHz US-III
CPUs and 2 GB of RAM.  The HBA is Sun's dual differential UltraSCSI
PCI card.  The machine was relatively quiescent apart from doing the
local zfs send | zfs recv.

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Eric D. Mudama

On Tue, May  3 at 17:39, Rich Teer wrote:

Hi all,

I'm playing around with nearline backups using zfs send | zfs recv.
A full backup made this way takes quite a lot of time, so I was
wondering: after the initial copy, would using an incremental send
(zfs send -i) make the process much quick because only the stuff that
had changed between the previous snapshot and the current one be
copied?  Is my understanding of incremental zfs send correct?


Your understanding is correct.  We use -I, not -i, since it can send
multiple snapshots with a single command.  Only the amount of changed
data is sent with an incremental 'zfs send'.
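
For example (names are placeholders), -I sends every snapshot between the two
points in one stream:

  zfs send -I tank/data@mon tank/data@fri | zfs recv -F backup/data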


Also related to this is a performance question.  My initial test involved
copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
to complete.  The strikes me as being a bit high for a mere 50 MB;
are my expectation realistic or is it just because of my very budget
concious set up?  If so, where's the bottleneck?


Our setup does a send/recv at roughly 40 MB/s over ssh connected to a
1 Gbit/s ethernet connection.  There are ways to make this faster by
not using an encrypted transport, but the setup is a bit more advanced
than just an ssh 'zfs recv' command line.
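
One common unencrypted approach is a raw TCP pipe, e.g. with netcat (a sketch:
host, port and dataset names are placeholders, the exact nc flags vary by
netcat variant, and the link should be trusted since nothing is encrypted):

  # on the receiving box
  nc -l 9090 | zfs recv -F backup/data
  # on the sending box
  zfs send tank/data@mon | nc recvhost 9090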


The source pool is on a pair of 146 GB 10K RPM disks on separate
busses in a D1000 (split bus arrangement) and the destination pool
is on a IOMega 1 GB USB attached disk.  The machine to which both
pools are connected is a Sun Blade 1000 with a pair of 900 MHz US-III
CPUs and 2 GB of RAM.  The HBA is Sun's dual differential UltraSCSI
PCI card.  The machine was relatively quiescent apart from doing the
local zfs send | zfs recv.


I'm guessing that the USB bus and/or the USB disk is part of your
bottleneck.  UltraSCSI should be plenty fast and your CPU should be
fine too.

--eric


--
Eric D. Mudama
edmud...@bounceswoosh.org



Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Peter Jeremy
On 2011-May-04 08:39:39 +0800, Rich Teer rich.t...@rite-group.com wrote:
Also related to this is a performance question.  My initial test involved
copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
to complete.  The strikes me as being a bit high for a mere 50 MB;
are my expectation realistic or is it just because of my very budget
concious set up?  If so, where's the bottleneck?

Possibilities I can think of:
- Do you have lots of snapshots?  There's an overhead of a second or so
  for each snapshot to be sent.
- Is the source pool heavily fragmented with lots of small files?

The source pool is on a pair of 146 GB 10K RPM disks on separate
busses in a D1000 (split bus arrangement) and the destination pool
is on a IOMega 1 GB USB attached disk.  The machine to which both
pools are connected is a Sun Blade 1000 with a pair of 900 MHz US-III
CPUs and 2 GB of RAM.

Hopefully a silly question but does the SB1000 support USB2?  All of
the Sun hardware I've dealt with only has USB1 ports.

And, BTW, 2GB RAM is very light on for ZFS (though I note you only
have a very small amount of data).

-- 
Peter Jeremy




Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Rich Teer
On Wed, 4 May 2011, Peter Jeremy wrote:

 Possibilities I can think of:
 - Do you have lots of snapshots?  There's an overhead of a second or so
   for each snapshot to be sent.
 - Is the source pool heavily fragmented with lots of small files?

Nope, and I don't think so.

 Hopefully a silly question but does the SB1000 support USB2?  All of
 the Sun hardware I've dealt with only has USB1 ports.

Not such a silly question.  :-)  The USB1 port was indeed the source of
much of the bottleneck.  The same 50 MB file system took only 8 seconds
to copy when I plugged the drive into a USB 2.0 card I had in the machine!

An 80 GB file system took 2 hours with the USB 2 port in use, with
compression off.  I'm trying it again right now with compression
turned on in the receiving pool.  Should be interesting...
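
For the record, what I'm doing amounts to the following (pool name changed
for the example):

  zfs set compression=on backup
  zfs get compressratio backup     # afterwards, to see how much it bought me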

 And, BTW, 2GB RAM is very light on for ZFS (though I note you only
 have a very small amount of data).

True, but the SB1000 only supports 2 GB of RAM IIRC!  I'll soon be
migrating this machine's duties to an Ultra 20 M2.  A faster CPU
and 4 GB should make a noticeable improvement (not to mention the
on-board USB 2.0 ports).

Thanks for your ideas!

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com