Re: [zfs-discuss] Large scale performance query

2011-08-09 Thread Richard Elling
On Aug 8, 2011, at 4:01 PM, Peter Jeremy wrote:

 On 2011-Aug-08 17:12:15 +0800, Andrew Gabriel andrew.gabr...@oracle.com 
 wrote:
 periodic scrubs to cater for this case. I do a scrub via cron once a 
 week on my home system. Having almost completely filled the pool, this 
 was taking about 24 hours. However, now that I've replaced the disks and 
 done a send/recv of the data across to a new larger pool which is only 
 1/3rd full, that's dropped down to 2 hours.
 
 FWIW, scrub time is more related to how fragmented a pool is, rather
 than how full it is.  My main pool is only at 61% (of 5.4TiB) and has
 never been much above that but has lots of snapshots and a fair amount
 of activity.  A scrub takes around 17 hours.

Don't forget, scrubs are throttled on later versions of ZFS.

In a former life, we did a study of when to scrub, and the answer was about
once a year for enterprise-grade storage. Once a week is fine for the paranoid.
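For those who do want regular scrubs, a root crontab entry along these
lines is all it takes (pool name and schedule are just placeholders):

# scrub the pool "tank" every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank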

 
 This is another area where the mythical block rewrite would help a lot.

Maybe. By then I'll be retired and fishing somewhere, scaring the children
with stories about how hard we had it back in the days when we stored data
on spinning platters :-)
 -- richard



Re: [zfs-discuss] matching zpool versions to development builds

2011-08-09 Thread Richard Elling
On Aug 8, 2011, at 9:01 AM, John Martin wrote:

 Is there a list of zpool versions for development builds?
 
 I found:
 
  http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system

Since Oracle no longer shares that info, you might look inside the firewall :-)

 
 where it says Solaris 11 Express is zpool version 31, but my
 system has BEs back to build 139, I have not done a zpool upgrade
 since installing this system, and yet it reports on the current
 development build:
 
  # zpool upgrade -v
  This system is currently running ZFS pool version 33.

In the ZFS community, the old version numbering system is being deprecated
in favor of a feature-based versioning system.
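If you just want to know what version a given pool is at, as opposed to
what the installed software supports, something like this works (pool
name is illustrative):

zpool upgrade -v | head -1     # version the installed ZFS supports
zpool get version rpool        # version the pool is actually at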
 -- richard



Re: [zfs-discuss] rpool recover not using zfs send/receive

2011-08-09 Thread marvin curlee
I discovered my problem.  I didn't notice that the base rpool was broken out
and mounted as /rpool.  After restoring /rpool, the machine booted without
error.

Thanks


[zfs-discuss] Disk IDs and DD

2011-08-09 Thread Lanky Doodle
Hiya,

Is there any reason (and anything to worry about) if disk target IDs don't
start at 0 (zero)? For some reason mine are like this (3 controllers - 1
onboard and 2 PCIe):

AVAILABLE DISK SELECTIONS:
   0. c8t0d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
  /pci@0,0/pci10de,cb84@5/disk@0,0
   1. c8t1d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
  /pci@0,0/pci10de,cb84@5/disk@1,0
   2. c9t7d0 ATA-HitachiHDS72302-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@7,0
   3. c9t8d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@8,0
   4. c9t9d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@9,0
   5. c9t10d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@a,0
   6. c9t11d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@b,0
   7. c9t12d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@c,0
   8. c9t13d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@d,0
   9. c9t14d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@e,0
  10. c10t8d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@8,0
  11. c10t9d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@9,0
  12. c10t10d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@a,0
  13. c10t11d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@b,0
  14. c10t12d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@c,0
  15. c10t13d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@d,0
  16. c10t14d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@e,0

So apart from the onboard controller, the tX (where X is the target
number) doesn't start at 0.

Also, I am trying to make disk LEDs blink by using dd so I can match up disks
in Solaris to the physical slots, but I can't work out the right command:

admin@ok-server01:~# dd if=/dev/dsk/c9t7d0 of=/dev/null
dd: /dev/dsk/c9t7d0: open: No such file or directory

admin@ok-server01:~# dd if=/dev/rdsk/c9t7d0 of=/dev/null
dd: /dev/rdsk/c9t7d0: open: No such file or directory

Thanks


Re: [zfs-discuss] Disk IDs and DD

2011-08-09 Thread LaoTsao
Nothing to worry about.
As for dd, you need a slice (s?) in addition to the cXtYdZ name, e.g.
c8t0d0s0.
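A minimal sketch against the disk from your example (assuming slice 0
exists on it, which it will if ZFS has labeled the whole disk with EFI):

admin@ok-server01:~# dd if=/dev/rdsk/c9t7d0s0 of=/dev/null bs=1M

That will keep the drive busy (and its activity LED blinking) until you
stop it with Ctrl-C.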

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 9, 2011, at 4:51, Lanky Doodle lanky_doo...@hotmail.com wrote:

 Hiya,
 
 Is there any reason (and anything to worry about) if disk target IDs don't
 start at 0 (zero)? For some reason mine are like this (3 controllers - 1
 onboard and 2 PCIe):
 
 [...]
 
 Also, I am trying to make disk LEDs blink by using dd so I can match up disks
 in Solaris to the physical slots, but I can't work out the right command:
 
 admin@ok-server01:~# dd if=/dev/dsk/c9t7d0 of=/dev/null
 dd: /dev/dsk/c9t7d0: open: No such file or directory
 
 admin@ok-server01:~# dd if=/dev/rdsk/c9t7d0 of=/dev/null
 dd: /dev/rdsk/c9t7d0: open: No such file or directory
 
 Thanks


Re: [zfs-discuss] Disk IDs and DD

2011-08-09 Thread Paul Kraus
On Tue, Aug 9, 2011 at 7:51 AM, Lanky Doodle lanky_doo...@hotmail.com wrote:

 Is there any reason (and anything to worry about) if disk target IDs don't
 start at 0 (zero)? For some reason mine are like this (3 controllers - 1
 onboard and 2 PCIe):

 AVAILABLE DISK SELECTIONS:
       0. c8t0d0 ATA    -ST9160314AS    -SDM1 cyl 19454 alt 2 hd 255 sec 63
          /pci@0,0/pci10de,cb84@5/disk@0,0
       1. c8t1d0 ATA    -ST9160314AS    -SDM1 cyl 19454 alt 2 hd 255 sec 63
          /pci@0,0/pci10de,cb84@5/disk@1,0
       2. c9t7d0 ATA-HitachiHDS72302-A5C0 cyl 60798 alt 2 hd 255 sec 252
          /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@7,0

Nothing to worry about here. Controller IDs (cN) are assigned
based on the order in which the kernel probes the hardware. On SPARC
systems you can usually change this in the firmware (OBP), but the
numbers really don't _mean_ anything (other than that the kernel found
c8 before it found c9).

There are (or were) other things the kernel considers a disk
controller on the system beyond the three you list. Keep in mind
that once the kernel finds a controller it makes an entry in
/etc/path_to_inst, so the IDs remain consistent even if new controllers
are later added (earlier in the search path).
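If you want to see which physical device path a given cXtYdZ name maps
to, remember that the /dev/dsk and /dev/rdsk entries are just symlinks
into /devices, so something like this (disk name taken from your
listing, slice number assumed) shows the mapping:

ls -l /dev/dsk/c9t7d0s0

The symlink target is the same /pci@0,0/.../sd@7,0 path that format
prints, which you can match against controller slots.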

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] Disk IDs and DD

2011-08-09 Thread Brandon High
On Tue, Aug 9, 2011 at 8:20 AM, Paul Kraus p...@kraus-haus.org wrote:
    Nothing to worry about here. Controller IDs (cn) are assigned
 based on the order the kernel probes the hardware. On the SPARC
 systems you can usually change this in the firmware (OBP), but they
 really don't _mean_ anything (other than the kernel found c8 before it
 found c9).

If you're really bothered by the device names, you can rebuild the
device map. There's no reason to do it unless you've had to replace
hardware, etc.

The steps are similar to these:
http://spiralbound.net/blog/2005/12/21/rebuilding-the-solaris-device-tree
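A lighter-weight option, if you only want to clear out stale device
links rather than renumber everything, is the cleanup mode of devfsadm
(a general suggestion, not a step from the article above):

devfsadm -Cv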

-B

-- 
Brandon High : bh...@freaks.com


[zfs-discuss] Issues with supermicro

2011-08-09 Thread Gregory Durham
Hello,
We just purchased two of the sc847e26-rjbod1 units to be used in a
storage environment running Solaris 11 Express.

We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI SAS
9200-8e HBA. We are not using failover/redundancy, meaning that one
port of the HBA goes to the primary front backplane interface and the
other goes to the primary rear backplane interface.

For testing, we have done the following:
Installed 12 disks in the front, 0 in the back.
Created stripes of different numbers of disks. After each test, I
destroy the pool and create a new one. As you can see from the
results, adding more disks makes no difference to the performance.
Going from 4 disks to 8 disks should make a large difference, yet
none is shown.

Any help would be greatly appreciated!

This is the result:

root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
root@cm-srfe03:/home/gdurham~# sh createPool.sh 4
spares are: c0t5000CCA223C00A25d0
spares are: c0t5000CCA223C00B2Fd0
spares are: c0t5000CCA223C00BA6d0
spares are: c0t5000CCA223C00BB7d0
root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
of=/fooPool0/86gb.tst bs=4096 count=20971520
^C3503681+0 records in
3503681+0 records out
14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s


real    0m39.396s
user    0m1.791s
sys     0m36.029s
root@cm-srfe03:/home/gdurham~#
root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
root@cm-srfe03:/home/gdurham~# sh createPool.sh 6
spares are: c0t5000CCA223C00A25d0
spares are: c0t5000CCA223C00B2Fd0
spares are: c0t5000CCA223C00BA6d0
spares are: c0t5000CCA223C00BB7d0
spares are: c0t5000CCA223C02C22d0
spares are: c0t5000CCA223C009B9d0
root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
of=/fooPool0/86gb.tst bs=4096 count=20971520
^C2298711+0 records in
2298711+0 records out
9415520256 bytes (9.4 GB) copied, 25.813 s, 365 MB/s


real    0m25.817s
user    0m1.171s
sys     0m23.544s
root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
root@cm-srfe03:/home/gdurham~# sh createPool.sh 8
spares are: c0t5000CCA223C00A25d0
spares are: c0t5000CCA223C00B2Fd0
spares are: c0t5000CCA223C00BA6d0
spares are: c0t5000CCA223C00BB7d0
spares are: c0t5000CCA223C02C22d0
spares are: c0t5000CCA223C009B9d0
spares are: c0t5000CCA223C012B5d0
spares are: c0t5000CCA223C029AFd0
root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
of=/fooPool0/86gb.tst bs=4096 count=20971520
^C6272342+0 records in
6272342+0 records out
25691512832 bytes (26 GB) copied, 70.4122 s, 365 MB/s


real    1m10.433s
user    0m3.187s
sys     1m4.426s


Re: [zfs-discuss] Issues with supermicro

2011-08-09 Thread Bob Friesenhahn

On Tue, 9 Aug 2011, Gregory Durham wrote:

 Hello,
 We just purchased two of the sc847e26-rjbod1 units to be used in a
 storage environment running Solaris 11 Express.

 root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
 root@cm-srfe03:/home/gdurham~# sh createPool.sh 4

What is 'createPool.sh'?

You really have not told us anything useful, since we have no idea what
your mystery script might be doing.  All we can see is that something
reports more spare disks as the argument increases, as if the argument
were the number of spare disks to allocate.  For all we know, it is
always using the same number of data disks.

You also have not told us how much memory is installed in the machine.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Issues with supermicro

2011-08-09 Thread Paul Kraus
On Tue, Aug 9, 2011 at 8:45 PM, Gregory Durham gregory.dur...@gmail.com wrote:

 For testing, we have done the following:
 Installed 12 disks in the front, 0 in the back.
 Created a stripe of different numbers of disks.

So you are creating one zpool with one disk per vdev and varying the
number of vdevs (the number of vdevs == the number of disks), with NO
redundancy?
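For reference, that sort of non-redundant stripe is just a zpool create
with each disk as its own top-level vdev, something like the following
(device names are placeholders, since we still don't know what
createPool.sh actually does):

zpool create fooPool0 c0tAAAd0 c0tBBBd0 c0tCCCd0 c0tDDDd0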

Do you have compression enabled?
Do you have dedup enabled?
I expect the answer to both is no, given that the test data is
/dev/zero: with compression on, writes of zeros tend to be limited only
by memory bandwidth, and on a modern server I would expect _much_
higher numbers. What is the server hardware configuration?
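You can confirm both settings with something like (pool name taken from
your output):

zfs get compression,dedup fooPool0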

You are testing sequential write access only; is that really what the
application will be doing?

 After each test, I
 destroy the underlying storage volume and create a new one. As you can
 see by the results, adding more disks, makes no difference to the
 performance. This should make a large difference from 4 disks to 8
 disks, however no difference is shown.

Unless you are being limited by something else... What does `iostat
-xn 1` show during the test? There should be periods of zero activity
and then huge peaks (as each transaction group is committed to disk).

You are using a 4KB test block size; is that realistic? My
experience is that ZFS performance with block sizes that small and the
default recordsize of 128K is not very good. Try setting recordsize to
16K (zfs set recordsize=16k poolname) and see if you get different
results. Try a different tool instead of dd (iozone is OK, but the best
I have found is filebench, though it takes a bit more work to get
useful data out of it). Try a different test block size as well.
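For example, something along these lines (dataset, file name, and sizes
are just illustrative):

zfs set recordsize=16k fooPool0
time dd if=/dev/zero of=/fooPool0/test.dat bs=1M count=16384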

See
https://spreadsheets.google.com/a/kraus-haus.org/spreadsheet/pub?hl=en_US&hl=en_US&key=0AtReWsGW-SB1dFB1cmw0QWNNd0RkR1ZnN0JEb2RsLXc&output=html
for my experience changing configurations. I did not bother changing
the total number of drives as that was already fixed by what we
bought.





-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players