Re: [zfs-discuss] Comstar production-ready?

2009-03-13 Thread Jim Dunham

On Mar 4, 2009, at 7:04 AM, Jacob Ritorto wrote:

Caution:  I built a system like this and spent several weeks trying to
get iSCSI sharing working under Solaris 10 u6 and older.  It would work
fine for the first few hours, but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iSCSI initiator boxes.  I couldn't find a resolution through the
various Solaris knowledge bases.  The closest I got was finding out that
there's a problem only in the *Solaris 10* iSCSI target code that
incorrectly frobs some counter when it shouldn't, violating the iSCSI
target specification.  The problem is fixed in Nevada/OpenSolaris.

Long story short, I tried OpenSolaris 2008.11 and the iSCSI crashes
ceased and things ran smoothly.  Not the solution I was hoping for,
since this was eventually to be a prod box, but then Sun announced
that I could purchase OpenSolaris support, so I was covered.  On
OpenSolaris, my two big filers have been running really nicely for
months and months now.

Don't try to use Solaris 10 as a filer OS unless you can identify and
resolve the iSCSI target issue.


The iSCSI Target Daemon in OpenSolaris 2008.xx has been backported
to Solaris 10 u7.


- Jim




Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Jacob Ritorto
Caution:  I built a system like this and spent several weeks trying to
get iSCSI sharing working under Solaris 10 u6 and older.  It would work
fine for the first few hours, but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iSCSI initiator boxes.  I couldn't find a resolution through the
various Solaris knowledge bases.  The closest I got was finding out that
there's a problem only in the *Solaris 10* iSCSI target code that
incorrectly frobs some counter when it shouldn't, violating the iSCSI
target specification.  The problem is fixed in Nevada/OpenSolaris.

Long story short, I tried OpenSolaris 2008.11 and the iSCSI crashes
ceased and things ran smoothly.  Not the solution I was hoping for,
since this was eventually to be a prod box, but then Sun announced
that I could purchase OpenSolaris support, so I was covered.  On
OpenSolaris, my two big filers have been running really nicely for
months and months now.

Don't try to use Solaris 10 as a filer OS unless you can identify and
resolve the iSCSI target issue.





Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Toby Thain


On 4-Mar-09, at 2:07 AM, Stephen Nelson-Smith wrote:


Hi,

I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre.  I
based this on an X2200 + J4400, Solaris 10 + rsync.

This was enthusiastically received, to the extent that the client is
now requesting that their live system (15TB data on cheap SAN and
Linux LVM) be replaced with a ZFS-based system.

The catch is that they're not ready to move their production systems
off Linux - so web, db and app layer will all still be on RHEL 5.


Considered zones/virtualisation? Just a wild thought.

--Toby



As I see it, if they want to benefit from ZFS at the storage layer,
the obvious solution would be a NAS system, ...
Thanks,

S.
--
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
www.atalanta-systems.com


Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Bob Friesenhahn

On Wed, 4 Mar 2009, Stephen Nelson-Smith wrote:


The interesting alternative is to set up Comstar on SXCE, create
zpools and volumes, and make these available either over a fibre
infrastructure, or iSCSI.  I'm quite excited by this as a solution,
but I'm not sure if it's really production ready.


While this is indeed exciting, the solutions you have proposed vary
considerably in the type of functionality they offer.  Comstar and
iSCSI provide access to a storage volume similar to SAN storage.
This volume is then formatted with some alien filesystem which is
unlikely to support the robust features of ZFS.  Even though the
storage volume is implemented in robust ZFS, the client still has the
ability to scramble its own filesystem.  ZFS snapshots can help defend
against that by allowing you to rewind the entire content of the
storage volume to a former point in time.
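
For example, a minimal sketch of that rewind, using a hypothetical zvol
tank/lun0 backing a client LUN:

    # snapshot the zvol while the client filesystem is in a good state
    zfs snapshot tank/lun0@known-good
    # later, with the client logged out of the LUN, rewind the volume
    # (-r also destroys any snapshots taken after @known-good)
    zfs rollback -r tank/lun0@known-good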


With the NFS/CIFS server model, only ZFS is used.  There is no 
dependence on a client filesystem.


With the Comstar/iSCSI approach, you are balkanizing
(http://en.wikipedia.org/wiki/Balkanization) your storage so that each
client owns its own filesystems without the ability to share the data
unless the client does it.  With the native ZFS server approach, all
clients share the pool storage and can share files on the server if
the server allows it.  A drawback of the native ZFS server approach is
that the server needs to know about the users on the clients in order
to support access control.


Regardless, there are cases where Comstar/iSCSI makes the most sense,
and cases where the ZFS fileserver makes the most sense.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson



Jacob Ritorto wrote:

Caution:  I built a system like this and spent several weeks trying to
get iSCSI sharing working under Solaris 10 u6 and older.  It would work
fine for the first few hours, but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iSCSI initiator boxes.  I couldn't find a resolution through the
various Solaris knowledge bases.  The closest I got was finding out that
there's a problem only in the *Solaris 10* iSCSI target code that
incorrectly frobs some counter when it shouldn't, violating the iSCSI
target specification.  The problem is fixed in Nevada/OpenSolaris.

  
Can't say I have had a problem myself. The initiator is the default
Microsoft Vista initiator. Mine has been running fine for at least 6-9
months. It doesn't get an absolute hammering though. I'm using it to
provide extra storage to IT staff desktops who need it, and at the same
time allowing staff to play with iSCSI. I run a largish fibre channel
shop and prefer that mostly, anyway.


Long story short, I tried OpenSolaris 2008.11 and the iSCSI crashes
ceased and things ran smoothly.  Not the solution I was hoping for,
since this was eventually to be a prod box, but then Sun announced
that I could purchase OpenSolaris support, so I was covered.  On
OpenSolaris, my two big filers have been running really nicely for
months and months now.
  
If you get Solaris 10 support, Sun will provide fixes for that too, I
imagine. Again, I can't say I have had a problem myself. But as I
mentioned in my previous email, I can't stress how important it is to
*test* a solution in your environment with *your* workload on the
hardware/OS *you* choose. The Sun try-and-buy on hardware is a great
way to do this relatively risk-free. If it doesn't work, send it back.
They also have Startup Essentials, which will potentially allow you to
get the try-and-buy hardware on the cheap if you are a new customer.

Don't try to use Solaris 10 as a filer OS unless you can identify and
resolve the iSCSI target issue.
  
If iSCSI is truly broken then one could log a support call on this if
you take basic maintenance. This is cheaper than RHEL for the
entry-level stuff, by the way...

<flame>
Being that this is a Linux shop you are selling into, OpenSolaris might
be the best way to go, as the GNU userland might be more familiar to
them and they might not understand having to change their shell paths
to get the userland that they want ;)
</flame>




Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson
Right on the money there, Bob. Without knowing more detail about the
client's workload, it would be hard to advise either way. Based purely
on the small amount of info around the client's apps and workload, I
would imagine that NFS would most likely be the appropriate solution on
top of ZFS. You will make more efficient use of your ZFS storage this
way and provide all the niceties like snapshots and rollbacks from the
Solaris-based filer whilst still maintaining Linux front ends.

Do take heed of the various list posts around ZFS/NFS with certain
types of workloads, however.




--
___________________________________________

Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services
Private Bag 94006
Manukau City
Auckland
New Zealand

Phone  : +64 09 968 7611
Fax    : +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz

perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'



Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Fajar A. Nugraha
On Wed, Mar 4, 2009 at 2:07 PM, Stephen Nelson-Smith sanel...@gmail.com wrote:
 As I see it, if they want to benefit from ZFS at the storage layer,
 the obvious solution would be a NAS system, such as a 7210, or
 something built from a JBOD and a head node that does something
 similar.  The 7210 is out of budget - and I'm not quite sure how it
 presents its storage - is it NFS/CIFS?  If so, presumably it would be

It can also share block devices (zvols) via iSCSI.

 The interesting alternative is to set up Comstar on SXCE, create
 zpools and volumes, and make these available either over a fibre
 infrastructure, or iSCSI.  I'm quite excited by this as a solution,
 but I'm not sure if it's really production ready.

If you want production-ready software: starting from the Solaris 10
8/07 release, you can create a ZFS volume as a Solaris iSCSI target
device by setting the shareiscsi property on the ZFS volume. It's not
Comstar, but it works.
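
A minimal sketch of that, with a hypothetical pool and volume (size
and options to taste):

    # create a 50 GB ZFS volume and share it as an iSCSI target
    zfs create -V 50g tank/iscsivol
    zfs set shareiscsi=on tank/iscsivol
    # verify with the bundled iSCSI target admin tool
    iscsitadm list target -v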

You may want to consider OpenSolaris (I like it better than SXCE)
instead of Solaris if you want to stay on the bleeding edge, or even
Nexenta, which recently integrated Comstar:

http://www.gulecha.org/2009/03/03/nexenta-iscsi-with-comstarzfs-integration/
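
For comparison, a rough sketch of the COMSTAR route on a recent
OpenSolaris build (names here are hypothetical and the command details
vary by build, so treat it as an outline, not a recipe):

    # enable the STMF framework and the COMSTAR iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default
    # back a logical unit with a ZFS volume
    zfs create -V 50g tank/lun0
    sbdadm create-lu /dev/zvol/rdsk/tank/lun0
    # expose the LU to all initiators (use host/target groups in production)
    stmfadm add-view <GUID printed by create-lu>
    # create the iSCSI target itself
    itadm create-target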

Regards,

Fajar


Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Erast Benson
Hi Stephen,

NexentaStor v1.1.5+ could be an alternative, I think. It includes the
new COMSTAR integration, i.e. its ZFS shareiscsi property actually
implements COMSTAR iSCSI target sharing, functionality that is not
available in SXCE. http://www.nexenta.com/nexentastor-relnotes

On Wed, 2009-03-04 at 07:07 +, Stephen Nelson-Smith wrote:
 Hi,
 
 I recommended a ZFS-based archive solution to a client needing to have
 a network-based archive of 15TB of data in a remote datacentre.  I
 based this on an X2200 + J4400, Solaris 10 + rsync.
 
 This was enthusiastically received, to the extent that the client is
 now requesting that their live system (15TB data on cheap SAN and
 Linux LVM) be replaced with a ZFS-based system.
 
 The catch is that they're not ready to move their production systems
 off Linux - so web, db and app layer will all still be on RHEL 5.
 
 As I see it, if they want to benefit from ZFS at the storage layer,
 the obvious solution would be a NAS system, such as a 7210, or
 something built from a JBOD and a head node that does something
 similar.  The 7210 is out of budget - and I'm not quite sure how it
 presents its storage - is it NFS/CIFS?  If so, presumably it would be
 relatively easy to build something equivalent, but without the
 (awesome) interface.
 
 The interesting alternative is to set up Comstar on SXCE, create
 zpools and volumes, and make these available either over a fibre
 infrastructure, or iSCSI.  I'm quite excited by this as a solution,
 but I'm not sure if it's really production ready.
 
 What other options are there, and what advice/experience can you share?
 
 Thanks,
 
 S.



Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Scott Lawson



Stephen Nelson-Smith wrote:

Hi,

I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre.  I
based this on an X2200 + J4400, Solaris 10 + rsync.

This was enthusiastically received, to the extent that the client is
now requesting that their live system (15TB data on cheap SAN and
Linux LVM) be replaced with a ZFS-based system.

The catch is that they're not ready to move their production systems
off Linux - so web, db and app layer will all still be on RHEL 5.
  

At some point I am sure you will convince them to see the light! ;)

As I see it, if they want to benefit from ZFS at the storage layer,
the obvious solution would be a NAS system, such as a 7210, or
something built from a JBOD and a head node that does something
similar.  The 7210 is out of budget - and I'm not quite sure how it
presents its storage - is it NFS/CIFS?
The 7000 series devices can present NFS, CIFS and iSCSI. They look very
nice if you need a nice GUI / don't know the command line / need nice
analytics. I had a play with one the other day and am hoping to get my
mitts on one shortly for testing. I would like to give it a real good
crack with VMware for VDI VMs.

  If so, presumably it would be
relatively easy to build something equivalent, but without the
(awesome) interface.
  
For sure the above gear would be fine for that. If you use standard
Solaris 10 10/08 you have NFS and iSCSI capability directly in the OS,
supportable via a support contract if needed. The best bet would
probably be NFS for the Linux machines, but you would need to test in
*their* environment with *their* workload.
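
For example, a minimal sketch of the NFS route (server and dataset
names are hypothetical):

    # on the Solaris filer: create and share a dataset read-write
    zfs create tank/archive
    zfs set sharenfs=rw tank/archive
    # on a RHEL 5 client: mount it
    mount -t nfs filer:/tank/archive /mnt/archive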

The interesting alternative is to set up Comstar on SXCE, create
zpools and volumes, and make these available either over a fibre
infrastructure, or iSCSI.  I'm quite excited by this as a solution,
but I'm not sure if it's really production ready.
  
If you want a fibre channel target then you will need to use OpenSolaris
or SXDE, I believe. It's not available in mainstream Solaris yet. I am
personally waiting till it has been *well* tested in the bleeding-edge
community. I have too much data to take big risks with it.

What other options are there, and what advice/experience can you share?
  
I do very similar stuff here with J4500's and T2K's for compliance
archives, NFS and iSCSI targets for Windows machines. Works fine for me.
The biggest system is 48TB on a J4500 for Veritas NetBackup DDT staging
volumes. Very good throughput indeed. Perfect in fact, given the large
files that are created in this environment. One of these J4500's can
keep 4 LTO4 drives in an SL500 saturated with data on a T5220 (4 streams
at ~160 MB/sec, i.e. roughly 640 MB/sec aggregate).

I think you have pretty much the right idea though. Certainly if you use
Sun kit you will be able to deliver a commercially supported solution
for them.

Thanks,

S.
  


--
_

Scott Lawson
Systems Architect
Information Communication Technology Services

Manukau Institute of Technology
Private Bag 94006
South Auckland Mail Centre
Manukau 2240
Auckland
New Zealand

Phone  : +64 09 968 7611
Fax    : +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz

__

perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

__


