Re: [zfs-discuss] Weird performance issue with ZFS with lots of simultaneous reads

2008-05-16 Thread Robert Milkowski
Hello Chris,

Thursday, May 15, 2008, 5:42:32 AM, you wrote:

CS I wrote:
CS |  I have a ZFS-based NFS server (Solaris 10 U4 on x86) where I am
CS | seeing a weird performance degradation as the number of simultaneous
CS | sequential reads increases.

CS  To update zfs-discuss on this: after more investigation, this seems
CS to be due to file-level prefetching. Turning file-level prefetching
CS off (following the directions of the ZFS Evil Tuning Guide) returns
CS NFS server performance to full network bandwidth when there are lots
CS of simultaneous sequential reads. Unfortunately it significantly
CS reduces the performance of a single sequential read (when the server is
CS otherwise idle).
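
(A sketch of the tuning referred to above; per the ZFS Evil Tuning Guide
the file-level prefetch tunable is zfs_prefetch_disable:)

  # persistent: add to /etc/system, then reboot
  set zfs:zfs_prefetch_disable = 1

  # or live, on the running kernel:
  echo zfs_prefetch_disable/W0t1 | mdb -kw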


Have you tried disabling vdev caching and leaving file-level
prefetching on?


-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Using O_EXCL flag on /dev/zvol nodes

2008-05-16 Thread Matthew Ahrens
Sumit Gupta wrote:
 The /dev/[r]dsk nodes implement the O_EXCL flag. If a node is opened with
 O_EXCL, subsequent open(2) calls on that node fail. But I don't think the
 same is true for /dev/zvol/[r]dsk nodes. Is that a bug (or maybe an RFE)?

Yes, that seems like a fine RFE.  Or a bug, if there's a policy saying that 
this must work.  A quick search didn't turn up any documentation.

I think this would be implemented in zvol_open().  (Though it's unfortunate 
that every device must implement this on its own; it would be better if the 
infrastructure took care of it.)

--matt


[zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Danilo Poccia

Hi,

Using VirtualBox, I just tried to move an OpenSolaris 2008.05 boot
environment (ZFS) onto a gzip-9 compressed dataset, but I get the
following error from grub:


Error 16: Inconsistent filesystem structure

Googling around I found the same error with ZFS boot and Xen in July  
2007:


http://mail.opensolaris.org/pipermail/xen-discuss/2007-July/000961.html
http://bugs.opensolaris.org/bugdatabase/printableBug.do?bug_id=6584697

That was fixed. Is mine another bug?

BTW, using gzip-9 compression I got the whole OpenSolaris image in
815MB... The only issue is that it is not booting anymore ;-)
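
(For reproducibility, a sketch of one way to do such a move; dataset names
and mountpoints here are hypothetical, and on 2008.05 beadm would be the
more usual route:)

  # create a gzip-9 dataset and copy the BE into it
  zfs create -o compression=gzip-9 rpool/ROOT/osol-gz9
  cd /source-be && find . -xdev -print | cpio -pdum /rpool/ROOT/osol-gz9
  zfs get compressratio rpool/ROOT/osol-gz9   # see what gzip-9 buys you
  zpool set bootfs=rpool/ROOT/osol-gz9 rpool  # point the loader at the new BE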


Rgrds,
Danilo.

Danilo Poccia
Senior Systems Engineer
Sun Microsystems Italia S.p.A.
Via G. Romagnosi, 4
Roma 00196 ITALY
Phone +39 06 36708 022
Mobile +39 335 6983999
Fax +39 06 3221969
Email [EMAIL PROTECTED]
Blog http://blogs.sun.com/danilop



[zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread andrew
What is the current estimated ETA for the integration of install support for
ZFS boot/root into Nevada?

Also, do you have an idea when we can expect the improved ZFS write throttling 
to integrate?

Thanks

Andrew.
 
 


Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Victor Latushkin
 Hi,
 
 using VirtualBox I just tried to move an OpenSolaris 2008.05 boot 
 environment (ZFS) on a gzip-9 compressed dataset, but I have the 
 following error from grub:
 
 Error 16: Inconsistent filesystem structure
 
 Googling around I found the same error with ZFS boot and Xen in July 2007:
 
 http://mail.opensolaris.org/pipermail/xen-discuss/2007-July/000961.html
 http://bugs.opensolaris.org/bugdatabase/printableBug.do?bug_id=6584697
 
 That was fixed.. Is mine another bug?

This is because grub does not yet support booting from gzip-compressed
datasets. At the moment only the LZJB algorithm is supported; please
see decomp_table here:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.c#65

 BTW, using gzip-9 compression I got the whole OpenSolaris image in  
 815MB... The only issue is that it is not booting anymore ;-)

Adding support for gzip may be a candidate for a community project.

Hth,
Victor


Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Danilo Poccia

Hi Victor,

this seems quite easy to me, but I don't know how to "move around"
to actually implement/propose the required changes.


To make grub aware of gzip (as it already is of lzjb) the steps should  
be:


1. create a new
/onnv-gate/usr/src/grub/grub-0.95/stage2/zfs_gzip.c
starting from
/onnv-gate/usr/src/uts/common/fs/zfs/gzip.c
and removing the gzip_compress function

2. add gzip_decompress
at the end of
/onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.h

3. update the decomp_table entries to map gzip and all gzip-N
(with N=1...9) to the gzip_decompress function in

/onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.c

What should I do to go on with these changes? Should I start a
community project?


Thanks & Rgrds,
Danilo.



Danilo Poccia
Senior Systems Engineer
Sun Microsystems Italia S.p.A.
Via G. Romagnosi, 4
Roma 00196 ITALY
Phone +39 06 36708 022
Mobile +39 335 6983999
Fax +39 06 3221969
Email [EMAIL PROTECTED]
Blog http://blogs.sun.com/danilop



[zfs-discuss] ZFS Version Correct

2008-05-16 Thread Kenny
I have Sun Solaris 5.10 Generic_120011-14 and the zpool version is 4.  I've
found references to versions 5-10 on the OpenSolaris site.

Are these versions for OpenSolaris only?  I've searched the Sun site for ZFS
patches and found nothing (most likely operator headspace).  Can I update ZFS
on my Sun box, and if so, where are the updates?

Thanks

--Kenny
 
 


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread James C. McPherson
Kenny wrote:
 Hi!  I'm new to the list and new to zfs
 
 I have the following hardware and would like opinions on implementation.
 
 Sun Enterprise T5220
 FC HBA
 Brocade 200E 4 Gbit switch
 Sun 2540 FC Disk Array w/ 12 1TB disk drives
 
 My plan is to create a small SAN fabric with the 5220 as the initiator
 (additional initiators to be added later) connected to the switch and the
 2540 as the target.
 
 My desire is to create two 5-disk RAID 5 sets with one hot spare each.  Then
 using ZFS to pool the two sets into one 8 TB pool with several ZFS file
 systems in the pool.
 
 Now I have several questions:
 
 1) Does this plan seem ok?

There doesn't seem to be anything inherently wrong with it :-)

 2) Does anyone have experience with the 2540?

Kinda. I worked on adding MPxIO support to the mpt driver so
we could support the SAS version of this unit - the ST2530.

What sort of experience are you after? I've never used one
of these boxes in production - only ever for benchmarking and
bugfixing :-) I think Robert Milkowski might have one or two
of them, however.

 3) I've read that it's best practice to create the RAID set utilizing
 Hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?

You've got a whacking great cache in the ST2540, so you might as
well make use of it.

Once you've got more questions after reading the Best Practices
guide (http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide)
post a followup to this thread. You _will_ have questions. You
will, I just know it! :-)


cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-16 Thread Brian Hechinger
On Fri, May 16, 2008 at 09:30:27AM +0800, Robin Guo wrote:
 Hi, Paul
 
   At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..

As far as root zfs goes, are there any plans to support more than just single
disks or mirrors in U6, or will that be for a later date?

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Danilo Poccia
...just noticed there is a bug for that, but it seems there is no activity
even though it is in state "accepted":


http://bugs.opensolaris.org/view_bug.do?bug_id=6538017

Should I send an email to request-sponsor AT opensolaris DOT org to  
propose my fix?


Rgrds,
Danilo.

Danilo Poccia
Senior Systems Engineer
Sun Microsystems Italia S.p.A.
Via G. Romagnosi, 4
Roma 00196 ITALY
Phone +39 06 36708 022
Mobile +39 335 6983999
Fax +39 06 3221969
Email [EMAIL PROTECTED]
Blog http://blogs.sun.com/danilop



Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Robert Milkowski
Hello Danilo,

Friday, May 16, 2008, 1:34:56 PM, you wrote:

 What should I do to go on with these changes? Should I start a "community project"?

These changes look simple enough that there is no point in setting up a community project, IMHO.
Just implement it, test it, then ask for a sponsor and integrate it.




--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re: [zfs-discuss] ZFS panics solaris while switching a volume to read-only

2008-05-16 Thread Veltror
Is there any possibility that PSARC 2007/567 can be made into a patch for
Solaris 10 U5? We are planning to dispose of Veritas as quickly as possible,
but since all storage on production machines is on EMC Symmetrix with back-end
mirroring, this panic is a showstopper for us.  Or is it so intertwined that a
backport of this PSARC case to U5 is out of the question?

Thanks


Roman
 
 


Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Victor Latushkin
 this seems quite easy to me but I don't know how to move around to 
 actually implement/propose the required changes.
 
 
 To make grub aware of gzip (as it already is of lzjb) the steps should be:
 
 
 1. create a new
 
 /onnv-gate/usr/src/grub/grub-0.95/stage2/zfs_gzip.c
 
 starting from
 
 /onnv-gate/usr/src/uts/common/fs/zfs/gzip.c
 
 and removing the gzip_compress function

Yes, but it is a little bit more complicated than that. gzip support in
in-kernel ZFS leverages the in-kernel zlib implementation. There is support
for the gzip decompression algorithm in grub (see gunzip.c), so one needs to
figure out how to leverage that and replace z_uncompress() with the proper
call.

 2. add gzip_decompress
 
 at the end of
 
 /onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.h
 
 
 3. update the decomp_table entries to map gzip and all gzip-N
 (with N=1...9) to the gzip_decompress function in
 
 /onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.c

This sounds reasonable.

Also, one needs to make sure that the resulting binary does not exceed any
size requirements, and thoroughly test it to verify that it works on all
HW architectures with all compression algorithms (and even a mix of them).

This may not be an exhaustive list of things to do.

 What should I do to go on with this changes? Should I start a community 
 project?
 

 These changes look simple enough so there is no point setting up 
 community project imho.
 
 Just implement it, test it then ask for a sponsor and integrate it.

As Robert points out, this may indeed be too simple to bother setting
up a community project for, so it may be better to treat it just like a
bite-size RFE ;-)

Wbr,
Victor


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Robert Milkowski
Hello James,


 2) Does anyone have experience with the 2540?

JCM Kinda. I worked on adding MPxIO support to the mpt driver so
JCM we could support the SAS version of this unit - the ST2530.

JCM What sort of experience are you after? I've never used one
JCM of these boxes in production - only ever for benchmarking and
JCM bugfixing :-) I think Robert Milkowski might have one or two
JCM of them, however.


Yeah, I do have several of them (both 2530 and 2540).

2530 (SAS) - cables tend to pop out sometimes when you are working around
the servers... then MPxIO does not work properly if you just hot-unplug
and hot-replug the SAS cable... there is still a 2TB LUN size limit,
IIRC... other than that it is generally a good value

2540 (FC) - a 2TB LUN size limit, IIRC; other than that it is a good
value array



-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Robert Milkowski
Hello Danilo,

Friday, May 16, 2008, 2:00:42 PM, you wrote:

 Should I send an email to request-sponsor AT opensolaris DOT org to propose my fix?

Send a request and, without waiting for the response, just start coding :)


--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread James C. McPherson
Robert Milkowski wrote:

 Yeah, I do have several of them (both 2530 and 2540).
 
 2530 (SAS) - cables tend to pop-out sometimes when you are around
 servers... then MPxIO does not work properly if you just hot-unplug
 and hot-replug the sas cable...

If you plug the cable back in within 20 seconds of it coming
loose, that might just give MPxIO a bit of a headache.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Andy Lubel

On May 16, 2008, at 10:04 AM, Robert Milkowski wrote:

 Hello James,

 Yeah, I do have several of them (both 2530 and 2540).

We did a try-and-buy of the 2510, 2530 and 2540.



 2530 (SAS) - cables tend to pop-out sometimes when you are around
 servers... then MPxIO does not work properly if you just hot-unplug
 and hot-replug the sas cable... there is still 2TB LUN size limit
 IIRC... other than that generally it is a good value

Yeah, the SFF-8088 connectors are a bit rigid and clumsy, but the
performance was better than everything else we tested in the 2500 series.



 2540 (FC) - 2TB LUN size limit IIRC, other than that it is a good
 value array


Echo.  We like the 2540 as well, and will be buying lots of them  
shortly.






-Andy




Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-16 Thread Robin Guo
Hi, Brian,

  Do you mean striping across multiple disks, or raidz? I'm afraid it's
still single disks or mirrors only. If OpenSolaris starts a new project
for this kind of feature, it'll be backported to s10u* eventually, but
that needs some time, so I think there's no possibility of it in U6.




Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-16 Thread Darren J Moffat
Robin Guo wrote:
  If OpenSolaris starts a new project for this kind of feature, it'll be
  backported to s10u* eventually...

Not necessarily true.  Not all things in OpenSolaris get backported, and
not all future ZFS features are guaranteed to get backported eventually.
For example, I have no current plans to backport the ZFS Crypto
functionality.


-- 
Darren J Moffat


Re: [zfs-discuss] Weird performance issue with ZFS with lots of simultaneous reads

2008-05-16 Thread Chris Siebenmann
| Have you tried disabling vdev caching and leaving file-level
| prefetching on?

 If you mean setting zfs_vdev_cache_bshift to 13 (per the ZFS Evil
Tuning Guide) to turn off device-level prefetching, then yes, I have
tried turning off just that; it made no difference.
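
(For the record, that tuning looks like this; bshift is a power of two, so
13 means 8K reads instead of the default 16, i.e. 64K:)

  # in /etc/system, then reboot:
  set zfs:zfs_vdev_cache_bshift = 13

  # or live, on the running kernel:
  echo zfs_vdev_cache_bshift/W0t13 | mdb -kw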

 If there's another tunable then I don't know about it and haven't
tried it (and would be pleased to).

- cks


[zfs-discuss] Hiding files with ZFS ACLs

2008-05-16 Thread Vincent Boisard
Hi everyone,

I've been experimenting with ZFS for some time and I have one question:
Is it possible to hide files with ZFS ACLs?
Let me explain what I would like to do:
A directory (chmod 0755) contains 3 subdirs: public, private and veryprivate.
public has read access for everyone (0755).
private has no access at all for others (0750), but anyone can still see it
in ls.
veryprivate should have no access at all and not even show up in ls for
non-authorized users (let's say non-root, for simplicity).

I tried removing read_acl for everyone on veryprivate, but ls shows an error:
ls: can't read ACL on ./veryprivate: Permission denied

Is it possible to do this with ACLs?
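
(For concreteness, the experiment above amounts to something like this,
using the NFSv4 ACL syntax from chmod(1); note that everyone@ includes the
owner, so an allow ACE for owner@ may need to precede the deny:)

  # express "no read_acl for everyone" as an explicit deny ACE
  chmod A+everyone@:read_acl:deny veryprivate
  ls -Vd veryprivate   # users still allowed to read the ACL see the ACEs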

Thanks in advance,

Vincent
 
 


Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Andre Wenas
I have been using ZFS boot with lzjb compression on since build 75; from
time to time I have had a similar problem, not sure why.

As a best practice, I snapshot the root filesystem frequently, so that
I can roll back to the last working snapshot.
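
(A sketch of that practice; the BE dataset name is hypothetical:)

  # before touching the root filesystem:
  zfs snapshot rpool/ROOT/opensolaris@known-good

  # if the new state no longer boots, from failsafe or another BE:
  zfs rollback -r rpool/ROOT/opensolaris@known-good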

Rgds,
Andre W.




Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Bob Friesenhahn
On Fri, 16 May 2008, Kenny wrote:
 Sun 2540 FC Disk Array w/12 1TB disk drives

It is interesting that the 2540 is available with large disks now.

 My desire is to create two 5-disk RAID 5 sets with one hot spare each.
 Then using ZFS to pool the two sets into one 8 TB pool with several
 ZFS file systems in the pool.

 Now I have several questions:

 1) Does this plan seem ok?

Another option is to export each entire drive as a LUN and put 10 
active drives into one zfs pool as two raidzs, or one raidz2.  The 
other two drives can be retained as spares for the pool.  If 
replacement drives are readily sourced on demand, you could use all 12 
drives as two raidz2s.
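
(Sketches of both layouts, with placeholder device names for the 2540 LUNs:)

  # ten active drives as two 5-disk raidz vdevs plus two pool spares:
  zpool create tank \
      raidz  c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
      raidz  c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
      spare  c2t10d0 c2t11d0

  # or all twelve drives as two raidz2 vdevs, no spares:
  zpool create tank \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0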

This approach does not allow other systems to use the 2540 since one 
host owns the pool.  However, you could move the ZFS pool to another 
system if need be.

 2) Does anyone have experience with the 2540?

Yes.  Please see my white paper at
http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf
which discusses my experience with ZFS and the 2540.  The paper was
written back in February and I have yet to experience a hiccup with
the 2540 or ZFS.  Not even one bad block.

 3) I've read that it's best practice to create the RAID set 
 utilizing Hardware RAID utilities vice using ZFS raidz.  Any wisdom 
 on this?

This is really a philosophical or requirements issue.  The 2540 allows
you to create pools and then export only part of the pool as a LUN to be
used by an initiator.  This allows you to create LUNs on disks which are
shared by multiple hosts (initiators), each of which has its own ZFS
pool (or traditional filesystem).  If you really need to divide up
storage at this level, then the 2540 offers flexibility that you won't
get from ZFS.  A drawback to sharing sliced pools in this way is that
if there is a problem with the underlying disks, then multiple hosts
may be impacted during recovery.

The 2540 CAM provides a 4-disk RAID5 config which claims to be tuned 
for ZFS.  Someone on the list created three 4-disk RAID5 LUNs this way 
and put them all in one ZFS pool, obtaining very good performance. 
If one of those LUNs were to irreparably fail, his entire pool would be
toast.

ZFS experts will tell you that you should not be trusting the 2540 or 
its firmware to catch all errors and so there should always be 
redundancy (e.g. mirroring) at the ZFS level.  By exporting each 2540 
disk as a LUN, then any of the redundancy schemes supported by ZFS 
(mirror, raidz, raidz2) can be used from the initiator, essentially 
ignoring the ones built into the 2540.  While the 2540's CAM interface 
is nice, you will find that it is far slower than ZFS is at 
incorporating your disks (25 tedious minutes in the CAM admin tool vs 
less than a second for ZFS).

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Bob Friesenhahn
On Fri, 16 May 2008, James C. McPherson wrote:

 3) I've read that it's best practice to create the RAID set utilizing
 Hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?

 You've got a whacking great cache in the ST2540, so you might as
 well make use of it.

Exporting each disk as a LUN for use by ZFS does not cause the 2540 to 
disable its cache.  In fact, it is clear that this cache is quite 
valuable to ZFS write performance when NFS is involved. I am able to
obtain 90MB/second NFS write performance from a single NFS client
using the 2540.

Due to the inherent design of ZFS, it is not necessary for RAID writes 
to be synchronized as they must be for traditional mirroring or RAID5. 
If there is a power loss or crash, ZFS will discover where it left 
off, and bring all redundant copies to a coherent state.  The 2540's
cache will help protect against losing data if there is a power failure.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/



[zfs-discuss] ZFS sharing question.

2008-05-16 Thread Barth Weishoff
Hello.

   Anyone out there remember the -d option for share?  How do you set the share
description using the zfs set command, or is it even possible?
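
(As far as I know, sharenfs only accepts share -o option strings, so there
is no property equivalent of -d. The legacy route still works, though; a
sketch, with hypothetical dataset and path names:)

  # let the legacy share tools manage it, description and all:
  zfs set sharenfs=off tank/export
  share -F nfs -d "engineering home dirs" -o rw /tank/export

  # versus the property route, which takes options only:
  zfs set sharenfs=rw tank/export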

Thanks!

-B
 
 


Re: [zfs-discuss] cp -p gives errors on Linux w/ NFS-mounted ZFS

2008-05-16 Thread Marcelo Leal
Hello all,
 I'm having the same problem here; any news?
 I need to use ACLs on the GNU/Linux clients. I'm using NFSv3, and on the
GNU/Linux servers that feature was working; I think we need a solution for
Solaris/OpenSolaris. Now, with the DMM project, how can we start a migration
process if we cannot provide the same services on the target machine?

 Thanks a lot for your time!

 Leal.
 
 


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Vincent Fox
I run 3510FC and 2540 units in pairs.  I build two 5-disk RAID5 LUNs in each
array, with two disks as global spares.  Each array has dual controllers and
I'm doing multipath.

Then from the server I have access to 2 LUNs from each of the 2 arrays, and I
build a ZFS RAID-10 set from these 4 LUNs, making sure each mirror pair is
constructed with LUNs from both arrays.
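
(Concretely, that layout is built like this; the LUN device names are
placeholders, one from each array per mirror:)

  # RAID-10 across arrays: each mirror pairs a LUN from each chassis
  zpool create mailpool \
      mirror c6tARRAY1LUN0d0 c6tARRAY2LUN0d0 \
      mirror c6tARRAY1LUN1d0 c6tARRAY2LUN1d0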

Thus I can survive a complete failure of one array and multiple other failures 
and keep on trucking.  

Performance is quite good since I put this in /etc/system:
set zfs:zfs_nocacheflush = 1
And since the recent ZFS patches for 10u4, which fixed fsync performance
issues, my arrays and servers are hardly breaking a sweat.

I very much like that the arrays can handle lower-level problems for me like 
sparing and ZFS ensures correctness on top of that.

This is for Cyrus mail-stores, so availability and correctness are paramount,
in case you are wondering if all this belt & suspenders paranoia is worthwhile.

If/when ZFS acquires a method to ensure that spare#1 in chassis#1 only gets 
used to replace failed disks in chassis#1 then I'll reconsider my position.  
Currently though there is no mechanism to ensure this so I could easily see a 
spare being pulled from the other chassis and leaving me with an undesirable 
dependency if I were doing ZFS with JBOD.
 
 


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Bob Friesenhahn
On Fri, 16 May 2008, Vincent Fox wrote:

 If/when ZFS acquires a method to ensure that spare#1 in chassis#1 
 only gets used to replace failed disks in chassis#1 then I'll 
 reconsider my position.  Currently though there is no mechanism to 
 ensure this so I could easily see a spare being pulled from the 
 other chassis and leaving me with an undesirable dependency if I 
 were doing ZFS with JBOD.

Good point!  However, I think that the spare is only used until the 
original is re-constructed, so its usage should not be very long.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/



Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Lori Alt
Actually, I only meant that zfs boot was integrated
into build 90.  I don't know about the improved
write throttling.

I will check into why there was no mention of this
on the heads up page.

Lori

Andrew Pattison wrote:
 Were both of these items (ZFS boot install support and improved write
 throttling) integrated into build 90? I don't see any mention of this
 on the Nevada heads-up page.

 Thanks

 Andrew.

 On Fri, May 16, 2008 at 5:21 PM, Lori Alt [EMAIL PROTECTED] wrote:

     It has been integrated into Nevada build 90.

     Lori



Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Lori Alt

Clarifying further:  the install support for zfs root file
systems went into build 90, but because the current install
code is closed source, the effect of that integration will not be
seen until the build 90 SXCE is released.  At that time,
installs will show a screen that gives the user an opportunity
to choose between installing a ufs or a zfs root on the system
(ufs is still the default).

Lori





Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Lori Alt
Install of a zfs root can only be done with the tty-based installer
or with Jumpstart.  I will make sure that instructions for both
are made available by the time that SXDE build 90 is
released.

For an attractive, easy-to-use installer that is designed from
the outset to install systems with zfs root pools, we'll all
have to wait for Caiman:

http://www.opensolaris.org/os/project/caiman/

Lori

Andrew Pattison wrote:
 Just to clarify, if I run the old Java-based installer for Solaris 
 Express on build 90, it will allow me to install Solaris to a ZFS pool?

 Thanks

 Andrew.

 -- 
 Andrew Pattison
 andrum04 at gmail dot com 



Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Brian Hechinger
On Fri, May 16, 2008 at 02:32:34PM -0600, Lori Alt wrote:
 Install of a zfs root can only be done with the tty-based installer
 or with Jumpstart.  I will make sure that instructions for both
 are made available by the time that SXDE build 90 is
 released.

Will the tty- or Jumpstart-based installer be able to do an upgrade install
over an older ZFS root system?

 For an attractive, easy-to-use installer that is designed from
 the outset to install systems with zfs root pools, we'll all
 have to wait for Caiman:

We don't need no stinkin' attractive, easy-to-use installers. ;)

 http://www.opensolaris.org/os/project/caiman/

Is the tty version going to stick around?  Or will Caiman have a tty
version?

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Vincent Fox
So it's pushed back to build 90 now?

There was an announcement the other day that build 88 was being skipped and 
build 89 would be the official release with ZFS boot.

Not a big deal, but someone should put out an announcement about the change.
 
 


Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread andrew
By my calculations that makes the possible release date for ZFS boot installer 
support around the 9th June 2008. Mark that date in your diary!

Cheers

Andrew.
 
 


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-16 Thread Zlotnick Fred
The issue with CIFS is not just complexity; it's the total amount
of incompatible change in the kernel that we had to make in order
to make the CIFS protocol a first-class citizen in Solaris.  This
includes changes in the VFS layer which would break all S10 file
systems.  So in a very real sense CIFS simply cannot be backported
to S10.

-- Fred

On May 16, 2008, at 3:06 PM, Paul B. Henson wrote:

 On Thu, 15 May 2008, Robin Guo wrote:

  Most features and bugfixes up to around Nevada 87 (or 88?) will be
 backported into s10u6. It's about the same (I mean from an outside
 viewer's perspective, not the internals) as OpenSolaris 05/08; but
 certainly some other features, such as CIFS, have no plan to be
 backported to s10u6 yet, so ZFS will be fully ready but have no effect
 on those kinds of areas. That depends on how they co-operate.

 Yah, I've heard that the CIFS stuff was way too many changes to backport;
 guess that is going to have to wait until Solaris 11.

 So, from a feature perspective it looks like S10U6 is going to be in pretty
 good shape ZFS-wise. If only someone could speak to (perhaps under the
 cloak of anonymity ;) ) the timing side :). Given U5 barely came out, I
 wouldn't expect U6 anytime soon :(.

 Thanks...


 -- 
 Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
 Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
 California State Polytechnic University  |  Pomona CA 91768

--
Fred Zlotnick
Senior Director, Solaris NAS
Sun Microsystems, Inc.
[EMAIL PROTECTED]
x81142/+1 650 352 9298








Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Lori Alt
Brian Hechinger wrote:
 On Fri, May 16, 2008 at 02:32:34PM -0600, Lori Alt wrote:
  Install of a zfs root can only be done with the tty-based installer
  or with Jumpstart.  I will make sure that instructions for both
  are made available by the time that SXDE build 90 is
  released.

 Will the tty- or Jumpstart-based installer be able to do an upgrade install
 over an older ZFS root system?
Sadly, no.  You can bfu to the Solaris build 88
bits, but it's probably best to reinstall with the new
installer when it becomes available.

   
  For an attractive, easy-to-use installer that is designed from
  the outset to install systems with zfs root pools, we'll all
  have to wait for Caiman:

 We don't need no stinkin' attractive, easy-to-use installers. ;)

  http://www.opensolaris.org/os/project/caiman/

 Is the tty version going to stick around?  Or will Caiman have a tty
 version?
The tty installer you will see in Build 90 will be
going away at some point.  It's clunky and written
for the days when a 400 MB disk was considered
really big.  It's very ufs-centric and didn't adapt
well to the idea of storage pools. 

I can't say for sure what Caiman will provide but I notice
this among its goals, as specified on its OpenSolaris
community web page:

*  Updated and simplified graphical and text user interfaces
which carry Sun's current branding

- Lori




Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-16 Thread Adam Leventhal
On Fri, May 16, 2008 at 03:12:02PM -0700, Zlotnick Fred wrote:
 The issues with CIFS is not just complexity; it's the total amount
 of incompatible change in the kernel that we had to make in order
 to make the CIFS protocol a first class citizen in Solaris.  This
 includes changes in the VFS layer which would break all S10 file
 systems.  So in a very real sense CIFS simply cannot be backported
 to S10.

However, the same arguments were made to explain the difficulty of
backporting ZFS and GRUB boot to Solaris 10.

Adam

-- 
Adam Leventhal, Fishworks    http://blogs.sun.com/ahl


Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Nicolas Williams
On Fri, May 16, 2008 at 02:19:29PM -0600, Lori Alt wrote:
 Clarifying further:  the install support for zfs root file
 systems went into build 90, but because the current install
 code is closed source, the effect of that integration will not be
 seen until the build 90 SXCE is released.  At that time,
 installs will show a screen that gives the user an opportunity
 to choose between installing a ufs or a zfs root on the system
 (ufs is still the default). 

I'm hoping that the build 90 WOS images, when they are posted, will be
enough.

More importantly, I'm hoping that I can get OpenSolaris 2008.05 and
Solaris Nevada build 90 to co-exist on the same root pool.

Nico
-- 


Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Nicolas Williams
On Fri, May 16, 2008 at 01:59:56PM -0700, Vincent Fox wrote:
 So it's pushed back to build 90 now?

Evidently, but build 90 is closed, and the bits are in.  The WOS images
for build 90 are not out yet, but that's a matter of time; the bits are
in.


Re: [zfs-discuss] Q: ZFS Boot Install / LU support ETA?

2008-05-16 Thread Prabahar Jeyaram
The write throttling improvement is in build 87.

--
Prabahar.




Re: [zfs-discuss] ZFS panics solaris while switching a volume to read-only

2008-05-16 Thread Prabahar Jeyaram
The fix is already in Solaris 10 U6. A patch for S10U5 will only be 
available when S10U6 is released.

--
Prabahar.



Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread James C. McPherson
Bob Friesenhahn wrote:
 On Fri, 16 May 2008, James C. McPherson wrote:
 3) I've read that it's best practice to create the RAID set utilizing
 Hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
 You've got a whacking great cache in the ST2540, so you might as
 well make use of it.
 
 Exporting each disk as a LUN for use by ZFS does not cause the 2540 to 
 disable its cache.  In fact, it is clear that this cache is quite 
 valuable to ZFS write performance when NFS is involved. I am able to 
 obtain 90MB/second NFS write performance from a single NFS client and 
 using the 2540.
 
 Due to the inherent design of ZFS, it is not necessary for RAID writes 
 to be synchronized as they must be for traditional mirroring or RAID5. 
 If there is a power loss or crash, ZFS will discover where it left 
 off, and bring all redundant copies to a coherent state.  The 2540's 
 cache will help protect against losing data if there is a power fail.

Hi Bob,
You've made an assumption about what I wrote. That assumption
is incorrect. Kenny, in addition, did not say that he was or
was not going to do what you suggested, and I suggested to him
that he go and look into the ZFS Best Practices wiki to get
some ideas.

I'm very, very well aware of the design and behaviour of ZFS, I
have been using it since the build it was first integrated.

I am also quite well aware of the design and behaviour of the
raid engine in the ST2530.


Please re-read my email to Kenny, and don't put words in it that
I didn't write.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread Bob Friesenhahn
On Sat, 17 May 2008, James C. McPherson wrote:

 Bob Friesenhahn wrote:
 On Fri, 16 May 2008, James C. McPherson wrote:
 3) I've read that it's best practice to create the RAID set utilizing
 Hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
 You've got a whacking great cache in the ST2540, so you might as
 well make use of it.

 Hi Bob,
 You've made an assumption about what I wrote. That assumption
 is incorrect. Kenny, in addition, did not say that he was or

My assumption, based on your "You've got a whacking great cache in the
ST2540, so you might as well make use of it," was that it was intended
to imply that if the hardware RAID utilities were not used, the
2540's NV write-cache would not be available/useful.

 I am also quite well aware of the design and behaviour of the
 raid engine in the ST2530.

Since there seems to be no specification of the internal architecture 
of the 2530 and 2540 (quite odd for a Sun product!), perhaps you can 
create a whitepaper which describes this architecture so that Sun 
customers can better understand how to use the product.

I have nothing but praise for the two Sun engineers who helped me 
understand and optimize for the 2540 back in February.  Most Sun 
engineers on this list are very helpful and we are very thankful for 
their kind assistance.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-16 Thread James C. McPherson
Bob Friesenhahn wrote:
 On Sat, 17 May 2008, James C. McPherson wrote:
 
 Bob Friesenhahn wrote:
 On Fri, 16 May 2008, James C. McPherson wrote:
 3) I've read that it's best practice to create the RAID set utilizing
 Hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
 You've got a whacking great cache in the ST2540, so you might as
 well make use of it.

 Hi Bob,
 You've made an assumption about what I wrote. That assumption
 is incorrect. Kenny, in addition, did not say that he was or
 
  My assumption, based on your "You've got a whacking great cache in the
  ST2540, so you might as well make use of it," was that it was intended
  to imply that if the hardware RAID utilities were not used, the
  2540's NV write-cache would not be available/useful.

Indeed. And there is absolutely no justification for that assumption.
I had hoped that the following sentence suggesting a perusal of the
Best Practices would have made it clear that it is indeed possible
(I would say, recommended) to maximise the usage of a cache and ZFS'
specific design features.


 I am also quite well aware of the design and behaviour of the
 raid engine in the ST2530.
 
 Since there seems to be no specification of the internal architecture of 
 the 2530 and 2540 (quite odd for a Sun product!), perhaps you can create 
 a whitepaper which describes this architecture so that Sun customers can 
 better understand how to use the product.

I'm not the person to do that, but I will forward your suggestion
on to somebody who is better placed to do so.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


[zfs-discuss] openSolaris ZFS root, swap, dump

2008-05-16 Thread Paul B. Henson

Historically I've used hardware raid1 for the boot disks on my servers.
With the availability of ZFS root, I want to explore making the two
underlying drives directly available to the operating system and create a
ZFS mirror to take advantage of error detection and self-healing.

The current openSolaris installer, when given an entire disk, partitions
it into two slices: s0 for the ZFS pool, and s1 for swap.

Once the system is installed, I could add the second disk as a mirror,
giving redundancy for the pool, but that would leave swap as a single point
of failure.

I investigated using a ZFS zvol for swap, which appears to be a viable
option, but based on what I read you could not use a ZFS zvol for a dump
device.

The dumpadm man page under openSolaris says:

For systems with a ZFS root file system, dedicated ZFS volumes are used
for swap and dump areas. For further information about setting up a dump
area with ZFS, see the ZFS Administration Guide.

However, the actual documentation
(http://docs.sun.com/app/docs/doc/819-5461/gaypf?l=en) says "Using a ZFS
volume as a dump device is not supported."

That is Solaris 10 documentation, though. Can openSolaris actually use a
ZFS zvol for dump?
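
(The mechanics in question, for concreteness; volume names and sizes are
hypothetical, and whether dump works depends on the b88 fixes mentioned in
the reply below:)

  # swap on a zvol:
  zfs create -V 2g rpool/swap
  swap -a /dev/zvol/dsk/rpool/swap

  # dump on a zvol:
  zfs create -V 2g rpool/dump
  dumpadm -d /dev/zvol/dsk/rpool/dump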

A somewhat related question, I was trying to determine the best way to
convert the default install into the disk layout I want.

For ZFS root, is it required to have a partition and slices? Or can I just
give it the whole disk and have it write an EFI label on it?

I know you cannot remove disks currently from a pool, but you can swap out
devices, correct? So can I swap in the second disk (either the whole disk,
or the overlap partition of a legacy labeled disk) to replace the initial
installation disk, and then attach the original disk again as the second
half of the mirror (again either the entire disk, or the overlap slice)?
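
(The swap-out/attach dance would look roughly like this; disk names are
hypothetical:)

  # replace the installation disk with the second disk:
  zpool replace rpool c0t0d0s0 c0t1d0s0

  # once resilvering completes, attach the original as the mirror half:
  zpool attach rpool c0t1d0s0 c0t0d0s0

  # on x86, make the newly attached disk bootable as well:
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0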


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] openSolaris ZFS root, swap, dump

2008-05-16 Thread Richard Elling
Paul B. Henson wrote:
 Historically I've used hardware raid1 for the boot disks on my servers.
 With the availability of ZFS root, I want to explore making the two
 underlying drives directly available to the operating system and create a
 ZFS mirror to avail of error detection and self-healing.

 The current openSolaris installer, when given an entire disk, partitions
 it into two slices: s0 for the ZFS pool, and s1 for swap.
   

This is because OpenSolaris 2008.05 is based on NV b86 which does not
have the fix for

  5008936 ZFS and/or zvol should support dumps
  5070124 dumpadm -d /dev/... does not enforce block device requirement
          for savecore
  6633197 zvol should not permit newfs or createpool while it's in use
          by swap or dump
 

which were integrated into NV b88 as part of ZFS boot support,
http://opensolaris.org/os/community/on/flag-days/pages/2008041103

As they say, timing is everything...
 -- richard
