Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Robert Milkowski
Hello ozan, Friday, November 3, 2006, 3:57:00 PM, you wrote: for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended

Re: [zfs-discuss] Default zpool on Thumpers

2006-11-03 Thread Torrey McMahon
Richard Elling - PAE wrote: Robert Milkowski wrote: I almost completely agree with your points 1-5, except that I think that having at least one hot spare by default would be better than having none at all - especially with SATA drives. Yes, I pushed for it, but didn't win. In a perfect

Re: [zfs-discuss] Re: ZFS Performance Question

2006-11-03 Thread Torrey McMahon
Jay Grogan wrote: The V120 has 4GB of RAM, on the HDS side we are in a RAID 5 on the LUN and not sharing any ports on the McDATA, but with so much cache we aren't close to taxing the disk. Are you sure? At some point data has to get flushed from the cache to the drives themselves. In most

[zfs-discuss] Re: zfs sharenfs inheritance

2006-11-03 Thread Chris Gerhard
An alternate way will be to use NFSv4. When an NFSv4 client crosses a mountpoint on the server, it can detect this and mount the filesystem. It can feel like a lite version of the automounter in practice, as you just have to mount the root and discover the filesystems as needed. The
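What Chris describes amounts to mounting only the server's NFSv4 root and letting the client discover child ZFS filesystems as it traverses into them. A hedged sketch of that usage (server name and paths are invented for illustration, and this behavior depends on the client supporting server-side mountpoint crossing as the post describes):

```shell
# Mount only the NFSv4 root of the server; shared child ZFS
# filesystems are then reached by simply walking into them,
# rather than by maintaining one mount (or automounter map
# entry) per filesystem. Names below are illustrative.
mount -F nfs -o vers=4 server:/ /mnt
ls /mnt/tank/home   # crossing the server-side mountpoint here
                    # pulls in the child filesystem on demand
```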

Re: [zfs-discuss] Re: zfs sharenfs inheritance

2006-11-03 Thread eric kustarz
Chris Gerhard wrote: An alternate way will be to use NFSv4. When an NFSv4 client crosses a mountpoint on the server, it can detect this and mount the filesystem. It can feel like a lite version of the automounter in practice, as you just have to mount the root and discover the filesystems as

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Richard Elling - PAE
ozan s. yigit wrote: for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended for people who know their raid cold, so it needed no
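For reference, a single-parity raidz group sized inside the documented 3-to-9-device range is created like this (pool name and device names are illustrative, not from the thread):

```shell
# Create a raidz (single-parity) pool from 5 devices -- within the
# 3-9 device range the s10u2 docs recommend. One device's worth of
# capacity goes to parity. All names here are illustrative.
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool status tank   # verify the raidz vdev and its member disks
```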

[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-11-03 Thread Eric Boutilier
For background on what this is, see: http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416 http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200 = zfs-discuss 10/16 - 10/31 = Size of all threads during

[zfs-discuss] zfs receive into zone?

2006-11-03 Thread Jeff Victor
If I add a ZFS dataset to a zone, and then want to zfs send from another computer into a file system that the zone has created in that data set, can I zfs send to the zone, or can I send to that zone's global zone, or will either of those work?

Re: [zfs-discuss] Re: reproducible zfs panic on Solaris 10 06/06

2006-11-03 Thread Mark Maybee
Matthew Flanagan wrote: Matt, Matthew Flanagan wrote: mkfile 100m /data zpool create tank /data ... rm /data ... panic[cpu0]/thread=2a1011d3cc0: ZFS: I/O failure (write on unknown off 0: zio 60007432bc0 [L0 unallocated] 4000L/400P DVA[0]=0:b000:400 DVA[1]=0:120a000:400 fletcher4
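The reproduction quoted above, spelled out as a sequence. Note this deliberately deletes the pool's only backing store; per the report it panics Solaris 10 6/06, so only try it on a disposable test system:

```shell
# Reported repro: build a pool whose sole vdev is a plain file,
# then remove that file out from under it. Subsequent writes to
# the pool trigger the "ZFS: I/O failure (write ...)" panic
# described in the thread.
mkfile 100m /data          # create a 100 MB backing file
zpool create tank /data    # pool backed only by that file
rm /data                   # yank the backing store away
# next pool write -> panic[cpu0]: ZFS: I/O failure ...
```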

[zfs-discuss] ZFS/NFS issue...

2006-11-03 Thread Erik Trimble
I actually think this is an NFSv4 issue, but I'm going to ask here anyway... Server: Solaris 10 Update 2 (SPARC), with several ZFS file systems shared via the legacy method (/etc/dfs/dfstab and share(1M), not via the ZFS property). Default settings in /etc/default/nfs bigbox# share -

Re: [zfs-discuss] ZFS/NFS issue...

2006-11-03 Thread eric kustarz
Erik Trimble wrote: I actually think this is an NFSv4 issue, but I'm going to ask here anyway... Server: Solaris 10 Update 2 (SPARC), with several ZFS file systems shared via the legacy method (/etc/dfs/dfstab and share(1M), not via the ZFS property). Default settings in /etc/default/nfs

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Al Hopper
On Fri, 3 Nov 2006, Richard Elling - PAE wrote: ozan s. yigit wrote: for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended

Re: [zfs-discuss] ZFS/NFS issue...

2006-11-03 Thread Karen Yeung
Don't forget to restart mapid after modifying default domain in /etc/default/nfs. As root, run svcadm restart svc:/network/nfs/mapid. I've run into this in the past. Karen eric kustarz wrote: Erik Trimble wrote: I actually think this is an NFSv4 issue, but I'm going to ask here anyway...
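The restart Karen describes, as a command sequence (the domain value shown in the comment is a placeholder, not a value from the thread):

```shell
# After changing the NFSv4 default domain in /etc/default/nfs,
# restart mapid so owner/group string mapping picks up the new
# domain. "example.com" is a placeholder.
grep NFSMAPID_DOMAIN /etc/default/nfs   # e.g. NFSMAPID_DOMAIN=example.com
svcadm restart svc:/network/nfs/mapid
svcs svc:/network/nfs/mapid             # confirm the service is back online
```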

[zfs-discuss] Filebench, X4200 and Sun Storagetek 6140

2006-11-03 Thread Louwtjie Burger
Hi there I'm busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you. Pop me a mail if you want something specific _or_ you have suggestions

Re: [zfs-discuss] # devices in raidz.

2006-11-03 Thread Richard Elling - PAE
Al Hopper wrote: [1] Using MTTDL = MTBF^2 / (N * (N-1) * MTTR) But ... I'm not sure I buy into your numbers given the probability that more than one disk will fail inside the service window - given that the disks are identical? Or ... a disk failure occurs at 5:01 PM (quitting time) on a

Re: [zfs-discuss] Filebench, X4200 and Sun Storagetek 6140

2006-11-03 Thread Jason J. W. Williams
Hi Louwtjie, Are you running FC or SATA-II disks in the 6140? How many spindles too? Best Regards, Jason On 11/3/06, Louwtjie Burger [EMAIL PROTECTED] wrote: Hi there I'm busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above

Re: [zfs-discuss] zfs receive into zone?

2006-11-03 Thread Matthew Ahrens
Jeff Victor wrote: If I add a ZFS dataset to a zone, and then want to zfs send from another computer into a file system that the zone has created in that data set, can I zfs send to the zone, or can I send to that zone's global zone, or will either of those work? I believe that the 'zfs
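A sketch of the operation Jeff is asking about: streaming a snapshot from another host into a filesystem that lives under a dataset delegated to a zone (every pool, dataset, and host name here is invented for illustration; the thread does not settle whether the receive must run in the global zone or can run in the non-global zone):

```shell
# On the sending host: snapshot a filesystem and stream it over
# ssh. On the receiving host, the stream lands under the dataset
# delegated to the zone (tank/zonedata -- an invented name).
zfs snapshot pool/fs@backup
zfs send pool/fs@backup | ssh receiver zfs receive tank/zonedata/fs
```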