Re: [zfs-discuss] Re: ZFS and Sun Cluster....

2006-05-30 Thread Manoj Joseph
Tatjana S Heuser wrote: Is it planned to have the cluster fs or proxy fs layer between the ZFS layer and the Storage pool layer? This, AFAIK, is not the current plan of action. Sun Cluster should be moving towards ZFS as a 'true' cluster filesystem. Not going the 'proxy fs layer' way (PxFS/G

Re: [zfs-discuss] 3510 configuration for ZFS

2006-05-30 Thread Robert Milkowski
Hello grant, Wednesday, May 31, 2006, 4:11:09 AM, you wrote: gb> hi all, gb> I am hoping to move roughly 1TB of maildir format email to ZFS, but gb> I am unsure of what the most appropriate disk configuration on a 3510 gb> would be. gb> based on the desired level of redundancy and usable space,

Re[2]: [zfs-discuss] cluster features

2006-05-30 Thread Robert Milkowski
Hello Joe, Wednesday, May 31, 2006, 12:44:22 AM, you wrote: JL> Well, I would caution at this point against the iscsi backend if you JL> are planning on using NFS. We took a long winded conversation online JL> and have yet to return to this list, but the gist of it is that the JL> latency of iscs

Re: [zfs-discuss] user undo

2006-05-30 Thread Erik Trimble
Nathan Kroenert wrote: Anyhoo - What do you think the chances are that any application vendor is going to write in special handling for Solaris file removal? I'm guessing slim to none, but have been wrong before... Agreed. However, to this I reply: Who Cares? I'm guessing that 99% of the po

[zfs-discuss] 3510 configuration for ZFS

2006-05-30 Thread grant beattie
hi all, I am hoping to move roughly 1TB of maildir format email to ZFS, but I am unsure of what the most appropriate disk configuration on a 3510 would be. based on the desired level of redundancy and usable space, my thought was to create a pool consisting of 2x RAID-Z vdevs (either double parit
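The layout grant describes could be sketched like this. All names here are hypothetical (pool name "mailpool", device names c2tXd0); substitute the LUNs your 3510 actually presents, and note this is one possible arrangement, not a recommendation from the thread:

```shell
# Sketch only -- pool and device names are made up for illustration.
# Two RAID-Z vdevs of six disks each, striped together into one pool:
zpool create mailpool \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0

# Verify the resulting layout:
zpool status mailpool
```

Each RAID-Z vdev survives a single disk failure, so this pool tolerates one failed disk per vdev.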

Re: [zfs-discuss] ZFS root filesystem and sys-suspend(1M) ?

2006-05-30 Thread Nathan Kroenert
Not X86? :( (Yes - I know there are lots of other things that need to happen first, but :( nonetheless... ) Nathan. On Wed, 2006-05-31 at 01:51, Lori Alt wrote: > Roland Mainz wrote: > > Hi! > > > > > It is our intention to support system suspend on SPARC > when booted off a zfs root file

Re: [zfs-discuss] user undo

2006-05-30 Thread Nathan Kroenert
On Wed, 2006-05-31 at 03:48, Erik Trimble wrote: > (I'm going to combine Constantine & Eric's replies together, so I > apologize for the possible confusion): > Apology accepted. :) Anyhoo - What do you think the chances are that any application vendor is going to write in special handling for S

Re: [zfs-discuss] Life after the pool party

2006-05-30 Thread Marion Hakanson
[EMAIL PROTECTED] said: > Solved.. well at least a work around. > . . . > had to boot another version of Solaris, 9 in this case, and used format -e to > wipe the efi label, so this is a bug, not sure if its a duplicate of one of > the numerous other efi bugs on this list so I will let one of the z

Re: [zfs-discuss] cluster features

2006-05-30 Thread Joe Little
Well, I would caution at this point against the iscsi backend if you are planning on using NFS. We took a long winded conversation online and have yet to return to this list, but the gist of it is that the latency of iscsi along with the tendency for NFS to fsync 3 times per write causes performan

Re[2]: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Robert Milkowski
Hello Anton, Tuesday, May 30, 2006, 9:59:09 PM, you wrote: AR> On May 30, 2006, at 2:16 PM, Richard Elling wrote: >> [assuming we're talking about disks and not "hardware RAID arrays"...] AR> It'd be interesting to know how many customers plan to use raw disks, AR> and how their performance rel

Re: [zfs-discuss] Big IOs overhead due to ZFS?

2006-05-30 Thread Robert Milkowski
Hello Robert, Wednesday, May 31, 2006, 12:22:34 AM, you wrote: RM> Hello zfs-discuss, RM> I have an nfs server with zfs as a local file server. RM> System is snv_39 on SPARC. RM> There are 6 raid-z pools (p1-p6). RM> The problem is that I do not see any heavy traffic on network RM> interfaces

[zfs-discuss] ZFS writes something...

2006-05-30 Thread Robert Milkowski
Hello zfs-discuss, I noticed on an nfs server with ZFS that even with atime set to off and clients only reading data (almost 100% reads - except some unlinks()) I still can see some MB/s being written according to zpool iostat. What could be the cause? How can I see what is actually being
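A couple of commands that help narrow down such unexplained writes (pool name "tank" is hypothetical here, standing in for one of the p1-p6 pools mentioned in the thread):

```shell
# Watch per-vdev read/write traffic at 5-second intervals, so the
# writes can be attributed to a specific vdev:
zpool iostat -v tank 5

# Double-check that atime really is off on every filesystem in the
# pool -- a child filesystem with atime=on would explain read-driven writes:
zfs get -r atime tank
```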

[zfs-discuss] Big IOs overhead due to ZFS?

2006-05-30 Thread Robert Milkowski
Hello zfs-discuss, I have an nfs server with zfs as a local file server. System is snv_39 on SPARC. There are 6 raid-z pools (p1-p6). The problem is that I do not see any heavy traffic on network interfaces nor using zpool iostat. However using just old iostat I can see MUCH more traffic going

Re: [zfs-discuss] Re: ZFS mirror and read policy; kstat I/O values for zfs

2006-05-30 Thread Bill Sommerfeld
On Tue, 2006-05-30 at 16:47, Haik Aftandilian wrote: > Technically, this bug should not be marked as fixed. It should be closed > as a dupe or just closed as "will not fix" with a comment indicating it > was fixed by 6410698. In past cases like this, I was told to close it as "unreproduceable" r

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Richard Elling
On Tue, 2006-05-30 at 14:59 -0500, Anton Rang wrote: > On May 30, 2006, at 2:16 PM, Richard Elling wrote: > > > [assuming we're talking about disks and not "hardware RAID arrays"...] > > It'd be interesting to know how many customers plan to use raw disks, > and how their performance relates to h

Re: [zfs-discuss] Re: ZFS mirror and read policy; kstat I/O values for zfs

2006-05-30 Thread Haik Aftandilian
630 is marked as fixed in snv_38. The Evaluation mentions this was fixed as part of the ditto block work. This bug is: 6410698 ZFS metadata needs to be more highly replicated (ditto blocks) (which is also marked as fixed in snv_38). Hope that helps: Neil. Ah, thanks. Technically, this bug

[zfs-discuss] Re: ZFS Web administration interface

2006-05-30 Thread Ron Halstead
I'm glad (I guess) to hear I'm not the only one seeing this. Actually, this post is to get the questions back up to the top of the list. ; ) Ron This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http:/

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Anton Rang
On May 30, 2006, at 2:16 PM, Richard Elling wrote: [assuming we're talking about disks and not "hardware RAID arrays"...] It'd be interesting to know how many customers plan to use raw disks, and how their performance relates to hardware arrays. (My gut feeling is that a lot of disks on FC p

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Nicolas Williams
On Tue, May 30, 2006 at 02:26:07PM -0500, Anton Rang wrote: > On May 30, 2006, at 12:23 PM, Nicolas Williams wrote: > > >Another way is to have lots of pre-allocated next uberblock > >locations, > >so that seek-to-one-uberblock times are always small. Each > >uberblock > >can point to its

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Anton Rang
On May 30, 2006, at 12:23 PM, Nicolas Williams wrote: Another way is to have lots of pre-allocated next uberblock locations, so that seek-to-one-uberblock times are always small. Each uberblock can point to its predecessor and its copies and list the pre-allocated possible locations of i

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Richard Elling
[assuming we're talking about disks and not "hardware RAID arrays"...] On Tue, 2006-05-30 at 11:43 -0500, Anton Rang wrote: > > Sure, the block size may be 128KB, but ZFS can bundle more than one > > per-file/transaction > > But it doesn't right now, as far as I can tell. The protocol overhead

Re: [zfs-discuss] Re: ZFS mirror and read policy; kstat I/O values for zfs

2006-05-30 Thread Neil Perrin
Haik, 630 is marked as fixed in snv_38. The Evaluation mentions this was fixed as part of the ditto block work. This bug is: 6410698 ZFS metadata needs to be more highly replicated (ditto blocks) (which is also marked as fixed in snv_38). Hope that helps: Neil. Haik Aftandilian wrote On 05

Re: [zfs-discuss] Re: ZFS and Sun Cluster....

2006-05-30 Thread Richard Elling
On Tue, 2006-05-30 at 10:30 -0700, Tatjana S Heuser wrote: > > SunCluster will support ZFS in our 3.2 release of SunCluster, > > via the HAStoragePlus resource type. This support will be for > > failover use only, not scaleable or active-active applications. > > What about quorum reservation in ZF

[zfs-discuss] Re: ZFS mirror and read policy; kstat I/O values for zfs

2006-05-30 Thread Haik Aftandilian
Could we get this bug report updated? Jeff, you marked it as fix delivered but I see no 630 putback in the NV gate history. There are also no diffs on the report and I see no indication as to which putback did fix the problem. Haik

Re: [zfs-discuss] Re: ZFS and Sun Cluster....

2006-05-30 Thread Charles Debardeleben
Re: Quorum and ZFS. PGR is a property of the scsi devices in the zpool, not a property of ZFS or the zpool. The same is true for SCSI 2 reserve/release protocol. The PGRE protocol requires a reserved part of the disk. However, this reserved part of the disk is reserved at the "label" level. While i

Re: [zfs-discuss] user undo

2006-05-30 Thread Erik Trimble
(I'm going to combine Constantine & Eric's replies together, so I apologize for the possible confusion): On Tue, 2006-05-30 at 16:50 +0200, Constantin Gonzalez Schmitz wrote: > Hi, > > so we have two questions: > > 1. Is it really ZFS' job to provide an undo functionality? > > 2. If it turns o

[zfs-discuss] Re: ZFS and Sun Cluster....

2006-05-30 Thread Tatjana S Heuser
> SunCluster will support ZFS in our 3.2 release of SunCluster, > via the HAStoragePlus resource type. This support will be for > failover use only, not scaleable or active-active applications. What about quorum reservation in ZFS storage pools. AFAIK ZFS does not support SCSI3 persistent group re

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Nicolas Williams
On Tue, May 30, 2006 at 11:43:41AM -0500, Anton Rang wrote: > There's actually three separate issues here. > > The first is the fixed root block. This one may be a problem, but it > may be easy enough to mark certain logical units in a pool as "no root > block on this device." I don't think that

Re: [zfs-discuss] user undo

2006-05-30 Thread Tim Foster
On Tue, 2006-05-30 at 09:48 -0700, Eric Schrock wrote: > - doesn't work over NFS/CIFS (recycle bin 'location' may not be > accessible on all hosts, or may require cross-network traffic to > delete a file). > - inherently user-centric, not filesystem-centric (location of stored > file depends

Re: [zfs-discuss] cluster features

2006-05-30 Thread Eric Schrock
On Tue, May 30, 2006 at 03:55:09AM -0700, Ernst Rohlicek jun. wrote: > Hello list, > > I've read about your fascinating new fs implementation, ZFS. I've seen > a lot - nbd, lvm, evms, pvfs2, gfs, ocfs - and I have to say: I'm quite > impressed! > > I'd set up a few of my boxes to OpenSolaris for s

Re: [zfs-discuss] user undo

2006-05-30 Thread Eric Schrock
On Mon, May 29, 2006 at 11:00:29PM -0700, Erik Trimble wrote: > Once again, I hate to be a harpy on this one, but are we really > convinced that having an "undo" (I'm going to call it RecycleBin from now > on) function for file deletion built into ZFS is a good thing? > > Since I've seen nothing

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Anton Rang
On May 30, 2006, at 11:25 AM, Nicolas Williams wrote: On Tue, May 30, 2006 at 08:13:56AM -0700, Anton B. Rang wrote: Well, I don't know about his particular case, but many QFS clients have found the separation of data and metadata to be invaluable. The primary reason is that it avoids disk seek

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-30 Thread Eric Schrock
On Tue, May 30, 2006 at 11:18:33AM +0100, Darren J Moffat wrote: > > This implies that the differences on disk are sufficient that the zpool > upgrade stuff wouldn't cut it. Right ? That is there was no way to turn > an existing raidz1 into a raidz2. Yes, that is correct. > Is there a new ver
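Since the thread confirms an existing raidz1 vdev cannot be turned into raidz2 in place, migration means building a new double-parity pool and copying the data across. A sketch, with every name (tank, tank2, disks, snapshot) hypothetical:

```shell
# Create a fresh double-parity pool -- raidz2 survives two disk failures:
zpool create tank2 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0

# One way to migrate a filesystem is a snapshot plus send/receive:
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank2/data
```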

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Nicolas Williams
On Tue, May 30, 2006 at 08:13:56AM -0700, Anton B. Rang wrote: > Well, I don't know about his particular case, but many QFS clients > have found the separation of data and metadata to be invaluable. The > primary reason is that it avoids disk seeks. We have QFS customers who

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Anton Rang
On May 30, 2006, at 10:36 AM, [EMAIL PROTECTED] wrote: That does not answer the question I asked; since ZFS is a copy-on-write filesystem, there's no fixed inode location and streaming writes should always be possible. The überblock still must be updated, however. This may not be an issu

Re: [zfs-discuss] ZFS and Sun Cluster....

2006-05-30 Thread Charles Debardeleben
SunCluster will support ZFS in our 3.2 release of SunCluster, via the HAStoragePlus resource type. This support will be for failover use only, not scaleable or active-active applications. It will use the import|export stuff to do its work. It required code modifications to HAStoragePlus due to the

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Nicolas Williams
On Tue, May 30, 2006 at 06:19:16AM +0200, [EMAIL PROTECTED] wrote: > The requirement is not that inodes and data are separate; the requirement > is a specific upper bound to disk transactions. The question therefore > is not "when will ZFS be able to separate inodes and data"; the question > is when

Re: [zfs-discuss] user undo

2006-05-30 Thread Tim Foster
hey All, On Tue, 2006-05-30 at 16:50 +0200, Constantin Gonzalez Schmitz wrote: > - The purpose of any Undo-like action is to provide a safety net to the user >in case she commits an error that she wants to undo. So, what if the user was able to specify which applications they wanted such a sa

Re: [zfs-discuss] ZFS root filesystem and sys-suspend(1M) ?

2006-05-30 Thread Lori Alt
Roland Mainz wrote: Hi! It is our intention to support system suspend on SPARC when booted off a zfs root file system. Lori Will the initial ZFS root filesystem putback include support for system suspend (see sys-suspend(1M)) on SPARC ? Bye, Roland

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Casper . Dik
>Well, I don't know about his particular case, but many QFS clients >have found the separation of data and metadata to be invaluable. The >primary reason is that it avoids disk seeks. We have QFS customers who >are running at over 90% of theoretical bandwidth on a medium-sized set >of FibreChann

[zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-30 Thread Anton B. Rang
Well, I don't know about his particular case, but many QFS clients have found the separation of data and metadata to be invaluable. The primary reason is that it avoids disk seeks. We have QFS customers who are running at over 90% of theoretical bandwidth on a medium-sized set of FibreChannel co

Re: [zfs-discuss] ZFS and Sun Cluster....

2006-05-30 Thread Alexandre CHARTRE - Solaris Sustaining
The current version of Sun Cluster (3.1) has no support for ZFS. You will be able to use ZFS as a failover filesystem with Sun Cluster 3.2 which will be released by the end of this year. alex. Erik Trimble wrote: I'm seriously looking at using the SunCluster software in combination with ZFS

Re: [zfs-discuss] user undo

2006-05-30 Thread Constantin Gonzalez Schmitz
Hi, so we have two questions: 1. Is it really ZFS' job to provide an undo functionality? 2. If it turns out to be a feature that needs to be implemented by ZFS, what is the better approach: Snapshot based or file-based? My personal opinion on 1) is: - The purpose of any Undo-like action is

[zfs-discuss] ZFS and Sun Cluster....

2006-05-30 Thread Erik Trimble
I'm seriously looking at using the SunCluster software in combination with ZFS (either in Sol 10u2, or Nevada). I'm really looking at doing a dual-machine HA setup, probably active-active. How well does ZFS play in a SunCluster? I've looked at the "zfs [import|export]" stuff, and I'm a little co
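The import/export mechanics mentioned here (and automated by HAStoragePlus in SunCluster 3.2, per Charles Debardeleben's reply) look roughly like this by hand. Pool name "tank" is hypothetical:

```shell
# On the node giving up the pool -- a clean handoff:
zpool export tank

# On the node taking over. After a crash of the previous owner the pool
# was never cleanly exported, so -f may be needed to force the import:
zpool import -f tank
```

This is failover-style access: only one node has the pool imported at a time, which matches the "failover use only, not scaleable or active-active" support described in the thread.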

Re: [zfs-discuss] Re: How's zfs RAIDZ fault-tolerant ???

2006-05-30 Thread Richard Elling
On Fri, 2006-05-26 at 10:51 -0700, axa wrote: > >raidz is like raid 5, so you can survive the death of one disk, not 2. > >I would recomend you configure the 12 disks into, 2 raidz groups, > >then you can survive the death of one drive from each group. This is > >what i did on my system > > Hi Jam

Re: [zfs-discuss] user undo

2006-05-30 Thread Erik Trimble
Once again, I hate to be a harpy on this one, but are we really convinced that having an "undo" (I'm going to call it RecycleBin from now on) function for file deletion built into ZFS is a good thing? Since I've seen nothing to the contrary, I'm assuming that we're doing this by changing the ac

[zfs-discuss] cluster features

2006-05-30 Thread Ernst Rohlicek jun.
Hello list, I've read about your fascinating new fs implementation, ZFS. I've seen a lot - nbd, lvm, evms, pvfs2, gfs, ocfs - and I have to say: I'm quite impressed! I'd set up a few of my boxes to OpenSolaris for storage (using Linux and lvm right now - offers pooling, but no built-in fault-tol

Re: [zfs-discuss] Backup/Restore of ZFS Properties

2006-05-30 Thread Constantin Gonzalez Schmitz
Hi, Yes, a trivial wrapper could: 1. Store all property values in a file in the fs 2. zfs send... 3. zfs receive... 4. Set all the properties stored in that file IMHO 3. and 4. need to be swapped - otherwise e.g. files will not be compressed when restored. hmm, I assumed that the ZFS stream
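The four-step wrapper being discussed, with the receive/set order as the thread settles on (receive, then re-apply properties), might look like this. All names (tank/fs, tank/restored, backup paths) are hypothetical:

```shell
# 1. Capture locally-set property values before the send:
zfs get -H -s local -o property,value all tank/fs > /backup/fs.props

# 2. Snapshot and send the filesystem:
zfs snapshot tank/fs@backup
zfs send tank/fs@backup > /backup/fs.zsend

# 3. Restore by receiving the stream:
zfs receive tank/restored < /backup/fs.zsend

# 4. Re-apply the saved properties (tab-separated property/value pairs).
# Caveat raised in the thread: properties like compression only affect
# data written after they are set, so blocks laid down by the receive
# itself are stored as sent, not recompressed.
while IFS="$(printf '\t')" read -r prop value; do
    zfs set "$prop=$value" tank/restored
done < /backup/fs.props
```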

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-30 Thread Darren J Moffat
Eric Schrock wrote: On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote: On Fri, May 26, 2006 at 10:33:34AM -0700, Eric Schrock wrote: RAID-Z is single-fault tolerant. If you take out two disks, then you no longer have the required redundancy to maintain your data. Build 42 shou

Re: [zfs-discuss] Re: Has the ST3750640AS Seagate 750 GB sata drive been tried with ZFS yet

2006-05-30 Thread Robert Milkowski
Hello Jim, Monday, May 29, 2006, 5:53:13 PM, you wrote: >><[EMAIL PROTECTED]> wrote: >>> Hi, >>> >>> Seagate has released the ST3750640AS 750 GB sata drive. >>> I would like to take two of these and plug in to the Nvidia Nforce 4 >>> Sata I/F and run Zfs on . ( I have installed SXCR build 40