[zfs-discuss] Re: user undo

2006-05-26 Thread Anton B. Rang
Anything that attempts to append characters to the end of a filename will run into trouble when the filename is already at NAME_MAX. One simple solution is to restrict the total length of the name to NAME_MAX, truncating the original filename as necessary to allow appending. This does
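A minimal sketch of that truncation approach in shell (the ".undo" suffix is hypothetical; getconf reports the limit for the target filesystem):

    # keep the total length within NAME_MAX by truncating before appending
    suffix=".undo"                       # hypothetical undo suffix
    max=$(getconf NAME_MAX /var/tmp)     # per-filesystem limit, commonly 255
    keep=$(( max - ${#suffix} ))         # room left for the original name
    newname="${name:0:keep}${suffix}"    # bash substring does the truncation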

[zfs-discuss] Re: How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread axa
raidz is like raid 5, so you can survive the death of one disk, not 2. I would recommend you configure the 12 disks into 2 raidz groups; then you can survive the death of one drive from each group. This is what I did on my system. Hi James, thank you very much. ;-) I'll configure 2 raidz groups
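That layout might look like the following (a sketch only; the c#t#d# device names are hypothetical):

    # 12 disks as two 6-disk raidz groups in a single pool;
    # each group can lose one disk without data loss
    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0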

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread David J. Orman
RAID-Z is single-fault tolerant. If you take out two disks, then you no longer have the required redundancy to maintain your data. Build 42 should contain double-parity RAID-Z, which will allow you to sustain two simultaneous disk failures without data loss. I'm not sure if this
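Once double-parity RAID-Z is available, the same disks could instead form one raidz2 group (a sketch, again with hypothetical device names):

    # one 12-disk double-parity group; survives any two simultaneous failures
    zpool create tank raidz2 \
        c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0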

[zfs-discuss] ZFS mirror and read policy; kstat I/O values for zfs

2006-05-26 Thread Daniel Rock
Hi, after some testing with ZFS I noticed that read requests are not scheduled evenly across the drives; the first one gets predominantly selected. My pool is set up as follows:

    NAME     STATE   READ WRITE CKSUM
    tpc      ONLINE     0     0     0
      mirror
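To watch how reads split across the mirror halves, something like this should do (using the pool name above):

    # per-vdev read/write counters, refreshed every 5 seconds
    zpool iostat -v tpc 5

    # or the raw per-disk numbers
    iostat -xn 5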

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread grant beattie
On Fri, May 26, 2006 at 10:33:34AM -0700, Eric Schrock wrote: RAID-Z is single-fault tolerant. If you take out two disks, then you no longer have the required redundancy to maintain your data. Build 42 should contain double-parity RAID-Z, which will allow you to sustain two simultaneous

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Ed Nadolski
Gregory Shaw wrote: In recent Linux distributions, when the kernel shuts down, it will force the SCSI drives to flush their write cache. I don't know if Solaris does the same, but I think not, given Solaris's ongoing focus on disabling the write cache. The Solaris sd(7D)
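For reference, on Solaris the cache setting on a SCSI disk can be checked or changed through format(1M) in expert mode; the exact menu entries vary by drive, so treat this path as approximate:

    format -e        # select the disk, then: cache -> write_cache
                     # -> display / enable / disable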

Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread Nicolas Williams
On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote: is raidz double parity optional or mandatory? Backwards compatibility dictates that it will be optional.

[zfs-discuss] ata panic

2006-05-26 Thread Rob Logan
`mv`ing files from a zfs dir to another zfs filesystem in the same pool will panic an 8-disk SATA raidz system (http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm). With ::status:

    debugging crash dump vmcore.3 (64-bit) from zfs
    operating system: 5.11 opensol-20060523 (i86pc)
    panic message:
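For anyone wanting to dig into a dump like this, the usual mdb steps apply (a sketch; vmcore.3 pairs with its matching unix.3 namelist):

    mdb unix.3 vmcore.3
    > ::status       # panic summary, as quoted above
    > ::stack        # stack of the panicking thread
    > ::msgbuf       # console messages leading up to the panic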

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Bart Smaalders
Gregory Shaw wrote: I had a question for the group: In the various ZFS discussions on zfs-discuss, I've seen a recurring theme of disabling the write cache on disks. I would think that the performance increase of using the write cache would be an advantage, and that write cache should be

Re: [zfs-discuss] ZFS mirror and read policy; kstat I/O values for zfs

2006-05-26 Thread Matthew Ahrens
On Fri, May 26, 2006 at 09:40:57PM +0200, Daniel Rock wrote: So you can see the second disk of each mirror pair (c4tXd0) gets almost no I/O. How does ZFS decide from which mirror device to read? You are almost certainly running into this known bug: 630 reads from mirror are not

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Chris Csanady
On 5/26/06, Bart Smaalders wrote: There are two failure modes associated with disk write caches: Failure modes aside, is there any benefit to a write cache when command queueing is available? It seems that the primary advantage is in allowing old ATA hardware to issue

Re: [zfs-discuss] hard drive write cache

2006-05-26 Thread Neil Perrin
ZFS enables the write cache and flushes it when committing transaction groups; this ensures that all of a transaction group appears or does not appear on disk. It also flushes the disk write cache before returning from every synchronous request (e.g. fsync, O_DSYNC). This is done after writing