Re: [zfs-discuss] Google paper on disk reliability

2007-02-20 Thread Joerg Schilling
Richard Elling [EMAIL PROTECTED] wrote: Link to the paper is http://labs.google.com/papers/disk_failures.pdf As for the spares debate, that is easy: use spares :-) What they failed to say is that you need to access the whole disk frequently enough in order to give SMART the ability to
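One way to exercise the whole disk regularly, as the post suggests, is a scheduled scrub. A minimal sketch only; the pool name `tank` and the schedule are placeholders, not anything from the thread:

```shell
# Read and checksum every allocated block in the pool. Besides
# catching latent corruption, this forces the drives to touch the
# sectors in use, giving SMART data to work with.
zpool scrub tank

# Inspect progress and any errors the scrub has found
zpool status tank

# Example crontab entry: run the scrub weekly, Sunday at 03:00
# 0 3 * * 0 /usr/sbin/zpool scrub tank
```

Note that a scrub reads only allocated blocks, so a mostly empty pool still leaves most of the disk surface untouched — consistent with the "access the whole disk" caveat above.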

[zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Roch - PAE
Sorry to insist, but I am not aware of a small-file problem with ZFS (which doesn't mean there isn't one, nor that we agree on the definition of 'problem'). So if anyone has data on this topic, I'm interested. Also note, ZFS does a lot more than VxFS. -r Claude Teissedre writes: Hello Roch,

Re[4]: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-20 Thread Robert Milkowski
Hello Nicholas, Tuesday, February 20, 2007, 12:55:05 AM, you wrote: On 2/19/07,Robert Milkowski[EMAIL PROTECTED] wrote: 5. there's no simple answer to this question as it greatly depends on workload and data. One thing you should keep in mind - Solaris *has* to boot in a 64bit

[zfs-discuss] Re: Re: SPEC SFS benchmark of NFS/ZFS/B56 - please help to improve it!

2007-02-20 Thread Leon Koll
As I understand the issue, a readdirplus is 2X slower when data is already cached in the client than when it is not. Yes, that's the issue. It's not always 2X slower, but ALWAYS SLOWER. Another 2 runs of mine on NFS/ZFS show: 1. real 3:14.185 user 2.249 sys 33.083 2.

Re[2]: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Robert Milkowski
Hello Jeremy, Monday, February 19, 2007, 1:58:18 PM, you wrote: Something similar was proposed here before and IIRC someone even has a working implementation. I don't know what happened to it. JT> That would be me. AFAIK, no one really wanted it. The problem that it solves can be solved by

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Gary Mills
On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote: Hello Jeremy, Monday, February 19, 2007, 1:58:18 PM, you wrote: Something similar was proposed here before and IIRC someone even has a working implementation. I don't know what happened to it. JT That would be me.

Re: [zfs-discuss] Re: How to backup a slice ? - newbie

2007-02-20 Thread Cindy . Swearingen
Uwe, It was also unclear to me that legacy mounts were causing your troubles. The ZFS Admin Guide describes ZFS mounts and legacy mounts, here: http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qs6?a=view Richard, I think we need some more basic troubleshooting info, such as this mount failure.

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 02/20/2007 08:10:59 AM: On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote: Hello Jeremy, Monday, February 19, 2007, 1:58:18 PM, you wrote: Something similar was proposed here before and IIRC someone even has a working implementation. I

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Gary Mills
On Tue, Feb 20, 2007 at 10:14:24AM -0600, [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote on 02/20/2007 08:10:59 AM: On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote: Hello Jeremy, Monday, February 19, 2007, 1:58:18 PM, you wrote: Something similar was

Re: Re[10]: [zfs-discuss] Re: NFS/ZFS performance problems - txg_wait_open() deadlocks?

2007-02-20 Thread eric kustarz
On Feb 15, 2007, at 6:08 AM, Robert Milkowski wrote: Hello eric, Wednesday, February 14, 2007, 5:04:01 PM, you wrote: ek> I'm wondering if we can just lower the amount of space we're trying to alloc as the pool becomes more fragmented - we'll lose a little I/O performance, but it

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Wade . Stuart
There's a fundamental problem with an undelete facility. $ echo > FILE $ undelete FILE cannot undelete FILE: file exists Why the assumption that an undelete command would be brain dead -- this IS Unix. =) Seems like a low bar issue, if file exists and
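The thread doesn't specify how such an undelete would behave. As a toy illustration only — not ZFS's mechanism, and every name here is made up — a trash-directory scheme can handle the "file exists" case by restoring under a fresh name instead of failing:

```python
import os
import shutil

TRASH = "/tmp/.trash-demo"  # hypothetical trash location for this toy

def delete(path):
    """'Delete' a file by moving it into the trash directory."""
    os.makedirs(TRASH, exist_ok=True)
    shutil.move(path, os.path.join(TRASH, os.path.basename(path)))

def undelete(name, dest_dir="."):
    """Restore a trashed file; if the target name is taken, restore
    as NAME.restored.1, NAME.restored.2, ... instead of clobbering."""
    src = os.path.join(TRASH, name)
    dest = os.path.join(dest_dir, name)
    n = 1
    while os.path.exists(dest):
        dest = os.path.join(dest_dir, "%s.restored.%d" % (name, n))
        n += 1
    shutil.move(src, dest)
    return dest
```

So `undelete FILE` need not be brain dead when `FILE` exists — it can simply restore next to it.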

Re: [zfs-discuss] Re: tracking error to file

2007-02-20 Thread eric kustarz
On Feb 18, 2007, at 9:19 PM, Davin Milun wrote: I have one that looks like this: pool: preplica-1 state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible.

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Jonathan Edwards
Roch what's the minimum allocation size for a file in zfs? I get 1024B by my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/znode allocation) since we never pack file data in the inode/znode. Is this a problem? Only if you're trying to pack a lot of files small byte
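The 1024B figure above follows from simple rounding arithmetic. A sketch of that back-of-the-envelope model — the 512B sector and one-znode-per-file are the assumptions stated in the post; everything else about real ZFS metadata is ignored:

```python
SECTOR = 512  # assumed minimum allocation unit, per the post

def min_on_disk(file_size, znode=SECTOR, block=SECTOR):
    """Lower bound under the toy model: file data rounded up to
    whole blocks, plus one znode holding the file's metadata."""
    blocks = -(-file_size // block) if file_size else 0  # ceil division
    return blocks * block + znode

print(min_on_disk(1))  # a 1-byte file still costs 1024 bytes
```

Under this model a million 100-byte files (~95 MiB of payload) would occupy roughly 977 MiB on disk, which is why packing huge numbers of tiny files can hurt.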

Re: [zfs-discuss] Re: tracking error to file

2007-02-20 Thread Wade . Stuart
If you run a 'zpool scrub preplica-1', then the persistent error log will be cleaned up. In the future, we'll have a background scrubber to make your life easier. eric Eric, Great news! Are there any details about how this will be implemented yet? I am most curious as to how

Re[12]: [zfs-discuss] Re: NFS/ZFS performance problems - txg_wait_open() deadlocks?

2007-02-20 Thread Robert Milkowski
Hello eric, Tuesday, February 20, 2007, 5:55:47 PM, you wrote: ek> On Feb 15, 2007, at 6:08 AM, Robert Milkowski wrote: Hello eric, Wednesday, February 14, 2007, 5:04:01 PM, you wrote: ek> I'm wondering if we can just lower the amount of space we're trying to alloc as the pool becomes

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Jonathan Edwards
On Feb 20, 2007, at 15:05, Krister Johansen wrote: what's the minimum allocation size for a file in zfs? I get 1024B by my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/znode allocation) since we never pack file data in the inode/znode. Is this a problem? Only if you're

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Nathan Kroenert
begin crackly, broken record :) I, for one, would love to have functionality similar to what we had in good old NetWare, where we could 'salvage' deleted files. The concept was that when the files were deleted, they were not actually removed, nor were the all-important references to the files

Re: Re[12]: [zfs-discuss] Re: NFS/ZFS performance problems - txg_wait_open() deadlocks?

2007-02-20 Thread eric kustarz
ek> If you were able to send over your complete pool, destroy the existing one and re-create a new one using recv, then that should help with fragmentation. That said, that's a very poor man's defragger. The defragmentation should happen automatically or at least while the pool is
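The send/destroy/recv cycle described above can be sketched roughly as follows. This is a sketch, not a recipe: the pool name `tank`, the stream path, and the device names are placeholders, and `zfs send -R` requires a release that supports recursive replication streams:

```shell
# Snapshot every dataset in the pool at a single point in time
zfs snapshot -r tank@migrate

# Serialize the whole hierarchy to a file (or pipe it to another host)
zfs send -R tank@migrate > /backup/tank.zstream

# Destroy and re-create the pool; freshly received data is written
# out anew, which is what makes this a "poor man's defragger"
zpool destroy tank
zpool create tank mirror c0t0d0 c0t1d0   # placeholder devices

# Restore the saved hierarchy into the fresh pool
zfs recv -d tank < /backup/tank.zstream
```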

Re[14]: [zfs-discuss] Re: NFS/ZFS performance problems - txg_wait_open() deadlocks?

2007-02-20 Thread Robert Milkowski
Hello eric, Tuesday, February 20, 2007, 11:29:41 PM, you wrote: ek> If you were able to send over your complete pool, destroy the existing one and re-create a new one using recv, then that should help with fragmentation. That said, that's a very poor man's defragger. The

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread James Dickens
On 2/20/07, Nathan Kroenert [EMAIL PROTECTED] wrote: begin crackly, broken record :) I, for one, would love to have functionality similar to what we had in good old NetWare, where we could 'salvage' deleted files. The concept was that when the files were deleted, they were not actually removed,

Re: [zfs-discuss] Re: tracking error to file

2007-02-20 Thread eric kustarz
On Feb 20, 2007, at 10:43 AM, [EMAIL PROTECTED] wrote: If you run a 'zpool scrub preplica-1', then the persistent error log will be cleaned up. In the future, we'll have a background scrubber to make your life easier. eric Eric, Great news! Are there any details about how

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-20 Thread Nathan Kroenert
I'd usually agree with that, but - if we have an opportunity to make users love ZFS even more, why not at least investigate it. A perfect example might be exactly what I did on one occasion, where I copied a bunch of photos off a CF card. I then reformatted the CF card, and cleaned up the

[zfs-discuss] Re: Google paper on disk reliability

2007-02-20 Thread Anton B. Rang
It turns out that even rather poor prediction accuracy is good enough to make a big difference (10x) in the failure probability of a RAID system. See Gordon Hughes and Joseph Murray, Reliability and Security of RAID Storage Systems and D2D Archives Using SATA Disk Drives, ACM Transactions on
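This is not the model from the cited paper — just a toy illustration of why modest recall can still buy an order of magnitude: if RAID data loss requires two roughly coincident failures, and a predictor catches a fraction `recall` of failures ahead of time, the unpredicted double-failure rate falls quadratically:

```python
def loss_rate_ratio(recall):
    """Toy model: data loss needs two coincident *unpredicted*
    failures, so the loss rate scales as (1 - recall) squared;
    return the improvement factor relative to no prediction."""
    return 1.0 / (1.0 - recall) ** 2

# Even ~68% recall yields roughly a 10x reduction under this model:
print(round(loss_rate_ratio(0.68), 1))  # 9.8
```

Real RAID reliability arithmetic also depends on rebuild windows, repair rates, and false-positive costs, which this sketch deliberately ignores.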

Re: [zfs-discuss] Google paper on disk reliability

2007-02-20 Thread Jesus Cea
Joerg Schilling wrote: What they failed to say is that you need to access the whole disk frequently enough in order to give SMART the ability to work. I thought modern disks could be instructed to do offline scanning, using any idle time available.