[zfs-discuss] Re: user undo

2006-06-11 Thread can you guess?
Interesting thread - a few comments: Finite-sized validation checksums aren't a 100% solution either, but they're certainly good enough to be extremely useful. NetApp has built a rather decent business at least in part by providing less-than-100% user-level undo-style facilities via snapshots
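
ZFS itself can provide the same kind of snapshot-based safety net; a minimal sketch, assuming a hypothetical dataset tank/home and file name:

    # take a snapshot before a risky cleanup
    zfs snapshot tank/home@before-cleanup

    # a file deleted afterwards can be copied back out of the
    # read-only snapshot directory
    cp /tank/home/.zfs/snapshot/before-cleanup/report.txt /tank/home/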

[zfs-discuss] Re: opensol-20060605 # zpool iostat -v 1

2006-06-11 Thread Rob Logan
A total of 4*64k = 256k is read to fetch a 2k block. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6437054 Perhaps a quick win would be to tell vdev_cache about the DMU_OT_* type so it can read ahead appropriately. It seems the largest losses are on metadata (du, find, scrub/resilver).
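
A rough way to watch that amplification for yourself (pool and path names are illustrative; the 4*64k figure comes from the message above and will vary with the vdev layout and cache settings):

    # per-device bandwidth, updated every second
    zpool iostat -v tank 1

    # meanwhile, run a metadata-heavy workload and compare the reported
    # read bandwidth against the amount of data actually requested
    du -s /tank/home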

[zfs-discuss] disk evacuate

2006-06-11 Thread Gregory Shaw
Pardon me if this scenario has been discussed already, but I haven't seen anything as yet. I'd like to request a 'zpool evacuate pool device' command. 'zpool evacuate' would migrate the data from a disk device to other disks in the pool. Here's the scenario: Say I have a small server
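
As a sketch only, the proposed workflow might look like the following; neither subcommand behaves this way in the ZFS of the time, so both lines are hypothetical:

    # hypothetical: migrate all data off c1t2d0 onto the pool's other devices
    zpool evacuate tank c1t2d0

    # hypothetical: then drop the now-empty device from the pool
    zpool remove tank c1t2d0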

Re: [zfs-discuss] disk evacuate

2006-06-11 Thread Dick Davies
On 11/06/06, Gregory Shaw [EMAIL PROTECTED] wrote: Pardon me if this scenario has been discussed already, but I haven't seen anything as yet. I'd like to request a 'zpool evacuate pool device' command. 'zpool evacuate' would migrate the data from a disk device to other disks in the pool.

Re: [zfs-discuss] disk evacuate

2006-06-11 Thread Eric Schrock
This only seems valuable in the case of an unreplicated pool. We already have 'zpool offline' to take a device and prevent ZFS from talking to it (because it's in the process of failing, perhaps). This gives you what you want for mirrored and RAID-Z vdevs, since there's no data to migrate
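
For the replicated case the existing commands already cover a failing disk; a minimal sketch with illustrative pool and device names:

    # stop ZFS from issuing I/O to the suspect disk
    zpool offline tank c1t2d0

    # swap in a replacement and resilver onto it
    zpool replace tank c1t2d0 c1t3d0

    # check resilver progress
    zpool status tank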

Re: [zfs-discuss] Re: user undo

2006-06-11 Thread David Magda
On Jun 11, 2006, at 03:21, can you guess? wrote: My dim recollection is that TOPS-10 implemented its popular (but again not 100%) undelete mechanism using the same kind of 'space-available' approach suggested here. It did, however, support explicit 'delete - I really mean it' facilities to

[zfs-discuss] ZFS + Raid-Z pool size incorrect?

2006-06-11 Thread Nathanael Burton
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The output of zpool status shows the pool size as the size of the three disks combined (as if it were a RAID-0 volume). This isn't expected behaviour, is it? When I create a mirrored volume in ZFS everything is as one
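
The pool-level and dataset-level views report different numbers, which is likely what is being seen here; a sketch with illustrative device names: zpool list shows the raw capacity of all three disks (parity included), while zfs list shows the space actually usable by datasets.

    # one raidz vdev built from three disks
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

    # raw pool capacity, including parity overhead
    zpool list tank

    # usable capacity after parity, as seen by the filesystem
    zfs list tank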

Re: [zfs-discuss] disk evacuate

2006-06-11 Thread Gregory Shaw
Yes, if zpool remove works like you describe, it does the same thing. Is there a time frame for that feature? Thanks! On Jun 11, 2006, at 10:21 AM, Eric Schrock wrote: This only seems valuable in the case of an unreplicated pool. We already have 'zpool offline' to take a device and