> Yeah, I ran into that in my testing, too.  I suspect it's something that
> will come up in testing a LOT more than in real production use.

I disagree.  I can see lots of situations where you want to attach new storage
and remove or retire old storage from an existing pool.  It would be great if
ZFS could accept a "remove <vdev>" command, migrate any existing data off that
vdev onto the rest of the pool, and remove the vdev from the pool completely.
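I'm picturing something along these lines (purely hypothetical syntax, with a
made-up pool name and vdev label):

    # evacuate everything on that vdev onto the remaining vdevs,
    # then drop the vdev from the pool configuration
    zpool remove tank raidz1-0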

Imagine an aging disk shelf that you're using as your zpool... it's about to
croak, so you buy a replacement, add it to the system, and add the new shelf
to the pool.  If you can simply remove the old shelf's vdev, you have
practically no downtime.  (Or truly no downtime, if you can hot-attach the new
storage.)

If you must freeze the filesystems, dump/restore them to a separate location,
and move the new pool to the old location, it's a significantly more
disruptive event.
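Today the closest equivalent I can see is something like this, done by hand
for every filesystem (pool and dataset names made up, and I haven't tested
this exact sequence):

    # quiesce the applications, snapshot, and copy the data to the new pool
    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate | zfs recv newpool/data
    # repeat for each filesystem, re-point the mounts, then destroy the old pool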

This would also allow you to fix mistakes like creating a vdev with the wrong
devices.  In my case, I made a typo and created a raidz with mixed disk sizes
instead of using all the same-size disks.  I'm now stuck using half the
capacity of the disks in that vdev unless I completely destroy the pool.  And
maybe I have people ticked off at me because I've just wasted half their
money, and I can't afford to blow away the zpool and start over again.
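For the record, the blunder looked roughly like this (device names changed;
one of the disks is 36 GB while the rest are 73 GB):

    # the smallest member sets the usable size for every disk in the raidz
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c2t0d0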

You *can* replace individual devices in the vdev, but I haven't tested whether
or not the raidz grows to use the full size of the disks once I replace all
the 36 GB drives with 73 GB drives.  I suspect not.
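That is, something like the following for each member in turn, waiting for the
resilver to complete before touching the next one (device names made up):

    # swap a 36 GB member for a 73 GB one and let it resilver
    zpool replace tank c1t0d0 c3t0d0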

Fortunately, this is not a production system, so I can nuke it, but the
flexibility to remove vdevs without nuking the whole zpool would be a Very
Good Thing.

BP
 
 