> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of bear
> 
> Short Version:
> I used zpool add instead of zpool replace while trying to move drives
> from an si3124 controller card.  I can back up the data to other drives
> and destroy the pool, but would prefer not to, since it involves around
> 4 TB of data and will take forever.  I ran
>     zpool add mypool c4t2d0
> instead of
>     zpool replace mypool c2t1d0 c4t2d0

Yeah ... Unfortunately, you cannot remove a vdev from a pool once it's been
added.  So ...  

Temporarily, in order to get c4t2d0 back under your control for other
purposes, you could create a sparse file somewhere and replace that device
with the sparse file.  This should be very fast, and should not hurt
performance, as long as you haven't written any significant amount of data
to the pool since adding the device, and won't be writing anything
significant until after all is said and done.  Don't create the sparse file
inside the pool itself.  Put it somewhere in rpool instead, so you don't
end up with a circular mount-order dependency (the pool would need the
file, and the file would live in the pool).
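For example, a rough sketch (the 2 TB size and the /rpool/sparsefile path
are just assumptions here; the file has to be at least as large as c4t2d0,
and you can keep it anywhere in rpool):

    # create a sparse file in the root pool (allocates no real space)
    mkfile -n 2048g /rpool/sparsefile

    # swap the accidentally-added disk out for the file vdev
    zpool replace mypool c4t2d0 /rpool/sparsefile

Once the replace completes, c4t2d0 is free for other use, and the file
vdev just stands in for it until you rebuild.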

Rather than replacing each device one-by-one, I might suggest creating a new
raidz2 pool on the new hardware, and then using "zfs send | zfs receive" to
replicate the contents of the first pool onto the second...  Then just
destroy (or export, or unmount) the first pool, and change the mountpoint of
the second so it takes over (export/import or unmount/mount as needed).

Since your data is mostly not changing, the send/receive method should be
extremely efficient.  You do one send/receive, and you don't even have to
follow up with any incrementals later...
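
Something along these lines, roughly (the newpool name and the c5 device
names are placeholders for whatever your new hardware turns out to be):

    # build the replacement raidz2 pool on the new controller
    zpool create newpool raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0

    # snapshot everything and replicate it in a single pass
    zfs snapshot -r mypool@migrate
    zfs send -R mypool@migrate | zfs recv -Fd newpool

    # when it finishes, retire the old pool and take over its mountpoint
    zpool export mypool
    zfs set mountpoint=/mypool newpool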
