2012-05-16 6:18, Bob Friesenhahn wrote:
> You forgot IDEA #6, where you take advantage of the fact that zfs can
> be told to use sparse files as partitions. This is rather like your
> IDEA #3 but does not require that disks be partitioned.

This is essentially the method of creating "missing devices" when
building a ZFS pool (i.e. 8+1(+2 missing) as in my earlier mail).
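For the archives, the trick looks something like this (device names
and sizes are hypothetical; mkfile -n makes sparse files on Solaris):

  # sparse placeholder files standing in for the two missing disks
  mkfile -n 250g /var/tmp/missing1 /var/tmp/missing2
  # build the 8+3 raidz3 pool with 9 real disks plus 2 placeholders
  zpool create newpool raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      c1t5d0 c1t6d0 c1t7d0 c1t8d0 /var/tmp/missing1 /var/tmp/missing2
  # offline the placeholders at once so no data lands on them;
  # raidz3 stays functional with up to three devices out
  zpool offline newpool /var/tmp/missing1
  zpool offline newpool /var/tmp/missing2
  # later, swap real disks in for the placeholders
  zpool replace newpool /var/tmp/missing1 c2t0d0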

> This opens up many possibilities. Whole vdevs can be virtualized to
> files on (i.e. moved onto) remaining physical vdevs.

This is a nifty idea in general, but this pool is kept quite full in
practice - about 100GB free according to df and zfs list, though with
the space ZFS reserves counted in, the figure jumps to 740GB free in
zpool list reports (hopefully that's what keeps the system performing
quite well despite the full, fragmented pool).
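For reference, a minimal sketch of what that vdev-to-file move would
look like (names hypothetical; with only ~100GB free here, the image
file would have to live on some other pool):

  # carve a file out of free space on a remaining pool
  mkfile -n 250g /otherpool/images/c0t3d0.img
  # migrate one whole vdev member onto that file
  zpool replace oldpool c0t3d0 /otherpool/images/c0t3d0.img
  # once the resilver completes, c0t3d0 is freed up for reuse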

> Then the drives
> freed up can be replaced with larger drives and used to start a new
> pool. It might be easier to upgrade the existing drives in the pool
> first so that there is assured to be vast amounts of free space and
> the drives get some testing. There is not initially additional risk
> due to raidz1 in the pool since the drives will be about as full as
> before.

Your idea actually evolved for me into another one (#7?), which
is simple and apparent enough to be ingenious ;)
DO use the partitions, but split the "2.73TB" drives into a
roughly "2.5TB" partition followed by a "250GB" partition of
the same size as the vdevs of the original old pool. The new
drives can then replace a dozen of the original small disks
one by one, resilvering in a one-to-one fashion, with no
worsening of the situation with regard to downtime or the
original/new pools' integrity tradeoffs (in fact, several
untrustworthy old disks will be replaced by newer ones).
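A sketch of that layout, with hypothetical device names and
approximate sizes (parted syntax shown for brevity; in practice I'd
likely do the labeling in format(1M) on Solaris):

  # ~2.5TB partition for the future big pool, then a ~250GB tail
  # partition matching the old pool's vdev size
  parted -s /dev/dsk/c3t0d0 mklabel gpt
  parted -s /dev/dsk/c3t0d0 mkpart big 1MiB 2500GiB
  parted -s /dev/dsk/c3t0d0 mkpart tail 2500GiB 100%
  # resilver one old disk onto the tail partition, one-to-one
  zpool replace oldpool c0t1d0 c3t0d0s1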

When the new dozen disks are in place, the complete 8+3
new pool can be created with no compromises, the old data
migrated onto it, and the old pool destroyed after everything
has been checked to be properly accessible. The remaining
250GB disks can be repurposed, while the trailing partitions
on the new disks can join the big pool via autoexpansion
(i.e. remove the second partitions, expand the first
partitions in the label table, and autoexpand the pool -
I've done that a few times on other occasions).
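In zpool/zfs terms the tail end of the plan would look roughly like
this (names hypothetical; a sketch rather than a tested recipe):

  # build the no-compromise 8+3 pool on the big first partitions
  zpool create -o autoexpand=on bigpool raidz3 \
      c3t0d0s0 c3t1d0s0 c3t2d0s0 c3t3d0s0 c3t4d0s0 c3t5d0s0 \
      c3t6d0s0 c3t7d0s0 c3t8d0s0 c3t9d0s0 c3t10d0s0 c3t11d0s0
  # migrate everything via a recursive snapshot
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -Fdu bigpool
  # verify the data is accessible, then retire the old pool
  zpool destroy oldpool
  # delete the tail partitions, grow the first partitions in the
  # label, then let each vdev expand into the reclaimed space
  zpool online -e bigpool c3t0d0s0    # repeat per disk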

In fact, this scenario seems like the best of all worlds
to me now, unless someone talks me out of this with some
pretty good reasoning. So thanks for keeping the dialog
and thought-flow going :)

> I am not sure what additional risks are involved due to using files.

Well, ZFS docs and blogs present files as a testing technique
rather than one intended for production, due to possible issues
between ZFS and the disks introduced by the filesystem underneath.
I believe the same reasoning should apply to other similar methods,
though, such as iSCSI from remote storage, lofi devices, or SVM,
which I thought of (ab)using in this migration.
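(For completeness, the lofi variant would be something like the
following, paths hypothetical; it just inserts a block-device layer
between ZFS and the backing file:)

  mkfile -n 250g /otherpool/images/vdev0.img
  lofiadm -a /otherpool/images/vdev0.img   # prints e.g. /dev/lofi/1
  zpool replace oldpool c0t2d0 /dev/lofi/1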

Thanks,
//Jim