2012-06-15 16:14, Hans J Albertsson wrote:
> I've got my root pool on a mirror on 2 512 byte blocksize disks.
> I want to move the root pool to two 2 TB disks with 4k blocks.
> The server only has room for two disks. I do have an esata connector,
> though, and a suitable external cabinet for connecting one extra disk.
> How would I go about migrating/expanding the root pool to the larger
> disks so I can then use the larger disks for booting?
> I have no extra machine to use.
I think this question was recently asked and discussed on another list;
my suggestion is more low-level than those offered by others:
0) Boot from a LiveCD/LiveUSB so that your rpool's environment
doesn't change during the migration, and so that you can
ultimately rename your new rpool to its old name.
It is not fatal if you don't use a LiveMedia environment,
but it can be problematic to rename a running rpool, and
some of your programs might depend on its known name as
recorded in some config file or service properties.
1) Break the existing mirror, reducing it to a single-disk pool
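A minimal sketch of step 1; the device name c0t1d0s0 is a placeholder - use whichever side of the mirror `zpool status` actually shows on your system:

```shell
# See which devices make up the root mirror first.
zpool status rpool

# Detach one side; rpool keeps running as a single-disk pool.
# c0t1d0s0 is a placeholder device name.
zpool detach rpool c0t1d0s0
```

After the detach, the remaining disk still holds a complete, bootable copy of the pool, which is your fallback if anything below goes wrong.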
2) Install the new disk, slice it, create an "rpool2" on it.
NOTE that you might not want all 2TB to be the "rpool2",
but rather you might dedicate several tens of GBs to
a root-pool partition or slice, and store the rest as a
data pool - perhaps implemented with different choices
on caching, dedup, etc.
NOTE also that you might need to apply some tricks to
enforce that the new pool uses ashift=12 if that (4KB)
is your hardware's native sector size; some info on this
went around the mailing lists recently.
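One possible shape for step 2, with everything hedged: c2t0d0 and the slice sizes are placeholders, and `-o ashift=12` at create time is only accepted by some zpool builds (on older Solaris/illumos you may instead need the sd.conf or patched-binary tricks mentioned above):

```shell
# Label the new 2TB disk interactively; create a slice 0 of a few
# tens of GB for the root pool and leave the rest for a data pool.
# (Device name and sizes are placeholders.)
format -e c2t0d0

# Create the new root pool on slice 0. Whether -o ashift=12 works
# depends on your zpool build; if it is rejected, fall back to the
# tricks discussed on the lists.
zpool create -o ashift=12 rpool2 c2t0d0s0

# Verify the alignment the pool actually got.
zdb -C rpool2 | grep ashift
```

If `zdb` reports ashift: 9 you have a 512-byte-aligned pool and should destroy and recreate it before sending any data, since ashift cannot be changed afterwards.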
3) # zfs snapshot -r rpool@20120615-preMigration
4) # zfs send -R rpool@20120615-preMigration | \
zfs recv -vFd rpool2
NOTE this assumes you do want the whole old rpool in rpool2.
If you decide you want something on a data pool, i.e. the
"/export/*" datasets - you'd have to make that pool and send
those datasets there in a similar manner, and send the root-pool
datasets not in one recursive command but in several sets, i.e.
for rpool/ROOT, rpool/swap and rpool/dump in the default layout.
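If you do split things up, the per-set sends might look like this; the snapshot name matches step 3, the dataset names assume the default layout, and "dpool" is a placeholder name for your data pool:

```shell
# Root filesystem hierarchy goes to the new root pool...
zfs send -R rpool/ROOT@20120615-preMigration | zfs recv -vd rpool2

# ...swap and dump are zvols, sent individually...
zfs send rpool/swap@20120615-preMigration | zfs recv -v rpool2/swap
zfs send rpool/dump@20120615-preMigration | zfs recv -v rpool2/dump

# ...while /export lands on the separate data pool instead.
zfs send -R rpool/export@20120615-preMigration | zfs recv -vd dpool
```

Note that `recv -d` strips only the old pool name and keeps the rest of the dataset path, so rpool/ROOT/... arrives as rpool2/ROOT/... automatically.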
5) # zpool get all rpool
# zpool get all rpool2
Compare the pool settings. Carry over the "local" changes with
# zpool set property=value rpool2
You'll likely change bootfs, failmode, maybe some others.
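For example - the boot environment name "solaris" below is a placeholder; use whatever `zpool get bootfs rpool` reported on the old pool:

```shell
# bootfs must point at the boot environment inside the NEW pool,
# otherwise the system cannot find its root filesystem at boot.
zpool set bootfs=rpool2/ROOT/solaris rpool2

# Carry over any other locally-set values you found in step 5, e.g.:
zpool set failmode=continue rpool2
```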
6) installgrub onto the new disk so it becomes bootable
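On x86 that would be something like the following, assuming the new root pool lives on slice 0 of the (placeholder) device c2t0d0:

```shell
# Write GRUB stage1/stage2 into the slice holding the new root pool.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0
```

On SPARC you'd use installboot with the zfs bootblk instead of installgrub.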
7) If you're on live media, try to rename the new "rpool2" to
become "rpool"; note that zpool import takes the old pool
name first and the new name second, i.e.:
# zpool export rpool2
# zpool export rpool
# zpool import -N rpool2 rpool
# zpool export rpool
8) Reboot, disconnecting your remaining old disk, and hope that
the new pool boots okay. It should ;)
When it's ok, attach the second new disk to the system and
slice it similarly (prtvtoc|fmthard usually helps, google it).
Then attach the new second disk's slices to your new rpool
(and data pool if you've made one), installgrub onto the second
disk - and you're done.
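The steps above for the second disk can be sketched as follows; c2t0d0 is the first new disk and c3t0d0 the second, both placeholder names:

```shell
# Copy the first new disk's label onto the second disk.
prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c3t0d0s2

# Re-mirror the root pool onto the second disk's slice 0.
zpool attach rpool c2t0d0s0 c3t0d0s0

# Make the second disk bootable too (x86; use installboot on SPARC).
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0

# Watch the resilver and wait for it to complete before relying
# on the new mirror.
zpool status rpool
```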
zfs-discuss mailing list