Hi Evan.
(CC caiman-discuss as it may be useful to others...)
On 05/10/10 12:08 PM, Evan Layton wrote:
On 5/10/10 12:53 PM, Jack Schwartz wrote:
Hi Evan.
I successfully borrowed space from a user data zpool to give more space
to the root zpool on my home PC. As we talked, I used a second disk to
migrate everything over. I plan to use the original disk as a mirror
once all is set up on the second disk. I used beadm to move the root
partition as you suggested. I'm almost done; I only have to install zfs
dump area, which should be easy but I didn't know how big to make it. I
used zfs send and receive to move the data partition.
Below are the steps I took. There may be better ways to do things, but
these steps worked for me. Hopefully this feedback is helpful for you as
beadm continues to evolve.
Moving the root pool involved a little more than just running beadm.
Here is what I had to do:
1) Create the new zpool first.
This involved running fdisk manually to size the partition properly,
running format to set up slice 0 on that partition, and then running
zpool create.
NOTE: when running format, it is important to start slice 0 at cylinder 1,
not cylinder 0; otherwise the slice overlaps the MBR, which renders the
system unbootable after the first boot (as zfs will overwrite the MBR
with its own data).
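For reference, step 1 looked roughly like the following (a sketch only: the
device name matches the c7t1d0 disk used in step 4 below, and both fdisk and
format are interactive, so the exact keystrokes aren't shown):

```shell
# Sketch of step 1 on the new disk (c7t1d0); fdisk and format are interactive.
fdisk /dev/rdsk/c7t1d0p0    # manually size the Solaris fdisk partition
format                      # select c7t1d0; in the partition menu, set up
                            # slice 0 starting at cylinder 1, NOT cylinder 0
zpool create rpoolnew c7t1d0s0
```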
This is expected as beadm will not create a pool for you. It expects
that the pool already exists.
OK.
2) cp -rp /rpool/* /rpoolnew
Without doing this first, beadm gives a warning that it cannot find
menu.lst, and then errs out.
I note the distinction between the files under /rpool and the other zfs
filesystems under /rpool. The cp brought over only the files (and there
were only a few of them).
Was this SPARC or x86, and what build are you on?
X86. The build is 128a.
This should not have been necessary since beadm should create a
menu.lst for you if one doesn't exist.
It probably tried, but my guess is that it couldn't find the
/rpool/boot/grub directory. The cp resolved this issue. An interesting
experiment would have been to not copy over menu.lst and see if a new
one would have been created, given the directory structures in place. My
hunch is that the missing target directory is why the menu.lst creation
failed.
3) Run beadm create -p rpoolnew newbe
4) installgrub -mf /boot/grub/stage1 /boot/grub/stage2 /dev/dsk/c7t1d0s0
... else the MBR on the second disk won't be equipped for system
bootstrap.
beadm activate should do this for you.
Agreed, but it didn't.
5) Move /export/home from the old to the new rpool, if desired. I didn't
need to do this as I didn't want /export/home. If you want it, you can
use zfs send and zfs receive to do the move.
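For anyone who does want /export/home, the send/receive would look
something like this (a sketch; the snapshot name is illustrative):

```shell
# Sketch: move /export/home from the old rpool to rpoolnew.
zfs snapshot -r rpool/export@move
zfs send -R rpool/export@move | zfs receive -d rpoolnew
```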
Correct; beadm will not copy over the shared datasets, so this must be
done manually.
6) Decommission the old rpool.
Boot with -m milestone=none, and then zpool export rpool. This has to be
done before any zfs activity starts, which is why milestone=none is
needed. Exporting rpool decommissions all zfs items below it in its
tree, such as swap and dump. I had to reboot immediately after exporting.
Is it possible to do step 7 before step 6, run dumpadm to reset the
dump device, and then add the new swap space using the swap command? I
know you'll have to reboot later anyway, but maybe that could limit the
number of reboots needed.
I don't think I would migrate a dump device; I need to set up a new
dump device under rpoolnew. After creating the dump device, I need
to see what, if anything, I need to do with dumpadm to pick it up.
In order to decommission the old area, I need the kernel to let go of
the old swap and dump areas. I did a boot to milestone=none to
facilitate this.
7) Create new swap and dump areas.
The old swap and dump went away when I decommissioned the old rpool.
I didn't write these down, but I think the commands were...
zfs create -V <size> rpoolnew/swap
zfs create -V <size> rpoolnew/dump
I think that's correct.
OK.
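For completeness, the full swap/dump setup would look something like this
(a sketch; the 2G sizes are illustrative, not what was actually used):

```shell
# Sketch: create and activate new swap and dump volumes (sizes illustrative).
zfs create -V 2G rpoolnew/swap
zfs create -V 2G rpoolnew/dump
swap -a /dev/zvol/dsk/rpoolnew/swap      # activate the new swap device
dumpadm -d /dev/zvol/dsk/rpoolnew/dump   # point the dump config at the new zvol
# /etc/vfstab should also list the new swap zvol so it survives reboot:
#   /dev/zvol/dsk/rpoolnew/swap  -  -  swap  -  no  -
```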
8) I moved my user data partition as well, using zfs send and receive.
Old pool: zfstank. New pool: userdata
zpool create userdata /dev/rdsk/c7t1d0p2
zfs snapshot -r zfstank@snapshot1
zfs send -R zfstank@snapshot1 | zfs receive -F -d userdata
Note after the send/receive, there are two pools which vie for the same
mountpoints...
Then reboot with milestone=none and zpool export zfstank. I think I also
turned off canmount on zfstank beforehand, but I'm not sure that was
necessary. Reboot immediately after exporting.
- - -
Still to do:
- When I boot, grub shows me two entries. I think one is intended for
the original disk and the other for the new disk. Both entries point to
the new disk, however, IIRC. I'll remove one of the entries when I
finish.
Another interesting thing is that beadm list showed the R (active on
reboot) flag for both BEs.
This is because you copied the menu.lst from the old pool. Running
beadm activate should have fixed this for you, but it sounds like there
was another problem with getting the menu.lst created when you did the
"beadm create -p". It would be really helpful to know what caused that
to fail, but it sounds like it may be too late. If there is any way you
can re-run the "beadm create -p" with BE_PRINT_ERR=true set, that would
help narrow down why this failed.
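For the record, the debug re-run would be (using the pool and BE names
from step 3 above):

```shell
# Sketch: re-run the BE creation with beadm's debug output enabled.
BE_PRINT_ERR=true beadm create -p rpoolnew newbe
```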
If I have to do this again I will set BE_PRINT_ERR. I know beadm tried
to set up a new grub menu (or at least there was a message saying it was
trying to) before it said it failed.
- Repartition the original disk to be the same as the active one.
- Mirror the root pool and the userdata pool.
- installgrub on the mirrored root pool.
- Do the pkg image-update, which was why I wanted to do all of this in
the first place...
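Assuming the original disk is c7t0d0 (a guess; substitute the real device)
and it has been repartitioned to match the new disk, those remaining steps
would be roughly:

```shell
# Sketch: attach the original disk as a mirror and make it bootable.
zpool attach rpoolnew c7t1d0s0 c7t0d0s0    # mirror the root pool
zpool attach userdata c7t1d0p2 c7t0d0p2    # mirror the user data pool
installgrub -mf /boot/grub/stage1 /boot/grub/stage2 /dev/dsk/c7t0d0s0
pkg image-update                           # the update that started all this
```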
Hopefully this was successful! :-)
I'll get to this, hopefully before the weekend.
Thanks,
Jack
Hopefully this is helpful to you. I left a few details out; please let
me know if you have any questions or want more information.
Thanks,
Jack
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss