Hello,

Since I've got my disk partitioning sorted out now, I want to move my BE
from the old disk to the new disk.

I created a new zpool, named RPOOL to distinguish it from the existing
"rpool".
I then did lucreate -p RPOOL -n new95

This completed without error, the log is at the bottom of this mail.

I have not yet dared to run luactivate. I also have not yet dared to set the
ACTIVE flag on any partition on the new disk (I had some interesting times
with that previously).  Before I set the active partition and run
luactivate, I have a few questions:

1. I somehow doubt that the lucreate process installed a boot block on the
new disk.  How can I confirm this?  Or is luactivate supposed to do that?
2. There are a number of open issues still with ZFS root.  I saw some notes
pertaining to leaving the first cylinder of the disk out from the root pool
slice.  What is that all about?
3. A remnant of the lucreate process is still in my mounts (which prevents,
for example, lumount, and previously caused problems with luactivate).
4. I see the vdev for dump got created in the new pool, but not for swap?
Is this to be expected?
5. There were notes about errors recorded in /tmp/lucopy.errors ... I've
rebooted my machine since, so I can't review those any more.  I guess I need
to run lucreate again to see whether the errors recur, and to read those
logs before they get lost again.
6. Since SHARED is an entirely independent pool, and since the purpose of
this lucreate is to move root from one disk to another, I don't see why
lucreate needed to make snapshots of the zone!
7. Despite the messages saying that the grub menu has been distributed and
populated successfully, the new boot environment has not been added to the
grub menu list.  In my experience, though, this happens during luactivate,
so I'm not concerned about it just yet.
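In case it helps anyone answer questions 1 and 4, here is what I was
planning to try myself (a sketch only; I'm assuming x86, that the new root
slice is c0d0s0 as in the log at the bottom, and a 2G swap size picked
arbitrarily):

```shell
# Question 1: install the GRUB boot block on the new disk by hand,
# in case luactivate does not do it (stage1/stage2 are the stock
# Solaris locations)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0

# Question 4: create the missing swap zvol in the new pool and add it
# (2G is an assumed size; block size matched to the system page size)
zfs create -V 2G -b $(pagesize) RPOOL/swap
swap -a /dev/zvol/dsk/RPOOL/swap
```

To make the swap permanent I'd then add a line for
/dev/zvol/dsk/RPOOL/swap to the new BE's /etc/vfstab, but I'm not sure
whether luactivate is supposed to take care of any of this on its own.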


Below are some bits showing the current status of the system:

$ zfs list -r RPOOL
NAME               USED  AVAIL  REFER  MOUNTPOINT
RPOOL             7.97G  24.0G  26.5K  /RPOOL
RPOOL/ROOT        6.47G  24.0G    18K  /RPOOL/ROOT
RPOOL/ROOT/new95  6.47G  24.0G  6.47G  /.alt.new95
RPOOL/dump        1.50G  25.5G    16K  -
/RPOOL/boot/grub $ lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
snv_94                     yes      no     no        yes    -
snv_95                     yes      yes    yes       no     -
new95                      yes      no     no        yes    -
/RPOOL/boot/grub $ luumount new95
ERROR: boot environment <new95> is not mounted
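Regarding question 3, the remnant I mean is the /.alt.new95 mount visible
in the zfs list above. What I was planning to try in order to clear it is
the following (a guess on my part, so corrections welcome):

```shell
# see what is still mounted under the alternate root
df -h | grep '\.alt'

# unmount the clone dataset left over from lucreate
zfs umount RPOOL/ROOT/new95

# after which lumount/luumount should hopefully behave again
lumount new95 /mnt
luumount new95
```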

Thank you,
  _Johan

========
For what it is worth, below is the log of the lucreate session.
/dev/dsk $ zpool create -f RPOOL c0d0s0
/dev/dsk $ timex lucreate -p RPOOL -n new95
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <snv_95> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <new95>.
Source boot environment is <snv_95>.
Creating boot environment <new95>.
Creating file systems on boot environment <new95>.
Creating <zfs> file system for </> in zone <global> on <RPOOL/ROOT/new95>.
Populating file systems on boot environment <new95>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
WARNING: The file </tmp/lucopy.errors.3488> contains a list of <2>
potential problems (issues) that were encountered while populating boot
environment <new95>.
INFORMATION: You must review the issues listed in
</tmp/lucopy.errors.3488> and determine if any must be resolved. In
general, you can ignore warnings about files that were skipped because
they did not exist or could not be opened. You cannot ignore errors such
as directories or files that could not be created, or file systems running
out of disk space. You must manually resolve any such problems before you
activate boot environment <new95>.
Creating shared file system mount points.
Creating snapshot for <SHARED/zones/sp1> on <SHARED/zones/sp1@new95>.
Creating clone for <SHARED/zones/sp1@new95> on <SHARED/zones/sp1-new95>.
Creating compare databases for boot environment <new95>.
Creating compare database for file system </>.
Updating compare databases on boot environment <new95>.
Updating compare databases on boot environment <snv_94>.
Making boot environment <new95> bootable.
Updating bootenv.rc on ABE <new95>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE
<snv_94> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <new95> in GRUB menu
Population of boot environment <new95> successful.
Creation of boot environment <new95> successful.

real       35:48.77
user        2:38.00
sys         6:12.22



-- 
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke

Afrikaanse Stap Website: http://www.bloukous.co.za

My blog: http://initialprogramload.blogspot.com