Hi Karl,

Manually cloning the root pool is difficult. We have a root pool recovery procedure that you might be able to apply, as long as the
systems are identical. I would not attempt this with LiveUpgrade
and manual tweaking.

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery

The problem is that the amount of system-specific info stored in the root
pool, plus any device differences between the machines, might be
insurmountable. (For example, the device-path-to-instance mappings in
/etc/path_to_inst are particular to the machine they were generated on.)

Solaris 10 ZFS/flash archive support is available with patches but not
for the Nevada release.

The ZFS team is working on a split-mirrored-pool feature, which might
be an option for root pool cloning in the future.
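
For reference, a sketch of how such a split might look once it ships; the
command name and syntax here are my guess at the proposed feature, not
something you can run today:

# zpool split rpool altrpool
# zpool import altrpool

The idea is that the detached halves of each mirror become a new,
importable pool in one step.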

If you're still interested in a manual process, the steps below were
attempted by another community member who moved his root pool to a
larger disk on the same system.

This is probably more than you wanted to know...

Cindy



# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
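(The listsnapshots property just makes snapshots show up in plain
'zfs list' output; it isn't required for the steps below.)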
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool@$SNAPNAME
# zfs list -t snapshot
# zfs send -R rpool@$SNAPNAME | zfs recv -vFd altrpool
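The -R send preserves the dataset hierarchy and properties; on the
receiving side, -d grafts the datasets under altrpool and -F forces a
rollback of anything already there. Next, install the boot blocks on
the new disk. On SPARC: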
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
On x86, install the GRUB boot blocks instead:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Set the bootfs property on the root pool BE.
# zpool set bootfs=altrpool/ROOT/zfsBE altrpool
# zpool export altrpool
# init 5
Remove the source disk (c1t0d0s0) and move the target disk (c1t1d0s0) to slot 0.
Insert the Solaris 10 DVD and boot single-user from it:
ok boot cdrom -s
# zpool import altrpool rpool
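(Importing altrpool under the name rpool is what renames it, so the
system comes up with a root pool called rpool on the new disk.)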
# init 0
ok boot disk1
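
Once the system is back up, a quick sanity check (my addition, not part
of the original steps) is:

# zpool status rpool
# zfs list -r rpool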

On 09/24/09 10:06, Karl Rossing wrote:
I would like to clone the configuration on a v210 with snv_115.

The current pool looks like this:

-bash-3.2$ /usr/sbin/zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 to /tmp/a 
so that I can make the changes I need prior to removing the drive and putting 
it into the new v210.

I suppose I could lucreate -n new_v210, lumount new_v210, edit what I need to, 
luumount new_v210, luactivate new_v210, zpool detach rpool c1t1d0s0, and then 
luactivate the original boot environment.
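
Spelled out, and assuming /mnt as the mount point, that sequence would be 
roughly:

# lucreate -n new_v210
# lumount new_v210 /mnt
  (edit what's needed under /mnt)
# luumount new_v210
# luactivate new_v210
# zpool detach rpool c1t1d0s0
# luactivate <original_BE>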