After installation I attached another disk (the whole disk, not a slice) to rpool to mirror the original primary one, but it got an EFI label, which I guess was my mistake. After a hard reset caused by a power outage I got cycling reboots with a zfs_page_fault panic.
Then I booted from the liveCD and messed up my rpool (while physically swapping the HDDs) and ended up with just the EFI-labeled disk in rpool. I was unable to attach the original primary disk to mirror the data back, because even though both HDDs are identical, the labels differed (EFI label vs. Partition 0 covering 100% of the HDD, type Solaris2). The non-EFI disk at least still had the swap partition defined, and I had been attaching just slice 0.
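For reference, attaching the mirror half by slice (so the disk keeps an SMI/VTOC label and stays bootable) rather than as a whole disk might look like this; the device names are just examples, not taken from the post:

```shell
# Attaching by slice keeps the SMI (VTOC) label that ZFS boot needs;
# attaching the whole disk (c1t1d0) would relabel it EFI.
# c1t0d0s0 / c1t1d0s0 are example device names -- substitute your own.
zpool attach rpool c1t0d0s0 c1t1d0s0

# After resilvering completes, install GRUB on the new mirror half
# so either disk can boot the system.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```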

1. I then created a zpool named bootpool from slice 0 of the original installation rpool disk, leaving the original Solaris2 partition and VTOC intact.
2. I replicated all rpool datasets to bootpool using a zfs send | zfs receive pipe.
3. I modified the bootfs entry in bootpool/boot/grub/menu.lst to point to bootpool/ROOT/opensolaris.
4. I set bootpool/ROOT/opensolaris to a legacy mountpoint and tried to mount it; all the other original filesystems mount fine at non-legacy mountpoints under /bootpool/...
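The steps above can be sketched roughly as follows; the snapshot name and device name are my assumptions, and exact zfs send/receive flags may differ between OpenSolaris builds:

```shell
# 1. Create the new pool on slice 0, keeping the existing Solaris2/VTOC label
#    (c1t0d0s0 is an example device name)
zpool create bootpool c1t0d0s0

# 2. Replicate the whole rpool hierarchy, snapshots included
#    (@migrate is an arbitrary snapshot name chosen for this sketch)
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs receive -Fd bootpool

# 3. Point GRUB at the new root dataset by editing the bootfs line in
#    bootpool/boot/grub/menu.lst:
#      bootfs bootpool/ROOT/opensolaris

# 4. Make the root dataset legacy-mounted so the kernel mounts it itself
zfs set mountpoint=legacy bootpool/ROOT/opensolaris
```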

With this I can load the kernel and boot archive manually from GRUB, or start the boot process via the modified menu entry, but the system panics early with:
cannot mount root on /ramdisk:a

I tried the following:
1. removed /mnt/etc/zfs/zpool.cache
2. ran bootadm update-archive -R /mnt while booted from the liveCD with bootpool/ROOT/opensolaris mounted on /mnt
3. verified there is no zpool.cache file in the boot archive

but I still get the same panic:
cannot mount root on /ramdisk:a
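The recovery attempt from the liveCD can be sketched like this; the /mnt mount point and dataset names follow the post, and the import/mount steps are my assumptions about the surrounding workflow:

```shell
# Import the pool from the liveCD environment and mount the root BE
# (legacy mountpoint, so it must be mounted explicitly)
zpool import -f bootpool
mount -F zfs bootpool/ROOT/opensolaris /mnt

# Remove the stale pool cache so the kernel rediscovers devices at boot
rm /mnt/etc/zfs/zpool.cache

# Rebuild the boot archive against the mounted BE
bootadm update-archive -R /mnt

# Unmount without exporting the pool, as described in the post
umount /mnt
```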

This behavior is similar to what happens when, booted from the liveCD, you mount the boot zpool (rpool) and then export it after unmounting, but I never exported bootpool.

Any idea what to set or check?
Is there a way to fix this installation somehow?

Thanks&Regards
This message posted from opensolaris.org
