On Thu, Jun 26, 2008 at 10:18 AM, Ethan Quach <[EMAIL PROTECTED]> wrote:
> Henry Jen wrote:
>> Hi,
>>
>> I tried to do image-update from opensolaris 2008.05 (nv86), but the command
>> failed, and the cause seems to be that it cannot create/copy the BE.
>>
>> $ pfexec pkg image-update
>> pkg: unable to create an auto snapshot. pkg recovery is disabled.
>> pkg: image-update cannot be done on live image
>>
>> The last couple of lines of image-update -v are:
>>
>> pkg:/[EMAIL PROTECTED],5.11-0.86:20080426T174943Z -> pkg:/[EMAIL PROTECTED],5.11-0.91:20080613T174335Z
>> None -> pkg:/[EMAIL PROTECTED],5.11-0.91:20080613T182501Z
>> None -> pkg:/[EMAIL PROTECTED],5.11-0.91:20080613T174340Z
>> None -> pkg:/[EMAIL PROTECTED],5.11-0.91:20080613T182505Z
>> None
>> pkg: unable to create BE None
>> pkg: attempt to mount opensolaris failed.
>> pkg: image-update cannot be done on live image
>
> Looks like it fails to create a snapshot of the running BE, which
> causes all the other errors coming from pkg. Unfortunately, the
> version of beadm in 2008.05 doesn't allow for any workarounds for
> providing better error or debug messages from beadm.
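Since the 2008.05 beadm hides the underlying ZFS error, one way to surface it is to attempt the snapshot by hand. A minimal sketch; the dataset name is an assumption taken from the 'zfs list' output later in this thread, and the command is only printed (dry run) so nothing is created until the 'echo' is removed:

```shell
# Manually attempt the snapshot that pkg/beadm tries to take, to see the
# real ZFS error message. Dataset name is an assumption from 'zfs list'.
BE_DATASET=osol/ROOT/opensolaris                     # running BE's root dataset (assumed)
SNAP_CMD="pfexec zfs snapshot ${BE_DATASET}@manual-test"
echo "$SNAP_CMD"                                     # drop the echo to actually run it
```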
> Can you provide the output of 'zfs list', and 'beadm list'?

$ zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
osol                         5.12G  3.87G    19K  /osol
osol/ROOT                    5.11G  3.87G    18K  /osol/ROOT
osol/ROOT/opensolaris        5.11G  3.87G  5.04G  legacy
osol/ROOT/[EMAIL PROTECTED]  77.0M      -  5.04G  -
pool                         28.4G  7.98G    18K  /pool
pool/ROOT                    5.10G  7.98G    18K  legacy
pool/ROOT/opensolaris-1      5.10G  7.98G  5.04G  /mnt
pool/ROOT/[EMAIL PROTECTED]  66.5M      -  5.04G  -
pool/home                    22.6G  7.98G    19K  /export/home
pool/home/henryjen           22.6G  7.98G  13.0G  /export/home/henryjen
pool/home/henryjen/foss      2.88G  7.98G  2.88G  /export/home/henryjen/foss
pool/home/henryjen/prj       6.68G  7.98G  6.68G  /export/home/henryjen/prj
pool/zones                    743M  7.98G    20K  /export/zones
pool/zones/vm1                743M  7.98G   743M  /export/zones/vm1

$ beadm list
BE            Active Active on Mountpoint Space
Name                 reboot               Used
----          ------ --------- ---------- -----
opensolaris   yes    no        legacy     5.11G
opensolaris-1 no     yes       -          5.10G

> Does <pool>/boot/grub/menu.lst exist?  If not, see below.

No.

>> It might be worth noting that I installed OpenSolaris in a partition with a
>> process roughly described at
>> http://blogs.sun.com/slowhog/entry/install_opensolaris_side_by_side.
>>
>> I did some experiments with the beadm command, and found that I cannot use
>> the same pool, but have to copy the BE to another pool. I.e., 'pfexec beadm
>> create test' says it cannot create, but 'pfexec beadm create -p pool test'
>> succeeds. However, I cannot activate the new BE on the other pool:
>>
>> $ pfexec beadm activate opensolaris-1
>> beadm: Unable to activate opensolaris-1
>>
>> Any ideas?
>
> What device are you actually booting from?

Not sure. But I guess you are right that it is booting from the UFS slice.
The GRUB menu does match the /boot/grub/menu.lst on the slice, which is
c4d0s0 in my case.
Here is the list of partitions:

partition> print
Current partition table (original):
Total disk cylinders available: 7293 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    1262 - 2454        9.14GB    (1193/0/0)  19165545
  1       swap    wu       3 -   68      517.72MB    (66/0/0)     1060290
  2     backup    wm       0 - 7292       55.87GB    (7293/0/0) 117162045
  3 unassigned    wm      69 - 1261        9.14GB    (1193/0/0)  19165545
  4 unassigned    wu       0                   0     (0/0/0)            0
  5 unassigned    wu       0                   0     (0/0/0)            0
  6 unassigned    wu       0                   0     (0/0/0)            0
  7 unassigned    wm    2455 - 7292       37.06GB    (4838/0/0)  77722470
  8       boot    wu       0 -    0        7.84MB    (1/0/0)        16065
  9 alternates    wu       1 -    2       15.69MB    (2/0/0)        32130

> From the instructions listed in that blog, I don't see an installgrub
> ever done on the slice containing the pool you created for OpenSolaris.
> This tells me that you're still booting the UFS slice with your existing
> LU BE. While this works in allowing you to choose to boot the OpenSolaris
> BE, beadm won't be happy because it expects the OpenSolaris BE to be in a
> bootable root pool with its own menu.lst. In a bootable root pool the
> menu.lst is stored in the "pool dataset" of the pool. If you're currently
> booted to your OpenSolaris BE, try this:
>
> Mount the slice containing your UFS BE somewhere, e.g. /mnt
>
> Copy the menu.lst from that slice into the pool dataset:
>
>   cp /mnt/boot/grub/menu.lst <pool>/boot/grub/menu.lst
>
> Install grub into the slice used for your pool.
> (This will make your pool the default device that's booted, no longer
> your UFS slice, but you can still boot your UFS BE by selecting it from
> the menu.)
>
>   installgrub /boot/grub/stage1 /boot/grub/stage2 <vdev_used_for_pool>

I ran 'cp /mnt/boot/grub/menu.lst osol/boot/grub/menu.lst' and then
'installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s3'; after
that, beadm list now gives:

$ beadm list
BE            Active Active on Mountpoint Space
Name                 reboot               Used
----          ------ --------- ---------- -----
opensolaris   yes    yes       legacy     5.11G
opensolaris-1 no     yes       -          5.10G

After reboot, GRUB does not show up, and it boots into OpenSolaris after a
keystroke.

> Now, depending on what your menu entry looks like for your OpenSolaris
> BE, beadm should start to work. As long as your menu entry contains
> the 'bootfs' directive with the correct dataset value, beadm should be
> happy.
>
> Let me know if this works.

Thanks, I am doing image-update now. Will see if GRUB works later.

>> PS. After destroying the /opt zfs filesystem, beadm create now simply
>> segfaults. Is there an assumption that there must be a /opt filesystem?
>
> No, there's no assumption that there must be any subordinate file
> systems. I have an instance of OpenSolaris installed with just a
> root file system and things work fine. If you send me the core
> file, I'll take a look.

Attached are two of them. Thanks for helping, Ethan.

Cheers,
Henry
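For reference, the steps Ethan describes above can be collected into one sketch. The device and pool names are the ones from this thread and are assumptions for any other system; the commands are printed rather than executed (dry run), so nothing is overwritten until each 'echo' is removed:

```shell
# Recovery steps from this thread, as a dry run. All names below are
# assumptions taken from Henry's setup; substitute your own.
UFS_SLICE=/dev/dsk/c4d0s0      # slice holding the old UFS/LU boot environment (assumed)
POOL=osol                      # ZFS root pool created for OpenSolaris (assumed)
POOL_SLICE=/dev/rdsk/c4d0s3    # raw device backing that pool (assumed)

# 1. Mount the UFS BE to reach its GRUB menu.
echo pfexec mount -F ufs "$UFS_SLICE" /mnt
# 2. Copy menu.lst into the pool dataset so beadm can manage it.
echo pfexec cp /mnt/boot/grub/menu.lst "/$POOL/boot/grub/menu.lst"
# 3. Install GRUB on the pool's slice so the pool becomes bootable.
echo pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 "$POOL_SLICE"
```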
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss
