Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-28 Thread Ben Middleton
Hi,

As a related issue to this (specifically CR 6884728) - any ideas how I should 
go about removing the old BE? When I attempt to run ludelete I get the 
following:

$ lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ------
10_05-09                   yes      no     no        yes    -
10_10-09                   yes      yes    yes       no     -


$ ludelete 10_05-09

System has findroot enabled GRUB
Checking if last BE on any disk...
ERROR: cannot mount '/.alt.10_05-09/var': directory is not empty
ERROR: cannot mount mount point /.alt.10_05-09/var device 
rpool/ROOT/s10x_u7wos_08/var
ERROR: failed to mount file system rpool/ROOT/s10x_u7wos_08/var on 
/.alt.10_05-09/var
ERROR: unmounting partially mounted boot environment file systems
ERROR: No such file or directory: error unmounting rpool/ROOT/s10x_u7wos_08
ERROR: cannot mount boot environment by name 10_05-09
ERROR: Failed to mount BE 10_05-09.
ERROR: Failed to mount BE 10_05-09.
cat: cannot open /tmp/.lulib.luclb.dsk.2797.10_05-09
ERROR: This boot environment 10_05-09 is the last BE on the above disk.
ERROR: Deleting this BE may make it impossible to boot from this disk.
ERROR: However you may still boot solaris if you have BE(s) on other disks.
ERROR: You *may* have to change boot-device order in the BIOS to accomplish 
this.
ERROR: If you still want to delete this BE 10_05-09, please use the force 
option (-f).
Unable to delete boot environment.


My zfs setup now shows this:

NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 11.4G  4.26G  39.5K  /rpool
rpool/ROOT9.15G  4.26G18K  legacy
rpool/ROOT/10_10-09   9.14G  4.26G  4.04G  /
rpool/ROOT/10_10-09@10_10-09  2.39G  -  4.10G  -
rpool/ROOT/10_10-09/var   2.71G  4.26G  1.18G  /var
rpool/ROOT/10_10-09/var@10_10-09  1.53G  -  2.11G  -
rpool/ROOT/s10x_u7wos_08  17.4M  4.26G  4.10G  /.alt.10_05-09
rpool/ROOT/s10x_u7wos_08/var  9.05M  4.26G  2.11G  /.alt.10_05-09/var
rpool/dump1.00G  4.26G  1.00G  -
rpool/export  74.6M  4.26G19K  /export
rpool/export/home 74.5M  4.26G21K  /export/home
rpool/export/home/admin   65.5K  4.26G  65.5K  /export/home/admin
rpool/swap   1G  4.71G   560M  -


It seems that the ludelete script reassigns the mountpoint for the BE to be
deleted, but then falls foul of the /var mount underneath the old BE. I tried
lumounting the old BE and checking /etc/vfstab, but there are no extra zfs
entries in there.
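
A quick way to see the stale mountpoints ludelete leaves on the old BE's
datasets (dataset names from the listing above):

$ zfs get -r mountpoint rpool/ROOT/s10x_u7wos_08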

I'm just looking for a clean way to remove the old BE, and then remove the
old snapshot, without preventing Live Upgrade from working in the future.

Many thanks,

Ben


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-28 Thread Kurt Schreiner
Hi,

On Wed, Oct 28, 2009 at 01:55:57AM -0700, Ben Middleton wrote:
 
> As a related issue to this (specifically CR 6884728) - any ideas how
> I should go about removing the old BE?

I haven't tried this on U8, but maybe the following hack I did on
sxce_125 to be able to go back to older BEs will work on U8, too?

-1014: diff -u /usr/lib/lu/lulib{.ori,}
--- /usr/lib/lu/lulib.ori   Thu Oct 22 22:42:19 2009
+++ /usr/lib/lu/lulib   Sat Oct 24 01:21:41 2009
@@ -236,6 +236,7 @@
         start=`echo $blob | /usr/bin/grep -n $lgzd_pool | head -2 | tail +2 | cut -d: -f1`
         start=`expr $start + 1`
         echo $blob | tail +$start | awk '{print $1}' | while read dev; do
+        dev=`echo $dev | sed 's/mirror.*/mirror/'`
         if [ -z $dev ]; then
                 continue;
         elif [ $dev = errors: ]; then

With this one-line hack, luactivate, lucreate and ludelete (that's
what I just tested) are working again...
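
For anyone wondering what the sed is for: newer bits apparently name
mirror vdevs mirror-0, mirror-1, ... in zpool status output, which the
parsing above doesn't expect; the added line just collapses the numbered
names back to the bare mirror the script looks for. A minimal
illustration, using a made-up zpool status line as input:

$ echo 'mirror-0  ONLINE  0 0 0' | awk '{print $1}' | sed 's/mirror.*/mirror/'
mirror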

Kurt


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-28 Thread Ben Middleton
> +        dev=`echo $dev | sed 's/mirror.*/mirror/'`

Thanks for the suggestion, Kurt. However, I'm not running a mirror on that
pool - so I'm guessing this won't help in my case.

I'll try and pick my way through the lulib script if I get any time.

Ben


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-28 Thread Jens Elkner
On Wed, Oct 28, 2009 at 01:55:57AM -0700, Ben Middleton wrote:
Hi,
  
> $ ludelete 10_05-09
>
> System has findroot enabled GRUB
> Checking if last BE on any disk...
> ERROR: cannot mount '/.alt.10_05-09/var': directory is not empty
> ERROR: cannot mount mount point /.alt.10_05-09/var device
> rpool/ROOT/s10x_u7wos_08/var
> ERROR: failed to mount file system rpool/ROOT/s10x_u7wos_08/var on
> /.alt.10_05-09/var
> ERROR: unmounting partially mounted boot environment file systems
...
> rpool/ROOT/s10x_u7wos_08      17.4M  4.26G  4.10G  /.alt.10_05-09
> rpool/ROOT/s10x_u7wos_08/var  9.05M  4.26G  2.11G  /.alt.10_05-09/var

luumount /.alt.10_05-09
mount -p | grep /.alt.10_05-09
# if it lists something (e.g. tmp, swap, etc.) reboot first and then:

zfs set mountpoint=/mnt rpool/ROOT/s10x_u7wos_08
zfs mount rpool/ROOT/s10x_u7wos_08
rm -rf /mnt/var/* /mnt/var/.???*
zfs umount /mnt

# now that should work
lumount 10_05-09 /mnt
luumount /mnt

# if not, send the output of mount -p | grep ' /mnt'

Have fun,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-28 Thread dick hoogendijk

Ben Middleton wrote:

> I'm just looking for a clean way to remove the old BE, and then remove the
> old snapshot, without preventing Live Upgrade from working in the future.

Remove the right line from /etc/lutab.
Remove the ICF.<number> and INODE.<number> files (where <number> is the
same as on the BE's line in /etc/lutab) from the /etc/lu directory.
You'll notice with lustatus that the BE is gone.

Remove the ZFS datasets and snapshots for the BE you just deleted.

I've done this hack quite a few times in the past and it has always worked
fine. It's not supported by Sun, though - a rough sketch of the steps
follows below.
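
Sketch only, using the 10_05-09 BE from this thread - the BE number
(2 below) is an assumption, so check the first field of the BE's lines
in /etc/lutab before removing anything:

# 1) find the BE's number in /etc/lutab (first field of its lines)
grep 10_05-09 /etc/lutab
# 2) drop all lutab lines for that number (here assumed to be 2)
cp /etc/lutab /etc/lutab.bak
grep -v '^2:' /etc/lutab.bak > /etc/lutab
# 3) remove the matching LU bookkeeping files
rm /etc/lu/ICF.2 /etc/lu/INODE.2
# 4) remove the old BE's datasets, then the leftover snapshots
zfs destroy -r rpool/ROOT/s10x_u7wos_08
zfs destroy rpool/ROOT/10_10-09@10_10-09
zfs destroy rpool/ROOT/10_10-09/var@10_10-09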

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.03 b125
+ All that's really worth doing is what we do for others (Lewis Carroll)



Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-20 Thread Renil Thomas
Were you able to get more insight about this problem?
U7 did not encounter such problems.


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-20 Thread Philip Brown
Quote: cindys
3. Boot failure from a previous BE if either #1 or #2 failure occurs.

#1 and #2 were not relevant in my case. I just found I could not boot into
the old u7 BE. I am happy with the workaround shinsui points out, so this
is purely for your information.

Quote: renil82
U7 did not encounter such problems.

My problem occurred on a live upgrade from u7 to u8.
Again, only for information purposes, as the workaround is sufficient.


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-19 Thread Cindy Swearingen


We are working on evaluating all the issues and will get problem
descriptions and resolutions posted soon. I've asked some of you to
contact us directly to provide feedback and hope those wheels are
turning.

So far, we have these issues:

1. Boot failure after LU with a separate /var dataset.
This is CR 6884728.
2. LU failure after s10u8 LU with zones.
3. Boot failure from a previous BE if either #1 or #2 failure
occurs.

If you have a support contract, the best resolution is to open a service 
ticket so these issues can be escalated.


If not, feel free to contact me directly with additional symptoms and/or
workarounds.

Thanks,

Cindy

On 10/17/09 09:24, dick hoogendijk wrote:

> On Sat, 2009-10-17 at 08:11 -0700, Philip Brown wrote:
>> same problem here on sun x2100 amd64
>
> It's a bootblock issue. If you really want to get back to u6 you have to
> installgrub /boot/grub/stage1 /boot/grub/stage2 from the update 6 image,
> so mount it (with lumount or, easier, with zfs mount) and make sure you
> take the stage1 and stage2 from that update.
> ***WARNING*** after doing so, your u6 will boot, but your u8 will not.
> When activating update 8, all GRUB items are synced; that way all BEs
> are bootable. That's the way it's supposed to be. Maybe something went
> wrong and only the new u8 BE understands the new bootblocks.



Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-19 Thread Paul B. Henson
On Sat, 17 Oct 2009, dick hoogendijk wrote:

> It's a bootblock issue. If you really want to get back to u6 you have to
> installgrub /boot/grub/stage1 /boot/grub/stage2 from the update 6 image,
> so mount it (with lumount or, easier, with zfs mount) and make sure you
> take the stage1 and stage2 from that update. ***WARNING*** after doing
> so, your u6 will boot, but your u8 will not. When activating update 8,
> all GRUB items are synced; that way all BEs are bootable. That's the
> way it's supposed to be. Maybe something went wrong and only the new
> u8 BE understands the new bootblocks.

I restored the U6 grub, and sure enough, I was able to boot my U6 BE again.
However, I was also still able to boot the U8 BE. Thanks much, I'll pass
this info on to my open support ticket and see what they have to say.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-17 Thread Philip Brown
same problem here on sun x2100 amd64

I started with a core installation of u7, with only the patches applied
that are outlined in the live upgrade doc 206844
( http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1 ).

Also as stated in the doc:
pkgrm SUNWlucfg SUNWluu SUNWlur
and then, from the 10/09 DVD:
pkgadd -d <path> SUNWlucfg SUNWlur SUNWluu

more info in attached zfsinfo.txt

Last login: Fri Oct 16 14:47:14 2009 from 192.168.1.64
Sun Microsystems Inc.   SunOS 5.10  Generic January 2005
[phi...@unknown] [3:16pm] [~]  zpool status
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c0t0d0s0  ONLINE   0 0 0
c0t1d0s0  ONLINE   0 0 0

errors: No known data errors
[phi...@unknown] [3:17pm] [~] # lufslist -n s10x_u7wos_08
   boot environment name: s10x_u7wos_08

Filesystem  fstypedevice size Mounted on  Mount Options
---   --- --
/dev/zvol/dsk/rpool/swap swap536870912 -   -
rpool/ROOT/s10x_u7wos_08 zfs 522009600 /   -
rpool   zfs  155414159360 /rpool  -
rpool/exportzfs  152577344512 /export -
rpool/export/home   zfs  152577325056 /export/home-
[phi...@unknown] [3:17pm] [~] # luactivate s10x_u7wos_08
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE sol-10-u8-x86

Setting failsafe console to ttya.
Generating boot-sign for ABE s10x_u7wos_08
Generating partition and slice information for ABE s10x_u7wos_08
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

 mount -Fzfs /dev/dsk/c0t0d0s0 /mnt

3. Run luactivate utility with out any arguments from the Parent boot 
environment root slice, as shown below:

 /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File /etc/lu/installgrub.findroot propagation successful
File /etc/lu/stage1.findroot propagation successful
File /etc/lu/stage2.findroot propagation successful
File /etc/lu/GRUB_capability propagation successful
Deleting stale GRUB loader from all BEs.
File /etc/lu/installgrub.latest deletion successful
File /etc/lu/stage1.latest deletion successful
File /etc/lu/stage2.latest deletion successful
Activation of boot environment s10x_u7wos_08 successful.
[phi...@unknown] [3:17pm] [~] # lufslist -n s10x_u7wos_08
   boot environment name: s10x_u7wos_08
   This boot environment will be active on next system boot.

Filesystem  fstypedevice size Mounted on  Mount Options
---   --- --
/dev/zvol/dsk/rpool/swap swap536870912 -   -
rpool/ROOT/s10x_u7wos_08 zfs 522009600 /   -
rpool   zfs  155414215168 /rpool  -
rpool/exportzfs  152577344512 /export -
rpool/export/home   zfs  152577325056 /export/home-

[phi...@unknown] [3:18pm] [~] # lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status

Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-17 Thread dick hoogendijk
On Sat, 2009-10-17 at 08:11 -0700, Philip Brown wrote:
> same problem here on sun x2100 amd64

It's a bootblock issue. If you really want to get back to u6 you have to
installgrub /boot/grub/stage1 /boot/grub/stage2 from the update 6 image,
so mount it (with lumount or, easier, with zfs mount) and make sure you
take the stage1 and stage2 from that update.
***WARNING*** after doing so, your u6 will boot, but your u8 will not.
When activating update 8, all GRUB items are synced; that way all BEs
are bootable. That's the way it's supposed to be. Maybe something went
wrong and only the new u8 BE understands the new bootblocks.
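
Concretely, something like the sketch below - the u6 dataset name and
the disk devices are assumptions here, so substitute your own u6 root
dataset and the disk(s) of your root pool:

# mount the u6 BE's root dataset somewhere handy
zfs set mountpoint=/mnt rpool/ROOT/s10x_u6wos_07b   # dataset name assumed
zfs mount rpool/ROOT/s10x_u6wos_07b
# write the u6 stage1/stage2 to every disk of the root pool
installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c0t0d0s0
installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c0t1d0s0
# put the mountpoint back
zfs umount /mnt
zfs set mountpoint=/ rpool/ROOT/s10x_u6wos_07b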



[zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-16 Thread Paul B. Henson

I used live upgrade to update a U6+lots'o'patches system to vanilla U8. I
ran across CR 6884728, which results in extraneous lines in vfstab
preventing successful boot. I logged in using maintenance mode and deleted
those lines, and the U8 BE came up ok. I wasn't sure if there were any
other problems from that, so I tried to activate and boot back into my
previous U6 BE. That now fails with this error:

  ***************************************************
  *  This device is not bootable!                   *
  *  It is either offlined or detached or faulted.  *
  *  Please try to boot from a different device.    *
  ***************************************************


NOTICE:
spa_import_rootpool: error 22

Cannot mount root on /p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@0,0:a
fstype zfs

panic[cpu0]/thread=fbc283a0: vfs_mountroot: cannot mount root

I can still boot fine into the new U8 BE, but so far have found no way to
recover and boot into my previously existing U6 BE.

I booted both BE's in verbose mode, the working one:

SunOS Release 5.10 Version Generic_141445-09 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
[...]
sd44 at marvell88sx3: target 7 lun 0
sd44 is /p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@7,0
/p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@7,0 (sd44) online
root on ospool/ROOT/s10u8 fstype zfs

and the failing one:

SunOS Release 5.10 Version Generic_141415-10 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
[...]
sd44 at marvell88sx3: target 7 lun 0
sd44 is /p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@7,0
/p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@7,0 (sd44) online

NOTICE:
spa_import_rootpool: error 22

Cannot mount root on /p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@0,0:a
fstype zfs



/p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@0,0:a is c3t0d0, which is part
of my root pool:

NAME  STATE READ WRITE CKSUM
ospoolONLINE   0 0 0
  mirror  ONLINE   0 0 0
c3t0d0s0  ONLINE   0 0 0
c3t4d0s0  ONLINE   0 0 0


Any idea what's going on? Why is the U6 BE trying to mount a disk partition
instead of the appropriate zfs filesystem? Here's the grub config if that
helps:

#----- patch-20090907 - ADDED BY LIVE UPGRADE - DO NOT EDIT -----

title patch-20090907
findroot (BE_patch-20090907,0,a)
bootfs ospool/ROOT/patch-20090907
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title patch-20090907 failsafe
findroot (BE_patch-20090907,0,a)
bootfs ospool/ROOT/patch-20090907
kernel /boot/multiboot -s
module /boot/x86.miniroot-safe

#----- patch-20090907 -- END LIVE UPGRADE -----
#----- s10u8 - ADDED BY LIVE UPGRADE - DO NOT EDIT -----

title s10u8
findroot (BE_s10u8,0,a)
bootfs ospool/ROOT/s10u8
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title s10u8 failsafe
findroot (BE_s10u8,0,a)
bootfs ospool/ROOT/s10u8
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe

#----- s10u8 -- END LIVE UPGRADE -----


Thanks for any help...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-16 Thread Jens Elkner
On Fri, Oct 16, 2009 at 07:36:04PM -0700, Paul B. Henson wrote:
 
> I used live upgrade to update a U6+lots'o'patches system to vanilla U8. I
> ran across CR 6884728, which results in extraneous lines in vfstab
> preventing successful boot. I logged in using maintenance mode and deleted

Having a look at http://iws.cs.uni-magdeburg.de/~elkner/luc/solaris-upgrade.txt
shouldn't hurt ;-)

> those lines, and the U8 BE came up ok. I wasn't sure if there were any
> other problems from that, so I tried to activate and boot back into my
> previous U6 BE. That now fails with this error:
>
>   ***************************************************
>   *  This device is not bootable!                   *
>   *  It is either offlined or detached or faulted.  *
>   *  Please try to boot from a different device.    *
>   ***************************************************
>
> NOTICE:
> spa_import_rootpool: error 22
>
> Cannot mount root on /p...@1,0/pci1022,7...@4/pci11ab,1...@1/d...@0,0:a
> fstype zfs
>
> panic[cpu0]/thread=fbc283a0: vfs_mountroot: cannot mount root
>
> I can still boot fine into the new U8 BE, but so far have found no way to
> recover and boot into my previously existing U6 BE.

Hmm - I haven't done thumper upgrades yet, but on sparc there is no
problem booting into the old BE as long as the zpool hasn't been
upgraded to U8's v15. So the first thing to check is whether the pool
is still at <= v10 (U7 used v10, not sure about U6).
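
A quick check (the pool name rpool is an assumption - use whatever
zpool list shows):

zpool get version rpool   # current on-disk format version of the pool
zpool upgrade             # lists pools still below the latest version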
  
Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768