This is still bugging me.
At home, upgrading b103 -> b115:
jana:~> zpool status
  pool: jana
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        jana        ONLINE       0     0     0
          c1d0s0    ONLINE       0     0     0

errors: No known data errors

  pool: raid
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed after 1h40m with 0 errors on Mon May 18 17:50:12 2009
config:

        NAME        STATE     READ WRITE CKSUM
        raid        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0

errors: No known data errors
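(As an aside, the status/action notice on 'raid' is only about the pool's on-disk version and is presumably unrelated to the problem below. Clearing it would just be the usual pair:

  zpool upgrade -v      # list supported on-disk versions and what each adds
  zpool upgrade raid    # move this pool to the newest supported version

with the caveat the notice itself gives: once upgraded, the pool can't be imported on older builds.)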
jana:~> zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
jana                    10.1G  62.7G    40K  /jana
jana/ROOT               7.10G  62.7G    18K  legacy
jana/ROOT/snv_103       6.99G  62.7G  6.95G  /
jana/dump               1.00G  62.7G  1.00G  -
jana/swap                  2G  64.7G    16K  -
raid                     878G   952G  67.1K  /raid
raid/applications       17.7G   952G  9.41G  /raid/applications
raid/backup             75.4G   952G  36.7K  /raid/backup
raid/backup/bender      32.1G   952G  32.1G  /raid/backup/bender
raid/backup/betty       8.15G   952G  8.15G  /raid/backup/betty
raid/backup/holly       8.34G   952G  8.34G  /raid/backup/holly
raid/backup/jana        2.18G   952G   331M  /raid/backup/jana
raid/backup/zoe         24.7G   952G  24.7G  /raid/backup/zoe
raid/database           11.7G   952G  11.7G  /raid/database
raid/drivers            2.31G   952G  1.54G  /raid/drivers
raid/dvds               2.87G   952G  2.87G  /raid/dvds
raid/ebooks             3.70G   952G  2.75G  /raid/ebooks
raid/emulators          1.72G   952G  1.72G  /raid/emulators
raid/fonts               729K   952G   729K  /raid/fonts
raid/forex              1.73G   952G  1.69G  /raid/forex
raid/games              24.4G   952G  22.6G  /raid/games
raid/home               61.2G   952G  32.0K  /raid/home
raid/home/bridget       9.28G   952G  8.05G  /raid/home/bridget
raid/home/martin        52.0G   952G  34.0G  /raid/home/martin
raid/management          375K   952G  37.5K  /raid/management
raid/movies             15.5G   952G  15.3G  /raid/movies
raid/music               169M   952G   169M  /raid/music
raid/operating_systems  80.7G   952G  38.0G  /raid/operating_systems
raid/people             91.7M   952G  87.4M  /raid/people
raid/photos             6.51G   952G  6.39G  /raid/photos
raid/pictures           9.05G   952G  8.93G  /raid/pictures
raid/software           28.7G   952G  22.1G  /raid/software
raid/temp               8.34G   952G  6.72G  /raid/temp
raid/tv_shows            451G   952G   451G  /raid/tv_shows
jana:~>
jana:/etc# lofiadm -a /raid/operating_systems/Solaris/SXCE/sol-nv-b115-x86-dvd.iso
/dev/lofi/1
jana:/etc# mount -F hsfs -o ro /dev/lofi/1 /mnt
jana:/etc# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
snv_103                    yes      yes    yes       no     -
jana:/etc#
jana:/etc# zfs snapshot -r jana/ROOT/snv_103@preupgrade
jana:/etc# zfs list -t all -r jana
NAME                           USED  AVAIL  REFER  MOUNTPOINT
jana                          9.96G  62.9G    40K  /jana
jana/ROOT                     6.95G  62.9G    18K  legacy
jana/ROOT/snv_103             6.95G  62.9G  6.95G  /
jana/ROOT/snv_103@preupgrade      0      -  6.95G  -
jana/dump                     1.00G  62.9G  1.00G  -
jana/swap                        2G  64.9G    16K  -
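That @preupgrade snapshot is purely a safety net. If the upgrade were to mangle the root dataset, recovery would presumably be to boot failsafe media and roll back, something like:

  zfs rollback jana/ROOT/snv_103@preupgrade   # add -r to also destroy any newer snapshots

(a sketch from the standard zfs syntax, not something I actually had to do).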
jana:/etc# /mnt/Solaris_11/Tools/Installers/
liveupgrade20* solarisn*
jana:/etc# /mnt/Solaris_11/Tools/Installers/liveupgrade20
...
jana:/etc# lucreate -n snv_115
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <snv_103> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <snv_115>.
Source boot environment is <snv_103>.
Creating boot environment <snv_115>.
Cloning file systems from boot environment <snv_103> to create boot environment <snv_115>.
Creating snapshot for <jana/ROOT/snv_103> on <jana/ROOT/snv_103@snv_115>.
Creating clone for <jana/ROOT/snv_103@snv_115> on <jana/ROOT/snv_115>.
Setting canmount=noauto for </> in zone <global> on <jana/ROOT/snv_115>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE
<snv_115> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <snv_115> in GRUB menu
Population of boot environment <snv_115> successful.
Creation of boot environment <snv_115> successful.
jana:/etc# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
snv_103                    yes      yes    yes       no     -
snv_115                    yes      no     no        yes    -
jana:/etc# zfs list -t all -r jana
NAME                           USED  AVAIL  REFER  MOUNTPOINT
jana                          9.99G  62.9G    40K  /jana
jana/ROOT                     6.98G  62.9G    18K  legacy
jana/ROOT/snv_103             6.98G  62.9G  6.95G  /
jana/ROOT/snv_103@preupgrade  33.9M      -  6.95G  -
jana/ROOT/snv_103@snv_115     83.5K      -  6.95G  -
jana/ROOT/snv_115              169K  62.9G  6.95G  /tmp/.alt.luupdall.22680
jana/dump                     1.00G  62.9G  1.00G  -
jana/swap                        2G  64.9G    16K  -
jana:/etc#
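The listing shows what lucreate actually does on a ZFS root: a snapshot of the running BE plus a writable clone of it, which is why creating a BE is nearly instant and costs almost no space. Confirming the relationship should just be the standard zfs property (a sketch, I didn't bother here):

  zfs get origin jana/ROOT/snv_115   # should report jana/ROOT/snv_103@snv_115

Now the actual upgrade: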
jana:/etc# luupgrade -u -n snv_115 -s /mnt
System has findroot enabled GRUB
No entry for BE <snv_115> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
52479 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_11/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <snv_115>.
Checking for GRUB menu on ABE <snv_115>.
Saving GRUB menu on ABE <snv_115>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <snv_115>.
Performing the operating system upgrade of the BE <snv_115>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 2% completed
...
And it's running fine, without needing a 'zpool export raid' to export my data pool first.
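(For completeness, getting from a finished upgrade to actually running b115 is the standard pair:

  luactivate snv_115   # make the new BE the default on next boot
  init 6               # Live Upgrade wants init/shutdown, not 'reboot'

nothing unusual there.)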
Yet at work, on ANY of our servers, it fails. Every time.
This output is for our production server zeus, though the problem also happens on athena and artemis, which I posted about in the past. I upgraded those two yesterday using the 'zpool export' workaround I discovered, so I can't reproduce their output now, but here it is from our production server.
(I'm masking the pool name: it is a distinctive name, the name of our company, and I don't want any of this showing up in searches :).
zeus:~# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
snv_101                    yes      yes    yes       no     -
snv_115                    yes      no     no        yes    -
zeus:~# zpool status
  pool: **datapool
 state: ONLINE
 scrub: resilver completed after 42h18m with 0 errors on Thu Jun 18 05:28:49 2009
config:

        NAME            STATE     READ WRITE CKSUM
        **datapool      ONLINE       0     0     0
          raidz2        ONLINE       0     0     0
            c0t0d0      ONLINE       0     0     0  2.05G resilvered
            c1t8d0      ONLINE       0     0     0  1.98G resilvered
            c0t1d0      ONLINE       0     0     0   610G resilvered
            c1t9d0      ONLINE       0     0     0  1.78G resilvered
            c0t2d0      ONLINE       0     0     0  2.05G resilvered
            c1t10d0     ONLINE       0     0     0  1.98G resilvered
            c0t3d0      ONLINE       0     0     0  1.90G resilvered
            c1t11d0     ONLINE       0     0     0  1.78G resilvered
            c0t4d0      ONLINE       0     0     0  2.05G resilvered
            c1t12d0     ONLINE       0     0     0  1.98G resilvered
            c0t5d0      ONLINE       0     0     0  1.90G resilvered
            c1t13d0     ONLINE       0     0     0  1.78G resilvered
            c0t6d0      ONLINE       0     0     0  2.06G resilvered
            c1t14d0     ONLINE       0     0     0  1.97G resilvered
            c0t7d0      ONLINE       0     0     0  1.89G resilvered
            c1t15d0     ONLINE       0     0     0  1.77G resilvered

errors: No known data errors

  pool: zeus
 state: ONLINE
 scrub: scrub completed after 0h12m with 0 errors on Tue Jun 16 11:21:18 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zeus        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4d0s0  ONLINE       0     0     0
            c5d0s0  ONLINE       0     0     0

errors: No known data errors
zeus:~#
So let's try the upgrade:
zeus:~# luupgrade -u -n snv_115 -s /mnt
System has findroot enabled GRUB
No entry for BE <snv_115> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
52479 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_11/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <snv_115>.
Checking for GRUB menu on ABE <snv_115>.
Saving GRUB menu on ABE <snv_115>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <snv_115>.
Performing the operating system upgrade of the BE <snv_115>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
ERROR: Installation of the packages from this media of the media failed;
pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Generating upgrade actions
WARNING: SUNWlang-en depends on SUNWlang-enUS, which is not selected
ERROR: No upgradeable file systems found at specified mount point.
Restoring GRUB menu on ABE <snv_115>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successful.
Configuring failsafe for system.
Failsafe configuration is complete.
The Solaris upgrade of the boot environment <snv_115> failed.
Installing failsafe
Failsafe install is complete.
zeus:~#
The problem is clearly "ERROR: No upgradeable file systems found at specified
mount point."
But WHY? What is it about the data pool that confuses it? The sheer number of filesystems we have (over 200), or of snapshots (over a million)?
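If anyone has ideas on where to dig: the obvious next checks (a sketch; I haven't confirmed that any of these pinpoints it) would be whether Live Upgrade can still enumerate and mount the ABE at all:

  lufslist snv_115    # list the file systems LU believes belong to the BE
  lumount snv_115 /a  # mount the ABE at /a and inspect it by hand
  luumount snv_115    # unmount it again when done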
The ONLY workaround I know of is 'zpool export datapool', where datapool is whatever the data pool is named on the server in question. This worked to upgrade both athena and artemis from snv_101 to snv_103, and again yesterday from snv_103 to snv_115.
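In full, the workaround amounts to this (a sketch; substitute the real pool name):

  zpool export datapool              # detach the data pool so luupgrade never sees it
  luupgrade -u -n snv_115 -s /mnt    # the upgrade then succeeds
  zpool import datapool              # reattach the data pool afterwards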
This bug has been around for a while now; should it be reported? Or am I simply doing something wrong? Exporting the pool is fine on the backup hosts, but NOT fine on our production server...
Note: I AM CCing this posting to the 'install' forums, as someone suggested they might know better what the problem is.