Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Enda O'Connor wrote:
> Hi,
> I installed a ZFS root with a separate /var on SUNWCall, and when I run
> lucreate/luactivate followed by "shutdown -y -g0 -i6", the system goes into
> maintenance mode due to LU not handling the separate /var. The /var is the
> supported one, inside the root dataset:
>
>   dataos/ROOT/solaris10_6       4.64M  25.7G  3.36G  /
>   dataos/ROOT/solaris10_6/var    564K  25.7G  76.1M  /var
>
> I have logged CR 6891469, "cannot boot a new zfs based BE created using
> latest Live Upgrade patch when it has separate /var", for this issue.

All my machines (ZFS root) have separate /var datasets. I upgraded all of
them to Solaris 10 U8 last week without any issue... except the machines
with Solaris Zones (the same problem I documented three weeks ago or so).

I don't have a maintenance contract with Sun, but I would like to know about
progress resolving this; I was wondering if I could be kept in the loop. My
zones machines are waiting for this to be solved :-(. No system upgrade (to
Solaris 10 U8), and I am even holding back regular patching, because I can
not create new BEs :-(.

--
Jesus Cea Avion - j...@jcea.es - http://www.jcea.es/
jabber / xmpp:j...@jabber.org
"Things are not so easy" / "My name is Dump, Core Dump"
"El amor es poner tu felicidad en la felicidad de otro" ("Love is putting
your happiness in the happiness of another") - Leibniz

___
zones-discuss mailing list
zones-discuss@opensolaris.org
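The layout CR 6891469 complains about can be detected mechanically. A minimal sketch, assuming output in the style of `zfs list -H -o name` is piped in; the dataset names are taken from Enda's report, and the function name is my own:

```shell
# Sketch: check whether a BE root dataset has a separate child /var,
# the layout CR 6891469 says the latest LU patch cannot boot.
# The parsing is plain awk; on a live Solaris system you would feed it
# the output of "zfs list -H -o name".
has_separate_var() {
  be_root="$1"   # e.g. dataos/ROOT/solaris10_6
  awk -v ds="${be_root}/var" '$1 == ds { found = 1 } END { exit !found }'
}

# Example with the layout from the report:
printf '%s\n' 'dataos/ROOT/solaris10_6' 'dataos/ROOT/solaris10_6/var' |
  has_separate_var dataos/ROOT/solaris10_6 && echo 'separate /var'
# prints "separate /var"
```

On an affected machine this would tell you, before running lucreate, whether you are in the configuration the CR describes.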
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Hi,
I installed a ZFS root with a separate /var on SUNWCall, and when I run
lucreate/luactivate followed by "shutdown -y -g0 -i6", the system goes into
maintenance mode due to LU not handling the separate /var. The /var is the
supported one, inside the root dataset:

  dataos/ROOT/solaris10_6       4.64M  25.7G  3.36G  /
  dataos/ROOT/solaris10_6/var    564K  25.7G  76.1M  /var

I have logged CR 6891469, "cannot boot a new zfs based BE created using
latest Live Upgrade patch when it has separate /var", for this issue.

Enda

Jesus Cea wrote:
> sriman wrote:
>> I am suspecting that your child datasets inside the zone are creating
>> the problem. I could not look into this today. Tomorrow I will create a
>> similar configuration for myself and try to reproduce the issue. Will
>> get back to you tomorrow.
>
> Any progress? I have stopped patching my systems because I can not
> create new BEs :-(.
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
LiveUpgrade in s10u8 is broken, imho. My s10 with a couple of sparse zones
cannot be upgraded either, with exactly the same errors. Mind you: my zones
ARE part of the BE path, so that makes no difference.

I then used the LU packages from s10u7 to lucreate and luupgrade to s10u8,
which worked very well. The luactivate did not, however. :-( Maybe I should
have installed the newer LU packages and used the latest luactivate; I may
test this.

Still, I am very disappointed that this upgrade with zones is so difficult,
while it should be very easy on ZFS. I hope it will be resolved.
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
> LiveUpgrade in s10u8 is broken, imho. My s10 with a couple of sparse
> zones cannot be upgraded either, with exactly the same errors. Mind you:
> my zones ARE part of the BE path, so that makes no difference. I then
> used the LU packages from s10u7 to lucreate and luupgrade to s10u8, which
> worked very well. The luactivate did not, however. :-( Maybe I should
> have installed the newer LU packages and used the latest luactivate. I
> may test this. Still, I am very disappointed that this upgrade with zones
> is so difficult, while it should be very easy on ZFS. I hope it will be
> resolved.

We certainly tested this and we're certain it worked; not using the s10u8
upgrade tools may cause certain issues.

How did the upgrade fail? How did luactivate fail? Did the command fail, or
did rebooting (init 6) fail? If the latter, try running "/etc/init.d/lu
stop" and catch the errors.

Casper
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
On Sun, 11 Oct 2009 12:29:55 +0200 casper@sun.com wrote:
> We certainly tested this and we're certain it worked; not using the s10u8
> upgrade tools may cause certain issues.

I think you misunderstood me. I'm sure you tested it, but it does NOT work.
The LU packages from s10u8 fail in every aspect. lucreate can't even create
a new BE, so what luupgrade does cannot be tested, nor can I test what
luactivate (from u8) does. Other people report errors with this program too.

Only after finding out that the u8 packages don't work did I reinstall the
LU packages from s10u7 and create and upgrade a new BE with them. That
finished without any errors! Everything went the way it should (at least,
that is what can be read in the /var/sadm/system/... files).

It was only after I also used luactivate (from s10u7) that things got
screwed up and left me with an unbootable system. It turned out luactivate
had ruined the boot blocks, so an installgrub solved this.

But since I got some attention (for which I am very grateful!) I will start
over and try to catch the errors on screen. It may help.

> How did the upgrade fail?

As said, the upgrade went very well on an ABE created with
lucreate/luupgrade from s10u7. lucreate from s10u8 fails. I'll send the
screens later on.

> How did luactivate fail? Did the command fail, or did rebooting (init 6)
> fail?

As I said, I can understand that luactivate from an older version might make
things bad. It did in my case. luactivate from the new version has yet to be
tested on an updated ABE. I'll come back on it.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carroll)
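The installgrub recovery Dick mentions can be spelled out. A sketch only, assuming an x86 box booted from install or failsafe media; c0d0s0 is a placeholder for your actual boot disk slice, not taken from the thread:

```shell
# Reinstall the GRUB boot blocks that luactivate damaged.
# /boot/grub/stage1 and stage2 are the standard Solaris 10 x86 paths;
# substitute the raw device of your own root pool slice for c0d0s0.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
```

If the root pool is mounted elsewhere (e.g. under /a from failsafe), the stage files would be taken from that mounted BE instead.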
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
On Sun, 11 Oct 2009 14:22:23 +0200 dick hoogendijk <d...@nagual.nl> wrote:
> I'll come back on it.

Here it is. It's a long message, but worth analyzing, I hope.

arwen# zfs list
NAME                                    MOUNTPOINT
rpool                                   /rpool
rpool/ROOT                              legacy
rpool/ROOT/daffy                        /
rpool/ROOT/daffy@goofy                  -
rpool/ROOT/daffy/zones                  /zones
rpool/ROOT/daffy/zones@daffy            -
rpool/ROOT/daffy/zones@goofy            -
rpool/ROOT/daffy/zones/midgard-daffy    /zones/midgard-daffy
rpool/ROOT/daffy/zones/shire-daffy      /zones/shire-daffy
rpool/ROOT/daffy/zones/yanta-daffy      /zones/yanta-daffy
rpool/ROOT/goofy                        /
rpool/ROOT/goofy/zones                  /zones
rpool/ROOT/goofy/zones/midgard          /zones/midgard
rpool/ROOT/goofy/zones/midgard@goofy    -
rpool/ROOT/goofy/zones/shire            /zones/shire
rpool/ROOT/goofy/zones/shire@goofy      -
rpool/ROOT/goofy/zones/yanta            /zones/yanta
rpool/ROOT/goofy/zones/yanta@goofy      -
rpool/dump                              -
rpool/export                            /export
rpool/export/home                       /export/home
rpool/swap                              -

arwen# lofiadm -a /export/iso/s10u8.iso
/dev/lofi/1
arwen# mount -F hsfs -o ro /dev/lofi/1 /iso
arwen# cd /iso/Solaris_10/Tools/Installers
arwen# ./liveupgrade20
arwen# umount /iso
arwen# lofiadm -d /dev/lofi/1

=== LU packages from S10u8 ===

arwen# lustatus
Boot Environment  Is        Active  Active     Can     Copy
Name              Complete  Now     On Reboot  Delete  Status
----------------- --------- ------- ---------- ------- ----------
daffy             yes       no      no         yes     -
goofy             yes       yes     yes        no      -

arwen# lucreate -n s10u8
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment goofy file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment s10u8.
Source boot environment is goofy.
Creating boot environment s10u8.
Cloning file systems from boot environment goofy to create boot
environment s10u8.
Creating snapshot for rpool/ROOT/goofy on rpool/ROOT/goofy@s10u8.
Creating clone for rpool/ROOT/goofy@s10u8 on rpool/ROOT/s10u8.
Setting canmount=noauto for / in zone global on rpool/ROOT/s10u8.
Creating snapshot for rpool/ROOT/goofy/zones on rpool/ROOT/goofy/zones@s10u8.
Creating clone for rpool/ROOT/goofy/zones@s10u8 on rpool/ROOT/s10u8/zones.
Setting canmount=noauto for /zones in zone global on rpool/ROOT/s10u8/zones.
Creating snapshot for rpool/ROOT/goofy/zones/shire on
rpool/ROOT/goofy/zones/shire@s10u8.
Creating clone for rpool/ROOT/goofy/zones/shire@s10u8 on
rpool/ROOT/s10u8/zones/shire-s10u8.
cannot mount 'rpool/ROOT/s10u8/zones/shire-s10u8': legacy mountpoint
use mount(1M) to mount this filesystem
ERROR: Failed to mount dataset rpool/ROOT/s10u8/zones/shire-s10u8
legacy is not an absolute path.
Creating snapshot for rpool/ROOT/goofy/zones/midgard on
rpool/ROOT/goofy/zones/midgard@s10u8.
Creating clone for rpool/ROOT/goofy/zones/midgard@s10u8 on
rpool/ROOT/s10u8/zones/midgard-s10u8.
cannot mount 'rpool/ROOT/s10u8/zones/midgard-s10u8': legacy mountpoint
use mount(1M) to mount this filesystem
ERROR: Failed to mount dataset rpool/ROOT/s10u8/zones/midgard-s10u8
legacy is not an absolute path.
Creating snapshot for rpool/ROOT/goofy/zones/yanta on
rpool/ROOT/goofy/zones/yanta@s10u8.
Creating clone for rpool/ROOT/goofy/zones/yanta@s10u8 on
rpool/ROOT/s10u8/zones/yanta-s10u8.
cannot mount 'rpool/ROOT/s10u8/zones/yanta-s10u8': legacy mountpoint
use mount(1M) to mount this filesystem
ERROR: Failed to mount dataset rpool/ROOT/s10u8/zones/yanta-s10u8
legacy is not an absolute path.
WARNING: split filesystem / file system type zfs cannot inherit mount point
options - from parent filesystem / file type - because the two file systems
have different types.
Saving existing file /boot/grub/menu.lst in top level dataset for BE daffy
as mount-point//boot/grub/menu.lst.prev.
Saving existing file /boot/grub/menu.lst in top level dataset for BE s10u8
as mount-point//boot/grub/menu.lst.prev.
File /boot/grub/menu.lst propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE s10u8 in GRUB menu
Population of boot environment s10u8 successful.
Creation of boot environment s10u8 successful.

As you can see, there are some errors, BUT the BE creation is "successful".
This does not feel right, but OK. I will go ahead, since a zfs list seems
to be OK too.

arwen# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
rpool                          /rpool
rpool/ROOT
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
And this is bad too:

arwen# luumount s10u8
ERROR: umount: /a/var/run busy
ERROR: cannot unmount /a/var/run
ERROR: failed to unmount /a/var/run
ERROR: cannot unmount '/a': Device busy
ERROR: cannot unmount rpool/ROOT/s10u8
ERROR: failed to unmount /a
ERROR: cannot unmount boot environment - all 2 file systems remain mounted

So, a system reboot is needed to get rid of this mess.
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
dick hoogendijk wrote:
> And this is bad too:
>
> arwen# luumount s10u8
> ERROR: umount: /a/var/run busy
> ERROR: cannot unmount /a/var/run
> ERROR: failed to unmount /a/var/run
> ERROR: cannot unmount '/a': Device busy
> ERROR: cannot unmount rpool/ROOT/s10u8
> ERROR: failed to unmount /a
> ERROR: cannot unmount boot environment - all 2 file systems remain mounted
>
> So, a system reboot is needed to get rid of this mess.

Have you tried "luumount -f"? Nevada is also suffering from this lately.
After a few lumount/luumount -f cycles of all the BEs, something appears to
happen that clears this up. I have no idea what it is that causes this.
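The "few luumount -f cycles" workaround amounts to a retry loop. A sketch under assumptions: `luumount -f BE` is the real Solaris command, but it is passed in as arguments here so the loop itself is portable; the retry count and one-second delay are my guesses, not from the thread:

```shell
# Retry a flaky command a few times before giving up.
# retry_cmd <tries> <command> [args...]
retry_cmd() {
  tries="$1"; shift
  n=0
  while [ "$n" -lt "$tries" ]; do
    if "$@"; then
      return 0        # the command finally succeeded
    fi
    n=$((n + 1))
    sleep 1
  done
  return 1            # still failing after all attempts
}

# Intended use on the system in the thread (assumed):
#   retry_cmd 5 luumount -f s10u8
```

This does not explain the underlying problem Thomas describes, but it automates the manual cycling.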
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
On Sun, 11 Oct 2009 15:54:00 +0200 Thomas Törnblom
<thomas.tornb...@sun.com> wrote:
> > So, a system reboot is needed to get rid of this mess.
>
> Have you tried "luumount -f"?

Yes, and this time it worked, so that message should not have been written.
It's not the main message, though. ;-)

I hope they fix the broken LU package from s10u8, or at least know a
workaround that works. I don't want to mess with my main server too much.
I could not even ludelete the newly created ABE (and the ludelete from u7
also did not do the job). So, again, manual labour and some zfs destroy
commands, plus of course getting rid of the old (lu) data in /etc/lutab
and /etc/lu. But the ABE has disappeared. ;-)

I will start testing again if some good advice is available.
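The manual cleanup Dick describes (zfs destroy the ABE's datasets, then purge its record from /etc/lutab) can be sketched. I am assuming lutab entries are colon-delimited with the BE name as one field, which matches the style of the ICF files shown elsewhere in this thread; check your own file and keep a backup before doing this for real:

```shell
# Drop every colon-delimited line that names the given BE from a
# lutab-style file, keeping a .bak copy first. The field-wise match
# avoids clobbering other BEs whose names merely contain the string.
remove_lutab_entry() {
  be="$1"; lutab="$2"
  cp "$lutab" "$lutab.bak" &&
  awk -F: -v be="$be" '
    { keep = 1; for (i = 1; i <= NF; i++) if ($i == be) keep = 0 }
    keep
  ' "$lutab.bak" > "$lutab"
}

# Intended use after destroying the ABE datasets by hand (assumed):
#   remove_lutab_entry s10u8 /etc/lutab
```

This is a last-resort sketch for when ludelete itself refuses to clean up, as in Dick's case.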
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
dick hoogendijk wrote:
> I hope they fix the broken LU package from s10u8, or at least know a
> workaround that works. I don't want to mess with my main server too much.
> I could not even ludelete the newly created ABE (and the ludelete from u7
> also did not do the job). So, again, manual labour and some zfs destroy
> commands, plus of course getting rid of the old (lu) data in /etc/lutab
> and /etc/lu. But the ABE has disappeared. ;-)
>
> I will start testing again if some good advice is available.

Another problem that annoys the hell out of me is CR 6822727. About 80% of
the time when I create a new BE in preparation for an upgrade, I end up
with the zone copied into a new separate dataset, besides also having the
old cloned dataset around. I expect that nothing special should be done
about the zone when it is part of the global zone and gets cloned for free
when the GZ is cloned.

I manually clean up the mess before doing the upgrade. What is strange is
that this does not happen every time; occasionally it does the right thing.
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
> I think you misunderstood me. I'm sure you tested it, but it does NOT
> work. The LU packages from s10u8 fail in every aspect. lucreate can't
> even create a new BE, so what luupgrade does cannot be tested. Nor can I
> test what luactivate (from u8) does. Other people report errors with this
> program too.

So what is the error?

Casper
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
> arwen# luumount s10u8
> ERROR: umount: /a/var/run busy
> ERROR: cannot unmount /a/var/run
> ERROR: failed to unmount /a/var/run
> ERROR: cannot unmount '/a': Device busy
> ERROR: cannot unmount rpool/ROOT/s10u8
> ERROR: failed to unmount /a
> ERROR: cannot unmount boot environment - all 2 file systems remain mounted

Hm, I've seen those issues before; I'm not sure what keeps /var/run
unmountable. I'm not even sure why the current liveupgrade also mounts the
tmpfs directories and the non-essential zfs filesystems.

Casper
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
> Another problem that annoys the hell out of me is CR 6822727. About 80%
> of the time when I create a new BE in preparation for an upgrade, I end
> up with the zone copied into a new separate dataset, besides also having
> the old cloned dataset around.

Yeah, that's a bad issue, and sometimes it is difficult to fix it all up,
including the orphaned snapshots.

Casper
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
On Sun, 11 Oct 2009 17:33:12 +0200 casper@sun.com wrote:
> Hm, I've seen those issues before; I'm not sure what keeps /var/run
> unmountable. I'm not even sure why the current liveupgrade also mounts
> the tmpfs directories and the non-essential zfs filesystems.

I just read the following information in another thread, about LU and the
NVidia driver. John Martin <john.m.mar...@sun.com> wrote:

> Following the instructions in the Solaris 10 10/09 Installation Guide:
> http://docs.sun.com/app/docs/doc/821-0438?l=en
> I installed LU patch 121431-43 (S10U7 has 121431-27) and installed the
> LU software using the S10U8 installer.

This might explain the errors I've written about. I checked my LU patch
version and it's at 121431-37 (!). I'm NOT allowed to download the latest
(advised) patch:

  Patch   IR  CR  RSB  Age  Synopsis
  121431  37  43  ---  38   SunOS 5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch

  Looking for 121431-43 (1/1)
  Trying https://sunsolve.sun.com/ (1/1) Failed
  Failed (patch not found)
  --
  Download Summary: 1 total, 0 successful, 0 skipped, 1 failed

This should not be happening if this patch is NEEDED for upgrading. But
then again, it may not really be needed, and then it's normal that it's
not available. I can not be sure; I'm no insider.
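Checking the installed Live Upgrade patch revision can be scripted. A minimal sketch: the parsing is portable awk, and on Solaris you would pipe the real `showrev -p` into it; the sample line below follows showrev's output format but is my own construction:

```shell
# Extract the installed revision of the Live Upgrade patch (121431)
# from "showrev -p"-style output, to compare against the -43 revision
# the installation guide asks for.
lu_patch_rev() {
  awk '/Patch: 121431-/ { sub(/.*Patch: 121431-/, ""); print $1; exit }'
}

# Example line in showrev -p format (assumed sample data):
echo 'Patch: 121431-37 Obsoletes:  Requires: 121264-01 Incompatibles:  Packages: SUNWluu' |
  lu_patch_rev
# prints 37
```

Intended use on a live system: `showrev -p | lu_patch_rev`, then compare the number against 43 before trusting the u8 LU tools.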
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
> So what is the error?

Casper, I posted the lucreate error and the extract from a debug run
up-thread. The system I was attempting to upgrade was a fresh install of
update 7; lucreate/ludelete worked fine before installing the update 8 LU
packages.

Ian.
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
I see a similar error attempting to create a BE in update 7 with one zone
(common) after adding the update 8 LU packages:

Creating clone for rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 on
rpool/ROOT/10u8/zoneRoot/common-10u8.
cannot mount 'rpool/ROOT/10u8/zoneRoot/common-10u8': legacy mountpoint
use mount(1M) to mount this filesystem
ERROR: Failed to mount dataset rpool/ROOT/10u8/zoneRoot/common-10u8

From the log:

COMMAND=/sbin/zfs clone rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 rpool/ROOT/10u8/zoneRoot/common-10u8
+ gettext Executing ZFS clone command: %s.
+ /etc/lib/lu/luprintf -lp2D - Executing ZFS clone command: %s. /sbin/zfs clone rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 rpool/ROOT/10u8/zoneRoot/common-10u8
luclonefs: DEBUG(*): Executing ZFS clone command: /sbin/zfs clone rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 rpool/ROOT/10u8/zoneRoot/common-10u8.
+ gettext Creating clone for %s on %s.
+ /etc/lib/lu/luprintf -lp1 Creating clone for %s on %s. rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 rpool/ROOT/10u8/zoneRoot/common-10u8
luclonefs: Creating clone for rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 on rpool/ROOT/10u8/zoneRoot/common-10u8.
+ /sbin/sh -c /sbin/zfs clone rpool/ROOT/10u7ZFSa/zoneRoot/common@10u8 rpool/ROOT/10u8/zoneRoot/common-10u8
ERRMSG=
+ [ 0 -ne 0 ]
+ /etc/lib/lu/luprintf -lp2D - %s
luclonefs: DEBUG(*):
+ lulib_dataset_mounted rpool/ROOT/10u8/zoneRoot/common-10u8
+ [ -x /sbin/zfs ]
+ /sbin/zfs get -Ho value mounted rpool/ROOT/10u8/zoneRoot/common-10u8
is_mounted=no
+ [ 0 -ne 0 -o no = no ]
+ return 0
+ [ 0 -eq 1 ]
+ /sbin/zfs get -Ho value mountpoint rpool/ROOT/10u7ZFSa/zoneRoot/common
src_mntprop=/zoneRoot/common
+ [ /zoneRoot/common = legacy ]
+ /sbin/zfs get -Ho value mountpoint rpool/ROOT/10u7ZFSa/zoneRoot/common
src_mountpoint=/zoneRoot/common
+ [ /zoneRoot/common != / ]
+ [ -f /zoneRoot/common/lu_moved ]
abe_ds=rpool/ROOT/10u7ZFSa/zoneRoot/common-10u8
abe_mountpoint=/zoneRoot/common-10u8
+ [ rpool/ROOT/10u8/zoneRoot/common-10u8 = rpool/ROOT/10u7ZFSa/zoneRoot/common-10u8 ]
+ return 0
+ /sbin/zfs set zpdata:rbe=10u8 rpool/ROOT/10u8/zoneRoot/common-10u8
+ /sbin/zfs set zpdata:zn=common rpool/ROOT/10u8/zoneRoot/common-10u8
+ echo /zoneRoot/common
+ sed s:^//:/:
pbe_rawzp=/zoneRoot/common
+ zfs get -Ho value mountpoint rpool/ROOT/10u7ZFSa/zoneRoot/common
mount_prop=/zoneRoot/common
+ [ /zoneRoot/common = legacy ]
+ /sbin/zfs get -Ho value mountpoint rpool/ROOT/10u8/zoneRoot/common-10u8
newpath=legacy
newrawpath=legacy
+ /sbin/zfs mount rpool/ROOT/10u8/zoneRoot/common-10u8
cannot mount 'rpool/ROOT/10u8/zoneRoot/common-10u8': legacy mountpoint
use mount(1M) to mount this filesystem
+ [ 1 -ne 0 ]
+ gettext Failed to mount dataset %s
+ /etc/lib/lu/luprintf -Eelp2 Failed to mount dataset %s rpool/ROOT/10u8/zoneRoot/common-10u8
luclonefs: ERROR: Failed to mount dataset rpool/ROOT/10u8/zoneRoot/common-10u8
+ [ -n rpool/ROOT/10u7ZFSa/zoneRoot/common ]
+ /sbin/zfs set canmount=noauto rpool/ROOT/10u8/zoneRoot/common-10u8
+ zonecfg -R /.alt.tmp.b-Mdh.mnt -z common set -F zonepath=legacy
legacy is not an absolute path.
+ read zonename
+ rm -f /tmp/.luclonefs.28833.dslist
+ [ 0 = 1 ]
+ /usr/lib/lu/luumount -f -i /etc/lu/ICF.2
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
sriman narayana bhavanam - Sun Microsystems - Bangalore India wrote:
> It looks like you provided me the data after deleting the newly created
> BE. None of the clones created during lucreate are showing up in the
> 'zfs list' output. Even the /etc/lu/ICF.2 file is missing.

Yes, sorry. I deleted the BE because it didn't work. I recreate it now:

[root@stargate-host etc]# lustatus
Boot Environment      Is        Active  Active     Can     Copy
Name                  Complete  Now     On Reboot  Delete  Status
--------------------- --------- ------- ---------- ------- ----------
Solaris10u7           yes       yes     yes        no      -

[root@stargate-host etc]# lucreate -n Solaris10u7-20090926
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment Solaris10u7 file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment Solaris10u7-20090926.
Source boot environment is Solaris10u7.
Creating boot environment Solaris10u7-20090926.
Cloning file systems from boot environment Solaris10u7 to create boot
environment Solaris10u7-20090926.
Creating snapshot for datos/ROOT/Solaris10u7 on
datos/ROOT/Solaris10u7@Solaris10u7-20090926.
Creating clone for datos/ROOT/Solaris10u7@Solaris10u7-20090926 on
datos/ROOT/Solaris10u7-20090926.
Setting canmount=noauto for / in zone global on
datos/ROOT/Solaris10u7-20090926.
Creating snapshot for datos/ROOT/Solaris10u7/var on
datos/ROOT/Solaris10u7/var@Solaris10u7-20090926.
Creating clone for datos/ROOT/Solaris10u7/var@Solaris10u7-20090926 on
datos/ROOT/Solaris10u7-20090926/var.
Setting canmount=noauto for /var in zone global on
datos/ROOT/Solaris10u7-20090926/var.
Creating snapshot for datos/zones/stargate on
datos/zones/stargate@Solaris10u7-20090926.
Creating clone for datos/zones/stargate@Solaris10u7-20090926 on
datos/zones/stargate-Solaris10u7-20090926.
Creating snapshot for datos/zones/babylon5 on
datos/zones/babylon5@Solaris10u7-20090926.
Creating clone for datos/zones/babylon5@Solaris10u7-20090926 on
datos/zones/babylon5-Solaris10u7-20090926.
WARNING: split filesystem / file system type zfs cannot inherit mount point
options - from parent filesystem / file type - because the two file systems
have different types.
Saving existing file /boot/grub/menu.lst in top level dataset for BE
Solaris10u7-20090926 as mount-point//boot/grub/menu.lst.prev.
File /boot/grub/menu.lst propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE Solaris10u7-20090926 in GRUB menu
Population of boot environment Solaris10u7-20090926 successful.
Creation of boot environment Solaris10u7-20090926 successful.

[root@stargate-host etc]# lustatus
Boot Environment      Is        Active  Active     Can     Copy
Name                  Complete  Now     On Reboot  Delete  Status
--------------------- --------- ------- ---------- ------- ----------
Solaris10u7           yes       yes     yes        no      -
Solaris10u7-20090926  yes       no      no         yes     -

[root@stargate-host etc]# luactivate Solaris10u7-20090926
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE Solaris10u7
A Live Upgrade Sync operation will be performed on startup of boot
environment Solaris10u7-20090926.
ERROR: unable to mount zones:
zoneadm: zone 'stargate': zone root
/datos/zones/stargate-Solaris10u7-20090926/root is reachable through
/datos/zones/stargate/root/.alt.tmp.b-IRc.mnt
zoneadm: zone 'stargate': call to zoneadmd failed
ERROR: unable to mount zone stargate in /.alt.Solaris10u7-20090926
ERROR: unmounting partially mounted boot environment file systems
ERROR: No such file or directory: error unmounting
datos/ROOT/Solaris10u7-20090926
ERROR: cannot mount boot environment by name Solaris10u7-20090926
ERROR: Unable to determine the configuration of the target boot
environment Solaris10u7-20090926.

[root@stargate-host etc]# cat /etc/lu/ICF.*
Solaris10u7:-:/dev/zvol/dsk/datos/swap:swap:67108864
Solaris10u7:/:datos/ROOT/Solaris10u7:zfs:0
Solaris10u7:/var:datos/ROOT/Solaris10u7/var:zfs:0
Solaris10u7:/home:datos/home:zfs:0
Solaris10u7:/datos:datos:zfs:0
Solaris10u7:/usr/local:datos/usr_local:zfs:0
Solaris10u7-20090926:-:/dev/zvol/dsk/datos/swap:swap:67108864
Solaris10u7-20090926:/:datos/ROOT/Solaris10u7-20090926:zfs:0
Solaris10u7-20090926:/datos:datos:zfs:0
Solaris10u7-20090926:/home:datos/home:zfs:0
Solaris10u7-20090926:/usr/local:datos/usr_local:zfs:0
Solaris10u7-20090926:/var:datos/ROOT/Solaris10u7-20090926/var:zfs:0

[root@stargate-host etc]# cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Jesus Cea wrote:
> Another point... I map /datos/zones/ZONE/dataset into the ZONE using
> "dataset name=datos/zones/ZONE/dataset" (zonecfg), but I am thinking this
> is not necessary, because the dataset is already a child of the ZONE
> root. So... do I actually need to map the dataset into the zone if the
> dataset is already a child of the ZONE root dataset?

I created a new zone and tested this... You must map it into the zone, or
you can not see any dataset inside the zone.
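Jesus's conclusion, spelled out as a zonecfg fragment: even when the dataset is a ZFS child of the zone root, it must still be delegated explicitly or it is invisible inside the zone. ZONE and the dataset path are the thread's own placeholder names; this is a sketch of the standard delegation syntax, not a transcript from the thread:

```shell
# Delegate a dataset to a non-global zone; without this, the zone
# cannot see the dataset even though it sits under the zone root.
zonecfg -z ZONE <<'EOF'
add dataset
set name=datos/zones/ZONE/dataset
end
commit
EOF
```

After a zone reboot, the delegated dataset shows up inside the zone and can be managed there with the zone's own `zfs` command.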
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Hi,
This one is unclear to me; you have the latest x86 LU patch. Could you run
the luactivate in debug mode (export LU_DEBUG_OVERRIDE=20, for a start) and
send the output on? It should have some data of interest to indicate the
issue.

Enda

Jesus Cea wrote:
> Mark J Musante wrote:
>>> PS: If the zones datasets must be children of the current BE dataset,
>>> that SHOULD be documented clearly in the manual!
>>
>> The u7 version of LU should support zones outside of the BE dataset
>> hierarchy.
>
> The latest doc pointer I have is this:
> http://docs.sun.com/app/docs/doc/819-5461/gigek?a=view
>
> So... What can I do? Do you want me to check/test anything?
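Enda's debug request, written out as a command sequence. LU_DEBUG_OVERRIDE=20 is from the message; the BE name matches the transcripts in this thread, and the log path is my own choice:

```shell
# Raise the Live Upgrade debug level, re-run luactivate, and keep the
# combined output in a file that can be attached to a reply.
LU_DEBUG_OVERRIDE=20
export LU_DEBUG_OVERRIDE
luactivate Solaris10u7-20090923 2>&1 | tee /var/tmp/luactivate-debug.log
```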
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Hi Coudl I see debug from luactivate, I cannot see anything obvious in lucreate, but this time cut and paste to a text file and attach it, as it easier to work with after. Enda Jesus Cea wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 I have a warning when creating the BE. Maybe it is a hint: [r...@stargate-host /]# lucreate -n Solaris10u7-20090923 Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. Comparing source boot environment Solaris10u7 file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment Solaris10u7-20090923. Source boot environment is Solaris10u7. Creating boot environment Solaris10u7-20090923. Cloning file systems from boot environment Solaris10u7 to create boot environment Solaris10u7-20090923. Creating snapshot for datos/ROOT/Solaris10u7 on datos/ROOT/solaris1...@solaris10u7-20090923. Creating clone for datos/ROOT/solaris1...@solaris10u7-20090923 on datos/ROOT/Solaris10u7-20090923. Setting canmount=noauto for / in zone global on datos/ROOT/Solaris10u7-20090923. Creating snapshot for datos/ROOT/Solaris10u7/var on datos/ROOT/Solaris10u7/v...@solaris10u7-20090923. Creating clone for datos/ROOT/Solaris10u7/v...@solaris10u7-20090923 on datos/ROOT/Solaris10u7-20090923/var. Setting canmount=noauto for /var in zone global on datos/ROOT/Solaris10u7-20090923/var. Creating snapshot for datos/zones/stargate on datos/zones/starg...@solaris10u7-20090923. Creating clone for datos/zones/starg...@solaris10u7-20090923 on datos/zones/stargate-Solaris10u7-20090923. WARNING: split filesystem / file system type zfs cannot inherit mount point options - from parent filesystem / file type - because the two file systems have different types. 
Saving existing file /boot/grub/menu.lst in top level dataset for BE Solaris10u7-20090923 as mount-point//boot/grub/menu.lst.prev.
File /boot/grub/menu.lst propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE Solaris10u7-20090923 in GRUB menu
Population of boot environment Solaris10u7-20090923 successful.
Creation of boot environment Solaris10u7-20090923 successful.

I don't understand the warning. Any pointer?

[r...@stargate-host /]# zfs get all datos/zones/stargate
NAME                  PROPERTY         VALUE                  SOURCE
datos/zones/stargate  type             filesystem             -
datos/zones/stargate  creation         Tue Jul 28  1:01 2009  -
datos/zones/stargate  used             12.4G                  -
datos/zones/stargate  available        618G                   -
datos/zones/stargate  referenced       606M                   -
datos/zones/stargate  compressratio    1.74x                  -
datos/zones/stargate  mounted          yes                    -
datos/zones/stargate  quota            none                   default
datos/zones/stargate  reservation      none                   default
datos/zones/stargate  recordsize       128K                   default
datos/zones/stargate  mountpoint       /datos/zones/stargate  inherited from datos
datos/zones/stargate  sharenfs         off                    local
datos/zones/stargate  checksum         on                     default
datos/zones/stargate  compression      gzip-9                 inherited from datos/zones
datos/zones/stargate  atime            on                     default
datos/zones/stargate  devices          on                     default
datos/zones/stargate  exec             on                     default
datos/zones/stargate  setuid           on                     default
datos/zones/stargate  readonly         off                    default
datos/zones/stargate  zoned            off                    default
datos/zones/stargate  snapdir          hidden                 default
datos/zones/stargate  aclmode          groupmask              default
datos/zones/stargate  aclinherit       restricted             default
datos/zones/stargate  canmount         on                     default
datos/zones/stargate  shareiscsi       off                    default
datos/zones/stargate  xattr            on                     default
datos/zones/stargate  copies           1                      default
datos/zones/stargate  version          3                      -
datos/zones/stargate  utf8only         off                    -
datos/zones/stargate  normalization    none                   -
datos/zones/stargate  casesensitivity  sensitive              -
datos/zones/stargate  vscan            off                    default
datos/zones/stargate  nbmand           off                    default
datos/zones/stargate  sharesmb         off                    default
datos/zones/stargate  refquota         none                   default
datos/zones/stargate refreservation none default
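For checking individual properties like the ones in the dump above without the full `zfs get all` output, a single-property scripted query can be used. The dataset name is taken from the listing, and the call is guarded for hosts without ZFS:

```shell
# Query one property in scriptable form: -H drops headers, -o value
# prints only the value column. Dataset name is from the listing above.
DS="datos/zones/stargate"
if command -v zfs >/dev/null 2>&1; then
  zfs get -Ho value mountpoint "$DS"
else
  echo "zfs not available; would run: zfs get -Ho value mountpoint $DS"
fi
```

This is the same invocation form that shows up later in the luactivate debug trace, so it is handy for reproducing what LU sees.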
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Enda O'Connor wrote:
> Hi, could I see the debug from luactivate? I cannot see anything obvious in lucreate, but this time cut and paste it to a text file and attach it, as that is easier to work with afterwards.

OK. I pasted it inline for Google's benefit, and because a lot of mailing lists strip attachments. Pasting 1 MB of text in small chunks was a bit time consuming :).

Studying the debug info carefully for a couple of hours (ugh!) I see this:

[...]
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /dev/fd: block fd char - not valid
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /proc: block /proc char - not valid
DEBUG(16511/create_abe_vfstab): CLI: swap device preserved /dev/zvol/dsk/datos/swap: used in both boot environments
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /devices: block /devices char - not valid
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /etc/dfs/sharetab: block sharefs char - not valid
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /system/contract: block ctfs char - not valid
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /system/object: block objfs char - not valid
DEBUG(16511/create_abe_vfstab): CLI: mount point preserved /tmp: block swap char - not valid
DEBUG(16511/create_abe_vfstab): UTL: execute command: /sbin/zfs zfs get -Ho value mountpoint datos/ROOT/Solaris10u7-20090924
DEBUG(16511/create_abe_vfstab): UTL: command /sbin/zfs executed: pid 16513 errno 0x status 0x final status 0x output legacy
WARNING: split filesystem / file system type zfs cannot inherit mount point options - from parent filesystem / file type - because the two file systems have different types.
[...]

The "char - not valid" is suspicious. Also, reading the warning message carefully, it seems to indicate that the filesystem is of zfs type, but the parent filesystem is of type "-". Type "-"? Uhm... that seems to indicate some kind of issue with /etc/vfstab.
In fact, some lines later I read:

+ gettext 'Differences between old and new vfstab files:\n**\n%R\n**'
+ /etc/lib/lu/luprintf -lp2D 'Differences between old and new vfstab files: ** %R **'
luedvfstab: DEBUG(*): Differences between old and new vfstab files:
**
0a1
#live-upgrade:Thu Sep 24 12:45:48 CEST 2009 updated boot environment Solaris10u7-20090924
11a13,14
datos/ROOT/Solaris10u7-20090924 - / zfs 1 no -
datos/ROOT/Solaris10u7-20090924/var - /var zfs 1 no -
**

My current /etc/vfstab doesn't contain any reference to / or /var, because they are managed by the ZFS infrastructure:

[r...@stargate-host tmp]# cat /etc/vfstab
#device                   device   mount              FS       fsck  mount    mount
#to mount                 to fsck  point              type     pass  at boot  options
#
fd                        -        /dev/fd            fd       -     no       -
/proc                     -        /proc              proc     -     no       -
/dev/zvol/dsk/datos/swap  -        -                  swap     -     no       -
/devices                  -        /devices           devfs    -     no       -
sharefs                   -        /etc/dfs/sharetab  sharefs  -     no       -
ctfs                      -        /system/contract   ctfs     -     no       -
objfs                     -        /system/object     objfs    -     no       -
swap                      -        /tmp               tmpfs    -     yes      -

If I change the /etc/vfstab by hand to add the references to / and /var, I can create the new BE and mount it without any warning/error!!!

[r...@stargate-host tmp]# cat /etc/vfstab
#device                     device   mount              FS       fsck  mount    mount
#to mount                   to fsck  point              type     pass  at boot  options
#
datos/ROOT/Solaris10u7      -        /                  zfs      1     no       -
datos/ROOT/Solaris10u7/var  -        /var               zfs      1     no       -
fd                          -        /dev/fd            fd       -     no       -
/proc                       -        /proc              proc     -     no       -
/dev/zvol/dsk/datos/swap    -        -                  swap     -     no       -
/devices                    -        /devices           devfs    -     no       -
sharefs                     -        /etc/dfs/sharetab  sharefs  -     no       -
ctfs                        -        /system/contract   ctfs     -     no       -
objfs                       -        /system/object     objfs    -     no       -
swap                        -        /tmp               tmpfs    -     yes      -

Now I create the new BE:

[r...@stargate-host tmp]# lucreate -n Solaris10u7-20090924
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment Solaris10u7 file systems with the file system(s) you specified for the new boot environment.
Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot
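The workaround above adds explicit zfs lines for / and /var to /etc/vfstab. A throwaway check for whether such lines are present, run against a sample file rather than the live /etc/vfstab (the dataset names are copied from the post):

```shell
# Build a sample vfstab mirroring the hand-edited one above, then
# verify it carries explicit zfs entries for / and /var.
cat > /tmp/vfstab.sample <<'EOF'
datos/ROOT/Solaris10u7      -  /     zfs    1  no  -
datos/ROOT/Solaris10u7/var  -  /var  zfs    1  no  -
swap                        -  /tmp  tmpfs  -  yes -
EOF

# Field 3 is the mount point, field 4 the FS type (see vfstab layout).
for mp in / /var; do
  if awk -v mp="$mp" '$3 == mp && $4 == "zfs" {found=1} END {exit !found}' /tmp/vfstab.sample; then
    echo "zfs entry present for $mp"
  else
    echo "missing zfs entry for $mp"
  fi
done
```

Running the same awk test against the original (unmodified) vfstab would report both entries missing, which matches the "parent filesystem type -" warning Jesus saw.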
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
I have added a new zone (babylon5) to the machine, so the machine now has two non-global zones.

sriman wrote:
> Hi, we need a few details to arrive at the exact root cause. Could you please send us the following.
> 1. Contents of ICF files: #cat /etc/lu/ICF.*

[r...@stargate-host zones]# cat /etc/lu/ICF.*
Solaris10u7:-:/dev/zvol/dsk/datos/swap:swap:67108864
Solaris10u7:/:datos/ROOT/Solaris10u7:zfs:0
Solaris10u7:/var:datos/ROOT/Solaris10u7/var:zfs:0
Solaris10u7:/home:datos/home:zfs:0
Solaris10u7:/datos:datos:zfs:0
Solaris10u7:/usr/local:datos/usr_local:zfs:0

> 2. Contents of /etc/vfstab: #cat /etc/vfstab

[r...@stargate-host zones]# cat /etc/vfstab
#device                   device   mount              FS       fsck  mount    mount
#to mount                 to fsck  point              type     pass  at boot  options
#
fd                        -        /dev/fd            fd       -     no       -
/proc                     -        /proc              proc     -     no       -
/dev/zvol/dsk/datos/swap  -        -                  swap     -     no       -
/devices                  -        /devices           devfs    -     no       -
sharefs                   -        /etc/dfs/sharetab  sharefs  -     no       -
ctfs                      -        /system/contract   ctfs     -     no       -
objfs                     -        /system/object     objfs    -     no       -
swap                      -        /tmp               tmpfs    -     yes      -

> 3.
> Output of the following zfs command: #zfs list -Ho name,mountpoint,canmount,zpdata:rbe,zpdata:zn

[r...@stargate-host zones]# zfs list -Ho name,mountpoint,canmount,zpdata:rbe,zpdata:zn
datos                                       /datos                 on      -            -
da...@20090716-18:57                        -                      -       -            -
da...@20090716-23:57                        -                      -       -            -
da...@20090801-00:50                        -                      -       -            -
da...@20090826-01:58                        -                      -       -            -
da...@20090922-21:16                        -                      -       -            -
datos/ROOT                                  legacy                 on      -            -
datos/r...@20090716-18:57                   -                      -       -            -
datos/r...@20090716-23:57                   -                      -       -            -
datos/r...@20090801-00:50                   -                      -       -            -
datos/r...@20090826-01:58                   -                      -       -            -
datos/r...@20090922-21:16                   -                      -       -            -
datos/ROOT/Solaris10u7                      /                      noauto  -            -
datos/ROOT/solaris1...@20090716-18:57       -                      -       -            -
datos/ROOT/solaris1...@20090716-23:57       -                      -       -            -
datos/ROOT/solaris1...@20090801-00:50       -                      -       -            -
datos/ROOT/solaris1...@20090826-01:58       -                      -       -            -
datos/ROOT/solaris1...@20090922-21:16       -                      -       -            -
datos/ROOT/Solaris10u7/var                  /var                   noauto  -            -
datos/ROOT/Solaris10u7/v...@20090716-18:57  -                      -       -            -
datos/ROOT/Solaris10u7/v...@20090716-23:57  -                      -       -            -
datos/ROOT/Solaris10u7/v...@20090801-00:50  -                      -       -            -
datos/ROOT/Solaris10u7/v...@20090826-01:58  -                      -       -            -
datos/ROOT/Solaris10u7/v...@20090922-21:16  -                      -       -            -
datos/dump                                  -                      -       -            -
datos/home                                  /home                  on      -            -
datos/h...@20090922-21:16                   -                      -       -            -
datos/swap                                  -                      -       -            -
datos/usr_local                             /usr/local             on      -            -
datos/usr_lo...@20090716-23:57              -                      -       -            -
datos/usr_lo...@20090801-00:50              -                      -       -            -
datos/usr_lo...@20090826-01:58              -                      -       -            -
datos/usr_lo...@20090922-21:16              -                      -       -            -
datos/zones                                 /datos/zones           noauto  -            -
datos/zo...@20090801-00:50                  -                      -       -            -
datos/zo...@20090826-01:58                  -                      -       -            -
datos/zo...@20090922-21:16                  -                      -       -            -
datos/zones/babylon5                        /datos/zones/babylon5  on      Solaris10u7  babylon5
datos/zones/babylon5/dataset/datos on Solaris10u7 babylon5
datos/zones/babylon5/dataset/home           /home                  on      Solaris10u7  babylon5
datos/zones/babylon5/dataset/home/zope      /home/zope             on      Solaris10u7  babylon5
datos/zones/babylon5/dataset/home/zope/zope-instance      /home/zope/zope-instance      on  Solaris10u7  babylon5
datos/zones/babylon5/dataset/home/zope/zope-instance/log  /home/zope/zope-instance/log  on  Solaris10u7  babylon5
datos/zones/stargate                        /datos/zones/stargate  on      Solaris10u7  stargate
datos/zones/starg...@20090801-00:50         -                      -       Solaris10u7  stargate
datos/zones/starg...@20090826-01:58         -                      -       Solaris10u7  stargate
datos/zones/starg...@20090922-21:16         -                      -       Solaris10u7  stargate
datos/zones/stargate/dataset/datos on Solaris10u7 stargate
datos/zones/stargate/data...@20090801-00:50  -                     -       Solaris10u7  stargate
datos/zones/stargate/data...@20090826-01:58  -                     -       Solaris10u7  stargate
datos/zones/stargate/data...@20090922-21:16  -                     -       Solaris10u7  stargate
datos/zones/stargate/dataset/correo         none                   on      Solaris10u7  stargate
datos/zones/stargate/dataset/cor...@20090801-00:50  -  -
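As an aside, the ICF files quoted earlier in this message are simple colon-separated records; my reading of the fields (inferred from the listing, not from official docs) is BE name, mount point, device or dataset, fstype, size. A quick parse of a sample copied from the post:

```shell
# Parse Live Upgrade ICF records. Assumed field layout:
#   BEname:mountpoint:device:fstype:size
cat > /tmp/icf.sample <<'EOF'
Solaris10u7:-:/dev/zvol/dsk/datos/swap:swap:67108864
Solaris10u7:/:datos/ROOT/Solaris10u7:zfs:0
Solaris10u7:/var:datos/ROOT/Solaris10u7/var:zfs:0
Solaris10u7:/home:datos/home:zfs:0
EOF

# List mount point, dataset and type for the zfs entries only.
awk -F: '$4 == "zfs" {printf "%-8s %-30s %s\n", $2, $3, $4}' /tmp/icf.sample
```

Comparing the ICF entries against the actual datasets in `zfs list` output is one way to spot the mismatch LU complains about when it fails to mount a BE "by icf file".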
Re: [zones-discuss] Strange error with ZFS Live Upgrade and Zones
Hi,

What rev of 121430/121431 (SPARC/x86) is applied? There are a lot of fixes in later revs (42 is the latest) for zones on ZFS. Could I see zfs list and zonecfg info on stargate?

Enda

Jesus Cea wrote:

I am trying to do a live upgrade of a Solaris 10 U7 system with zones, and I am finding some errors. Any suggestion is welcome.

I do a lucreate correctly. But when I try to do a luactivate, I find this:

[r...@stargate-host /]# luactivate Solaris10u7-20090922_2
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE Solaris10u7
ERROR: unable to mount zones:
zoneadm: zone 'stargate': zone root /datos/zones/stargate-Solaris10u7-20090922_2/root is reachable through /datos/zones/stargate/root/.alt.tmp.b-ox.mnt
zoneadm: zone 'stargate': call to zoneadmd failed
ERROR: unable to mount zone stargate in /.alt.tmp.b-T5.mnt
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file /etc/lu/ICF.2
ERROR: Unable to mount the boot environment Solaris10u7-20090922_2.

I see some of the filesystems mounted, so I try to unmount them:

[r...@stargate-host /]# luumount Solaris10u7-20090922_2
ERROR: No such file or directory: error unmounting /.alt.tmp.b-T5.mnt/var/run
ERROR: umount: /.alt.tmp.b-T5.mnt/var/run busy
ERROR: cannot unmount /.alt.tmp.b-T5.mnt/var/run
ERROR: failed to unmount /.alt.tmp.b-T5.mnt/var/run
ERROR: cannot fully unmount boot environment - 1: file systems remain mounted

Reading the Solaris 10 documentation I cannot see any relevant detail, except that I created my zones under datos/zones (a ZFS dataset) instead of as a child of datos/ROOT/Solaris10u7, as shown in every example. I don't know if this is relevant or not. In any case, the zones datasets are snapshotted/cloned correctly. If that is the problem, I think I can rename the dataset to move it to the right place, and edit the config files by hand (/etc/zones) to reflect the new location.
But first I need to know if this is actually the issue. I think that having the zones under datos/zones, an absolute path independent of the BE (Boot Environment), could be the problem. Could you confirm it?

This is a production machine, so I would like to solve this without disturbing it too much (I can mess with it a bit if necessary). Thanks for any help.

PS: If the zones datasets must be children of the current BE dataset, that SHOULD be documented clearly in the manual!

--
Enda O'Connor x19781
Software Product Engineering
Patch System Test : Ireland : x19781 / 353-1-8199718
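Enda's opening question, which revision of 121430/121431 is installed, can be checked with patchadd -p. A guarded sketch; the grep pattern is an assumption about the listing format, and the loop degrades gracefully on non-Solaris hosts:

```shell
# Report the installed revision of the Live Upgrade patch
# (121430 = SPARC, 121431 = x86). Guarded for hosts without patchadd.
for p in 121430 121431; do
  if command -v patchadd >/dev/null 2>&1; then
    # patchadd -p lists installed patches; filter for the LU patch ID.
    patchadd -p | grep "Patch: $p" || echo "patch $p not installed"
  else
    echo "patchadd not found; would check patch $p"
  fi
done
```

On Solaris 10, `showrev -p` gives equivalent output if preferred.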