Hello James et al.,

  Yes, I updated LU packages via Solaris_*/Tools/Install/liveupgrade20
script on both of my test systems which I was updating yesterday (one
snv_89 -> snv_103, another sol10u4 -> sol10u6; both with UFS mirrored
roots).

  Interestingly, the Solaris 10 update went more cleanly than its
OpenSolaris counterpart. At least I did not encounter the described
problem of a ZFS filesystem's mountpoint being trashed by stray
directories named after its child ZFS datasets -- neither during the
initial update, nor now, after the update (I'm double-checking while
writing this message).

  While describing the problem for the initial posting, my OpenSolaris
test system was still running snv_89, although with updated LiveUpgrade
packages. I explicitly checked the problem:
1) lumount'ed the new BE root (snv_103),
2) removed the erroneous mountpoints inside it,
3) luumount'ed the new BE root,
4) mounted the new BE root directly, as in
   mount /dev/md/dsk/d1 /mnt/test
5) checked that the ZFS base dirs were empty, as I left them,
6) umounted /mnt/test,
7) lumount'ed the new BE root again, and saw that it contained
   the erroneous mountpoints again.

  NOTE: now that I fully wrote this message, I became uncertain
  whether step 7 was only lumounting, or luactivate was also
  involved. The latter still certainly causes the bug, as I
  checked now in snv_103.
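  For the record, the steps above boil down to roughly the following
sequence. This is only a sketch: the BE name (snv_103) and the SVM
metadevice (/dev/md/dsk/d1) are from my setup, and /export/ftp is just
one example of a stray mountpoint; adjust the names and paths for your
system.

```shell
lumount snv_103 /mnt/newbe        # 1) mount the new BE via LiveUpgrade
rmdir /mnt/newbe/export/ftp       # 2) remove a stray mountpoint (example path)
luumount snv_103                  # 3) unmount the BE via LiveUpgrade

mount /dev/md/dsk/d1 /mnt/test    # 4) mount the same BE root directly
ls /mnt/test/export               # 5) verify the ZFS base dirs stayed empty
umount /mnt/test                  # 6) unmount the direct mount

lumount snv_103 /mnt/newbe        # 7) remount via LiveUpgrade -- the stray
ls /mnt/newbe/export              #    mountpoints show up again
luumount snv_103
```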

  These mountpoints did prevent the updated system from booting
properly (see my second post), so LiveUpgrade followed by an unattended
reboot is definitely not an option in this case. This is a bad thing,
IMHO.

  I ran "zfs umount -a", removed the mountpoints and then ran
"zfs mount -a; svcadm clear filesystem/local". Afterwards the
system worked like a charm.
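  On the affected system, the recovery amounted to something like the
following sketch. The directory names under /export are examples from
my pool layout (and partly hypothetical); substitute whatever stray
directories appear under your ZFS mountpoints.

```shell
zfs umount -a                     # unmount all ZFS filesystems
rmdir /export/ftp                 # remove the stray mountpoint dirs
                                  # (example/hypothetical names)
zfs mount -a                      # remount -- the dirs are clean again
svcadm clear filesystem/local     # clear the failed SMF service
```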

  Now that the running system is all snv_103, I can no longer
reproduce the problem exactly as I described it. At least,
lumount'ing the old BE does not create the stray mountpoints.
However, luactivate'ing it did create them. So the bug has not
been completely stamped out in the OpenSolaris tree; it has just
crawled off and hidden somewhere ;)
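  The check on the now-snv_103 system looked roughly like this (BE
names are from my setup; this is a sketch, not a recipe). Note that
luactivate changes which BE boots next, so re-activate the current BE
when done.

```shell
lumount snv_89 /mnt/oldbe     # lumount alone: no stray mountpoints appear
ls /mnt/oldbe/export
luumount snv_89

luactivate snv_89             # luactivate: the stray mountpoints reappear
luactivate snv_103            # switch the active BE back
```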

  And this does not happen on the sol10u6 system, with either
lumount or luactivate of the old BE.

James Carlson wrote:
> Jim Klimov writes:
>> It creates mountpoints for my ZFS hierarchy, even though it doesn't mount the
>> filesystems.
>
> Did you upgrade the LU packages to the newest versions before creating
> the new BE?
>
> For what it's worth, this sounds like CR 6376420, an old problem.
>
>> For example, my distribution ISO images are in pool/export/ftp
>> ZFS tree, where pool/export is explicitly mounted as /export and 
>> pool/export/ftp
>> inherits a mountpoint as /export/ftp.
>
> Yep; that'll trigger the above bug.
>
>> I wonder if this blatant creation of the tree of mountpoints can be disabled 
>> for good? I tried removing them all, but they re-appear on next lumount.
>
> lumount shouldn't create these things; I don't know of any cases where
> that happens.  lucreate and lumake do create them, though, so after an
> upgrade, it's sufficient to mount up the new boot environment once and
> rmdir the stray mount points.
>
> Yes, ZFS does insist on non-empty status, and it does so on all
> versions.  It's an intentional feature.
That's okay. IMHO other subsystems should not trash such mountpoints ;)

> (I haven't seen this LU bug in quite a while ... but then I switched
> over to a ZFS root some time ago.)
I'll see later if similar things happen to our ZFS-root OpenSolaris machine,
but I won't promise any timeframe.

Thanks,
Jim
-- 
This message posted from opensolaris.org