Alas, not.

What I get is the following (I had no way to copy it off the console, so I transcribed it with pencil and paper):
ZFS 1/6 cannot mount '/export': directory is not empty (6/6)

/usr/bin/zfs mount -a failed exit status 1

svc:/system/filesystem/local:default: /lib/svc/method/fs-local failed

svc:/system/filesystem/local:default failed
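
As far as I understand it, "directory is not empty" means that something
wrote files into /export on the root dataset itself, underneath where the
dataset for /export (rpool/export, I assume) is supposed to mount. If that
guess is right, then from failsafe (which offers to mount the BE on /a)
something along these lines should show the stray files and let me move
them out of the way (untested, pencil and paper again):

# mount -F zfs rpool/ROOT/nv_101 /a
# ls -la /a/export
# mv /a/export /a/export.blocked
# mkdir /a/export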


And when I boot the nv_99 failsafe entry, I get
Searching for installed .. ROOT/nv_101 only!
But from nv_101:
# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
nv_99                      yes      no     no        yes    -         
nv_101                     yes      yes    yes       no     -  

I have definitely not removed anything, nor deactivated nv_101!!

After 6-8 luupgrades, meticulously following the recipe above, I seem to 
have hit a snag once again. luupgrade is really crappy!  :(

In any case, here is what I get for nv_99; maybe it helps to recover the 
situation:

# cat /.alt.nv_99/root/vfstab 
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/zvol/dsk/rpool/swap        -       -       swap    -       no      -
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
# cat /.alt.nv_99/root/mount  
rpool/ROOT/nv_99                /
rpool/export/home               /export/home
rpool                           /rpool
# cat /.alt.nv_99/root/df_h    
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/nv_99       128G  10.0G    48G    18%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   6.9G   308K   6.9G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1    58G  10.0G    48G    18%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   6.9G     0K   6.9G     0%    /tmp
swap                   6.9G    20K   6.9G     1%    /var/run
rpool/export/home      128G    60G    48G    56%    /export/home
rpool                  128G    41K    48G     1%    /rpool
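
And since failsafe claims there is only ROOT/nv_101, maybe it is worth
checking what the pool itself thinks (just my guess at the relevant
properties):

# zfs list -r -o name,mountpoint rpool/ROOT
# zpool get bootfs rpool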

Again, luupgrade went through without any trouble: not a single error 
message, no crash, everything in 'order'.
One can only hope and pray that a future live-upgrade system will provide 
two completely independent instances.

Any hint on how to recover my beloved and only working nv_99 is welcome!
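
In the meantime, unless somebody warns me off, I am tempted to try the
textbook fallback from nv_101 (just my reading of the lu man pages, not
tested on this box):

# luactivate nv_99
# init 6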

Uwe