Trying to luupgrade again (after an aborted attempt, because luupgrade
legitimately couldn't mount another path):

Creating upgrade profile for BE <snv_117>.
WARNING: The root <rpool/ROOT/snv_117> for BE <snv_117> is already mounted to 
</a>.
ERROR: mount point </a> is already in use, mounted on <rpool/ROOT/snv_117>
ERROR: cannot use mount point </a> to mount icf file 
</tmp/.luupgrade.beicf.19048>
ERROR: cannot mount boot environment by icf file </tmp/.luupgrade.beicf.19048>
cat: cannot open /tmp/.luupgrade.tmp.19048: No such file or directory
ERROR: Unable to mount ABE disk slices: < >.
ERROR: Unable to mount the BE <snv_117>.


That is, it tries to mount rpool/ROOT/snv_117 on /a but can't, because that
same rpool/ROOT/snv_117 is already mounted on /a. Mount point busy, abort.
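The manual recovery I'd try here is roughly the following sketch. The BE name snv_117 and the dataset come from the log above, but the exact sequence (and the install-media path) is my assumption, not a documented fix: confirm what is holding /a, unmount it, and let luupgrade mount the ABE itself.

```shell
# Sketch only -- assumes the mount on /a is a leftover from the
# earlier aborted run. Verify what is actually holding /a:
df -k /a
zfs list -o name,mounted,mountpoint rpool/ROOT/snv_117

# If Live Upgrade left the BE mounted, unmount it the LU way first:
luumount snv_117

# If LU has lost track of the mount and luumount fails, fall back to ZFS:
zfs umount rpool/ROOT/snv_117

# Then retry the upgrade (media path is a placeholder):
luupgrade -u -n snv_117 -s /path/to/install/media
```

The point of trying luumount before a raw zfs umount is to keep Live Upgrade's own bookkeeping consistent where possible.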

Now, I understand the data-must-be-safe, always-ask-the-admin policy that
leaves nothing to chance. But this particular case seems a bit off.

Eh, I'm still shaken by ludelete destroying the datasets of local zones.
Without asking. Recursively, with all their snapshots. In many (though not
all) versions I've seen, without first cloning the zone root to a per-BE
clone.
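Given that, here is the defensive pre-flight I'd now run before any ludelete. This is a sketch under assumptions: the BE name, the ICF number, and the dataset names (rpool/zones/myzone, /otherpool) are placeholders, and the send-stream step is my own precaution, not part of Live Upgrade.

```shell
# Sketch only; names below are placeholders for your setup.
# 1. See which BE maps to which ICF file, and what datasets LU
#    associates with the BE you want to delete:
grep snv_117 /etc/lutab
cat /etc/lu/ICF.2

# 2. Cross-check the zone paths the system knows about
#    (third colon-separated field of /etc/zones/index):
awk -F: '!/^#/ {print $1, $3}' /etc/zones/index

# 3. Copy the zone data somewhere ludelete cannot reach. Snapshots on
#    the same dataset would be destroyed along with it, so send the
#    stream to another pool or file:
zfs snapshot -r rpool/zones/myzone@pre-ludelete
zfs send -R rpool/zones/myzone@pre-ludelete > /otherpool/myzone.zfs

# 4. Only then:
ludelete snv_117
```

The send stream is the part that actually survives a recursive destroy; a snapshot alone would not.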

NB: snv_117 Live Upgrade did not make such clones, did not list the zone
roots in the otherwise huge /etc/lu/ICF.2, and still looks for zone mount
points via the /etc/zones/index file during the luupgrade run.

All in all, Live Upgrade does make updates simpler on one hand, but trickier,
with a real risk of irrecoverable dataset loss, on the other. It certainly
takes a lot of time to triple-check everything and redo half of it by hand.
Eh, midnight buzz as I'm updating yet another server :(

//Jim
-- 
This message posted from opensolaris.org
_______________________________________________
opensolaris-discuss mailing list
[email protected]
