http://defect.opensolaris.org/bz/show_bug.cgi?id=8782


Darren Kenny <dkenny@opensolaris.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |dkenny@opensolaris.org




--- Comment #1 from Darren Kenny <dkenny@opensolaris.org>  2009-05-08 05:01:10 ---
[ Possible red herring ]
Hmm, I've seen that message when the netadm and netcfg users weren't in
/etc/passwd after the BFU (and, later on, when the matching groups were
missing from /etc/group).
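
A quick way to rule that cause in or out - each of these should print an
entry per name if the users and groups are present:

getent passwd netadm netcfg
getent group netadm netcfg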

I don't know exactly how it happened for me, but these were the steps (the
bfu step is expanded just after the list):

- lucreate new_be
- lumount new_be
- bfu <NWAM> <mountpoint>
  - acr 
- luumount new_be
- luactivate new_be
- init 6
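
A rough expansion of the bfu step above, since the mount point comes from
lumount (the archive path here is just illustrative, not the one I used):

MP=`lumount new_be`   # lumount prints the mount point, e.g. /.alt.new_be
bfu /path/to/nwam/archives $MP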

But then on reboot some of the files in /etc seemed to have reverted to
their pre-BFU state (in my case this showed up as those users being missing
from passwd).

No matter how many times I did this, it kept happening - I even checked the
files before unmounting the BE, and at that point they looked fine...

I found this very strange, but I was able to "stop" it by creating a snapshot
of the BE before I unmounted it:

zfs snapshot rpool/ROOT/new_be@pre-reboot
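
For what it's worth, the snapshot goes immediately before the luumount step
above. After the reboot, this confirms it survived:

zfs list -t snapshot -r rpool/ROOT/new_be

and if /etc ever does revert, the snapshot should (I assume) let you compare
or roll back from the old BE:

zfs rollback rpool/ROOT/new_be@pre-reboot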

I really don't know whether this was just my setup or something more general,
but it has worked...

Interestingly, I've not seen this behaviour when doing a BFU on OpenSolaris,
but in the OpenSolaris case I seem to have an issue on reboot where both
network/physical service instances are enabled - causing both to go into
maintenance... (Thankfully pressing ESC drops out of the graphical boot
screen, so I can log in and fix things - commands below.) What I don't
understand is that prior to the BFU only nwam was enabled...
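
For the record, the cleanup once logged in is straightforward - this assumes,
as on my system, that the unwanted instance is network/physical:default and
that nwam is the one to keep:

svcs -a | grep network/physical          # both :default and :nwam enabled
svcadm disable svc:/network/physical:default
svcadm clear svc:/network/physical:nwam  # clear the maintenance state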
