On 07/07/11 07:31, Matt Keenan wrote:
Hi,
Can I get a code review for the following bugs:
7048015 Automated Installer should create a separate /var and shared
area
7049157 Text installer should create a separate /var and shared area
7049160 GUI installer should create a separate /var and shared area
http://monaco.sfbay.sun.com/detail.jsf?cr=7048015
Webrev:
https://cr.opensolaris.org/action/browse/caiman/mattman/7048015.7049157.7049160
All three installers need to create /var within the installed BE and
a globally available /var/share. This is achieved by adding two
Filesystem objects to the DESIRED root Zpool object before Target
Instantiation is called; TI will then simply create them.
A new checkpoint "VarSharedDataset" is being created to handle the
additions, and will be called by all three installers.
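In outline, the checkpoint amounts to something like the following (a
minimal, self-contained Python sketch; Filesystem and Zpool here are
simplified stand-ins for the real Caiman target objects, and the method
and attribute names are illustrative, not the actual API):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Filesystem:
        name: str        # dataset name relative to the pool
        mountpoint: str  # mountpoint in the installed system
        in_be: bool      # True if the dataset lives under the BE

    @dataclass
    class Zpool:
        name: str
        filesystems: List[Filesystem] = field(default_factory=list)

        def add_filesystem(self, fs):
            self.filesystems.append(fs)

    def var_shared_datasets(desired_rpool, be_name="solaris"):
        # /var goes inside the boot environment ...
        desired_rpool.add_filesystem(
            Filesystem("ROOT/%s/var" % be_name, "/var", in_be=True))
        # ... while the shared area is a pool-level dataset, outside
        # the BE, so it survives BE changes.
        desired_rpool.add_filesystem(
            Filesystem("VARSHARE", "/var/share", in_be=False))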
Hi Matt,
I haven't reviewed the code yet, but wanted first to ask about the
behavior of this with AI, namely how it deals with what may or may
not already be specified in the manifest. I would also like to elicit
input from others on how they think this should behave.
We could strictly require that the datasets to be used for /var and
/var/share have the particular names we've hardcoded. But in thinking
about it, the name of the dataset is somewhat irrelevant; what really
matters is the dataset's "in_be" status and its mountpoint. As long as
what the user specified results in those being met, things "should"
still be ok. For example, if what they specified results in the
following:
rpool/ROOT/solaris "/"
rpool/ROOT/solaris/foo "/var"
rpool/export "/export"
rpool/export/home "/export/home"
rpool/blah "/var/share"
this could actually be acceptable. Nothing downstream (beadm, pkg, or
the OS) relies on the dataset name coinciding with the mountpoint.
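In code, such a name-agnostic check might look something like this (an
illustrative, self-contained sketch; datasets are modelled here as
plain (name, mountpoint, in_be) tuples rather than the real target
objects):

    def layout_satisfied(filesystems):
        """True if some in_be dataset mounts at /var and some
        non-in_be dataset mounts at /var/share, names aside."""
        has_var = any(mp == "/var" and in_be
                      for name, mp, in_be in filesystems)
        has_share = any(mp == "/var/share" and not in_be
                        for name, mp, in_be in filesystems)
        return has_var and has_share

    # The "foo"/"blah" layout above passes:
    layout = [
        ("rpool/ROOT/solaris",     "/",            True),
        ("rpool/ROOT/solaris/foo", "/var",         True),
        ("rpool/export",           "/export",      False),
        ("rpool/export/home",      "/export/home", False),
        ("rpool/blah",             "/var/share",   False),
    ]
    assert layout_satisfied(layout)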
If we were to go this route, then for cases where the user has used up
the preferred dataset names for other purposes, we could perhaps just
generate some other random dataset names and set the right mountpoints.
For example, if what they've specified is:
rpool/ROOT/solaris "/"
rpool/ROOT/solaris/var "/foo"
rpool/export "/export"
rpool/export/home "/export/home"
rpool/VARSHARE "/blah"
we could just pick some other dataset names and make sure we have an
in_be /var and a shared /var/share. I.e., we could add in:
rpool/ROOT/solaris "/"
rpool/ROOT/solaris/var "/foo"
rpool/ROOT/solaris/var_<random> "/var"
rpool/export "/export"
rpool/export/home "/export/home"
rpool/VARSHARE "/blah"
rpool/VARSHARE_<random> "/var/share"
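Generating the fallback could be as simple as something like this
(hypothetical sketch; the helper name and suffix scheme are invented
for illustration):

    import uuid

    def unused_name(preferred, taken):
        """Return the preferred dataset name, or a uniquified
        variant if the user has already claimed it."""
        if preferred not in taken:
            return preferred
        return "%s_%s" % (preferred, uuid.uuid4().hex[:8])

    taken = set(["rpool/ROOT/solaris/var", "rpool/VARSHARE"])
    print(unused_name("rpool/ROOT/solaris/var", taken))
    # e.g. rpool/ROOT/solaris/var_3f9c2a1b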
And I suppose the failure cases would be where they specified any
dataset that is not in_be but has a mountpoint of "/var", or any
dataset that is in_be but has a mountpoint of "/var/share"; in either
case we fail.
What do people think, do we need to be this flexible in AI?