On 22/01/2010 4:39 PM, Ethan Quach wrote:
> Hi Darren,
>
> Thanks for providing your data ...
>
> Darren Reed wrote:
>> Looking at the web page for the AI, it seems the
>> requirements are a bit thin:
>>
>> http://hub.opensolaris.org/bin/view/Project+caiman/AI_Reqs_Final
>
> Thanks for pointing this out. I will look into this; something may
> have gone awry in the website transition.
>
>>
>> So let me expand on what my requirements are:
>> * support the installation of multiple instances of OS.
>> If each slice (c0t0d0s0, c0t0d0s3, etc) is its own ZFS root volume
>> and each one has its own install of OS, that is fine. Being able
>> to preserve slices is important. With jumpstart, I can specify
>> which slices to preserve and where to mount specific UFS slices.
>> Preservation of this type of feature is advantageous.
>
> Preservation of existing slices is something already supported in
> AI. We don't have support for specifying where to mount preserved
> UFS slices in the newly installed instance though. How badly is that
> really wanted as a spec *with* the install? Seems it could easily be a
> post installation thing.

So the difference here is between:
- if it is post install but before the reboot, editing of the manifest
  is required;
- if it is post install and delivered as part of a special IPS package,
  then I need to rebuild that package whenever I change the install
  partition.
Either way, as a post install action, a lot more effort/work is required
on the administration side.

I believe that UFS is still supported, just not as the root filesystem,
right?

If I can use AI to specify an NFS partition to mount at first boot,
shouldn't it be possible to support UFS too?
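
For illustration, the sort of jumpstart profile lines I rely on for this
today look roughly like the following (slice numbers, sizes and mount
points are only examples, not my actual profile):

install_type    initial_install
system_type     standalone
partitioning    explicit
# a fresh UFS root for this install
filesys         c0t0d0s0    20000       /
# keep the existing slices and mount them in the new install
filesys         c0t0d0s4    existing    /testing    preserve
filesys         c0t0d0s5    existing    /export     preserve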


>> * if I'm going to have multiple ZFS pools per disk (because OS can
>> only be installed into a ZFS volume and I want to initialise the
>> volume on each install) then I need to be able to give each ZFS
>> pool its own name. Requiring them to all be "root" or something
>> like that will get in the way. 
>
> I agree AI should allow specifying the name of the pool.
>
> I think support for this is going to fall out of implementing multiple
> disk (and pool) support for AI (bug 6185). Feel free to file a separate
> specific request for this now if you'd like.
>
>> Using multiple ZFS filesystems
>> inside a single pool is a step backwards.
>
> Can you elaborate a bit on why this is the case for you?

See below.


>> * in addition to the above, it is desirable to be able to share a
>> filesystem slice amongst all OS installations. For example, if I
>> have build#120 on s0, build#130 on s3, I might have STC checked
>> out in s4, meaning I can easily run the same test suite for both - or
>> if I need to make changes to the test suite, I am always using the
>> test suite from "/testing" but / changes between builds 120 and 130.
>> Checking the test suite out multiple times (once for each install)
>> is annoying because I then have to keep test suite changes in sync
>> when it comes to verification of those changes and bugs being fixed
>> (or present.)
>
> This seems equivalent to having a couple of BEs on your system,
> and a shared area. I think with the addition of support for multiple
> disks/pools, where each of the three said components could be separate
> disks or pools, that would meet this requirement.
>
> But can you describe why it wouldn't be desirable if the three said
> components are in the same pool?

Because I want to limit the amount of disk space each has available to it.

I'd rather a particular filesystem fill up and cause problems for itself
than have it grow unbounded and force me to keep things cleaned up.

As a practical example, I'd rather /var/crash from one install did not
intrude on the ability of another install to write out a crash dump.

If setting a ZFS quota on a filesystem were possible from AI then the
picture is further muddied.
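
(To be concrete: without per-slice pools I'd have to do something like the
following by hand after every install - the dataset names here are only
placeholders:)

zfs set quota=20G rpool/ROOT/build-130
zfs set reservation=10G rpool/ROOT/build-120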

What I like about the current method is that I partition the disk once,
setting aside disk space for each compartment, and I can then forget about
the actual sizes. So if I need to replace a disk, the new disk gets
partitioned once when it is put in, and after that newfs runs on whichever
partition the new install goes to - which is determined by which partitions
are "preserve" and which are not.

The ZFS model of one big pool requires me to keep the sizes around
somewhere as part of the active jumpstart configuration (assuming that I
use quotas), which ties the AI configuration more closely to the hardware
than jumpstart does now.

If it is a pool per slice in a partition then I don't need to worry about
setting quotas for each ZFS filesystem (I'm not even sure AI allows that at
present.)

I suspect there is also a mental hurdle here: it seems preferable to give
each install its own pool, so that if there are ZFS bugs or problems that
cause a pool to become corrupted, only a single slice is lost.

To give another example of why I might prefer to create multiple ZFS pools
on an installation disk, consider the case where I've got multiple disks
but I want to force everything involved in the root filesystem to be on a
single disk. The first disk might be 1TB, so having a 1TB root filesystem
could easily be considered too much. Thus I might want to have a 100GB root
pool and then give the other 900GB to a data pool that is later expanded
across other drives.
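
Something along these lines, in other words (device and pool names are
only placeholders):

# ~100GB slice for the root pool, the rest of the disk for data
zpool create rpool c0t0d0s0
zpool create data c0t0d0s3
# later, expand the data pool across additional drives
zpool add data c0t1d0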

>> + these three requirements allow a single machine with a huge disk
>> (200G, etc) to support multiple installations of OS, potentially
>> of different versions, allowing me to switch between them at boot
>> time for the purpose of testing a specific OS version. I do this
>> today, with jumpstart, where I can have s0 being the build#n bits
>> and s3 will be build#n+bfu bits.
>> This is important for doing performance testing as the same hardware
>> is used, as well as minimising work required to retest one version
>> or the other. Isolating each install like this protects each one from
>> others using too much disk space. Below I've included a sample
>> "profile" file that I use for jumpstart installs onto SPARC hardware.
>>
>> * on the server side, I need to support each client being able to have
>> its own configuration. So if there are 10 clients, AI needs to support
>> each client installing 10 different versions of OS (worst case
>> scenario.)
>>
>> * there are two types of files that need to be copied over: a static set
>> (which come from a tar ball) and a more dynamic set that come "live"
>> from the file system. The static set could easily be put into an IPS
>> package and that package added to the manifest which describes the
>> install. The live set, I'm not sure about. 
>
> Can you elaborate a little bit more on what you mean by "live"?

For each host that I jumpstart, I have a very sparse directory tree,
something like this:
/export/jumpstart/clientname/root/etc
/export/jumpstart/clientname/root/etc/inet
/export/jumpstart/clientname/root/var/svc/profile

When "finish" runs, it extracts a .tar file that lives in /export/jumpstart
and contains stuff that does not change a lot (such as the hostnames listed
in /etc/inet/hosts.equiv), whereas the hosts file (/etc/inet/hosts) lives
under /export/jumpstart/clientname/root/etc/inet because I'm often editing
that file. In this case, "live" refers to the active filesystem rather than
a "moribund"(?) directory tree wrapped up in a tar file.
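
Roughly speaking, the relevant part of the finish script looks like this
(paths are simplified and the tar file name is made up):

# static set: unpack the tar ball onto the freshly installed root (/a)
(cd /a && tar xf ${SI_CONFIG_DIR}/common-files.tar)
# live set: copy the per-client tree straight from the NFS-mounted
# jumpstart area into the new root
(cd ${SI_CONFIG_DIR}/clientname/root && pax -rw -pe . /a)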

>> * various commands are run from the finish script that manipulate the
>> system prior to it rebooting for use. Most of this, I believe, can
>> be put into the start method of the above IPS package that is installed
>> and run once at first boot.
>>
>> Unless I'm mistaken, because there is no longer any equivalent of the
>> jumpstart "finish", preparing a system for network install will now
>> require a custom package for each system to be made. Whereas before I
>> could use "pax" to copy files directly from the NFS partition being
>> used to host the install, this is no longer possible and since that
>> "pax" command brought over files like "/etc/hostname.bge0", 
>> "/etc/inet/hosts",
>
> I'll have to ask why you'd be doing this config here over using
> the sysidcfg file with jumpstart?

The sysidcfg file does not configure extra interfaces after the post-install
reboot: I tried this and, despite having 2 or 3 interfaces correctly listed
in the sysidcfg file, only one network interface was ever configured as a
result of the install. I had to drop my own files into /etc for that to
happen. I don't know if this is an install bug or the way it is meant to be.

For example, with the following sysidcfg file:

terminal=vt100
system_locale=C
timezone=US/Pacific
timeserver=localhost
name_service=NIS {domain_name=scdev.sfbay.sun.com}
nfs4_domain=sun.com
network_interface=bge0 {
         primary
         hostname=netvirt-a1
         netmask=255.255.255.0
         default_route=10.5.233.1
         protocol_ipv6=yes
}
network_interface=bge1 {
         ip_address=192.168.1.18
         netmask=255.255.255.0
         protocol_ipv6=yes
         default_route=NONE
}

root_password=*********
security_policy=NONE

when the host reboots, there will not be a /etc/hostname.bge1 file present.

In addition, sysidcfg does not allow IPv6 addresses to be configured
statically. For this I have to install /etc/hostname6.* files - there's no
way around it, even for the primary network interface.
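
So today the finish step drops in files along these lines (the IPv6
address below is just a placeholder):

/etc/hostname.bge1:
    192.168.1.18 netmask 255.255.255.0 up

/etc/hostname6.bge1:
    addif fd00::18/64 up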

Getting a fully automated install currently also requires a script, run
from "finish", that sets the keyboard type. My hope is that this can be
achieved with AI and IPS by installing a custom "finish-install" package
that runs early enough in the boot sequence to prevent that question from
being asked.
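
(What the finish script does today is roughly the following - the layout
value is just an example, and I'm assuming /etc/default/kbd is still the
right place for this on OpenSolaris:)

echo "LAYOUT=US-English" >> /a/etc/default/kbd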

Hope this helps...?

Darren

