Hi Darren,

Thanks for providing your data ...

Darren Reed wrote:
> Looking at the web page for the AI, it seems the
> requirements are a bit thin:
>
> http://hub.opensolaris.org/bin/view/Project+caiman/AI_Reqs_Final

Thanks for pointing this out. I will look into this; something may
have gone awry in the website transition.

>
> So let me expand on what my requirements are:
> * support the installation of multiple instances of OS.
> If each slice (c0t0d0s0, c0t0d0s3, etc) is its own ZFS root volume
> and each one has its own install of OS, that is fine. Being able
> to preserve slices is important. With jumpstart, I can specify
> which slices to preserve and where to mount specific UFS slices.
> Preservation of this type of feature is advantageous.

Preservation of existing slices is something AI already supports. We
don't yet have support for specifying where to mount preserved UFS
slices in the newly installed instance, though. How important is it
that this be specified as part of the install itself? It seems it
could easily be a post-installation step.
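
For illustration, one way a preserved UFS slice could be picked up
after the first boot is with a vfstab entry; this is just a sketch,
with the slice and mount point borrowed from your profile below:

  # mount dev          fsck dev            mount point  FS   pass  boot  opts
  /dev/dsk/c0t0d0s5    /dev/rdsk/c0t0d0s5  /export      ufs  2     yes   -

That, or an equivalent mount in a first-boot script, would seem to
cover the mount-point part without it having to be part of the
install itself.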

>
> * if I'm going to have multiple ZFS pools per disk (because OS can
> only be installed into a ZFS volume and I want to initialise the
> volume on each install) then I need to be able to give each ZFS
> pool its own name. Requiring them to all be "root" or something
> like that will get in the way. 

I agree that AI should allow you to specify the name of the pool.

I think support for this is going to fall out of implementing multiple
disk (and pool) support for AI (bug 6185). Feel free to file a separate
specific request for this now if you'd like.
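
For reference, the end result should be equivalent to what you get
when you create a named pool by hand today, something like (pool and
slice names are just examples):

  # zpool create aroot c0t0d0s0
  # zpool create broot c0t0d0s3

AI would effectively be doing that on your behalf, presumably driven
by whatever names get specified in the manifest.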

> Using multiple ZFS filesystems
> inside a single pool is a step backwards.

Can you elaborate a bit on why this is the case for you?

>
> * in addition to the above, it is desirable to be able to share a
> filesystem slice amongst all OS installations. For example, if I
> have build#120 on s0, build#130 on s3, I might have STC checked
> out in s4, meaning I can easily run the same test suite for both - or
> if I need to make changes to the test suite, I am always using the
> test suite from "/testing" but / changes between builds 120 and 130.
> Checking the test suite out multiple times (once for each install)
> is annoying because I then have to keep test suite changes in sync
> when it comes to verification of those changes and bugs being fixed
> (or present.)

This seems equivalent to having a couple of BEs on your system plus
a shared area. I think that with the addition of support for multiple
disks/pools, where each of those three components could be a separate
disk or pool, this requirement would be met.

But can you describe why it wouldn't be desirable to have the three
components in the same pool?
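
To make sure we're picturing the same thing, here's a rough sketch of
the single-pool layout I have in mind (dataset names are just
examples):

  rpool/ROOT/build120   mounted at /         (one BE)
  rpool/ROOT/build130   mounted at /         (the other BE)
  rpool/testing         mounted at /testing  (shared, visible from either BE)

If the worry is one install eating all the disk space, a quota or
reservation on each dataset would seem to give you the same isolation
the slices do today.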

>
> + these three requirements allow a single machine with a huge disk
> (200G, etc) to support multiple installations of OS, potentially
> of different versions, allowing me to switch between them at boot
> time for the purpose of testing a specific OS version. I do this
> today, with jumpstart, where I can have s0 being the build#n bits
> and s3 will be build#n+bfu bits.
> This is important for doing performance testing as the same hardware
> is used, as well as minimising work required to retest one version
> or the other. Isolating each install like this protects each one from
> others using too much disk space. Below I've included a sample
> "profile" file that I use for jumpstart installs onto SPARC hardware.
>
> * on the server side, I need to support each client being able to have
> its own configuration. So if there are 10 clients, AI needs to support
> each client installing 10 different versions of OS (worst case
> scenario.)
>
> * there are two types of files that need to be copied over: a static set
> (which come from a tar ball) and a more dynamic set that come "live"
> from the file system. The static set could easily be put into an IPS
> package and that package added to the manifest which describes the
> install. The live set, I'm not sure about. 

Can you elaborate a little bit more on what you mean by "live"?
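
For the static set, by the way, the contents of the tar ball could be
turned into a package fairly mechanically with pkgsend; a rough
sketch, where the repository URL, package name, and paths are all
placeholders:

  # eval `pkgsend -s http://pkgserver:10000 open site/testfiles@1.0`
  # pkgsend -s http://pkgserver:10000 add dir mode=0755 owner=root \
        group=bin path=/opt/testfiles
  # pkgsend -s http://pkgserver:10000 add file /tmp/unpacked/foo.conf \
        mode=0644 owner=root group=bin path=/opt/testfiles/foo.conf
  # pkgsend -s http://pkgserver:10000 close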

>
> * various commands are run from the finish script that manipulate the
> system prior to it rebooting for use. Most of this, I believe, can
> be put into the start method of the above IPS package that is installed
> and run once at first boot.
>
> Unless I'm mistaken, because there is no longer any equivalent of the
> jumpstart "finish", preparing a system for network install will now
> require a custom package for each system to be made. Whereas before I
> could use "pax" to copy files directly from the NFS partition being
> used to host the install, this is no longer possible and since that
> "pax" command brought over files like "/etc/hostname.bge0", 
> "/etc/inet/hosts",

I have to ask why you'd be doing this config here rather than via
the sysidcfg file with jumpstart?
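
For example, a static interface configuration in sysidcfg looks
something like this (interface name and addresses are placeholders):

  network_interface=bge0 {hostname=client1
      ip_address=192.168.1.10
      netmask=255.255.255.0
      default_route=192.168.1.1
      protocol_ipv6=no}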

For AI we do plan on providing system configuration support, which
will cover networking, including a static network configuration. But
yes, if you do your networking config outside of that, you run into
what you're describing above.


thanks,
-ethan

> etc, I cannot wait for NFS to become available after the first boot
> into the fresh install because interfaces are not likely to be
> configured correctly. Of course it would be much better to be able to
> run a script post-install and pre-reboot.
>
> Darren
>
> install_type initial_install
> system_type standalone
> cluster SUNWCall
> partitioning explicit
> filesys c0t0d0s0 existing /aroot preserve
> filesys c0t0d0s1 existing swap
> filesys c0t0d0s3 existing /
> filesys c0t0d0s4 existing /broot
> filesys c0t0d0s5 existing /export preserve
> filesys c0t1d0s0 existing /mroot preserve
> filesys c0t1d0s1 existing
> filesys c0t1d0s3 existing /nroot preserve
> filesys c0t1d0s4 existing /oroot preserve
> filesys c0t1d0s5 existing /export2 preserve
>
