> -----Original Message-----
> From: [email protected] [mailto:caiman-discuss-
> [email protected]] On Behalf Of Shawn Walker
> Sent: 11 March 2011 19:32
> To: Robert Milkowski
> Cc: [email protected]
> Subject: Re: [caiman-discuss] Trully hands-free installations
> 
> On 03/11/11 07:59 AM, Robert Milkowski wrote:
> ...
> > 3. pkg performance
> >
> >      The solaris.zlib downloads at about 100MB/s on a GbE network - good.
> >      However then pkg starts downloading packages and the network utilization
> >      varies between 0.5MB/s - 30MB/s with an average of less than a couple of MB/s.
> >      I guess the sporadic 15-30MB/s occurrences are for some large files, otherwise
> >      the performance is abysmal and it takes far too long to just transfer packages.
> >      Not to mention that the entire process is basically serialized and doesn't make
> >      much use of additional cores on a server. Is there a way for pkg to download
> >      multiple files at the same time? This could probably help a little bit...
> >      It doesn't have to be able to saturate a GbE link but doing less than 5% is far
> >      from being impressive.
> 
> Actually, pkg(1) makes 20 connections to a package server at a time for
> content, so it's only "serial" in the sense that one package is retrieved
> at a time.
> 
> However, pkg retrieves individual files for a package, not a giant blob.
> This does mean that transfer time may be slower than if entire packages
> were transferred at a time, but it greatly minimises the amount of bytes
> transferred because of variants, facets, and updates (since only the files
> that are changed are transferred for updates).

Why not fetch multiple packages at the same time as well?
Then perhaps there should be an image-install concept similar to flash
archives. This could greatly improve performance.
Or maybe pkg/depotd should be able to pre-compute images of defined package
sets, so that instead of transferring file-by-file an entire image would be
transferred.
Something like: create a meta package called server-core and then run 'pkg
compute-image server-core', which would create a tar-like archive on the
server. Then, if a client needs a full copy of the server-core meta package,
it could negotiate with the server and transfer the pre-computed image
instead of going file-by-file for each package. For ad-hoc package
install/upgrade it could still transfer using the current method. When an
image is created one should be able to specify what should be included in it
(which architectures/facets, etc.).
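A crude sketch of what the server-side step could amount to - everything here
is hypothetical, pkg has no such compute-image subcommand today, and plain
tar stands in for whatever archive format it would use:

```shell
# Stand-in repository content for a hypothetical server-core meta package:
repo=$(mktemp -d)
printf 'kernel bits\n' > "$repo/kernel"
printf 'libc bits\n'   > "$repo/libc"

# The hypothetical "compute-image" step: bundle the files the package set
# delivers into one archive (one per architecture/facet selection as needed):
tar -cf "$repo/server-core.image" -C "$repo" kernel libc

tar -tf "$repo/server-core.image"   # lists: kernel libc
```

A client needing the whole set would then download one stream and unpack it
locally; ad-hoc installs would keep the current per-file path.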
 
In enterprise installations it is usually more important to get a quick
initial OS install than a single package later on. And if making installs
faster required doubling the repository size, that would not really be an
issue. Upgrades are usually less of a problem, as the server is already up
and running, and bigger updates need to be performed on a cloned image
anyway, so it doesn't matter that much how fast they are. Initial
deployment/re-install time, however, usually matters much more.

> Another thing to consider is that if you are using pkg.depotd and want
> better scalability or performance, you could export the repository via an
> NFS share instead, or place an Apache reverse caching proxy in front of
> pkg.depotd.

Why would that make things faster? Shouldn't a single depotd be at least as
fast as exporting the same repository over NFS or putting Apache in front of
it?
And to be honest - I don't have to deploy proxy servers or resort to other
means to get decent (*much* better) performance when using Linux's Kickstart
to install an OS over the network. It just works. Frankly, AI+pkg should be
able to easily saturate GbE on modern x86 hardware out-of-the-box - if they
can't, then they are broken.


> As for use of additional cores, it wouldn't help much.  pkg(1) operations
> are mostly I/O bound, so multi-threading wouldn't help much (ignoring
> Python's limitations there thanks to the global interpreter lock).

Well, although I haven't really looked closely, I have noticed from time to
time that pkg saturates a single core at 100%, at which point that core
probably becomes the bottleneck.
Although you are probably right that overall most of the time is spent in
I/O.


> > 4. packages install/uninstall sections
> >
> >      In jumpstart if some packages were marked to be uninstalled they
> >      would never be installed in the first place. Currently AI+pkg installs all
> >      selected packages and then uninstalls packages marked so. Ideally all
> >      packages to be installed and uninstalled should be passed to pkg at the
> >      same time and pkg should come up with the final set of packages to install.
> 
> Indeed, the installer can do this using the '--reject' option to the pkg
> install subcommand, or the plan_install() reject_list if using the
> pkg.client.api.ImageInterface.


That is good to know, thanks.
I will file a bug/RFE for AI.
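For the record, the one-shot form Shawn describes would look something like
this (the package names are illustrative only, not from this thread; -n
makes it a dry run):

```
# Resolve install + reject in a single pkg operation:
pkg install -nv --reject system/unwanted-pkg group/system/server-core
```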


> ...
> > 6. /export/home
> >
> >        How can I prevent it from being created? I don't want to use autofs
> >        for /home directories, nor do I want /export, /export/home,
> >        /export/home/jack to be created. Unconfiguring it during the first
> >        boot via smf is rather silly.
> 
> You're really better off adapting to autofs.  It is the expected standard
> for home directory management in Solaris.  With that said, if you want to
> use it locally first comment out /home in /etc/auto_master and then restart
> the automounter ("svcadm restart autofs").

Well, different environments have different requirements and AI should be
flexible here and not enforce things.
There are environments with home directories served over NFS, others do it
over AFS, and yet others want them to be completely local.
Even if autofs is used, some environments deploy their own autofs service
and do not use the one provided by the vendor.
I now know how to unconfigure and disable autofs - I was rather asking how
to configure AI so it doesn't configure and enable autofs in the first
place.
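For anyone following along, the local-home workaround quoted above amounts
to this edit (a typical /etc/auto_master line is shown; your map names may
differ):

```
# /etc/auto_master: comment out the /home map so autofs leaves it alone
#/home          auto_home       -nobrowse

# then restart the automounter:
#   svcadm restart autofs
```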

Right now it takes much too long to fully install the OS over the network,
mostly due to pkg being slow but also due to AI trying to do too much, which
then needs to be unconfigured during first boot anyway.
Ideally I should be able to specify what I want and do not want AI to do.


> I'm not certain how you would customise that with the current installer.

Me neither.... (apart from doing it during a first boot) :(






_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
