>2.1.7
>
>I can't understand the lukewarm comparison of Solaris install
>performance with others. "It's the perception that we're slower." It's
>not a perception: it's reality. And the gap is *huge*. Experience in
>the late S9/S10 beta timeframe was that Solaris was slower than (say)
>RedHat by a factor of 5-10.
>
>I tested the "go-faster" package tools in S10 beta. Given that
>comparison with alternative systems indicates that there is a potential
>for a factor of 5-10 improvement, and comparing many of the pkg* tools
>with emulations using shell tools indicates a similar deficiency, the
>fact that there was no significant improvement - and in many cases a
>significant regression - was a huge disappointment.
>
>Of course, if Solaris was better packaged into a smaller number of
>packages, this would also have benefits.
The package database flatfile is a serious obstacle to performance.
The file is big and is written to disk one or more times for each
package installation (and flushed to stable storage and then renamed).
The package database I/O alone accounts for 70+% (i.e., the vast majority!)
of all I/O done during an upgrade or install.
For upgrades, this gets even worse in the presence of unbundled stuff.
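To make the cost concrete, here's a minimal C sketch of the
rewrite-whole-file/fsync/rename pattern described above (not the actual
pkgadd code; the temp file name and the entry format are only
illustrative). The cost is proportional to the size of the contents
file and is paid at least once per package:

    /*
     * Sketch only: copy the whole database, append one entry, flush
     * to stable storage, rename into place -- once per package.
     */
    #include <stdio.h>
    #include <unistd.h>

    static int
    rewrite_contents(const char *db, const char *tmp, const char *new_entry)
    {
        FILE *in = fopen(db, "r");
        FILE *out = fopen(tmp, "w");
        char buf[8192];
        size_t n;

        if (in == NULL || out == NULL) {
            if (in != NULL)
                (void) fclose(in);
            if (out != NULL)
                (void) fclose(out);
            return (-1);
        }

        /* Copy the entire existing database ... */
        while ((n = fread(buf, 1, sizeof (buf), in)) > 0)
            (void) fwrite(buf, 1, n, out);
        /* ... append the one new entry ... */
        (void) fputs(new_entry, out);

        /* ... flush the copy to stable storage and rename it into place. */
        (void) fflush(out);
        (void) fsync(fileno(out));
        (void) fclose(out);
        (void) fclose(in);
        return (rename(tmp, db));
    }

    int
    main(void)
    {
        /* One full copy + fsync of a multi-megabyte file, per package. */
        return (rewrite_contents("/var/sadm/install/contents",
            "/var/sadm/install/contents.tmp",
            "/usr/bin/example f none 0755 root bin 1234 5678 0 SUNWexample\n") != 0);
    }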
The experiment I did consisted of keeping the database in memory
and writing a delta log to the filesystem, while "mock package tools"
performed all updates by calling the daemon holding the database.
The difference, of course, was between seconds to minutes and
more than an hour (depending on the speed of the disk, etc). Note that some
of this was on ATA disks with write caches enabled, which seriously
increases performance at the cost of reliability.
I think such an implementation is possible without rewriting the package
interfaces, just by doing the magic under the hood.
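A rough sketch of the delta-log idea (illustrative only, not the code
used in the experiment; names are made up): the daemon keeps the parsed
database in memory and appends a small, fsync'ed record per package
operation instead of rewriting the whole flatfile.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int log_fd = -1;

    int
    delta_log_open(const char *path)
    {
        log_fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        return (log_fd < 0 ? -1 : 0);
    }

    /* Append one delta record; only these few bytes hit the disk. */
    int
    delta_log_append(const char *record)
    {
        size_t len = strlen(record);

        if (write(log_fd, record, len) != (ssize_t)len)
            return (-1);
        return (fsync(log_fd));
    }

    /*
     * Periodically (or on clean shutdown) the daemon writes the
     * in-memory database out once and truncates the log; after a
     * crash the log is replayed over the last full copy.
     */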
The contents file database server I wrote was special purpose; it
had a limited memory footprint (about twice the size of the database).
The one you referred to, which was tested in S10 beta, was SQL based and
took about 10 times as much memory (I've witnessed installs on 512
MB systems lasting many additional hours because the database server
was paging like mad).
After rewriting the contents file backend (too much seems to depend on
the file being there for now), things should be a lot better.
Other low-hanging fruit:
- use of gzip vs bzip2
bzip2 is *so* slow that it actually is a performance bottleneck even on
fast machines; gzip compresses about 10% less well but lets you read data
at DVD speeds, whereas bzip2 does not (see the rough throughput check below).
That brings up the issue of DVD installs and the lack of streaming; our install
should be able to stream the data, but currently we're hampered by the fact
that the packages are in some cases presented as many small files, which makes
DVD reading very slow.
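The gzip/bzip2 point is easy to check with a quick-and-dirty
decompression throughput test such as the one below (illustrative only;
the file names are placeholders; compile with -lz -lbz2):

    #include <stdio.h>
    #include <zlib.h>
    #include <bzlib.h>
    #include <sys/time.h>       /* gethrtime() */

    #define BUFSZ   (64 * 1024)

    int
    main(void)
    {
        static char buf[BUFSZ];
        long long bytes;
        hrtime_t t0, t1;
        gzFile gz = gzopen("payload.gz", "rb");         /* placeholder */
        BZFILE *bz = BZ2_bzopen("payload.bz2", "rb");   /* placeholder */
        int n;

        if (gz == NULL || bz == NULL)
            return (1);

        /* Time how fast each format can be decompressed and read. */
        bytes = 0;
        t0 = gethrtime();
        while ((n = gzread(gz, buf, BUFSZ)) > 0)
            bytes += n;
        t1 = gethrtime();
        (void) printf("gzip:  %.1f MB/s\n",
            bytes / ((t1 - t0) / 1e9) / 1e6);

        bytes = 0;
        t0 = gethrtime();
        while ((n = BZ2_bzread(bz, buf, BUFSZ)) > 0)
            bytes += n;
        t1 = gethrtime();
        (void) printf("bzip2: %.1f MB/s\n",
            bytes / ((t1 - t0) / 1e9) / 1e6);

        gzclose(gz);
        BZ2_bzclose(bz);
        return (0);
    }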
>There is another performance bottleneck, which is the SMF manifest
>import. This needs to be sped up or eliminated.
The SMF manifest import situation has somewhat improved in b3x; after
upgrading to a later snv build you'll see only the manifests that really
changed being reimported, rather than all of them. (b35->b36, e.g., imports
something like 6-10 manifests and not the 100+ you got when going from any
build before s31 to a later build.)
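What makes that possible is some form of change detection per manifest;
a sketch of that kind of check (the hash and bookkeeping SMF really uses
may differ) could look like:

    #include <stdio.h>
    #include <stdint.h>

    /* Hash the manifest file (64-bit FNV-1a, just for illustration). */
    static uint64_t
    file_hash(const char *path)
    {
        FILE *fp = fopen(path, "r");
        uint64_t h = 14695981039346656037ULL;  /* FNV-1a offset basis */
        int c;

        if (fp == NULL)
            return (0);
        while ((c = fgetc(fp)) != EOF) {
            h ^= (uint64_t)c;
            h *= 1099511628211ULL;             /* FNV-1a prime */
        }
        (void) fclose(fp);
        return (h);
    }

    /* Reimport only when the hash recorded at the last import changed. */
    int
    manifest_needs_import(const char *manifest, uint64_t stored_hash)
    {
        return (file_hash(manifest) != stored_hash);
    }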
This isn't to say that manifest import is not slow. The current
mechanism parses the manifest in svccfg and then hands it to the server
with many door calls; a single door call saying "here's the parsed manifest,
eat it" would likely fix that handsomely. It is something that needs
fixing.
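To illustrate the idea (the door path and payload format are made up
for this example; the real svccfg/svc.configd protocol differs), the
client side of such a call could be as simple as one door_call() with
the whole serialized manifest (link with -ldoor):

    #include <door.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Hand the whole parsed manifest to the server in one round trip. */
    int
    import_parsed_manifest(const char *serialized, size_t len)
    {
        door_arg_t arg;
        char resbuf[128];
        int fd = open("/var/run/manifest_import_door", O_RDONLY);

        if (fd < 0)
            return (-1);

        (void) memset(&arg, 0, sizeof (arg));
        arg.data_ptr = (char *)serialized;  /* the whole parsed manifest */
        arg.data_size = len;
        arg.rbuf = resbuf;                  /* room for the reply */
        arg.rsize = sizeof (resbuf);

        if (door_call(fd, &arg) != 0) {     /* one door call, not many */
            (void) close(fd);
            return (-1);
        }
        (void) close(fd);
        return (0);
    }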
Casper