On Wed, 1 Dec 2010, Douglas Hubler wrote:
> On Wed, Dec 1, 2010 at 2:53 AM, R P Herrold <[email protected]> wrote:
>> Usually a local 'lookaside cache' is added as a first step
>> when buildsystem speed gets important.
>
> as the first step, mock v0.6 that comes w/centos didn't have a lot of
> the caching plugins, so i pulled a newer version from rpmforge.
The CentOS approach has intentionally not tampered with an
approach that works for the project; the Fedora folks, of
course, like churn, tinkering, and shifting APIs.
I don't know that there is a good solution here -- I run many
boxes that started life in the CentOS series, but add local
convenience archives to avoid stale build-environment tools.
Of the 2277 RPMs on my lead workstation, 1099 are my own
backports of selected, 'sufficiently stable' packages.
I don't have a good suggestion here, as the 'correct' answer
is: it depends on your goals. Developers need (want) fresh
tools and bug fixes; product-deployment folks need uniformity
and long life so they may profit-maximize. The LSB seeks to
address that, but having been on that panel for a long time, I
can say that no commercial vendor really wants to accept that
hit, as a matter of retaining a supposed competitive
advantage; no-one really wants the pain it brings in
day-to-day deployment, finding it easier to deploy a separate
build infrastructure for each nominally 'supported' base
platform.
> I struggle with how to make the output from the last rpm build output
> available as a yum repository to the build of the next rpm. By making
> a yum repo available, the build can pull in what it needs
> automatically reading the BuildRequires tags. I though it would make
> sense to mount a local dir and make that available to the chroot, but
> I'm not sure. Do you have any suggestions?
You raise two issues -- positioning of what I call 'build
fruit', and consuming the same -- and there is a 'createrepo'
step in there, of course, before it may be used.
For the last three working days, I was fighting that
four-phase cycle:
set up build environment,
build,
harvest, and
add to the repository
on a complex BR solution. I do 'positioning' with NFS in some
cases. I am still not happy when I am using an 'intermediate
build fruit' store on a remote network, because that last
phase takes over 4 wall-clock minutes to refresh itself (make
it coherent, run a unit-test suite to audit for known
problems, and run createrepo on it). I could do some speedups
and decoupling with my local problem and buildsystem to
parallelize early leaf-node production, but that is not
applicable with the sipX code ...
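The four-phase cycle above can be sketched as one shell
function, assuming a reasonably recent mock and createrepo are
installed; the 'epel-5-x86_64' config name and the result and
repo paths are hypothetical stand-ins for a real site layout:

```shell
# build_one: one pass of the four-phase cycle for a single source RPM.
# Assumes mock and createrepo exist on the path; the config name and
# the default directories below are hypothetical.
build_one() {
    srpm=$1
    resultdir=${2:-/var/lib/mock/epel-5-x86_64/result}
    repo=${3:-/srv/repos/intermediate}

    mock -r epel-5-x86_64 --rebuild "$srpm"  # phases 1+2: set up chroot, build
    cp "$resultdir"/*.rpm "$repo"/           # phase 3: harvest the build fruit
    createrepo "$repo"                       # phase 4: refresh repo metadata
}
```

When the repo directory sits on NFS, that final createrepo run
is where the wall-clock minutes mentioned above are spent.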
I usually run a small, local, 'for the complex build batch'
custom package archive, usually called 'intermediate' (sorry
about the 'forward reference' a moment ago), which gets
updated incrementally to keep the createrepo generation time
down, and I let the NEVR computation ordered by yum from
librpm 'solve' which of several possible candidates to
install or update. As you pay that dependency-solving,
transaction-set-ordering, and conflict-check computation load
over and over while building the next candidate's chroot,
there is a serialization blocker on every build.
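On the question of making the last build's output visible to
the next build's chroot: one way is to publish the
'intermediate' archive as a plain file:// yum repository. A
minimal sketch, with hypothetical paths (the .repo file
defaults to the current directory so the sketch can run
unprivileged; a real deployment would point it at the chroot's
yum configuration):

```shell
# Publish a local 'intermediate' archive as a yum repo that the next
# build's chroot can draw BuildRequires from.  REPO_DIR and REPO_FILE
# are hypothetical defaults, not fixed paths.
REPO_DIR=${REPO_DIR:-/srv/repos/intermediate}
REPO_FILE=${REPO_FILE:-./intermediate.repo}

cat > "$REPO_FILE" <<EOF
[intermediate]
name=Local intermediate build fruit
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF
```

With mock, the same stanza can be appended to the yum
configuration block in its site config so the chroot's yum
sees the intermediate packages; createrepo still has to be
re-run on the archive after each harvest.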
One solution to speed that up is to have a complete 'prebuilt'
and periodically updated build chroot master image to clone
and reuse; I don't think mock has gone that way, although we
did it in the cAos pre-CentOS buildsystem discovery days ...
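A minimal sketch of that clone-and-reuse idea, assuming a
plain tar archive is enough to capture the chroot (paths are
hypothetical, and a real chroot would also need device nodes
and bind mounts handled):

```shell
# snapshot_chroot: capture a fully-populated build chroot once.
snapshot_chroot() {
    tar -C "$1" -cf "$2" .      # $1 = chroot dir, $2 = master tarball
}

# clone_chroot: unpack the master into a fresh directory per build,
# skipping the repeated dependency solving and package installs.
clone_chroot() {
    mkdir -p "$2" && tar -C "$2" -xf "$1"  # $1 = tarball, $2 = new chroot
}
```

The snapshot is paid for once; each build then starts from an
unpack instead of a full yum transaction.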
Another way to proceed, which I am experimenting with, is
cloning VMs from a master image over and over again for each
new build, avoiding the chroot step entirely. I do not have
good scripted control of this ... yet
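The VM variant can lean on copy-on-write disk images; a sketch
assuming qemu-img is available and a hypothetical
pre-populated master.qcow2 exists (newer qemu-img versions
also want the backing format named with -F):

```shell
# clone_build_vm: create a throwaway copy-on-write disk backed by the
# master image; the clone is booted, used for one build, and discarded.
# Both file names are caller-supplied, hypothetical paths.
clone_build_vm() {
    master=$1
    clone=$2
    qemu-img create -f qcow2 -b "$master" -F qcow2 "$clone"
}
```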
>> buildroot even faster. [We use a 'distcc' farm for compiles on
>> a different product, at -j8, locally cutting times by a factor
> great, I can see this can scale w/the right setup.
.. at the expense of making sure the build libraries and
compilers all around are in sync. TANSTAAFL ... there never
is ;)
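The distcc setup mentioned above amounts to pointing a
parallel make at a host list; the builder host names below are
hypothetical, and every host must carry the same compiler and
library versions or the builds diverge:

```shell
# distcc_build: fan an 8-way parallel compile out to a small farm.
# builder1..builder3 are hypothetical host names; localhost is kept
# in the list so link steps and non-distributable work stay local.
distcc_build() {
    DISTCC_HOSTS="localhost builder1 builder2 builder3" \
        make -j8 CC="distcc gcc"
}
```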
-- Russ herrold
_______________________________________________
sipx-dev mailing list
[email protected]
List Archive: http://list.sipfoundry.org/archive/sipx-dev/