Hi, Jared
I'm basically fine with the assumptions and idea, and would be up for
hosting a build machine ... I have a few random comments to throw in:
- Another way of thinking about the current setup is that it produces
3 different types of "output":
1. Test/build results (as viewable on the tinderbox status page)
2. Distribution packages (.deb, .dmg, .tar.gz, Windows
installer .exe): things that end up on the downloads pages when we do
single builds.
3. Tarballs of builds of the projects we rely on outside of Chandler:
These are created during "full builds", and developers typically copy
them onto the build server via the "Copy tarballs" page (linked from
the tinderbox status page). There, they are available via HTTP when
you do a "make install" inside the chandler project itself. (This
happens at some point during the scripts we run for #2).
Your proposal mostly deals with #1 (the getting rid thereof :D) and
#2. So far as I know, there currently isn't a one-stop "I'd like to
build a .deb on the current platform now" script -- the current one
relies on #3 having been done already. So, some investigation/build
system work is needed to make this work.
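Just to illustrate the kind of glue that's missing, a one-stop
wrapper might look roughly like this -- the tarball names, URL
layout, and make target below are all invented for illustration:

    # Hypothetical one-stop "build a package on this platform" script:
    # fetch the step-3 tarballs first, then run the step-2 packaging.
    import os, urllib2, subprocess

    TARBALL_BASE = "http://builds.osafoundation.org/external"  # assumed layout
    PREREQS = ["wxPython-x.y.tar.gz", "icu-x.y.tar.gz"]        # stand-in names

    def fetch(name, destdir="downloads"):
        if not os.path.isdir(destdir):
            os.makedirs(destdir)
        dest = os.path.join(destdir, name)
        if not os.path.exists(dest):       # reuse a cached copy if present
            data = urllib2.urlopen("%s/%s" % (TARBALL_BASE, name)).read()
            open(dest, "wb").write(data)
        return dest

    for name in PREREQS:
        fetch(name)

    # With the prerequisites in place, the existing packaging step can run.
    subprocess.check_call(["make", "install", "package"])  # stand-in target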
- In an ideal world (i.e. one in which setuptools was around when the
build system was built ;) all the projects in #3 would be buildable as
python eggs, and then we'd rely on pypi.org + easy_install/setuptools
to create #3. This would relieve us of the practical burden of storing
(and allowing upload of) binaries for many platforms on
builds.osafoundation.org. I'm not sure if this is really a useful
streamlining of the build process at this point, though. It certainly
would be handy in cases where we're relying on unpatched external
projects and don't need to be building the binary ourselves.
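For what it's worth, packaging one of those projects as an egg is
mostly boilerplate. A minimal setup.py might look like this (the
project name, version, and extension sources are placeholders):

    # Minimal setup.py sketch for one of the #3 projects.
    from setuptools import setup, Extension

    setup(
        name="some-external-dep",       # placeholder name
        version="1.0",
        ext_modules=[Extension("somedep._core", ["src/core.c"])],
        zip_safe=False,                 # binary eggs usually can't run zipped
    )

Running "python setup.py bdist_egg" on each platform then produces a
platform-tagged .egg, and easy_install picks the right one from PyPI
automatically.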
- If I remember right, the build server does cache the source of some
external projects, rather than svn export/checkout those projects
directly. I believe this was done so as not to have our tinderboxes
depend too much on servers other than our own, but in the "slow build"
approach this could probably be done away with.
- As a side note, on the subject of external dependencies and
unpatched external projects, a while back Heikki (and others) had a
look at what we would need to do to get Chandler into the Hardy Heron
Ubuntu repositories. Heikki's writeup can be found at:
http://chandlerproject.org/Projects/UbuntuHardyHeronChandler
- As another side note, Mac OS X supports cross-compiling, i.e. on my
current Leopard Intel machine I can build executables for Mac OS X
10.3, 10.4, 10.5 (as well as Intel/PPC). However, tweaking our
Makefiles (and the Makefiles of the projects we build in #3) is
probably a lot of work. It might be worth spending a day or two
looking into this, though.
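The cross-compile machinery itself is just compiler flags; the trick
would be getting our Makefiles to pass them through. Something along
these lines (the "binaries" target is a stand-in) drives a
10.4-targeted universal build from Leopard:

    # Sketch of kicking off a 10.4-targeted Intel/PPC universal build;
    # assumes the Makefiles honor CFLAGS/LDFLAGS from the environment.
    import os, subprocess

    env = dict(os.environ)
    env["MACOSX_DEPLOYMENT_TARGET"] = "10.4"
    flags = "-isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch i386 -arch ppc"
    env["CFLAGS"] = flags
    env["LDFLAGS"] = flags

    subprocess.check_call(["make", "binaries"], env=env)  # stand-in target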
--Grant
On 16 Jul, 2008, at 18:34, Jared Rhine wrote:
We know the time is soon approaching where we'll need to close the
room where all the OSAF Chandler Desktop tinderbox/build-boxes are
housed. Sniff.
This will have a huge impact on our ability to build Chandler
Desktop. We need a new scheme.
I'm hoping to give us enough time to get through the 1.0 release and
maybe a bug-fix release beyond that before the tboxes get shut off.
There's not currently a hard date by which we need to shut off the
tinderbox, but we should be kind to our hosts and work towards this
shutdown. Perhaps end of August is a realistic shutdown time.
How could we work towards a realistic post-tbox build scheme for
Chandler Desktop?
A brief IRC discussion today leads to this basic proposal:
* Build official Chandler Desktop releases manually on a Mac laptop
running Parallels and hosted at a developer's house
Specifically:
- Find/allocate a MacBook Pro laptop, with Parallels VM software
- Configure the laptop to be able to build Mac Intel releases
- Install two VMs; one for Windows and one for Ubuntu
- Have new releases (and milestones/RCs) kicked off manually by the
machine owner on all three platforms
This idea is based around these assumptions:
- It's ok to build releases much more slowly. Slow-and-steady
builds off a single machine are better than no builds at all.
- It's ok to kill a couple of the build types -- debug builds and
PPC builds in particular.
- Any volunteers who provide additional platforms, dedicated
machines, or labor would be happily incorporated somehow into the
Desktop build process. We'd always support pointing to "contrib"
builds if any get created, and contributed builds could easily be
made "official".
- Continuous integration testing goes away. No more official OSAF
tbox, unless someone volunteers on one or more platforms.
- We shouldn't invest a ton of labor/capital into preparing a tbox/
build cluster replacement for the old codebase. It may thrive, but
most developers are skeptical. It's most important to have *a* way
to create new releases of the old codebase as opposed to having a
*great* way to create new releases.
- We're ok with some problems if the volunteer's DSL gets knocked
offline, or their cat pees on the laptop under the desk, or
whatever. Heck, the box doesn't even need to be on except for when
it's building software.
- We're preferring options which reuse our existing hardware and
software assets.
- We agree there's no realistic way to move the current tbox cluster
elsewhere. The large masses of full-tower desktop machines, iMacs,
etc., are enough of a burden to volunteer hosts that we're not going
to spend much time trying to preserve that cluster.
There are some additional refinements we can envision:
- Host the laptop in a DMZ at someone's house, for network security.
- Arrange for multiple people to have access to the box and VMs remotely.
- Provide VNC-based login.
- Automate the build/upload process significantly. Have the VMs
poll for instructions on kicking off a new build -- either HTTP
polling or POP mailbox pulling (a la the way the tbox communicates
right now). See the sketch after this list.
- Give the machine a static IP or arrange a dynamic DNS name via
dyndns.org or similar.
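For the polling idea, something as simple as the following would do.
This is just a sketch; the trigger URL, the trigger format, and the
build script name are invented for illustration:

    # Minimal HTTP-polling loop for the "VM pulls its instructions"
    # idea; POLL_URL and the trigger format are hypothetical.
    import time, urllib2, subprocess

    POLL_URL = "http://builds.osafoundation.org/trigger.txt"  # hypothetical
    seen = None

    while True:
        try:
            trigger = urllib2.urlopen(POLL_URL).read().strip()
        except IOError:
            trigger = None              # server unreachable; retry later
        if trigger and trigger != seen:
            seen = trigger
            # trigger could be e.g. an svn revision or tag name to build
            subprocess.call(["./do_build.sh", trigger])  # hypothetical script
        time.sleep(600)                 # poll every ten minutes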
If we go with something like this approach, there are some basic
work items we could kick off:
- Identify/allocate a MacBook Pro laptop for the build machine
- Identify a willing volunteer to host the machine and preferably be
willing to kick off builds
- Configure Mac layer to host Mac Intel builds
- Create two VMs (Windows + Ubuntu) and get them ready to build
Chandler Desktop instances
- Start documenting how builds on this machine would be produced
across all platforms, as well as tricky issues like uploading new
plugin versions, etc.
It'd be great to get some creative alternatives to the above, or to
hear multiple offers of hardware/hosting/etc. The above is
hopefully a realistic, small-scope approach that gives us at least
one post-tbox-cluster way to produce new releases of the old
codebase. Note, we still have a "how to build Desktop"
problem with any new codebases too.