On Mar 13, 2018, at 19:14, Rainer Müller wrote:
> On 2018-03-14 00:42, db wrote:
>> On 14 Mar 2018, at 00:22, Mojca Miklavec wrote:
>>> Because someone would need to write the code for an alternative CI
>> Wouldn't self-hosted GitLab CI be good enough?
> Are you going to sponsor a dedicated Mac server for GitLab CI?
> Travis CI is available at no cost and we have no funds to pay for anything.
I was not aware of the existence of GitLab CI, but I haven't done a survey of
CI systems, mainly because we already selected one many years ago: Buildbot.
The problem, to my mind, is not selecting which CI system to use, but that we
want a virtual machine template that the CI system can clone, boot, run a job
in, and then delete; or even multiple templates, one for each OS version we
want to test. I see now that GitLab CI
has support for doing that with both VirtualBox and Parallels.
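For reference, that feature is configured on the runner side. A rough sketch of
a GitLab Runner config.toml using the VirtualBox executor might look like this
(the name, URL, token, and template name are placeholders, not anything we
actually have set up):

```toml
# Sketch of a GitLab Runner using the VirtualBox executor.
# The runner clones base_name for each job and works in the clone.
[[runners]]
  name = "macos-virtualbox"
  url = "https://gitlab.example.com/"   # placeholder
  token = "RUNNER_TOKEN"                # placeholder
  executor = "virtualbox"
  [runners.virtualbox]
    base_name = "macos-10.13-template"  # VM template to clone per job
    disable_snapshots = false
```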
That's good, and I didn't know that it had that feature. The Xserves running
the Buildbot might have room for a couple additional virtual machines,
especially once the duplicate 10.6/10.7/10.8 VMs are removed when we switch
those users from libstdc++ to libc++. But our Buildbot VMs run on VMware ESXi,
so if I were to add builds for PRs to the system, I would of course want to
continue using VMware. It might be possible to add VMware support to GitLab
CI, but it is probably just as feasible to add it to our Buildbot
configuration. Personally, I would rather do that and keep working with the CI
system I'm already familiar with than learn a whole new one and have to
remain proficient with both. Adding this to our Buildbot config doesn't seem
outside the realm of possibility, but beyond being aware that VMware ESXi has a
command line interface by which VMs can be cloned, started, stopped, and
deleted, I haven't looked into it.
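To make the idea concrete, here is a small sketch of the command sequence such
a PR build job might issue. vim-cmd and vmkfstools are real ESXi command-line
tools, but the exact workflow below (clone the template's disk, register the
clone, power-cycle it, destroy it) is my assumption, not something I've tested;
the paths, names, and the $VMID placeholder are all hypothetical:

```python
# Hypothetical sketch: ESXi shell commands a PR build job might run.
# $VMID stands for the VM ID that "vim-cmd solo/registervm" returns.

def pr_build_commands(template_dir, clone_dir, clone_name):
    """Return the ESXi commands to clone a template VM, run a job, and clean up."""
    return [
        # Copy the template's disk into a fresh directory for this PR build.
        f"mkdir -p {clone_dir}",
        f"vmkfstools -i {template_dir}/template.vmdk {clone_dir}/{clone_name}.vmdk",
        # Register the clone and power it on; the CI job then runs inside it.
        f"vim-cmd solo/registervm {clone_dir}/{clone_name}.vmx",
        "vim-cmd vmsvc/power.on $VMID",
        # After the job: power off and delete the clone.
        "vim-cmd vmsvc/power.off $VMID",
        "vim-cmd vmsvc/destroy $VMID",
    ]

for cmd in pr_build_commands("/vmfs/volumes/ds1/macos-10.13-template",
                             "/vmfs/volumes/ds1/pr-build", "pr-build"):
    print(cmd)
```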
We also have the problem of what exactly to put on the VM templates, and how to
keep them updated. Certainly, a VM template would need to contain an
installation of macOS, and Xcode, and MacPorts, and Java, and the VMware tools,
and the necessary CI software such as the Buildbot worker. But one of the
reasons our Buildbot setup is so fast is that each Buildbot worker keeps
previously-built ports installed (but inactive). So when someone commits a
change to the demeter port, for example, the Buildbot does not have to first
build all of demeter's 430 recursive dependencies; it just has to activate
them. If we instead clone a VM template that doesn't have any ports installed
yet, it will take a lot longer to first install all those dependencies. Best
case, all the dependencies are distributable, were already built on the
post-commit Buildbot workers and uploaded to the packages server, and so they
just need to be downloaded and extracted. Worst case, they aren't distributable
and have to be built from source.
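The best/worst case split above can be modeled in a few lines. This is a toy
illustration, not MacPorts code, and the example port names and their
classifications are made up:

```python
# Toy model of the cases described above: for each dependency, a PR builder
# either downloads and activates an existing binary archive (fast), or must
# build the port from source (slow).

def plan_dependency_installs(deps, distributable, on_packages_server):
    """Classify each dependency as a fast archive install or a slow source build.

    deps: list of port names
    distributable: ports whose licenses permit binary distribution
    on_packages_server: ports already built and uploaded by the workers
    """
    plan = {}
    for dep in deps:
        if dep in distributable and dep in on_packages_server:
            plan[dep] = "fetch archive"      # best case: download + extract
        else:
            plan[dep] = "build from source"  # worst case
    return plan

# Hypothetical dependencies of some port; names and flags are invented.
plan = plan_dependency_installs(
    deps=["dep-a", "dep-b", "dep-c"],
    distributable={"dep-a", "dep-b"},
    on_packages_server={"dep-a"},
)
print(plan)
```

With a private server holding the non-distributable archives (the idea below), "distributable" would stop being a condition and more dependencies would fall into the fast path.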
An idea that's been suggested before, which sounds good to me but which has not
been implemented yet, is that in addition to uploading distributable archives
to the public packages server like we already do, we should upload any
non-distributable archives to a private server that only Travis or this new
hypothetical PR build system can access.
We could pre-populate the VM templates with lots of installed but inactive
ports, but in addition to making the templates and their clone(s) take up more
disk space, which is at a premium, the installed ports would quickly become
outdated as port updates are committed. We would need a mechanism for updating
the ports installed on the templates fairly frequently. I like the private
packages server idea better.
Even if we don't want to keep any ports installed on the VM templates, we still
need to update them from time to time, for example to update the OS, Xcode,
Java, etc. With the post-commit Buildbot workers, I do that manually when I
notice an update is needed and the Buildbot is otherwise idle, and that has
been sufficient so far.