Hello CouchDB Devs (+ Gavin from ASF Infra - BIG thanks for your work to
date!)

First, as promised earlier in the week, we have a new flotilla of
CouchDB CI Docker images waiting in the wings on Docker Hub to replace
our current Jenkins build agent images:

couchdbdev/ubuntu-bionic-erlang-20.3.8.22-1
couchdbdev/ubuntu-xenial-erlang-20.3.8.22-1
couchdbdev/arm64v8-debian-stretch-erlang-20.3.8.22-1
couchdbdev/ppc64le-debian-stretch-erlang-20.3.8.22-1
couchdbdev/arm64v8-debian-buster-erlang-20.3.8.22-1
couchdbdev/debian-stretch-erlang-20.3.8.22-1
couchdbdev/debian-buster-erlang-20.3.8.22-1
couchdbdev/centos-6-erlang-20.3.8.22-1
couchdbdev/centos-7-erlang-20.3.8.22-1
couchdbdev/centos-8-erlang-20.3.8.22-1
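
If you want to kick the tires on one of these locally, something like
the following should work (substitute any of the image names above):

    # pull one of the new images and open a shell inside it
    docker pull couchdbdev/debian-buster-erlang-20.3.8.22-1
    docker run -it --rm couchdbdev/debian-buster-erlang-20.3.8.22-1 /bin/bash

    # inside the container, Erlang should report OTP release 20
    erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'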

The extra platforms are Debian-only for the moment, because I didn't
feel like building all of the platforms, and because Debian isn't what
IBM cares most about nowadays. ;) Debian is also the base image for our
Docker container, so those images came first. (There's a problem with
Debian Buster and qemu when it comes to ppc64le; see
https://github.com/apache/couchdb-fauxton/issues/1234 for the details.
This blocked me from updating the Docker container to Buster during
this last revision.)

These can't be substituted for our current CI images until Fauxton gets
fixed and rebar.config.script gets updated; see
https://github.com/apache/couchdb-fauxton/pull/1233 for the patch.
(This is also why https://github.com/apache/couchdb-docker/pull/157 is
currently failing in Travis on the dev build.)

There's also been progress on Jenkins replacing Travis, and I wanted to
update everyone on that. There are a few things left before we can get
completely off of Travis:

* Install Jenkins on couchdb-vm2 + set up CouchDB-dedicated build agents

  This had been on hold until the ASF sorted out their approach to
  multi-master Jenkins machines. I've just sat through the demo of
  CloudBees Core, which the ASF hopes to bring in to manage lots of
  Jenkins masters simply. The good news is that each of the Jenkins
  masters it manages is just plain ol' vanilla Jenkins - so we should
  be able to proceed with setting up our own Jenkins instance now.

  I'm very sorry to everyone who's been suffering through subpar (!)
  Travis CI performance for months now; this was the thing holding us
  back, and we should now be able to move ahead quickly with our own
  Jenkins master + the IBM-donated workers.

* Add arm64v8 build agents. ARM has offered to donate, through AWS,
  2x a1 instances against which we can run our tests. To save on
  credits, it might be nice to write a first step in the job that uses
  AWS credentials and the aws-cli to spin those instances up, then spin
  them down again in the cleanup step, so we don't waste the donation
  (see the sketch after this list). I've used this approach in other
  Jenkins setups, and it's worked extremely well, though it adds about
  a minute of startup delay.

* Build a new kerl-based Docker image that can be used to emulate our
  current Travis setup (rough kerl steps are sketched after this list).
  This shouldn't be too hard to add to the couchdb-ci scripts, but
  since we want to support Erlang 20, 21, and 22, it'll take my desktop
  a few hours to crunch out the build and then upload it.

* Decide, as a group, how we're going to proceed with Jenkins jobs.
  We can change the PR-triggered job to build a Jenkinsfile replica of
  our current .travis.yml (Ubuntu Xenial only, 3 Erlangs), but if we
  replace the current Jenkinsfile with that, I'm afraid our release
  process will break again. One of the original motivating factors
  for moving to Jenkins was to ensure no one broke the release build
  process. I received support from Paul Davis, Robert Newson, and Adam
  Kocoloski on this approach when I last brought it up.

  Or, we could merge in the other Erlang tests with the current file,
  which makes each PR "fatter" (it'll be 11 parallel jobs instead of 9,
  if you count adding the ppc64le/arm64v8 jobs) at the cost of busier
  build agents.

  There are other possibilities too - curious to hear your thoughts.
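
To make the arm64v8 spin-up/spin-down idea above a bit more concrete,
here's the rough shape of the steps I have in mind. The instance IDs
and region are placeholders, and the AWS credentials would come from
the Jenkins credential store - treat this as a sketch, not a finished
job definition:

    # first step of the job: wake up the donated a1 instances
    # (instance IDs and region below are placeholders)
    export AWS_DEFAULT_REGION=us-east-1
    INSTANCE_IDS="i-0123456789abcdef0 i-0fedcba9876543210"
    aws ec2 start-instances --instance-ids $INSTANCE_IDS
    aws ec2 wait instance-running --instance-ids $INSTANCE_IDS

    # ... run the arm64v8 build/test stages against those agents ...

    # cleanup step (run even on failure): shut them back down so we
    # don't burn through the donated credits
    aws ec2 stop-instances --instance-ids $INSTANCE_IDS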
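
And for the kerl-based image, the heart of it would be something along
these lines - version numbers and install paths here are only examples,
and the real thing would live in the couchdb-ci scripts:

    # build and install each Erlang we want to test against
    # (repeat for 20.3.8.22, 21.x, and 22.x as needed)
    kerl update releases
    kerl build 22.0 22.0
    kerl install 22.0 /usr/local/kerl/22.0

    # a job then selects an Erlang by sourcing its activate script
    . /usr/local/kerl/22.0/activate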

Now, the golden goose. Why are we bothering with all of this, aside
from the fact that we have to wait more than an hour, on average, for
Travis these days? Well, one of the other major reasons is that with a
build master running on ASF infrastructure (and, thus, under the
control of an ASF committer), we can safely (and ASF-approvedly!) store
credentials for services like AWS, IBM Cloud, Docker, and even Bintray
in that CI infrastructure. With our own master, those credentials
aren't available to the general public, nor to any other Apache
project (save ASF Infra, who's always there to help).

That means we will be positioned to *automatically deploy binary
convenience packages and Docker images after a release* in the very
near future.
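
For instance, once Docker Hub credentials live in our own Jenkins
credential store, the post-release Docker publish boils down to
something like the following - the credential variable names and the
image tag are placeholders, and the real job would use Jenkins'
credentials binding:

    # log in using credentials injected by Jenkins (names are placeholders)
    echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USER" --password-stdin

    # build and push the convenience image for the released version
    docker build -t apache/couchdb:X.Y.Z .
    docker push apache/couchdb:X.Y.Z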

NOTE: the apache-couchdb-#.#.#.tar.gz file is *THE* official Apache
release, and must be cryptographically signed by a PMC member. It cannot
automatically be pushed in this fashion. We should, however, be able to
use a CI-built release tarball (from a special Jenkinsfile, presumably)
for group acceptance testing and manual signing/upload. (The fine print:
we've actually done this for some of the 2.x release cycle already!)
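
Just so the mechanics are clear, that manual step looks roughly like
this (X.Y.Z stands in for the real version, and the signature comes
from the PMC member's personal key):

    # sign the official tarball and produce a checksum to publish with it
    gpg --armor --detach-sign apache-couchdb-X.Y.Z.tar.gz
    sha256sum apache-couchdb-X.Y.Z.tar.gz > apache-couchdb-X.Y.Z.tar.gz.sha256

    # anyone can then verify the signature against the signer's public key
    gpg --verify apache-couchdb-X.Y.Z.tar.gz.asc apache-couchdb-X.Y.Z.tar.gz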

Finally, I'm hoping some other community members will take interest and
help read through, understand, and start taking up some of these tasks.
I've been doing release management for CouchDB for almost all of 2.x,
and probably will continue for 3.x, but I'd like to see more of a team
effort. I have a keen interest in ensuring that work remains a
*community* effort, especially because I fear the erosion of things like
cross-Linux-distro support, binary packages vs. Docker, and so on.
Please, if this interests you at all, speak up - I'll make time to
mentor you on the current process & build system.

I look forward to your thoughts and ideas.

-Joan "really, just a volunteer" Touzet
