Along with maxb and garyvdm I've been updating the bzr beta ppa for our 2.2 release. This is not absolutely the most fun thing I've ever done; in fact it's pretty tedious. Since we're supposed to be making tools that make packaging more productive and fun, we could either find some more things here to improve, or perhaps I'm just doing it wrong. This mail has more problems than solutions, but it's a start.
It's the kind of tedious work that doesn't require much creative input but is not very automated, has a bunch of snags that make it not trivially automatable, and involves long latency, so you can't just sit down, do it, and know you're done. It's about 10 minutes of actual thinking spread over a couple of days of spade work.

We have about ten packages to rebuild, and we want to do that across five distroseries (hardy, jaunty, karmic, lucid, maverick). So anything that happens once per package build adds up to quite a lot of work, and there's a fair amount of human-in-the-loop latency per package. Doing them all seems to be multiple days of work, even with some amount of automation. Ideally we would update on every release, but to be able to do that we have to scale better.

Most of what we do is just add a rebuild line to the debian changelog, push, and upload, though things occasionally go wrong per package. This should be very automatable.

It can be a bit hard to work out which packaging branch was used for the version in our ppa. Launchpad has a concept of series branches, but nothing that ties a ppa upload back to the branch it was built from. We can have a convention for plugins that we package, but it's just a convention, and because it's evolved over time it's not always consistent. For instance if Gary has uploaded bzr-gtk 0.99.0-1~hardy4 but it's failing, I can get the source for that package and try to fix it, but I can't easily work out which branch contains that source. (Well, I can guess it's one of the recently updated branches in <https://code.edge.launchpad.net/bzr-gtk>, but that's not great.)

Some external dependencies of bzr (like subvertpy, testtools, and subunit) are packaged into other ppas. To make the install experience better we copy all the dependency packages into our ppa. This works pretty well. It does introduce another possible level of inconsistency or confusion, though: not everything in our ppa will have a branch in the same place, and not everything is meant to be updated in the same way.
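To make the "add a rebuild line and upload" step above concrete, here's a minimal sketch of the version bookkeeping it involves. The helper names and the changelog layout are my own illustration, not existing tooling; the ~<series>N suffix convention matches what we already use (e.g. bzr-gtk 0.99.0-1~hardy4).

```python
import re
import textwrap

def backport_version(version, series, existing=()):
    """Return the next ~series backport version for a debian version.

    existing: versions already uploaded to the ppa for this package.
    """
    # Strip any previous ~seriesN suffix to get the base version.
    base = re.sub(r'~[a-z]+\d+$', '', version)
    n = 0
    for v in existing:
        m = re.match(re.escape(base) + r'~' + series + r'(\d+)$', v)
        if m:
            n = max(n, int(m.group(1)))
    return '%s~%s%d' % (base, series, n + 1)

def changelog_stanza(package, version, series, author):
    """Return a debian/changelog entry for a no-change rebuild."""
    return textwrap.dedent('''\
        %s (%s) %s; urgency=low

          * No-change rebuild for %s.

         -- %s
        ''') % (package, version, series, series, author)
```

A driver script would compute the version per series, prepend the stanza to debian/changelog, then push and upload with the usual tools.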
(bzr packages you should directly upload; subvertpy you should prefer to copy from somewhere else.)

Maverick's bzr has its copy of python-configobj cut out, but that library is not shipped on hardy. Dealing with things like this makes me wonder whether it's a good use of time to keep supporting many old platforms. I wonder if this is the most complicated ppa, or if people have posted their experiences of managing others elsewhere?

The lag seems to come in mostly through Soyuz delays: there's a delay before your package is accepted, then a delay that was running at several hours yesterday before it's actually built or rejected. I do realize that there's a lot of demand for Soyuz build machines and that people are working on improving scheduling and performance. But it still means there's a long period where you have mental work in progress and can't wholeheartedly move on to something else. If there's a mistake, either in your work or the scripts, you have to page in the state you had a while ago. The long lag is pretty unreasonable when bzr has barely any compilation steps and builds in well under a minute on a laptop.

One approach to the lag in Soyuz builds is to test-build every package locally before uploading. This seems like a bit of an unnecessary kludge, but perhaps it's the most sensible thing for now. Since we want to build on various different distroseries we have to use some kind of virtual environment. There are various options, including pbuilder, but none that I've found are really plug-and-play. I have a wrapper script, passed on by Robert, that gives it some better defaults. I think this doesn't yet take into account that some build dependencies might need to come from either the destination ppa or other ppas, but we could possibly add that.

It seems like we have to keep track of which builds have passed or failed by collecting a set of nearly-identical mails.
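The local test-build loop above could start as something like this. It only constructs the commands; pbuilder-dist (from ubuntu-dev-tools) and pbuilder's OTHERMIRROR mechanism for pulling build-deps from our ppas are the assumptions here — check the pbuilder docs before relying on the exact flags.

```python
def build_commands(dsc, series_list, extra_repos=()):
    """Return one local test-build command per target distroseries.

    dsc: path to the source package .dsc to build.
    extra_repos: apt repository URLs (e.g. our ppa) that build-deps
    may need to come from, since they aren't all in the archive.
    """
    cmds = []
    for series in series_list:
        cmd = ['pbuilder-dist', series, 'build', dsc]
        for repo in extra_repos:
            # Build-deps may live in the destination ppa or another one.
            cmd += ['--othermirror', 'deb %s %s main' % (repo, series)]
        cmds.append(cmd)
    return cmds
```

A wrapper would run each command with subprocess.check_call and stop at the first failure, so mistakes surface in minutes locally instead of hours later in Soyuz.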
There's a "latest updates" portlet in e.g. <https://edge.launchpad.net/~bzr/+archive/proposed>, but it only shows the last 5, which is not really enough (http://pad.lv/620903).

Perhaps we could standardize a way to do local test builds that is as close as possible to what Launchpad does, including how to get bzr-builddeb to use it for a test build before uploading.

Our process at the moment is to upload everything into <https://launchpad.net/~bzr/+archive/+proposed> before promoting it to the regular archive, so that we don't break dependencies between packages. (Uploading a bzr newer than the plugins support might make them uninstallable, and unlike the main archive(?) ppas don't themselves check for this.) I think the basic approach of using a staging archive is ok. We ought to be able to script something that does both the check and the promotion if it succeeds.

I discovered that you can copy binaries from maverick into lucid, assuming it's a pure-python package that doesn't need to be rebuilt. This makes things a little easier. We could add a tool to hydrazine to do it.

I have previously taken a stab at doing the dependency checks statically by looking at the package dependency information, but it was getting a bit hard. At the moment I'm inclined to have a schroot based on a tgz snapshot, and just actually try installing into that.

It's a bit dissatisfying that the automation we have done so far has been hacky and project-specific, and it won't really help the next team or person trying to do ppa backports/updates of some other set of related packages. I'm not sure where those scripts should even live. Perhaps some or even a lot of this could be addressed by giving bzr-builder the ability to build from a tag into a ppa (or perhaps it already does that), and then we could drive most of this by just creating and running the recipes.
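For what it's worth, the static dependency check mentioned above could start as small as this sketch: parse each staged package's Depends field and flag anything the staging ppa doesn't satisfy. The version comparison is deliberately naive (real code should use dpkg's ordering rules, e.g. via python-debian), and all names and data are illustrative.

```python
import re

def parse_depends(field):
    """Parse 'bzr (>= 2.2), python' into [(name, op, version), ...]."""
    deps = []
    for clause in field.split(','):
        m = re.match(r'\s*([\w.+-]+)\s*(?:\(([<>=]+)\s*([^)]+)\))?', clause)
        if m:
            deps.append((m.group(1), m.group(2), m.group(3)))
    return deps

def unsatisfied(depends_field, available):
    """Return deps not satisfied by available ({name: version}).

    Simplified: only '>=' is checked, with a naive numeric-tuple
    comparison rather than full debian version ordering.
    """
    def vtuple(v):
        return tuple(int(x) for x in re.findall(r'\d+', v))
    missing = []
    for name, op, version in parse_depends(depends_field):
        if name not in available:
            missing.append(name)
        elif op == '>=' and vtuple(available[name]) < vtuple(version):
            missing.append(name)
    return missing
```

The schroot approach sidesteps all of this complexity by letting apt do the real resolution, which is probably why it feels more promising; a promotion script could run either check and only copy packages over when it comes back clean.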
Thanks for persisting through my braindump (if you did :-),

-- 
Martin

-- 
ubuntu-distributed-devel mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-distributed-devel
