Hi Martin, I think you've identified many issues I've been struggling with as I worked on Python 2.7 support for Maverick.
One of the most frustrating by far is the buildd lag. I shamefully admit that I was responsible for DDoSing the build farm on several occasions when I started this journey, through my own bugs and misunderstandings. I'm being a good citizen again, but it's still clear that the build farm can't keep up with demand. This is good in the sense that the Soyuz folks have built a service that is really excellent and that people are eager to use! Julian also tells me that things are in the works to improve performance. I will send the Soyuz team many cases of e-beer when that work lands. :)

I just wanted to point out a few things I've developed along the way. They may be helpful, and/or point the way to better automation tools. I'm happy to collaborate with anybody who wants to improve this stuff.

On Aug 20, 2010, at 06:47 PM, Martin Pool wrote:

> One approach to the lag in soyuz builds is to test build every package
> locally, before uploading. This seems like a bit of an unnecessary
> kludge, but perhaps it's the most sensible thing for now.

This is what I do now. If it's a relatively fast-building package, it's a great way to test things out, even if you simultaneously upload to your PPA. (I generally only upload after a successful local build, unless it's a long-building package like Subversion.)

The best page I've found is here: https://wiki.ubuntu.com/SecurityTeam/BuildEnvironment

I also had a chance to sit down with the security team in Prague to get a better understanding of how they do things and what tools they use. I still don't currently use UMT (it doesn't exactly fit my workflow), but everything else on that page about building up chroots for sbuild is money. Once your chroots are set up, they are *very* easy to use (though bash history or aliases will help you remember which command uses -c and which uses -d ;).

> Since we want to build on various different distroseries we have to use some
> kind of virtual environment.
> There are various options including pbuilder,
> but none that I've found are really plug-and-play. I have a wrapper script
> passed on by Robert that gives it some better defaults. I think this doesn't
> yet take into account that some build dependencies might need to come from
> either the destination or other ppas, but we could possibly add that.

I also use various VMs to test things out on. I've blogged about that work here: http://www.wefearchange.org/2010/06/experimental-virtual-machines.html

Normally I don't do builds on the VMs, because there seem to be some pretty horrible intermittent disk I/O performance lags, though I haven't yet narrowed them down to general kvm/libvirt problems or COW overlay problems in the playground VMs I overlay on the real VM. The one nice thing about doing builds in the VMs, though, is that they make for a better debugging environment, since they are more persistent and easier to inspect than the sbuild environment. These days I use the chroots more and the VMs less, though the latter come in handy for testing package installs and doing post-build install testing.

> It seems like we have to keep track of which builds have passed or
> failed by collecting a set of nearly-identical mails. There's a
> "latest updates" portlet in eg
> <https://edge.launchpad.net/~bzr/+archive/proposed> but it only shows
> the last 5 which is not really enough (http://pad.lv/620903).

I have a set of scripts I'm using to manage my various PPAs here: https://code.edge.launchpad.net/~barry/+junk/pydeps

In particular, during steady state I use resync.py and status.py all the time. Because of the web UI timeouts, I find these invaluable for getting the status of the packages in a PPA and for requesting resyncs of outdated packages to their Maverick versions.
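Going back to the sbuild chroots for a moment, the basic workflow from that BuildEnvironment page looks roughly like this. This is a sketch from memory; the chroot name and the package file are just examples, so check the wiki page for the canonical invocations:

```shell
# Create a Maverick build chroot (run once; mk-sbuild comes from
# the ubuntu-dev-tools package).
mk-sbuild maverick

# Test-build a source package in that chroot.  Note: sbuild is the
# command that takes -d for the chroot/distribution name.
sbuild -d maverick-amd64 mypackage_1.0-1.dsc

# Get an interactive root shell inside the chroot for debugging.
# Note: schroot is the one that takes -c instead.
schroot -c maverick-amd64 -u root
```

Those last two are the -c vs -d pair I mentioned; a couple of bash aliases save you from having to remember which is which.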
Of course, my PPAs serve a different purpose than yours, and their dependency setup is kind of complicated (doko/toolchain -> pythoneers/toolchain2.7 -> pythoneers/py27stack4 (main packages) -> pythoneers/py27stack5 (universe packages)). The other downside is that something like resync.py requires an up-to-date local package inventory (apt-get update) to work properly. Improvements to these scripts are welcome, and I'm happy to move them to a better place than my +junk if other folks are interested in hacking on them.

> perhaps we could standardize a way to do local test builds in a way as
> close as possible to what Launchpad does, including how to get
> bzr-builddeb to use it for a test build before uploading.

+1 to that. I use 'bzr bd -S' all the time and 'bzr bd' occasionally. It's just occurred to me that 'bzr bd --builder=<magic>' could probably be used to do the build in the proper sbuild chroot, but it would be nice to be able to configure that persistently in a Bazaar config file, rather than having all the memory live in my bash history.

> It's a bit dissatisfying that the automation we have done so far has
> been a bit hacky and project specific and it won't really help the
> next team or person that's trying to do ppa backports/updates of some
> other set of related packages. I'm not sure where those scripts
> should even live.
>
> Perhaps some or even a lot of this could be addressed by giving
> bzr-builder the ability to build from a tag into a ppa (or perhaps it
> already does that), and then we can drive most of this by just
> creating and running the recipes.
>
> Thanks for persisting through my braindump (if you did :-),

Thank *you* for starting this thread. I continue to think that udd really is the awesome path forward, and that many of us are chipping away at various edges. We just need to bring some of these things together.

-Barry
--
ubuntu-distributed-devel mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-distributed-devel
