On 07/22/2013 03:04 PM, Jouni K. Seppänen wrote:
> The following message is a courtesy copy of an article
> that has been posted to gmane.comp.python.matplotlib.devel as well.
>
> Michael Droettboom <mdroe-pfb3ainihtehxe+lvdl...@public.gmane.org> writes:
>
>> I've started a MEP related to improving our continuous integration
>> system for matplotlib.
>>
>> https://github.com/matplotlib/matplotlib/wiki/Mep19
>>
>> Rather than deal too much with implementation at this point, I thought
>> it best to start by outlining our requirements. At this point, let's
>> just get everything we'd like in, and we can worry about prioritizing
>> things later.
>
> Testing all pull requests means that sandboxing must be taken
> seriously. Imagine a pull request that sends spam via email or web
> forms, or reads the buildslave password and embeds it in the output.
> I suppose Travis must handle this somehow, but if we're going to roll
> our own, this may need serious thinking.
This should be made explicit in the MEP, but I really hope not to roll
our own. I'm a developer, not a sysadmin -- I don't have the skills or
time to do this stuff effectively.

To address your question: both Travis and ShiningPanda (and I suspect
other hosted testing services) fire up temporary virtual machines for
each test run. By design, this virtual machine has no sensitive data on
it, and thus none to steal in this way. ShiningPanda lets the virtual
machine be customized upfront, and then cloned and thrown away on each
test run, and is therefore a little more powerful IMHO.

>
> One thing I would like is to have results from all test cases in a
> format that allows them to be compared across the git history and the
> build environments, to discover things like "the text tests are failing
> with FreeType version X on Python version Y". There's an XUnit XML
> plugin for nose, and at least Jenkins has a reporting plugin that can
> read that format.

Indeed. Testing on a larger matrix of dependencies is something I'd like
to do, and the results should be manageable by human beings ;)

>
>> I would particularly like feedback from others who have set up similar
>> things. I have some experience with ShiningPanda (a service based on
>> Jenkins), and Travis. We used buildbot in the past, but I have little
>> direct experience with it. Are there other obvious candidates or
>> approaches?
> I've used buildbot at work, but with a much smaller range of build
> environments. It takes some work to configure but at least the
> configuration file is Python, and build steps can run pretty much any
> code. The waterfall display you get with the default settings isn't very
> much, but e.g. the Chromium project has a useful-looking setup:
>
> https://chromium-build.appspot.com/p/chromium/console
>
> Other options include at least CircleCI (a paid service), but I have no
> experience with it.
>

I will add CircleCI to the "to consider" list.

Mike
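
[Editorial note: Jouni's suggestion of the XUnit XML plugin for nose points at one
concrete way to compare results across builds even without Jenkins' reporting
plugin. The following is a minimal sketch, not part of the MEP: it assumes the
per-environment XML files produced by `nosetests --with-xunit --xunit-file=...`
have been collected locally under hypothetical names such as
`nosetests-py27-ft245.xml`.]

```python
# Sketch only: summarize failed/errored tests from nose XUnit XML files,
# so the same test can be traced across build environments.
# The file-name pattern is an assumption, not a real convention.
import glob
import xml.etree.ElementTree as ET


def summarize(pattern="nosetests-*.xml"):
    """Return (build file, test id) pairs for every failed or errored test."""
    failures = []
    for path in glob.glob(pattern):
        suite = ET.parse(path).getroot()  # the <testsuite> root element
        for case in suite.findall("testcase"):
            # A <testcase> with a <failure> or <error> child did not pass.
            if case.find("failure") is not None or case.find("error") is not None:
                test_id = "%s.%s" % (case.get("classname"), case.get("name"))
                failures.append((path, test_id))
    return failures


if __name__ == "__main__":
    for build, test_id in summarize():
        print("%s: %s" % (build, test_id))
```

[Run over the XML files from a matrix of environments, a summary like this would
make it easier to spot, e.g., that the text tests fail only with a particular
FreeType/Python combination, as Jouni describes.]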
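
[Editorial note: for readers unfamiliar with buildbot, its master configuration
is indeed plain Python, which is what lets build steps run arbitrary code. Below
is a rough fragment in the style of the buildbot 0.8.x API; the builder name,
slave name, and commands are placeholders, not an actual matplotlib setup, and a
real master.cfg also needs slaves, schedulers, and status targets.]

```python
# Hypothetical fragment of a buildbot master.cfg -- illustration only.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.git import Git
from buildbot.steps.shell import ShellCommand
from buildbot.config import BuilderConfig

factory = BuildFactory()
# Check out the branch under test into a clean working directory.
factory.addStep(Git(repourl="git://github.com/matplotlib/matplotlib.git",
                    mode="full", method="clobber"))
# Steps are ordinary Python objects, so any command (or a custom step
# written in Python) can be added here.
factory.addStep(ShellCommand(command=["python", "setup.py", "build"]))
factory.addStep(ShellCommand(command=["python", "tests.py"]))

BuildmasterConfig = {
    "builders": [
        BuilderConfig(name="py27-linux",
                      slavenames=["example-slave"],
                      factory=factory),
    ],
}
```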