Hello all,

Colin Walters [2013-04-28 19:26 -0400]:
> But note that if we're talking about black box testing which is what
> most of GNOME's tests are, the "II" case is effectively a superset of
> all of the other cases; it's the most flexible and powerful.

Agreed, but at the cost of also being the most "expensive" to use.

> You're talking about this "II" case as something "daily".  But I've
> optimized the gnome-ostree build system now to the point where the build
> server consistently gets to running installed tests in qemu 3-4 minutes
> (sometimes less) after a git commit.   This is running in and testing
> the real tree that users will download; it's not a weird partially
> assembled pseudo-environment like jhbuild or a mock/pbuilder root.
> 
> This isn't hypothetical - while we were at the GTK+ hackfest this week
> the gnome-ostree system detected that
> https://git.gnome.org/browse/glib/commit/?id=ddb0ce14215cd62c7a2497d6cf9f2ea63c40ebb5
> broke gnome-shell in minutes.
> 
> And this number is only going to go down; with various optimizations,
> I'm pretty sure I could get a case like a git commit to gnome-shell that
> only modifies a .js file booting within 30 seconds of the push
> notification of the commit.  For example, by reusing a cached copy of
> the previous build, so it's just a matter of "make && make install".
> 
> At that speed, a lot of the other test forms become superfluous.

That turnaround speed is indeed great for doing CI. I agree that
this is the right thing for finding regressions after they happen
(like the GLib commit above). But that's not the normal TDD mode of
developing a new feature or fixing a bug: you want to write a test,
then implement the feature or fix the bug with ideally zero overhead
for installing, building packages/snapshots, and booting VMs; and
most importantly, you don't want to commit and push until everything
succeeds again.
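
To make this concrete, the loop I have in mind looks roughly like
this for an autotools project (the test and file names are made up
for illustration; automake lets you override TESTS on the command
line to run a single test):

  $EDITOR tests/test-frobnicator.c    # write the (failing) test first
  make check TESTS=test-frobnicator   # build and run just this test,
                                      # uninstalled; the libtool
                                      # wrappers point it at the
                                      # just-built libraries
  # ...fix src/frobnicator.c, re-run until it passes, then...
  git commit -a && git push

No installation, package build, or VM boot anywhere in that loop.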

> At the moment in GNOME we have a lot of BB, and I'm pushing hard for II.
> Per smcv's comment, I believe projects like Debian could effectively
> reuse II tests in autopkgtest by simply ignoring the source tree aspect
> and running the installed binaries.

Yes, absolutely. The II bit will make things like autopkgtest
dramatically easier, faster, and more reliable.
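
For illustration, hooking such installed tests into autopkgtest
could be as simple as this (the runner name and paths are my
assumption of how the proposal would materialize, not something
that exists today):

  # debian/tests/control
  Tests: installed-tests
  Depends: @, gnome-desktop-testing

  # debian/tests/installed-tests
  #!/bin/sh
  set -e
  # run the suite the package ships under /usr/share/installed-tests/,
  # completely ignoring the source tree
  exec gnome-desktop-testing-runner gnome-shell

The "Depends: @" bit is existing autopkgtest syntax for "the
binaries built from this source package".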

> > I fully agree, and I don't think there is much reason to drop
> > uninstalled tests in favor of installed ones; In most cases it should
> > be easy to support both. That's the model how development (should)
> > work, after all: You write a test for a new feature, then implement
> > it, and run "make check" (or "sudo make check") until it works; it
> > would be a step backwards to require installing the tests (and
> > clobbering your system) first.
> 
> But what if you had a build system that made it really easy and reasonably
> fast to go from "list of git repositories" to "bootable VM"?  I do, and
> it's changed my whole perspective on development and testing =)

For things like NM or systemd this is definitely a good thing, and
also for testing a complete system after making a fundamental change
in e.g. glib. But I consider it too much overhead for the ~20
build/test/fix iterations of a typical bug fix or new feature.

> Anyways, as far as an executive summary here:  Module maintainers
> can/should keep supporting BB at their discretion, but are you OK with
> the proposed II model?

Of course; sorry if that didn't come across, I really like the
proposal. I just wanted to point out that it is neither desirable
nor necessary to drop the "make check" (BB) case for this. "make
check" and running the tests in a VM should ideally invoke the very
same test suite, just with or without LD_LIBRARY_PATH and friends
set.
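
As a sketch of how that could look with automake (the
--enable-installed-tests switch and the directory layout are my
assumptions of how the proposal would be wired up):

  TESTS = test-frobnicator

  if ENABLE_INSTALLED_TESTS
  # install the very same binary for running in the booted VM
  insttestdir = $(libexecdir)/installed-tests/$(PACKAGE)
  insttest_PROGRAMS = test-frobnicator
  else
  # only built at "make check" time, run against the build tree
  check_PROGRAMS = test-frobnicator
  endif

In the uninstalled case the libtool wrappers (or an explicit
LD_LIBRARY_PATH) point the binary at the build tree; installed, it
just uses the system libraries. One test suite, two environments.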

Thanks!

Martin

-- 
Martin Pitt                        | http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)