Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-20 Thread Martin Pitt
Jasper St. Pierre [2013-02-12 14:12 -0500]:
 The libXi patches are because Ubuntu ships a libXi package that's marked as
 if it's a new version, but doesn't contain the new XI stuff for some reason:
 
 http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/raring/libxi/raring/view/head:/debian/patches/revert-xi2.3.diff
 
 The easiest fix is to mark the package as an old version of libXi, which
 will cause mutter to understand it has the old libXi, and not attempt to
 use the new libXi features.

For the record, this has been fixed; mutter and its deps are building again.

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-19 Thread Simon McVittie
On 18/02/13 22:34, Martin Pitt wrote:
 Please note that there is no system D-BUS and no default session D-BUS
 running. If you need those, then the tests should start dbus-launch or
 use GTestDBus.

dbus-launch is not particularly suitable for regression tests: if you
use it, you have to kill the resulting dbus-daemon yourself when you're
finished with it. If not using GTestDBus, please use with-session-bus.sh
from telepathy-glib, or review
https://bugs.freedesktop.org/show_bug.cgi?id=39196 so I can add
dbus-run-session(1) to dbus.

smcv


Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-19 Thread Tristan Van Berkom
On Tue, Feb 19, 2013 at 10:13 PM, Simon McVittie
simon.mcvit...@collabora.co.uk wrote:
 On 18/02/13 22:34, Martin Pitt wrote:
 Please note that there is no system D-BUS and no default session D-BUS
 running. If you need those, then the tests should start dbus-launch or
 use GTestDBus.

 dbus-launch is not particularly suitable for regression tests: if you
 use it, you have to kill the resulting dbus-daemon yourself when you're
 finished with it. If not using GTestDBus, please use with-session-bus.sh
 from telepathy-glib, or review
 https://bugs.freedesktop.org/show_bug.cgi?id=39196 so I can add
 dbus-run-session(1) to dbus.

This is a very interesting topic (and brings to mind Colin's ideas about
installed tests).

Here are some ideas worth considering, IMHO...

  o Unit Tests with GTestDBus

 GTestDBus is IMO ideal for regression testing (or in-tree unit tests);
 I made a short write-up on this not long ago in my blog[0].

 The idea here is that you want to be absolutely sure that you
 are testing isolated modules and services that are still in-tree;
 ideally you want to test your services alone, without clouding
 your results with installed services. If your service relies on
 system-installed services, you would ideally want to control
 which specific installed services get to run in your controlled
 D-Bus environment sandbox.

 I.e. if you have services in /usr/share/dbus-1, you don't want those
 mixed into your sandboxed build path, colliding with services in
 /opt/devel/share/dbus-1/ (see the sketch after this list).

 You also probably want to control cases where fallbacks can be
 implemented: if your service/client can run without a complementary
 service, you want to test both the case where your client has access
 to an installed service and the case where a fallback is used instead.

 o Now that we are talking about a build server and building 'all-of-gnome',
it becomes interesting to know whether a service installed by some dependency
affects any modules which depend on that service in a negative way.

For this case (which is why I thought about Colin's ideas on installed tests),
it suddenly becomes interesting to have tests which do not use GTestDBus
in a controlled environment, but instead test the system as a whole,
running only services from ${build_prefix}/share/dbus-1 but certainly
avoiding anything from /usr/share/dbus-1/ (or at least properly prioritizing
the build prefix services over the system installed ones, if we can't avoid
using those).

 o If we have these two approaches to testing D-Bus services and the clients
which depend on them, we probably want to reuse the unit testing code as
much as possible.

Perhaps some additional features added to GTestDBus could allow us
to run the same tests in both contexts (i.e. the installed test context
with a full gnome environment vs. the isolated in-tree context).
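
For concreteness, a minimal sketch of such an isolated fixture (here via
PyGObject; the service directory and bus name are made-up examples, not
any real GNOME service):

# Sketch: run a test against a private session bus that only knows about
# in-tree service files; nothing from /usr/share/dbus-1 is visible.
from gi.repository import Gio

def run_isolated():
    bus = Gio.TestDBus.new(Gio.TestDBusFlags.NONE)
    bus.add_service_dir("tests/services")   # hypothetical in-tree dir
    bus.up()    # starts a private dbus-daemon and exports its address
    try:
        proxy = Gio.DBusProxy.new_for_bus_sync(
            Gio.BusType.SESSION, Gio.DBusProxyFlags.NONE, None,
            "org.example.UnderTest",         # hypothetical bus name
            "/org/example/UnderTest",
            "org.example.UnderTest", None)
        # ... exercise the service through the proxy here ...
    finally:
        bus.down()  # kills the private daemon, restores the environment

if __name__ == "__main__":
    run_isolated()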

Cheers,
  -Tristan

[0]: 
http://blogs.gnome.org/tvb/2012/12/20/isolated-unit-testing-of-d-bus-services/


Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-19 Thread Simon McVittie
On 19/02/13 13:54, Tristan Van Berkom wrote:
 On Tue, Feb 19, 2013 at 10:13 PM, Simon McVittie
 simon.mcvit...@collabora.co.uk wrote:
  GTestDBus is IMO ideal for regression testing (or in-tree unit tests),
  I made a short write-up on this not long ago in my blog[0].

GTestDBus combines two things: setting the D-Bus service activation path
to a private one, and running a new D-Bus session.

It does some of what self-contained regression tests should do, but by
no means all of it - I think regression tests should also consider
setting XDG_RUNTIME_DIR, XDG_DATA_*, XDG_CONFIG_*, DISPLAY (if used),
and HOME.

with-session-bus.sh (and dbus-run-session, if reviewed) specifically
only does the new D-Bus session part, although you can supply a D-Bus
configuration file via --config if you want to change the service
activation path too, and the rest can be done via env(1).
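
To illustrate, a test can give itself throwaway directories for those
variables before bringing up its private bus. A sketch only (the directory
names are arbitrary, and which variables matter depends on the test):

# Point HOME and the XDG variables at a scratch tree so nothing leaks in
# from the real user session, then start a private bus with GTestDBus.
import os, tempfile
from gi.repository import Gio

def isolated_environment():
    root = tempfile.mkdtemp(prefix="test-env-")
    for var, sub in [("HOME", "home"),
                     ("XDG_RUNTIME_DIR", "runtime"),
                     ("XDG_DATA_HOME", "data"),
                     ("XDG_CONFIG_HOME", "config"),
                     ("XDG_CACHE_HOME", "cache")]:
        path = os.path.join(root, sub)
        os.makedirs(path, mode=0o700)
        os.environ[var] = path
    # Hide the system-wide search paths as well.
    os.environ["XDG_DATA_DIRS"] = os.environ["XDG_DATA_HOME"]
    os.environ["XDG_CONFIG_DIRS"] = os.environ["XDG_CONFIG_HOME"]
    return root

if __name__ == "__main__":
    isolated_environment()
    bus = Gio.TestDBus.new(Gio.TestDBusFlags.NONE)
    bus.up()
    # ... run the actual tests here ...
    bus.down()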

 For this case (why I thought about Colin's ideas on installed tests), it
 suddenly becomes interesting to have tests which do not use GTestDBus
 in a controlled environment, but instead to test the system as a whole,
 running only services from ${build_prefix}/share/dbus-1 but certainly
 avoiding anything from /usr/share/dbus-1/ (or at least properly prioritizing
 the build prefix services over the system installed ones, if we can't avoid
 using those).

This should still use a new D-Bus session (and probably XDG_RUNTIME_DIR
and HOME) for the tests, but could set XDG_DATA_*, XDG_CONFIG_* to look
in ${build_prefix} before /usr.

 Perhaps some additional features added to GTestDBus could allow us
 to run the same tests in both contexts (i.e. the installed test context
 with a full gnome environment vs. the isolated in-tree context).

In my experience, regression tests for D-Bus components usually need
controllable mock versions of various D-Bus services: only a small
subset of these tests (those that test normal behaviour, and aren't
too picky about implementation details) will work with the real version
of those services.

S


Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-18 Thread Martin Pitt
Hello Travis,

Travis Reitter [2013-02-17 23:04 -0800]:
 (/home/ubuntu/gnome/checkout/folks/tests/eds/.libs/lt-persona-store-tests:100517):
  CRITICAL **: backend.vala:140: Error calling StartServiceByName for 
 org.gnome.evolution.dataserver.Sources1: 
 GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process 
 /home/ubuntu/gnome/packages/libexec/evolution-source-registry exited with 
 status 1
 
 But it also looks like it could be related to the structure of the
 buildbot environment. Is there any way I could debug this further?

This might or might not provide a first clue:

$ xvfb-run jhbuild run 
/home/ubuntu/gnome/packages/libexec/evolution-source-registry
Migrating mail accounts from GConf...
Migrating addressbook sources from GConf...
Migrating calendar sources from GConf...
Migrating task list sources from GConf...
Migrating memo list sources from GConf...
Registering EGoogleBackendFactory ('google')
Registering EOwncloudBackendFactory ('owncloud')
Registering EYahooBackendFactory ('yahoo')
Registering ECollectionBackendFactory ('none')
/home/ubuntu/.config/evolution/sources: Unable to find default local directory 
monitor type

... and then aborts.

Please note that there is no system D-BUS and no default session D-BUS
running. If you need those, then the tests should start dbus-launch or
use GTestDBus.

I'm happy to log into the build box and run debugging commands for you
or Matthew. Probably best to catch me on IRC? I'm pitti on both
GNOME (#gnome-hackers) and Freenode (#ubuntu-qa or #ubuntu-devel).

Thanks!

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-17 Thread Travis Reitter
On Fri, 2013-02-15 at 11:54 +0100, Martin Pitt wrote:
 Martin Pitt [2013-02-12  7:43 +0100]:
https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/
  
  Right now there are 151 successes (blue), 5 modules fail to build
  (red), and 4 modules build but fail in make check (yellow). It's
  been like that for a week or two now, so I'd say we are doing
  reasonably well for now.
 
 You may have seen the sudden large increase of test failures on Feb
 13, we are back at ~ 40 make check failures. It turns out that this
 isn't due to a regression in our test environment after all, as I
 initially suspected, but because of a bug fix. Until Feb 13, make
 check wasn't run on a build which previously succeeded already, due
 to an accidentally dropped --force jhbuild option. It has been put
 back.
 
 So the current failures have been there all along. Sorry for the
 confusion!

Speaking of test failures - is it possible to get more details? All I
can seem to turn up for the folks module test failures are:

make check failed for folks
Check log file for details

But I can't seem to find the corresponding log from here:

https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%
20Gnome/job/jhbuild-amd64-folks/lastSuccessfulBuild/testReport/junit/folks/test/make_check/

Also, most modules say 1 of 1 tests {failed, succeeded}. I guess this
is a side effect of make check being somewhat monolithic. Is there any
way modules can increase the granularity reported for tools like
Jenkins?

Thanks,
-Travis



Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-17 Thread Martin Pitt
Hello Travis,

Travis Reitter [2013-02-17 22:24 -0800]:
 Speaking of test failures - is it possible to get more details? All I
 can seem to turn up for the folks module test failures are:
 
 make check failed for folks
 Check log file for details
 
 But I can't seem to find the corresponding log from here:
 
 https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%
 20Gnome/job/jhbuild-amd64-folks/lastSuccessfulBuild/testReport/junit/folks/test/make_check/

On the main page, click on folks, which leads you to

https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/job/jhbuild-amd64-folks/

There, click on the topmost build in the left column, or use one of
the permalinks like last build:

  
https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/job/jhbuild-amd64-folks/lastBuild/

There you see the complete log file.

 Also, most modules say 1 of 1 tests {failed, succeeded}. I guess this
 is a side effect of make check being somewhat monolithic. Is there any
 way modules can increase the granularity reported for tools like
 Jenkins?

I think that ought to be possible by integrating gtester-report and
adding a new target such as make check-report or make gtester-report
(I think glib already does that), which we could then call from
jhbuild and (1) export as an artifact, and (2) present as test results
instead of the monolithic pass/fail one.
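
To illustrate the second part, a sketch of turning such a report into
per-test results (this assumes gtester's XML report uses <testcase>
elements with a <status result="..."> child, which is worth verifying
against a real report before relying on it):

# Summarize a gtester XML report as one line per test case, so a CI
# system can show individual results instead of one pass/fail bit.
import sys
import xml.etree.ElementTree as ET

def summarize(report_path):
    tree = ET.parse(report_path)
    results = []
    for case in tree.iter("testcase"):
        status = case.find("status")
        result = status.get("result") if status is not None else "unknown"
        results.append((case.get("path"), result))
    return results

if __name__ == "__main__":
    for path, result in summarize(sys.argv[1]):
        print("%-60s %s" % (path, result))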

Martin


-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: Announcement/RFC: jhbuild continuous integration testing -- mystery of recent flood of failures solved

2013-02-17 Thread Travis Reitter
On Mon, 2013-02-18 at 07:39 +0100, Martin Pitt wrote:
 Hello Travis,
 
 Travis Reitter [2013-02-17 22:24 -0800]:
  Speaking of test failures - is it possible to get more details? All I
  can seem to turn up for the folks module test failures are:
  
  make check failed for folks
  Check log file for details
  
  But I can't seem to find the corresponding log from here:
  
  https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%
  20Gnome/job/jhbuild-amd64-folks/lastSuccessfulBuild/testReport/junit/folks/test/make_check/
 
 On the main page, click on folks, which leads you to
 
 https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/job/jhbuild-amd64-folks/
 
 There, click on the topmost build in the left column, or use one of
 the permalinks like last build:
 
   
 https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/job/jhbuild-amd64-folks/lastBuild/
 
 There you see the complete log file.

This specific problem may be due to e-d-s (its process is segfaulting)
for some reason I've never seen before:

make[3]: Entering directory
`/home/ubuntu/gnome/checkout/folks/tests/eds'
/PersonaStoreTests/persona store tests: 
**
(/home/ubuntu/gnome/checkout/folks/tests/eds/.libs/lt-persona-store-tests:100517):
 CRITICAL **: backend.vala:140: Error calling StartServiceByName for 
org.gnome.evolution.dataserver.Sources1: 
GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process 
/home/ubuntu/gnome/packages/libexec/evolution-source-registry exited with 
status 1
Trace/breakpoint trap (core dumped)
FAIL: persona-store-tests

But it also looks like it could be related to the structure of the
buildbot environment. Is there any way I could debug this further?

Matthew, any ideas?

Thanks,
-Travis



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-15 Thread Martin Pitt
Tristan Van Berkom [2013-02-14 17:55 +0900]:
  That's a trickier thing. For most commits, one should actually be able
  to build them independently, but sometimes those in between breaks
  are inevitable. Say, you make an API change in a library and then
  update your application to the new API, then in between you will get a
  build failure. The next iteration should fix it again.
  We have that problem independently of the frequency we build stuff of
  course, as we can always hit a bad time.
 
 As someone mentioned/proposed earlier in this thread, this kind of temporary
 error could probably be ruled out with a timeout (perhaps not a real timeout,
 but a measurement in elapsed time between commits).

The longer the timeout, the less useful the notifications get, though.
E. g., if you change API and then update your consumer, the notification
would need to be deferred by at least an hour in order not to hit you in
between. However, if you accidentally break something, you might not even
be at the computer an hour later, or you might be busy with something
different.

I do agree that this kind of delay needs to be adjusted according to
which type of notification we send. For a plain email, I think there
should be no delay at all. If you break something and fix it again,
you'll just get two mails that say "OMG broken!" and then "YAY fixed \o/",
and you have your peace of mind. For a bug report we certainly want to
wait a bit to avoid too much noise.
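
A sketch of what I have in mind (the threshold and the callback names are
purely illustrative, not an existing implementation):

# Per-channel notification policy: mail immediately on any state change,
# but only file a bug once a failure has persisted for a while.
import time

BUG_REPORT_DELAY = 60 * 60      # file a bug only after an hour of failure
first_failure = {}              # module -> timestamp of first failure seen
bug_filed = set()               # modules with an auto-filed bug still open

def on_build_result(module, ok, send_mail, file_bug, close_bug):
    now = time.time()
    if ok:
        if module in first_failure:
            send_mail(module, "fixed again \\o/")     # email: no delay
            del first_failure[module]
        if module in bug_filed:
            close_bug(module)                         # auto-close on recovery
            bug_filed.discard(module)
        return
    if module not in first_failure:
        first_failure[module] = now
        send_mail(module, "started failing")          # email: no delay
    elif (now - first_failure[module] >= BUG_REPORT_DELAY
          and module not in bug_filed):
        file_bug(module)                              # bug: only if persistent
        bug_filed.add(module)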

Thanks,

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-15 Thread Martin Pitt
Hello Bastien,

Bastien Nocera [2013-02-13 11:23 +0100]:
  I think build issues should be filed as Bugzilla bugs. At most we maybe
  want to set some keyword / status whiteboard. But I guess the summary
  would be consistent and people will quickly learn of it.
 
 They should be filed as Bugzilla bugs *after a timeout*. We already see
 build failures on ostree popping up on the #testable channel, and those
 are usually fixed in a timely manner.
 
 If you file the bugs as soon as they're visible, you'll end up filing
 outdated bugs, and severely reducing the good will of the people fixing
 those bugs.

Yes, I agree (I just replied with more detail elsewhere in this
thread). Of course the automatically opened bug would be automatically
closed once the build and checks work again, but it would still be
noise.

That said, my gut feeling is that we should start with plain email
notifications first, before we dive head-first into auto-filing bugs.
They are much less intrusive and can be ignored/filtered away if
desired, and don't require a long delay.

Thoughts?

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-14 Thread Martin Pitt
Martin Pitt [2013-02-14  7:36 +0100]:
 So yesterday evening we were down to 5 failures, but over night we got
 a swath of new test failures.

Yesterday I did a git pull in jhbuild itself, which changed a few
components, such as pulling in a new ibus version. We'll do this daily
from now on, so that errors don't come in in such large batches.

A lot of the new failures are real. I'm filing bugs now, such as

  cogl: https://bugzilla.gnome.org/show_bug.cgi?id=693767 (bad commit 
identified)
  pango: https://bugzilla.gnome.org/show_bug.cgi?id=693766  (bad commit 
identified)
  ibus: http://code.google.com/p/ibus/issues/detail?id=1592 (very obvious)
  gtk: https://bugzilla.gnome.org/show_bug.cgi?id=693769 (unstable test, need 
help here)
  gobject-introspection: https://bugzilla.gnome.org/show_bug.cgi?id=693539 
(already existed)

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-14 Thread Tristan Van Berkom
On Thu, Feb 14, 2013 at 3:12 PM, Martin Pitt martin.p...@ubuntu.com wrote:
 Hello Tristan,

 Tristan Van Berkom [2013-02-14  6:42 +0900]:
 Upon reading this particular part (and I noticed before you are
 using mostly jhbuild mechanics), it leads me to wonder, how
 granular exactly are these rebuilds ?


Hi !

Thanks for answering in detail (and Colin and Emmanuele too, very
interesting stuff).

 Right now, every 15 minutes. Sometimes longer, when the previous run
 is still running.

 I think ideally it would be great if builds could be triggered by
 commit. In other words, commits are serialized chronologically and
 each and every commit should trigger an entire rebuild, each rebuild
 should build everything in the moduleset up to the latest commit...
 separately, one after the other.

 That is indeed the long-term plan, but there's still some work to be
 done before we can do that. The machine we are running this on has 64
 2.7 GHz cores and 64 GB of RAM, that really isn't a bottleneck right
 now. The main two problems right now are that the jhbuild update
 stage takes some 5 minutes to update all the ~ 160 git trees, and
 that jhbuild build doesn't parallelize at all, i. e. build modules
 which don't depend on each other could build in parallel.

 Once we solve both, we can dramatically reduce the time of one run
 from several hours (which is currently needed if e. g. a glib change
 happens, which rebuilds pretty much everything) to less than 15 minutes.

 The way I imagine this works now (and this is a big assumption,
 correct me if I'm wrong), is that a commit in a given module triggers
 a jhbuild build, which would mean that:

a.) Several commits could have been made in a given module
 by the time jhbuild actually runs... meaning we don't know
 which of the given commits in that lapse of time actually
 caused the fault.

 That's right. It's massively better to know that one commit in these
 15 minutes triggered it than something in the past week, but still
 not perfect as you say.

b.) Module foo triggers a rebuild... and while jhbuild builds,
 it also pulls in new changes from module bar, in this
 case it's possible that a recent commit in module bar
 caused another module baz to be affected, but in the
 end it's module foo that is blamed (since module foo
 essentially /triggered a rebuild/)

 That's a trickier thing. For most commits, one should actually be able
 to build them independently, but sometimes those in between breaks
 are inevitable. Say, you make an API change in a library and then
 update your application to the new API, then in between you will get a
 build failure. The next iteration should fix it again.
 We have that problem independently of the frequency we build stuff of
 course, as we can always hit a bad time.

As someone mentioned/proposed earlier in this thread, this kind of temporary
error could probably be ruled out with a timeout (perhaps not a real timeout,
but a measurement in elapsed time between commits).

In other words, no need to alert people if a breakage was almost immediately
addressed and fixed.

I'm sure with some more time and development we'll find the right approach
and refine things further (this may include some kind of graph theory, trying
some builds with different modules' changes applied in different orders and
eliminating false breakages this way).

Anyway, very exciting work, thank you for doing this :)

Cheers,
-Tristan


PS: One of the fun things this will allow is... handing out build breaker
awards (something we used to do at a company I worked at: hand out an award
to the committer who introduced the most build breaks each month, mostly
just for giggles).


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-14 Thread Tim-Philipp Müller
On Thu, 2013-02-14 at 07:36 +0100, Martin Pitt wrote:

Hi,

   - gst-plugins-bad: unknown type GStaticRecMutex; this might be due to
 recent changes in GStreamer? That smells like a case of broken by
 change in dependency, needs updating to new API
 
 Still outstanding.

That issue was fixed ~2 days ago, but it then failed to build the tests,
for other reasons, and now there are some test failures on the build bot
left to sort out. Probably tests relying on plugins from other modules
that aren't available in this case without checking for them.

 Cheers
  -Tim



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-14 Thread Travis Reitter
On Wed, 2013-02-13 at 23:08 +, Emmanuele Bassi wrote:
 hi;
 
 On 13 February 2013 22:11, Colin Walters walt...@verbum.org wrote:
  On Thu, 2013-02-14 at 06:42 +0900, Tristan Van Berkom wrote:
 
  I know, it sounds like some CPU will be melting quickly
  at the rate gnome-wide commits are made... but it would be
  simply awesome, if we could automatically pull out the exact
  commit which introduced exactly which failed build report in
  which module (and then as you mentioned, we probably need
  to notify both the author of the commit, and the maintainer
  of the affected module).
 
  If basically you want to ensure that each commit is buildable,
  the best way to do that is to have builders try *queued patches*, not
  just master.
 
 this is what the try server at Mozilla does:
 
   https://wiki.mozilla.org/ReleaseEngineering/TryServer
 
 the try server is a *great* tool (even if, sadly, is affected by
 Mercurial being arse) and it makes contributing code much, much safer.
 the try server can also be told to send the result of a patch set
 straight to a bug, so that the build status and the test suite result
 is recorded along with the bug.

I think gated commits like this could be a huge benefit to GNOME by
automating basic checks (eg, coding style) and build/testing checks
before maintainers review the patches and approve them (at which point,
they should be automatically pushed to the real master repos).

Contributors would get immediate or fairly quick feedback for their code
passing or failing these checks and they wouldn't have to worry about
breaking the upstream code (which I think would be a non-trivial gain
for some new contributors). And maintainers would save some time on
(inconsistently) enforcing basic checks, waiting on builds while
reviewing, etc.

And the whole stack would be more stable in terms of buildability and
functionality.

-Travis



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-14 Thread Olav Vitters
On Thu, Feb 14, 2013 at 07:12:10AM +0100, Martin Pitt wrote:
 That is indeed the long-term plan, but there's still some work to be
 done before we can do that. The machine we are running this on has 64
 2.7 GHz cores and 64 GB of RAM, that really isn't a bottleneck right
 now. The main two problems right now are that the jhbuild update
 stage takes some 5 minutes to update all the ~ 160 git trees, and
 that jhbuild build doesn't parallelize at all, i. e. build modules
 which don't depend on each other could build in parallel.

Could you perhaps do a test that does 10 git checkouts at once (real
ones, while things are updated and so on)? I think you might eventually
run into issues with the bandwidth between Red Hat NOC and Canonical
NOC.

If you see a bottleneck problem, please say so so that it can be worked
on at the same time as parallelizing jhbuild. E.g. there was a plan to
mirror git.gnome.org, but there are also a few GNOME servers @
Canonical that could be reused.

-- 
Regards,
Olav


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-14 Thread Tristan Van Berkom
On Fri, Feb 15, 2013 at 7:57 AM, Olav Vitters o...@vitters.nl wrote:
 On Thu, Feb 14, 2013 at 07:12:10AM +0100, Martin Pitt wrote:
 That is indeed the long-term plan, but there's still some work to be
 done before we can do that. The machine we are running this on has 64
 2.7 GHz cores and 64 GB of RAM, that really isn't a bottleneck right
 now. The main two problems right now are that the jhbuild update
 stage takes some 5 minutes to update all the ~ 160 git trees, and
 that jhbuild build doesn't parallelize at all, i. e. build modules
 which don't depend on each other could build in parallel.

 Could you perhaps do a test that does 10 git checkouts at once (real
 ones, while things are updated and so on)? I think you might eventually
 run into issues with the bandwidth between Red Hat NOC and Canonical
 NOC.

 If you see a bottleneck problem, please say so so that it can be worked
 on at the same time as parallelizing jhbuild. E.g. there was a plan to
 mirror git.gnome.org, but there are also a few GNOME servers @
 Canonical that could be reused.

Fixing that bottleneck also sounds like one of the first steps towards
setting up a patch queue approach, i.e. set up a local mirror of all
remote git repos which is periodically updated... and fix up jhbuild
to optionally update from those local mirrors instead of accessing the
remote ones all the time.

Of course, making jhbuild parallelize the actual module builds is
a bit more complex (probably just a bit of Python scripting? I'm not sure).

Another thought: if this gets integrated in a 'mostly a feature of jhbuild'
fashion, it can have some really interesting additional benefits.

Many projects that use the gnome platform libs already create
their own modulesets, and could then easily set this up on their own
build server for their own project (improving the gnome developer
story in a big/real way). The first thing that comes to mind is the OS X
builds of Clutter and GTK+ (which are mostly just customized jhbuild
modulesets); I'm not sure about win32... do we use jhbuild inside MSYS?
I can't remember.

Cheers,
  -Tristan


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Olav Vitters
On Wed, Feb 13, 2013 at 06:53:17AM +0100, Martin Pitt wrote:
 Travis Reitter [2013-02-12 13:21 -0800]:
  On Tue, 2013-02-12 at 07:43 +0100, Martin Pitt wrote:
   To make this really useful, we can't rely on developers checking this
   every hour or every day, of course; instead we need push notifications
   as soon as a module starts failing. That's the bit which needs broader
   discussion and consent.
  
  I'd like to offer:
  
  (0) auto-file an urgent/critical bug against the module in the case of
  build breaks (and maybe high/major for test breaks?)
 
 Claudio Saavedra [2013-02-12 23:24 +0200]:
  (3) Automatically filing a bug in the broken module with the details of
  the breakage and additionally CC: whoever might be interested in keeping
  an eye on the continuous integration?
 
 This creates more overhead, but I like this proposal as well. I guess
 one can use python-bzutils and the like to auto-create bugs, and
 remember on the Jenkins side which bug was opened for which module,
 and auto-close it once the build works again.
 
 The main issue that I see with this is that it's much harder to filter
 away/opt out, so it requires some broader consensus that we want to do
 this. We can still add a module blacklist if needed, though.

I think build issues should be filed as Bugzilla bugs. At most we maybe
want to set some keyword / status whiteboard. But I guess the summary
would be consistent and people will quickly learn of it.

You can use information from DOAP files to figure out the Bugzilla
product. This is not always needed. Then commit whatever DOAP fixes are
needed (the DOAP files are not bug free :P). In case someone wants to opt
out, we can add a new gnome-specific option to the DOAP file specifying
that any Continuous Integration is not welcome.

What I do in ftpadmin (see sysadmin-bin) is:
- Check if there is a bug-database with 'bugzilla.gnome.org'
  For those URL(s): Check if the URL contains product=$SOMETHING
- If there is a good product: use that
- else: assume tarball = bugzilla product (will fail for jhbuild, often
  modules are renamed and you don't know the real one)
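
A rough sketch of that lookup (assuming the standard DOAP/RDF namespaces,
with the fallback heuristic described above; not the actual ftpadmin code):

# Map a module to its Bugzilla product from its DOAP file, falling back
# to the module name if no usable bug-database entry is found.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse, parse_qs

DOAP = "{http://usefulinc.com/ns/doap#}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def bugzilla_product(doap_path, module_name):
    tree = ET.parse(doap_path)
    for node in tree.iter(DOAP + "bug-database"):
        url = node.get(RDF + "resource", "")
        if "bugzilla.gnome.org" not in url:
            continue
        product = parse_qs(urlparse(url).query).get("product")
        if product:
            return product[0]
    # No good product found: assume module name == Bugzilla product
    # (known to fail for renamed modules).
    return module_name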

Note: Not all products are on bugzilla.gnome.org. Maybe after GNOME
bugzilla, strive for 'bugs.freedesktop.org'?

For monitoring these bugs, you can easily watch people in Bugzilla.

-- 
Regards,
Olav


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Olav Vitters
On Wed, Feb 13, 2013 at 11:23:52AM +0100, Bastien Nocera wrote:
 On Wed, 2013-02-13 at 10:59 +0100, Olav Vitters wrote:
  On Wed, Feb 13, 2013 at 06:53:17AM +0100, Martin Pitt wrote:
   The main issue that I see with this is that it's much harder to filter
   away/opt out, so it requires some broader consensus that we want to do
   this. We can still add a module blacklist if needed, though.
  
  I think build issues should be filed as Bugzilla bugs. At most we maybe
  want to set some keyword / status whiteboard. But I guess the summary
  would be consistent and people will quickly learn of it.
 
 They should be filed as Bugzilla bugs *after a timeout*. We already see
 build failures on ostree popping up on the #testable channel, and those
 are usually fixed in a timely manner.

Hmm, makes sense. Maybe start off small?

-- 
Regards,
Olav


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Andre Klapper
On Wed, 2013-02-13 at 10:59 +0100, Olav Vitters wrote:
 I think build issues should be filed as Bugzilla bugs. At most we maybe
 want to set some keyword / status whiteboard. But I guess the summary
 would be consistent and people will quickly learn of it.

I don't see a need for keywords here: one could filter by reporter account
if a dummy account (e.g. jenkins-buildbot@gnomebugs) was set up that
automatically files reports via Bugzilla's XML-RPC interface on build
failure (if that's wanted).
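
For illustration, such auto-filing could look roughly like this (a sketch
only: the endpoint, credentials, component and version values are
placeholders, and the exact fields should be checked against the Bugzilla
WebService documentation):

# File a build-failure report through Bugzilla's XML-RPC interface.
import xmlrpc.client

def file_build_failure(product, module, log_url, user, password):
    proxy = xmlrpc.client.ServerProxy("https://bugzilla.gnome.org/xmlrpc.cgi")
    return proxy.Bug.create({
        "Bugzilla_login": user,
        "Bugzilla_password": password,
        "product": product,
        "component": "general",                 # placeholder component
        "version": "unspecified",               # placeholder version
        "summary": "%s fails to build in jhbuild CI" % module,
        "description": "Automated report; full log: %s" % log_url,
    })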

 You can use information from DOAP files to figure out the Bugzilla
 product. This is not always needed. Then commit whatever DOAP fixes are
 needed (the DOAP files are not bug free :P.

When I checked three months ago, 79 of 654 non-archived Git modules had
no DOAP file at all. 287 out of 575 modules with a DOAP file had an
entry for bug-database. GNOME does not even list bug-database in
https://live.gnome.org/action/recall/Git/FAQ?action=recall&rev=29#How_do_I_add_a_description_to_the_git_web_view.3F__What_is_this_.22blah.doap.22.3F

andre
-- 
Andre Klapper  |  ak...@gmx.net
http://blogs.gnome.org/aklapper/



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Bastien Nocera
On Wed, 2013-02-13 at 10:59 +0100, Olav Vitters wrote:
 On Wed, Feb 13, 2013 at 06:53:17AM +0100, Martin Pitt wrote:
  Travis Reitter [2013-02-12 13:21 -0800]:
   On Tue, 2013-02-12 at 07:43 +0100, Martin Pitt wrote:
To make this really useful, we can't rely on developers checking this
every hour or every day, of course; instead we need push notifications
as soon as a module starts failing. That's the bit which needs broader
discussion and consent.
   
   I'd like to offer:
   
   (0) auto-file an urgent/critical bug against the module in the case of
   build breaks (and maybe high/major for test breaks?)
  
  Claudio Saavedra [2013-02-12 23:24 +0200]:
   (3) Automatically filing a bug in the broken module with the details of
   the breakage and additionally CC: whoever might be interested in keeping
   an eye on the continuous integration?
  
  This creates more overhead, but I like this proposal as well. I guess
  one can use python-bzutils and the like to auto-create bugs, and
  remember on the Jenkins side which bug was opened for which module,
  and auto-close it once the build works again.
  
  The main issue that I see with this is that it's much harder to filter
  away/opt out, so it requires some broader consensus that we want to do
  this. We can still add a module blacklist if needed, though.
 
 I think build issues should be filed as Bugzilla bugs. At most we maybe
 want to set some keyword / status whiteboard. But I guess the summary
 would be consistent and people will quickly learn of it.

They should be filed as Bugzilla bugs *after a timeout*. We already see
build failures on ostree popping up on the #testable channel, and those
are usually fixed in a timely manner.

If you file the bugs as soon as they're visible, you'll end up filing
outdated bugs, and severely reducing the good will of the people fixing
those bugs.

I don't think it would personally take me very long to yell "I KNOW" at
the bugmail and want to opt out.

Bugmail should be for long-standing issues. If the problem can be solved
in under 10-15 minutes, don't start nagging people watching the bug mail.



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Matthias Clasen
On Wed, Feb 13, 2013 at 5:23 AM, Bastien Nocera had...@hadess.net wrote:

 If you file the bugs as soon as they're visible, you'll end up filing
 outdated bugs, and severely reducing the good will of the people fixing
 those bugs.

 I don't think it would personally take me very long to yell I KNOW at
 the bugmail and want to opt-out.

 Bugmail should be for long-standing issues. If the problem can be solved
 under 10/15 minutes, don't start nagging people watching the bug mail.

Indeed, I've cleaned out plenty of 5-10 year old build breakage bugs
from gtk bugzilla in the last week. Having failures from this system
show up in #testing would be brilliant, on the other hand.


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread David King

On 2013-02-13 11:58, Andre Klapper ak...@gmx.net wrote:

When I checked three months ago, 79 of 654 non-archived Git modules had
no DOAP file at all. 287 out of 575 modules with a DOAP file had an
entry for bug-database. GNOME does not even list bug-database in
https://live.gnome.org/action/recall/Git/FAQ?action=recall&rev=29#How_do_I_add_a_description_to_the_git_web_view.3F__What_is_this_.22blah.doap.22.3F


I just added a bug-database item to that DOAP template.

Should we have a GNOME Goal to tidy up the DOAP files and add a minimum 
set of recommended fields? It might not be a great deal of use, so just 
some better guidance on the wiki would probably be sufficient.


--
http://amigadave.com/



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Andre Klapper
On Wed, 2013-02-13 at 18:44 +, David King wrote:
 On 2013-02-13 11:58, Andre Klapper ak...@gmx.net wrote:
 When I checked three months ago, 79 of 654 non-archived Git modules had
 no DOAP file at all. 287 out of 575 modules with a DOAP file had an
 entry for bug-database. GNOME does not even list bug-database in
 https://live.gnome.org/action/recall/Git/FAQ?action=recall&rev=29#How_do_I_add_a_description_to_the_git_web_view.3F__What_is_this_.22blah.doap.22.3F
 
 I just added a bug-database item to that DOAP template.
 
 Should we have a GNOME Goal to tidy up the DOAP files and add a minimum 
 set of recommended fields? It might not be a great deal of use, so just 
 some better guidance on the wiki would probably be sufficient.

That sounds like a good plan for 3.9.
Comparing GNOME's usage of DOAP files with the DOAP usage in the Apache
Software Foundation, there's a lot more that GNOME can improve, though
it will take me some time to outline it.
I'll have that time in March again. Hopefully.
If not, ping me. Please.

andre
-- 
Andre Klapper  |  ak...@gmx.net
http://blogs.gnome.org/aklapper/



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Tristan Van Berkom
First, this sounds like really interesting stuff, great news.

On Tue, Feb 12, 2013 at 3:43 PM, Martin Pitt martin.p...@ubuntu.com wrote:
 Hello fellow GNOME developers,

 this already came up as a side issue recently[1], but now we are at a
 point where we have reasonably stabilized our GNOME jhbuild continuous
 builds/integration test server to become actually useful:

   https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/

 This is building gnome-suites-core-3.8.modules, which currently
 consists of 160 modules. Builds are updated every 15 minutes, and
 triggered whenever there was a new commit in a module or any of its
 dependencies. This mostly uses the smarts of jhbuild, we just have
 some extra scripts around to pick the results apart for Jenkins and
 drive the whole thing [2]. You can click through all the modules, all
 their builds, and get their build logs.

 Right now there are 151 successes (blue), 5 modules fail to build
 (red), and 4 modules build but fail in make check (yellow). It's
 been like that for a week or two now, so I'd say we are doing
 reasonably well for now. Some details:

 Build failures:
  - colord: recently started depending on libsystemd-login, which we
don't have yet; that's a fault on the Ubuntu side
  - e-d-s: calls an undeclared g_cond_timed_wait(), not sure what this
is about
  - folks: this started failing very recently, and thus is a perfect
example why this is useful (unqualified ambiguous usage of
HashTable)
  - gst-plugins-bad: unknown type GStaticRecMutex; this might be due to
recent changes in GStreamer? That smells like a case of broken by
change in dependency, needs updating to new API
  - mutter: worked until Jan 7, now failing on unknown XIBarrierEvent;
that might be a fault in Ubuntu's X.org packages or upstream, I
haven't investigated this yet

 Test failures:
  - gst-plugins-good, empathy: one test failure, the other tests work
  - realmd: This looks like the test suite is making some assumptions
about the environment which aren't true in a headless server?
  - webkit: I don't actually see an error in the log; we'll investigate
this closer on our side

 This was set up by Jean-Baptiste Lallement, I mostly help out with
 reviewing the daily status and cleaning up after some build/test
 failures which are due to broken checkouts, stale files, new missing
 build dependencies, and so on. It's reasonably maintenance intensive,
 but that's something which the two of us are willing to do if this
 actually gets used.

 The main difference to Colin's ostree builds is that this also runs
 make check, which is one of the main points of this: We want to know
 as soon as possible if e. g. a new commit in glib breaks something in
 gvfs or evolution-data-server. Where soon is measured in minutes
 instead of days/weeks, so that the knowledge what got changed and why
 is still fresh in the developer's head. That's also why I recently
 started to add integration tests to e. g. gvfs or
 gnome-settings-daemon, so that over time we can cover more and more
 functionality tests in these.

 To make this really useful, we can't rely on developers checking this
 every hour or every day, of course; instead we need push notifications
 as soon as a module starts failing. That's the bit which needs broader
 discussion and consent.

 I see some obvious options here what to do when the status of a module
 (OK/fails tests/fails build) changes:

  (1) mail the individual maintainers, as in the DOAP files
(1a) do it for everyone, and let people who don't want this filter
them out on a particular mail header (like X-GNOME-QA:)
(1b) do this as opt-in

This most often reaches the people who can do something about the
failure. Of course there are cases where it's not the module's fault, but a
dependency changed/got broken. There is no way we can automatically
determine whether it was e. g. a deliberate API break which modules
need to adjust to, or indeed a bug in the depending library, so we
might actually need to mail both the maintainers of the module that
triggered the rebuild, and the maintainers of the module which now
broke.

Upon reading this particular part (and I noticed earlier that you are
using mostly jhbuild mechanics), it leads me to wonder: how
granular exactly are these rebuilds?

I think ideally it would be great if builds could be triggered by
commit. In other words, commits are serialized chronologically and
each and every commit should trigger an entire rebuild, each rebuild
should build everything in the moduleset up to the latest commit...
separately, one after the other.

I know, it sounds like some CPU will be melting quickly
at the rate gnome-wide commits are made... but it would be
simply awesome, if we could automatically pull out the exact
commit which introduced exactly which failed build report in
which module (and then as you mentioned, we probably need
to notify both the author of the commit, and the maintainer
of the affected module).

Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Colin Walters
On Thu, 2013-02-14 at 06:42 +0900, Tristan Van Berkom wrote:

 I know, it sounds like some CPU will be melting quickly
 at the rate gnome-wide commits are made... but it would be
 simply awesome, if we could automatically pull out the exact
 commit which introduced exactly which failed build report in
 which module (and then as you mentioned, we probably need
 to notify both the author of the commit, and the maintainer
 of the affected module).

If basically you want to ensure that each commit is buildable,
the best way to do that is to have builders try *queued patches*, not
just master.

That's how competently-run projects like OpenStack work; to pick
a random example:

https://review.openstack.org/#/c/21611/

You can see in Comment 2 that the jenkins builder ran through a number of
gating tests.

I plan to do this eventually for the OSTree builder - the system
is designed around cloning, so it should be quite cheap to clone
the master builder, apply a patch series from bugzilla, build that,
and run the tests.





Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Emmanuele Bassi
hi;

On 13 February 2013 22:11, Colin Walters walt...@verbum.org wrote:
 On Thu, 2013-02-14 at 06:42 +0900, Tristan Van Berkom wrote:

 I know, it sounds like some CPU will be melting quickly
 at the rate gnome-wide commits are made... but it would be
 simply awesome, if we could automatically pull out the exact
 commit which introduced exactly which failed build report in
 which module (and then as you mentioned, we probably need
 to notify both the author of the commit, and the maintainer
 of the effected module).

 If basically you want to ensure that each commit is buildable,
 the best way to do that is to have builders try *queued patches*, not
 just master.

this is what the try server at Mozilla does:

  https://wiki.mozilla.org/ReleaseEngineering/TryServer

the try server is a *great* tool (even if, sadly, it is affected by
Mercurial being arse) and it makes contributing code much, much safer.
the try server can also be told to send the result of a patch set
straight to a bug, so that the build status and the test suite result
is recorded along with the bug.

ciao,
 Emmanuele.

--
W: http://www.emmanuelebassi.name
B: http://blogs.gnome.org/ebassi/


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Martin Pitt
Hello Tristan,

Tristan Van Berkom [2013-02-14  6:42 +0900]:
 Upon reading this particular part (and I noticed before you are
 using mostly jhbuild mechanics), it leads me to wonder, how
 granular exactly are these rebuilds ?

Right now, every 15 minutes. Sometimes longer, when the previous run
is still running.

 I think ideally it would be great if builds could be triggered by
 commit. In other words, commits are serialized chronologically and
 each and every commit should trigger an entire rebuild, each rebuild
 should build everything in the moduleset up to the latest commit...
 separately, one after the other.

That is indeed the long-term plan, but there's still some work to be
done before we can do that. The machine we are running this on has 64
2.7 GHz cores and 64 GB of RAM, so that really isn't a bottleneck right
now. The two main problems right now are that the jhbuild update
stage takes some 5 minutes to update all the ~ 160 git trees, and
that jhbuild build doesn't parallelize at all, i. e. modules
which don't depend on each other could be built in parallel, but aren't.

Once we solve both, we can dramatically reduce the time of one run
from several hours (which is currently needed if e. g. a glib change
happens, which rebuilds pretty much everything) to less than 15 minutes.
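
As a rough illustration of the parallelization part (purely a sketch:
the dependency map and build_module() are placeholders, and real jhbuild
integration would of course be more involved):

# Build modules in dependency order, running independent modules in
# parallel. 'deps' maps each module to the modules it depends on.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def parallel_build(deps, build_module, workers=8):
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(deps):
            # Schedule every module whose dependencies are all built.
            for mod, requires in deps.items():
                if mod not in done and mod not in running \
                        and all(d in done for d in requires):
                    running[mod] = pool.submit(build_module, mod)
            if not running:
                raise RuntimeError("dependency cycle or missing module")
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for mod in [m for m, f in running.items() if f in finished]:
                running.pop(mod).result()   # re-raises build failures
                done.add(mod)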

 The way I imagine this works now (and this is a big assumption,
 correct me if I'm wrong), is that a commit in a given module triggers
 a jhbuild build, which would mean that:
 
a.) Several commits could have been made in a given module
 by the time jhbuild actually runs... meaning we don't know
 which of the given commits in that lapse of time actually
 caused the fault.

That's right. It's massively better to know that one commit in these
15 minutes triggered it than something in the past week, but still
not perfect as you say.

b.) Module foo triggers a rebuild... and while jhbuild builds,
 it also pulls in new changes from module bar, in this
 case it's possible that a recent commit in module bar
 caused another module baz to be affected, but in the
 end it's module foo that is blamed (since module foo
 essentially /triggered a rebuild/)

That's a trickier thing. For most commits, one should actually be able
to build them independently, but sometimes those "in between" breaks
are inevitable. Say you make an API change in a library and then
update your application to the new API; then in between you will get a
build failure. The next iteration should fix it again.
We have that problem independently of how frequently we build, of
course, as we can always hit a bad time.

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-13 Thread Martin Pitt
Hello again,

first, thanks to everyone who jumped in and helped to fix failures! I
wanted to send a quick update.

Martin Pitt [2013-02-12  7:43 +0100]:
  - colord: recently started depending on libsystemd-login, which we
don't have yet; that's a fault on the Ubuntu side

I installed the libsystemd-login libs and pushed a fix for a missing
library link to trunk. Working now.

  - e-d-s: calls an undeclared g_cond_timed_wait(), not sure what this
is about

I filed a bug, and Matthew Barnes quickly fixed it in trunk. After that it
was working for two runs. (Not any more, see below.)

  - folks: this started failing very recently, and thus is a perfect
example why this is useful (unqualified ambiguous usage of
HashTable)

Philip quickly fixed that one, working now.

  - gst-plugins-bad: unknown type GStaticRecMutex; this might be due to
 recent changes in GStreamer? That smells like a case of broken by
change in dependency, needs updating to new API

Still outstanding.

  - mutter: worked until Jan 7, now failing on unknown XIBarrierEvent;
that might be a fault in Ubuntu's X.org packages or upstream, I
haven't investigated this yet

Jasper pointed out the problem in Ubuntu's libxi packages; I'll take
care of this.

 Test failures:
  - gst-plugins-good, empathy: one test failure, the other tests work

gst-plugins-good got fixed by Tim, empathy still outstanding.

  - realmd: This looks like the test suite is making some assumptions
about the environment which aren't true in a headless server?
  - webkit: I don't actually see an error in the log; we'll investigate
this closer on our side

Still outstanding.

So yesterday evening we were down to 5 failures, but overnight we got
a swath of new test failures. I'm wading through them now, but some look
quite non-obvious.

nautilus failed on

  evaluated: nautilus_file_get_name (file_1)
  expected: eazel:///
  got: eazel:

It failed on exactly that until January 5, then was working until
yesterday, and is now failing again. For such issues I'd be glad if one
of the nautilus maintainers could work this out with me.

libgdata once again failed on the youtube test, as it did in the past.
In general, any test that tries to access remote servers is pretty
much doomed on that machine I'm afraid, as it can only do http and
https through a proxy (it has $http{,s}_proxy set).

I'll look at the other failures now.

Martin
-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Matthias Clasen
On Tue, Feb 12, 2013 at 1:43 AM, Martin Pitt martin.p...@ubuntu.com wrote:
 Hello fellow GNOME developers,

 this already came up as a side issue recently[1], but now we are at a
 point where we have reasonably stabilized our GNOME jhbuild continuous
 builds/integration test server to become actually useful:

   https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/

Cool! That is great stuff, Martin.


  - e-d-s: calls an undeclared g_cond_timed_wait(), not sure what this
is about

Fallout from 
http://git.gnome.org/browse/glib/commit/?id=d632713a7716db10eca4524e7438cbc52f0ea230
would be my guess.

  - gst-plugins-bad: unknown type GStaticRecMutex; this might be due to
 recent changes in GStreamer? That smells like a case of broken by
change in dependency, needs updating to new API

Same.

  - mutter: worked until Jan 7, now failing on unknown XIBarrierEvent;
that might be a fault in Ubuntu's X.org packages or upstream, I
haven't investigated this yet

There was some bugzilla discussion around version checks vs. feature
checks for XI2.3 (it seems the Ubuntu package is patched in a way that
confuses mutter's configure).


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Sriram Ramkrishna
Christmas has come early this year!  This is fantastic news.  Perhaps I can
try to get some volunteers to help file bugs on the build?  What can we do
to make this a sustaining success?

sri


On Mon, Feb 11, 2013 at 10:43 PM, Martin Pitt martin.p...@ubuntu.com wrote:

 Hello fellow GNOME developers,

 this already came up as a side issue recently[1], but now we are at a
 point where we have reasonably stabilized our GNOME jhbuild continuous
 builds/integration test server to become actually useful:

   https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/

 This is building gnome-suites-core-3.8.modules, which currently
 consists of 160 modules. Builds are updated every 15 minutes, and
 triggered whenever there was a new commit in a module or any of its
 dependencies. This mostly uses the smarts of jhbuild, we just have
 some extra scripts around to pick the results apart for Jenkins and
 drive the whole thing [2]. You can click through all the modules, all
 their builds, and get their build logs.

 Right now there are 151 successes (blue), 5 modules fail to build
 (red), and 4 modules build but fail in make check (yellow). It's
 been like that for a week or two now, so I'd say we are doing
 reasonably well for now. Some details:

 Build failures:
  - colord: recently started depending on libsystemd-login, which we
don't have yet; that's a fault on the Ubuntu side
  - e-d-s: calls an undeclared g_cond_timed_wait(), not sure what this
is about
  - folks: this started failing very recently, and thus is a perfect
example why this is useful (unqualified ambiguous usage of
HashTable)
  - gst-plugins-bad: unknown type GStaticRecMutex; this might be due to
 recent changes in GStreamer? That smells like a case of broken by
change in dependency, needs updating to new API
  - mutter: worked until Jan 7, now failing on unknown XIBarrierEvent;
that might be a fault in Ubuntu's X.org packages or upstream, I
haven't investigated this yet

 Test failures:
  - gst-plugins-good, empathy: one test failure, the other tests work
  - realmd: This looks like the test suite is making some assumptions
about the environment which aren't true in a headless server?
  - webkit: I don't actually see an error in the log; we'll investigate
this closer on our side

 This was set up by Jean-Baptiste Lallement, I mostly help out with
 reviewing the daily status and cleaning up after some build/test
 failures which are due to broken checkouts, stale files, new missing
 build dependencies, and so on. It's reasonably maintenance intensive,
 but that's something which the two of us are willing to do if this
 actually gets used.

 The main difference to Colin's ostree builds is that this also runs
 make check, which is one of the main points of this: We want to know
 as soon as possible if e. g. a new commit in glib breaks something in
 gvfs or evolution-data-server. Where soon is measured in minutes
 instead of days/weeks, so that the knowledge what got changed and why
 is still fresh in the developer's head. That's also why I recently
 started to add integration tests to e. g. gvfs or
 gnome-settings-daemon, so that over time we can cover more and more
 functionality tests in these.

 To make this really useful, we can't rely on developers checking this
 every hour or every day, of course; instead we need push notifications
 as soon as a module starts failing. That's the bit which needs broader
 discussion and consent.

 I see some obvious options here what to do when the status of a module
 (OK/fails tests/fails build) changes:

  (1) mail the individual maintainers, as in the DOAP files
(1a) do it for everyone, and let people who don't want this filter
them out on a particular mail header (like X-GNOME-QA:)
(1b) do this as opt-in

This most often reaches the people who can do something about the
failure. Of course there are cases where it's not the module's fault,
 but a
dependency changed/got broken. There is no way we can automatically
determine whether it was e. g. a deliberate API break which modules
need to adjust to, or indeed a bug in the depending library, so we
might actually need to mail both the maintainers of the module that
triggered the rebuild, and the maintainers of the module which now
broke.

  (2) one big mailing list with all failures, and machine parseable
  headers for module/test

This might be more interesting for e. g. the release team (we can
CC: the release team in (1) as well, of course), but will be rather
high-volume, and pretty much forces maintainers to carefully set up
filters.

 My gut feeling is that we might start with (2) for a while, see how it
 goes, and later switch to (1) when we got some confidence in this?

 Opinions most welcome!

 Also, I'll gladly work with the developers of the currently failing
 modules to get them succeeding. I have full access to the build
 machine in case errors aren't reproducible.

Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Jasper St. Pierre
The libXi patches are because Ubuntu ships a libXi package that's marked as
if it's a new version, but doesn't contain the new XI stuff for some reason:

http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/raring/libxi/raring/view/head:/debian/patches/revert-xi2.3.diff

The easiest fix is to mark the package as an old version of libXi, which
will cause mutter to understand it has the old libXi, and not attempt to
use the new libXi features.




Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Travis Reitter
On Tue, 2013-02-12 at 07:43 +0100, Martin Pitt wrote:
 Hello fellow GNOME developers,
 
 this already came up as a side issue recently[1], but now we are at a
 point where we have reasonably stabilized our GNOME jhbuild continuous
 builds/integration test server to become actually useful:
 
   https://jenkins.qa.ubuntu.com/view/Raring/view/JHBuild%20Gnome/

This is really nice and exactly something we need. Thank you and
Jean-Baptiste very much for setting this up!

 Build failures:

  - folks: this started failing very recently, and thus is a perfect
example why this is useful (unqualified ambiguous usage of
HashTable)

I was not aware of this (as it's a recent break due to libxml changes).
I'll discuss notifications below.

And, at any rate, we've fixed this now.

 To make this really useful, we can't rely on developers checking this
 every hour or every day, of course; instead we need push notifications
 as soon as a module starts failing. That's the bit which needs broader
 discussion and consent.
 
 I see some obvious options here what to do when the status of a module
 (OK/fails tests/fails build) changes:
 
  (1) mail the individual maintainers, as in the DOAP files
(1a) do it for everyone, and let people who don't want this filter
them out on a particular mail header (like X-GNOME-QA:)
(1b) do this as opt-in
 
This most often reaches the people who can do something about the
failure. Of course there are cases where it's not the module's fault, but a
dependency changed/got broken. There is no way we can automatically
determine whether it was e. g. a deliberate API break which modules
need to adjust to, or indeed a bug in the depending library, so we
might actually need to mail both the maintainers of the module that
triggered the rebuild, and the maintainers of the module which now
broke.
 
  (2) one big mailing list with all failures, and machine parseable
  headers for module/test
 
This might be more interesting for e. g. the release team (we can
CC: the release team in (1) as well, of course), but will be rather
high-volume, and pretty much forces maintainers to carefully set up
filters.
 
 My gut feeling is that we might start with (2) for a while, see how it
 goes, and later switch to (1) when we got some confidence in this?
 
 Opinions most welcome!

I'd like to offer:

(0) auto-file an urgent/critical bug against the module in the case of
build breaks (and maybe high/major for test breaks?)

Buildability is incredibly important, and I, for one, would be perfectly
happy to get such high-priority bugs if my module fails to build for
anyone.

And it's even more critical if we expect people to use a tool like
jhbuild, where any build break lower in the stack wastes other
developers' time.

Getting everyone's tests to run reliably will be a challenge (there are
a couple of tests in Folks that I'm trying to fix currently), but it's
again very important for them to be (and stay) functional if we want our
stack to work consistently.

 Also, I'll gladly work with the developers of the currently failing
 modules to get them succeeding. I have full access to the build
 machine in case errors aren't reproducible.

Thanks again. I'm really excited to have a continuous build and testing
system to keep GNOME in good shape.

-Travis



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Claudio Saavedra
This is really awesome stuff!

On Tue, 2013-02-12 at 07:43 +0100, Martin Pitt wrote:

 To make this really useful, we can't rely on developers checking this
 every hour or every day, of course; instead we need push notifications
 as soon as a module starts failing. That's the bit which needs broader
 discussion and consent.
 
 I see some obvious options here what to do when the status of a module
 (OK/fails tests/fails build) changes:
 
  (1) mail the individual maintainers, as in the DOAP files
(1a) do it for everyone, and let people who don't want this filter
them out on a particular mail header (like X-GNOME-QA:)
(1b) do this as opt-in
 
This most often reaches the people who can do something about the
failure. Of course there are cases where it's not the module's fault, but a
dependency changed/got broken. There is no way we can automatically
determine whether it was e. g. a deliberate API break which modules
need to adjust to, or indeed a bug in the depending library, so we
might actually need to mail both the maintainers of the module that
triggered the rebuild, and the maintainers of the module which now
broke.
 
  (2) one big mailing list with all failures, and machine parseable
  headers for module/test

(3) Automatically filing a bug in the broken module with the details of
the breakage, and additionally CC'ing whoever might be interested in keeping
an eye on the continuous integration?

Claudio



Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Martin Pitt
Hello all,

Travis Reitter [2013-02-12 13:21 -0800]:
 On Tue, 2013-02-12 at 07:43 +0100, Martin Pitt wrote:
  To make this really useful, we can't rely on developers checking this
  every hour or every day, of course; instead we need push notifications
  as soon as a module starts failing. That's the bit which needs broader
  discussion and consent.
 
 I'd like to offer:
 
 (0) auto-file an urgent/critical bug against the module in the case of
 build breaks (and maybe high/major for test breaks?)

Claudio Saavedra [2013-02-12 23:24 +0200]:
 (3) Automatically filing a bug in the broken module with the details of
 the breakage and additionally CC: whoever might be interested in keeping
 an eye on the continuous integration?

This creates more overhead, but I like this proposal as well. I guess
one can use python-bzutils and the like to auto-create bugs, and
remember on the Jenkins side which bug was opened for which module,
and auto-close it once the build works again.
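
A minimal sketch of what that Jenkins-side glue could look like, assuming the
python-bugzilla module (I haven't checked what python-bzutils actually
provides), a plain JSON file for the module-to-bug mapping, and made-up
product/component names; the exact createbug()/update_bugs() calls would still
need verifying against bugzilla.gnome.org:

#!/usr/bin/env python
# Sketch only: keep at most one open "build broken" bug per module and
# close it again once the module recovers. The python-bugzilla calls and
# the product/component names are assumptions, not a tested setup.
import json
import os

import bugzilla  # python-bugzilla

STATE_FILE = os.path.expanduser("~/.jhbuild-ci-bugs.json")  # made-up path
bz = bugzilla.Bugzilla("bugzilla.gnome.org")  # assumes credentials are set up

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def module_failed(module, log_url):
    """File a bug for the module unless one is already open."""
    state = load_state()
    if module in state:
        return  # already reported, don't file duplicates
    bug = bz.createbug(
        product=module,        # assumption: product name matches module name
        component="general",   # made-up component
        version="unspecified",
        summary="%s fails to build in jhbuild CI" % module,
        description="Continuous build failed, log: %s" % log_url)
    state[module] = bug.id
    save_state(state)

def module_recovered(module):
    """Close the previously filed bug once the module builds again."""
    state = load_state()
    bug_id = state.pop(module, None)
    if bug_id is None:
        return
    bz.update_bugs([bug_id], bz.build_update(
        status="RESOLVED", resolution="FIXED",
        comment="The module builds again in jhbuild CI."))
    save_state(state)

The JSON file is the only state Jenkins has to carry between runs, which keeps
the job itself stateless.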

The main issue that I see with this is that it's much harder to filter
away/opt out, so it requires some broader consensus that we want to do
this. We can still add a module blacklist if needed, though.

Thanks!

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: Announcement/RFC: jhbuild continuous integration testing

2013-02-12 Thread Martin Pitt
Hello Sriram,

Sriram Ramkrishna [2013-02-12 11:06 -0800]:
 Christmas has come early this year!  This is fantastic news.  Perhaps I can
 try to get some volunteers to help file bugs on the build?

I filed one on e-d-s yesterday, which already got fixed. I also
discussed colord with Richard and the gstreamer bits with Tim, and we
got some fixes there as well. Philip fixed folks. It's really great to
see the list shrink down even further! I'll look into the remaining
bits today; if you want to join the effort, let's talk in
#gnome-hackers?

 What can we do to make this a sustaining success?

In the long run, we need to automate the notifications and/or bug
filing IMHO, otherwise it's too tedious, and more importantly, we'll
get too long delays.
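
For the notification side, here is a rough sketch of what such a push mail
with a machine-parseable header could look like, using only the Python
standard library; the X-GNOME-QA header format, the addresses and the
localhost relay are made up for illustration, none of this exists yet:

# Sketch: status-change mail with a machine-parseable header, so maintainers
# (or the release team) can filter on X-GNOME-QA instead of parsing the body.
# Header format, addresses and the localhost relay are assumptions.
import smtplib
from email.mime.text import MIMEText

def notify(module, status, log_url, recipients):
    body = "Module %s changed status to %s.\nBuild log: %s\n" % (
        module, status, log_url)
    msg = MIMEText(body)
    msg["Subject"] = "[jhbuild CI] %s: %s" % (module, status)
    msg["From"] = "jhbuild-ci@example.org"
    msg["To"] = ", ".join(recipients)
    # e.g. "X-GNOME-QA: module=gvfs status=build-failed"
    msg["X-GNOME-QA"] = "module=%s status=%s" % (module, status)
    s = smtplib.SMTP("localhost")
    s.sendmail(msg["From"], recipients, msg.as_string())
    s.quit()

Maintainers who only care about their own modules could then filter on the
module= token, and opting out would just mean dropping anything that carries
an X-GNOME-QA header.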

Thanks!

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)