On Mon, Dec 6, 2010 at 8:43 PM, Niles <nil...@gmail.com> wrote:
>
>
> On Dec 6, 10:53 pm, Jason Grout <jason-s...@creativetrax.com> wrote:
>> Robert (and whoever else is working on the buildbot [1]),
>>
>> First of all, THANK YOU!!!  This is amazing!  I love how it is hooked up
>> to show the status on trac.
>
> Indeed -- THANKS ! ! !

You're welcome.

>> Could we add building documentation to the things that it checks for
>> each ticket?

Done.

> I have a couple of other feature requests/suggestions, which of course
> you're welcome to ignore since this is already great work!  But if
> you're interested, here are my thoughts:

One of the next steps is to get this cleaned up enough to get into
Sage, so people can start improving it and running it on their own
machines. The way it's set up is that there's a single server that can
handle any number of build slaves.
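The slave side of the protocol is deliberately dumb. Roughly (an
untested sketch -- the server URL and endpoints here are placeholders,
not the real API):

    # Sketch of the slave side of the protocol; SERVER and the
    # endpoints are made up for illustration.
    import json, urllib2

    SERVER = "http://buildbot.example.com"

    def get_ticket_list():
        # Ask the server which tickets are up for testing.
        return json.load(urllib2.urlopen(SERVER + "/ticket/list"))

    def claim(ticket_id, machine):
        # Tell the server we're starting on this ticket, so other
        # bots can skip it and we avoid duplicated work.
        urllib2.urlopen(SERVER + "/ticket/%s/claim?machine=%s"
                        % (ticket_id, machine))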

> 1. When I visit the buildbot page for a particular ticket, I think it
> would be useful to show, in addition to the build/test history, the
> list of patches which buildbot plans to apply on its next run -- this
> would help people verify that the correct attachments/dependencies are
> computed by buildbot without having to just wait for the next run.

The patches it intends to apply are exactly those listed at the top of
the page.

> 2. It might be nice to know when buildbot plans to test the ticket
> next -- even a rough estimate would be helpful so that reviewers can
> determine if it would be more efficient for them to build and run
> tests themselves or wait for the buildbot

That's an interesting idea, but it's not easy to expose that
information. The way it works is that a build slave requests a list of
tickets, rates them according to its own internal settings (which may
be different for each bot), and then starts on a ticket (notifying the
server to avoid overlap). I suppose that, in addition to the list of
pending tickets, one could add something that collects the different
bots' ratings at different points in time, along with their
respective speeds, and computes an estimate.
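If someone wanted to play with that, the estimate could be as crude
as: for each bot, count the pending tickets it rates higher than this
one and multiply by that bot's average time per ticket, then take the
minimum. Something like (all the per-bot bookkeeping here is
hypothetical -- the server doesn't record any of it today):

    def eta_seconds(ticket, bots):
        # For each bot we'd need its rating function, its pending
        # ticket list, and its average seconds per ticket.
        estimates = []
        for bot in bots:
            r = bot.rate(ticket)
            # Tickets this bot would likely process before ours.
            ahead = sum(1 for t in bot.pending if bot.rate(t) > r)
            estimates.append((ahead + 1) * bot.avg_secs_per_ticket)
        return min(estimates) if estimates else None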

Ideally, as well as the "main" buildbot that just churns through
tickets, this would be an incentive for people to run their own
private buildbots according to their interests (a bot can be
configured to give priority to certain components, authors, etc.). I
also intend to make it very easy to run a bot on a specific list of
tickets, which isn't as good as having the results up there right
away, but is better than doing things manually (and the results are
shared). Of course we can't all be pounding away on sage.math with
-tp 12...
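For concreteness, the per-bot priorities could be as simple as
something like this (the config format and field names are invented
for the example; the real one may well look different):

    # Invented config format for a private bot's interests.
    PRIORITIES = {
        "components": {"linear algebra": 10, "coercion": 5},
        "authors":    {"jason": 5},
    }

    def rate(ticket):
        # Higher score = test sooner; unknown components/authors add 0.
        score = PRIORITIES["components"].get(ticket.component, 0)
        for author in ticket.authors:
            score += PRIORITIES["authors"].get(author, 0)
        return score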

> 3. It seems that there are 2 or 3 tests that consistently fail, and
> which most people ignore for the purposes of reviewing (e.g. the
> startup.py test for startup time on sage.math.washington.edu).  This
> means that probably no ticket will ever have "TestsPassed" status, and
> the "TestsFailed" status is less useful because it doesn't distinguish
> between failures which indicate a ticket needs more work and failures
> which should be ignored (for the purposes of reviewing, that is).  I
> don't really want to reignite the debate about whether these tests
> should be modified or not, but perhaps buildbot could have a separate
> status when the only doctest failures are from among a certain set of
> files -- this would give more useful information to reviewers.

Yes. Though it's potentially dangerous to filter on files rather than
individual doctests. The first step would be to establish a baseline
for each server, but beyond that I probably won't have time to
implement much in the near term.
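For whoever does pick it up, the core would just be set arithmetic on
(file, doctest) failure pairs: record what fails on a clean build on
each machine and subtract that before assigning a status. A rough
sketch (status names and data structures invented):

    def test_status(failures, baseline):
        # `failures`: set of (file, doctest) pairs from this run;
        # `baseline`: what already fails on an unpatched build on
        # this particular machine.
        if not failures:
            return "TestsPassed"
        elif failures <= baseline:
            return "TestsPassedOnBaseline"   # only the known failures
        else:
            return "TestsFailed"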

> 4. On a similar topic, I think someone else suggested that the failure
> summary from doctests could be posted directly to each buildbot ticket
> page -- for most people, that's probably the only part of the log
> that's really important.

Well, coverage is nice to look at too. For sure there'll be a summary
log like David Roe requested, but I'd also like each machine's report
to be somewhat compact, so one can more easily compare the results
from several machines. Certainly worth experimenting with.

There's probably a wealth of data that can be extracted by (1)
analyzing the sources before and after and (2) analyzing the full log,
and I'm hoping it won't be too hard for people to play with that.
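E.g. pulling the failure summary out of a full log should be nearly a
one-liner; something like this (the exact marker text is from memory
and may need tweaking):

    import re

    def failure_summary(log):
        # Grab everything from the summary marker to the end of the
        # log; returns None if nothing failed.
        m = re.search(r"The following tests failed:.*", log, re.DOTALL)
        return m.group(0) if m else None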

- Robert
