On 17 Apr, 2007, at 13:47, Heikki Toivonen wrote:
Only Bryan and Andi have answered some of the questions I asked in my
emails yesterday, so I'm trying again in poll format. I'd like to know
which of the tools and procedures we currently have in place add value
and which do not. Based on this I can then drop the procedures and
reports that aren't cost-effective.
If something is just nice to look at, like a trend graph, but you
don't actually do anything with the information, then I'd like you to
answer in the negative for that case.
You can answer privately or on the list; I'll summarize the answers by
Thursday evening at the latest.
1. Are regression bugs usable to you?
Not really: performance fixes are in general incremental, and rarely
involve "fixing" a particular regression. So the bugs tend to float
around, getting retargeted from release to release, and occasionally,
when a perf number gets good enough, the bug gets closed or moved over
to future.
However, I think that the information as to when a perf regression
occurs is very useful, since it helps point to where we can improve
performance in the future. (As you know, the perf tests aren't
exactly run against every revision, and there can be pretty large
variability in a single test, so it's not always possible, based on
the tinderbox numbers, to pin down a problem to a single revision).
I'm not sure, though, what the best way is to record this information
and the associated conversations between developers.
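To illustrate what I'd want to do with that information, here is a
rough sketch (entirely my own illustration, not part of the tbox
tooling; names and thresholds invented) of flagging a likely
regression point from noisy per-revision timings:

    # Flag a revision whose median timing jumps well above the
    # trailing window of medians; crude, but tolerant of the
    # run-to-run noise in the tinderbox numbers.
    def median(xs):
        s = sorted(xs)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

    def flag_regressions(samples, window=5, threshold=1.25):
        """samples: list of (revision, [timings]) in checkin order."""
        medians = [(rev, median(runs)) for rev, runs in samples]
        flagged = []
        for i in range(window, len(medians)):
            recent = [m for _, m in medians[i - window:i]]
            rev, m = medians[i]
            if m > threshold * (sum(recent) / float(window)):
                flagged.append(rev)
        return flagged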
2. Are trend graphs usable to you?
http://builds.osafoundation.org/perf_data/trends.html
3. Are the tables from which the trend graphs are drawn usable to you?
For example
http://builds.osafoundation.org/perf_data/detail_20070318_20070417.html
4. Are the full results for today usable to you? For example
http://builds.osafoundation.org/perf_data/detail_20070417.html
5. Are the daily graphs usable to you? For example
http://builds.osafoundation.org/perf_data/detail_20070417.html
6. Are historical daily reports usable to you? For example
http://builds.osafoundation.org/perf_data/detail_20070416.html
[2-6] I do look at these occasionally, but not regularly. Having the
graphs makes it easier to tell where Things (Good and Bad) Happened,
but they're probably not essential.
7. Are the deltas and std.dev usable to you on
http://builds.osafoundation.org/perf_data/tbox.html
I understand them in the context of rt.py --repeat. But for the ones
in the tinderbox data, are the standard deviations taken across
different revisions? If so, they're not very good numbers
statistically. In addition, the deltas don't make sense to me: we
shouldn't be comparing a single revision to itself, for example.
Lastly (as I think you mentioned in an earlier email), clearing out
the data at the beginning of each day, instead of tracking the
previous 24 hours, makes sense. (I've had at least one case where I
never got to see the nice green improvement from a perf fix because
it was checked in around 10pm.)
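To make the statistical point concrete, here's a toy sketch (revision
names and numbers invented) of where I'd expect the std.dev and the
deltas to come from:

    def mean(xs):
        return sum(xs) / float(len(xs))

    def stddev(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    # Repeated measurements of the *same* revision.
    times = {
        "r12345": [1.92, 2.01, 1.88, 1.95],
        "r12346": [2.31, 2.25, 2.40, 2.28],
    }

    # Within-revision spread: a statistically meaningful std.dev.
    for rev in sorted(times):
        print rev, "mean=%.2f" % mean(times[rev]), \
              "std=%.2f" % stddev(times[rev])

    # Between-revision delta: compare each revision's mean to the
    # previous revision's, never a revision to itself.
    revs = sorted(times)
    for prev, cur in zip(revs, revs[1:]):
        print "%s -> %s: delta=%+.2f" % \
              (prev, cur, mean(times[cur]) - mean(times[prev]))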
8. If you would like to change the colors or format of
http://builds.osafoundation.org/perf_data/tbox.html please list the
changes here:
9. Is rt.py -p usable to you?
Yup ... I use it all the time.
10. Is rt.py -t usable to you?
I use it, although often in conjunction with --dry-run so that I can
use pdb on the "real" test invocation.
As an aside: why is "-t" even an option? Couldn't rt.py just treat
all non-option arguments on the command line as tests to be run?
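Roughly what I have in mind, as a sketch (the option names and the
discovery helper are made up, not rt.py's actual internals):

    from optparse import OptionParser

    def discover_all_tests():
        # Hypothetical stand-in for however rt.py finds its full
        # test list when none are named.
        return ["TestAll"]

    parser = OptionParser(usage="usage: %prog [options] [test ...]")
    parser.add_option("--dry-run", action="store_true", dest="dry_run",
                      help="show what would run without running it")
    options, args = parser.parse_args()

    # optparse already collects non-option arguments in `args`, so
    # whatever is left on the command line can simply be treated as
    # the list of tests, with no -t flag needed.
    tests = args or discover_all_tests()
    print "would run:", ", ".join(tests)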
11. Is rt.py -P usable to you?
Lately, when I've tried to use it, hotshot has not been happy with
loading the profile afterwards. So I've typically switched to
inserting hotshot.runcall in the source(*), or sometimes
util.easyprof.

(*) But you knew that already :)
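For reference, the manual pattern I fall back to (the file name and
the profiled function here are placeholders, not real Chandler code):

    import hotshot
    import hotshot.stats

    def run_the_slow_bit():
        # Hypothetical stand-in for the code under investigation.
        return sum(i * i for i in xrange(100000))

    prof = hotshot.Profile("slowbit.prof")
    prof.runcall(run_the_slow_bit)
    prof.close()

    # Load the saved profile and print the usual pstats output.
    stats = hotshot.stats.load("slowbit.prof")
    stats.sort_stats("cumulative")
    stats.print_stats(25)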
12. Is rt.py --repeat usable to you?
Yes, I use it quite often. E.g., if I'm trying to figure out whether
a patch really makes a difference to performance, I'll run it
(sometimes overnight) with some number of --repeats, so as to minimize
the uncertainty from crazy out-of-range values.
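As an illustration of why the repeats help (numbers invented), a
robust summary such as a trimmed mean damps the outliers that a
single run would be hostage to:

    def trimmed_mean(xs, trim=1):
        """Drop the `trim` lowest and highest values, then average."""
        s = sorted(xs)[trim:-trim or None]
        return sum(s) / float(len(s))

    runs = [2.03, 1.98, 2.01, 7.42, 2.00]   # one wild outlier
    print "plain mean   = %.2f" % (sum(runs) / float(len(runs)))  # ~3.09
    print "trimmed mean = %.2f" % trimmed_mean(runs)              # ~2.01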
13. Any other things you can think of that either add value or are
irrelevant or you would like changed:
--
Heikki Toivonen
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev