Heikki,

I do think our performance is important, and that it's worthwhile to have a set of metrics to shoot for, and to have times (like right now) when we focus on those problems. However, I don't think performance-regression bugs targeting a particular commit are useful or positive for me as a developer:

- Performance analysis works better when all the changes affecting a metric are in place: we can analyze the whole chain, not just one piece at a time.
- It's not like we can just back out the commit and discard the feature requirement. Most of the time, it's not possible to implement new features without some performance cost.
- It's *really* demoralizing to work hard on a feature, then have a performance-regression bug filed against it a day or so later (usually after you've started to dig into the next feature).

I also have trouble with our performance-monitoring mechanisms: many of the measurements vary widely, even when run against the same version of the code. Here are 19 runs against a single revision on a single platform, and the standard deviation is 1/3 of the average time!

http://builds.osafoundation.org/perf_data/detail_20070415.html#creating_a_new_event_in_the_cal_view_after_large_data_import.double_click_in_the_calendar_view

That was a weekend day; on a weekday, or on a slower platform, there may be only one perf run per revision (or none at all). Because of this, the graphs cover too short a period to reliably show the effect of a single commit.
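To make that concrete, here's a rough back-of-envelope sketch (in Python; the 3-second mean is made up, only the 1/3 ratio comes from the data above) of how big a regression has to be before it stands out from that kind of noise:

import math

mean_time = 3.0            # hypothetical average time, in seconds
stddev = mean_time / 3.0   # the spread we actually see: ~1/3 of the average

def detectable_regression(n_runs, sigma=stddev):
    # Smallest difference between two sample means (each averaged over
    # n_runs runs) that exceeds ~2 standard errors -- roughly a 95% bound.
    return 2 * sigma * math.sqrt(2.0 / n_runs)

for n in (1, 5, 19):
    delta = detectable_regression(n)
    print("%2d run(s): a change needs to be ~%.0f%% to stand out"
          % (n, 100 * delta / mean_time))

With one run per revision, a commit has to slow things down by nearly a factor of two before the graph can tell it apart from noise; even 19 runs only get us down to roughly a 20% change.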

This means the graphs are really only indicators of an area where investigation is necessary. I'm still working out how best to do that investigation. For me, so far, it's been best to start by profiling the part of the code being run by the performance test; I've learned to avoid profiling the voodoo idles and sleeps that the functional tests contain, since they only throw off the measurements. Once there's a profile, I can look for a method or functional area that's too slow or called too often, and rewrite as necessary. To do this, I don't need to look back at whatever revisions happened in the past.
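For what it's worth, here's a minimal sketch of the kind of targeted profiling I mean; the two calendar functions are hypothetical stand-ins for whatever the perf test actually drives, and cProfile is just the stock Python profiler, not anything specific to our harness:

import cProfile
import pstats
import time

def import_large_calendar():
    # placeholder for the real import step the perf test drives
    sum(i * i for i in range(100000))

def create_event(title):
    # placeholder for the create-event-in-the-cal-view path we measure
    return {"title": title, "created": time.time()}

def run_interesting_part():
    # Only the work we actually care about -- no wx idle loops and no
    # time.sleep() padding from the functional-test harness.
    import_large_calendar()
    create_event("profiled event")

profiler = cProfile.Profile()
profiler.enable()
run_interesting_part()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(25)

The point is just that the profile covers only the interesting code path, so the hot spots it reports are ones worth rewriting.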

(While we're on the subject: I also don't like the way we state our performance targets. If we say that 1 second is "acceptable" but the "goal" is 0.1 seconds, I'm going to stop looking at a problem once it reaches "acceptable" and switch to another problem, and I won't try to improve the first one further until all the other metrics are at the "acceptable" level -- and probably not until after all other bugs are fixed, too, which hasn't happened yet. I'd be happier if the table on the tbox page used a shade of green once a measurement got to "acceptable", and a brighter shade if it got to the "goal": that would make us look a lot less screwed than the red and orange mess there now, which makes it look like we're making no progress at all.)
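If it helps, the shading rule I have in mind is as simple as something like this (the 1-second and 0.1-second thresholds are just the example numbers from the previous paragraph, not real targets for any particular test):

def cell_color(measured, acceptable=1.0, goal=0.1):
    if measured <= goal:
        return "#00cc00"   # bright green: at or better than the goal
    if measured <= acceptable:
        return "#99cc99"   # pale green: acceptable, room to improve
    return "#cc3333"       # red: neither target met yet

for t in (0.08, 0.6, 2.5):
    print("%.2fs -> %s" % (t, cell_color(t)))

Two shades of green would at least show where we've crossed the "acceptable" line, instead of lumping everything short of the goal into the same alarming colors.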

...Bryan

Heikki Toivonen wrote:
I've been under the impression that we cared about performance
regressions, although not (yet?) enough to back out a change that caused
one. My thought was that we'd get the bugs filed when they happened, and
analyze the changes later to determine what caused the regression. I
also thought that it would be easier to determine the cause of a
regression (and fix it) by working with the change information rather
than just doing regular performance work (starting with a profile, etc.).

However, at least some engineers don't want to work with this past
change information, and won't be looking at performance bugs at all, as
far as I understand.

I'd like to know if everyone feels this way. If yes, then I can stop
filing performance regression bugs. I'd probably be able to simplify the
performance reporting tools quite a bit if we didn't care about
regressions (like maybe only reporting the absolute test results on
tbox, and nuking all other results).

If some feel perf regression bugs are useful, then I have a question
about what to do with perf regression bugs that are caused by an
engineer who does not want to look at performance regression bugs.

I am somewhat annoyed by the situation, mostly because I'd like to avoid
doing work that is considered useless. I do understand people have
different styles of approaching problems.

