I've been under the impression that we cared about performance regressions, although not (yet?) enough to back out a change that caused one. My thought was that we'd file the bugs when regressions happened, and analyze the changes later to determine what caused them. I also thought it would be easier to determine the cause of a regression (and fix it) by working with the change information rather than just doing regular performance work (starting with a profile, etc.).
However, at least some engineers don't want to work with this past change information, and as far as I understand won't be looking at performance bugs at all. I'd like to know if everyone feels this way. If so, then I can stop filing performance regression bugs. I'd probably be able to simplify the performance reporting tools quite a bit if we didn't care about regressions (for example, only reporting the absolute test results on tbox, and nuking all other results).

If some people do find perf regression bugs useful, then I have a question about what to do with perf regression bugs caused by an engineer who does not want to look at them.

I am somewhat annoyed by the situation, mostly because I'd like to avoid doing work that is considered useless. I do understand that people have different styles of approaching problems. -- Heikki Toivonen
Open Source Applications Foundation "chandler-dev" mailing list http://lists.osafoundation.org/mailman/listinfo/chandler-dev
