Agree with Katie's comments, plus I'd add that I think it's important to identify "typical" case requirements, e.g., a typical calendar might be 500 items (or whatever), and then build performance goals around the typical cases.
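To make that concrete, typical-case goals could be written down as plain data that a benchmark script checks measurements against. A minimal sketch in Python (the scenario names, item counts, and target times are made-up placeholders, not proposed numbers):

# Illustrative only: scenarios, sizes, and targets are hypothetical placeholders.
TYPICAL_CASE_GOALS = {
    # scenario name: (items in the collection, target seconds)
    "startup_with_typical_calendar": (500, 10.0),
    "create_event_in_typical_calendar": (500, 1.0),
    "switch_to_typical_calendar_view": (500, 2.0),
}

def check(measured):
    """Compare measured times (scenario -> seconds) against the typical-case goals."""
    for scenario, (items, target) in sorted(TYPICAL_CASE_GOALS.items()):
        actual = measured.get(scenario)
        if actual is None:
            print("%-35s no measurement" % scenario)
        elif actual <= target:
            print("%-35s %d items: %.1fs <= %.1fs  ok" % (scenario, items, actual, target))
        else:
            print("%-35s %d items: %.1fs > %.1fs  OVER" % (scenario, items, actual, target))

if __name__ == "__main__":
    check({"startup_with_typical_calendar": 8.2,
           "create_event_in_typical_calendar": 1.4})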
On Dec 6, 2005, at 10:47 AM, Katie Capps Parlante wrote:
Heikki Toivonen wrote:
There are a couple of approaches we could take. One is to concentrate on the cases that are furthest from our 0.6 goals. The other is to take a broad look and work on all the cases that are not yet under the ideal limits. In the first case our numeric goals would remain unchanged. In the second case we would set new, tighter goals for the cases that did make it under acceptable limits but are not yet under ideal limits. Personally I am in favor of the latter. I have some initial suggestions here: http://wiki.osafoundation.org/bin/view/Journal/ZeroSevenPerfGoals20051204
Another approach we could take is to increase the size of the collections (and repository) that we expect to handle. Instead of a 3000-item calendar, we could say a 10,000-item repository, with several collections/calendars. Nailing down the tenets might help us pick a good target, but the general goal here would be to ratchet up the size of the repository. We could combine this with focus on a specific set of cases and/or ratcheting down the response times.
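For illustration, a larger data set along those lines could be generated synthetically. The sketch below uses plain Python dictionaries as stand-ins for items (not Chandler's actual repository API), and the collection names and sizes are arbitrary examples that happen to add up to 10,000:

import random
import datetime

# Hypothetical collection mix totalling 10,000 items; not a proposed breakdown.
COLLECTION_SIZES = {
    "Work calendar": 3000,
    "Home calendar": 1500,
    "Mail": 4000,
    "Tasks": 1000,
    "Notes": 500,
}

def make_item(collection, i):
    start = datetime.datetime(2005, 1, 1) + datetime.timedelta(
        days=random.randrange(365), minutes=random.randrange(24 * 60))
    return {"collection": collection,
            "title": "%s item %d" % (collection, i),
            "start": start,
            "duration": datetime.timedelta(minutes=random.choice([30, 60, 90]))}

def make_test_data():
    items = []
    for name, size in COLLECTION_SIZES.items():
        items.extend(make_item(name, i) for i in range(size))
    return items

if __name__ == "__main__":
    items = make_test_data()
    print("generated %d items in %d collections" % (len(items), len(COLLECTION_SIZES)))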
There's also a question of policy. Do we want to mandate a strict policy of not allowing performance regressions (regressions backed out, no questions asked), or do we consider each regression on a case-by-case basis? The danger with the second approach is that if we leave the regression in and other code starts piling on top of those changes, it can be hard to back out even if we wanted to. Personally I am slightly in favor of a strict policy, but the other approach might also work well if we have just a few regressions.
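Whichever policy wins, "regression" needs a mechanical definition so it can be caught automatically. A minimal sketch of such a check (the 10% tolerance, test names, and timings below are assumptions for illustration, not something our current test setup does):

# Compare per-test timings against a stored baseline and report anything
# that got more than TOLERANCE slower. Purely illustrative.
TOLERANCE = 0.10  # flag runs more than 10% slower than the baseline

def find_regressions(baseline, current, tolerance=TOLERANCE):
    """Return (test, baseline_seconds, current_seconds) tuples for regressed tests."""
    regressions = []
    for test, old in sorted(baseline.items()):
        new = current.get(test)
        if new is not None and new > old * (1.0 + tolerance):
            regressions.append((test, old, new))
    return regressions

if __name__ == "__main__":
    baseline = {"startup": 9.5, "new_event": 1.2, "switch_view": 2.0}
    current = {"startup": 9.7, "new_event": 1.5, "switch_view": 2.0}
    for test, old, new in find_regressions(baseline, current):
        print("REGRESSION: %s went from %.1fs to %.1fs" % (test, old, new))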
Mozilla has long maintained a very strict policy where regressions are simply backed out. If you really wanted your fix/feature in, there were two options: make a change such that performance does not regress, or convince everyone that your change is so important it trumps performance.
This approach seems pretty reasonable for Mozilla (or Safari, which I hear has a similar system), where the intense focus is on a particular well-known use case. In our case, we're continuing to tweak the functionality and even change the formulation of the use cases that we measure. I think it would be overly constraining at this point to have a formal "back all regressions out" policy.
Cheers,
Katie
_______________________________________________
Open Source Applications Foundation "Dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/dev