One thing missing from the 0.6 scenarios was a use case for syncing
performance. We didn't really focus on this for 0.6 specifically, but
I think it would be worthwhile to come up with some goals for 0.7.
On Feb 4, 2006, at 3:30 PM, Katie Capps Parlante wrote:
Hi Heikki and Sheila,
I think we can identify 3 different ways to frame the performance
tests that we ought to be focusing on in 0.7...
1) A set of use cases and target times that represent "acceptable"
performance for our 0.7 target user. This means being clear about
our target user and the set of 0.7 use cases. It means identifying
a target data size (target calendar size, target # of tasks, that
sort of thing). I don't think we're quite ready to do that, as
Sheila mentioned, but we're close and should put this on the list
of things we need from design.
2) Use cases/tests that are consistent with the previous release so
that we can compare progress across releases. As Heikki mentioned,
we want to avoid regressions (unless there is some good reason) and
improve somewhat.
3) Tests that push the size requirement up an order of magnitude,
so that we feel confident that we are making progress toward a 1.0
goal of large amounts of email and other data. (It would be great to
formalize this 1.0 goal; that should be part of working out the 1.0
roadmap, which we can focus on after 0.7 planning has been fleshed
out.) A rough sketch covering all three framings follows the list.
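Something like the following hypothetical harness could tie the three
together. Everything in it (tier sizes, the target time, function names)
is a made-up placeholder for illustration, not Chandler's actual test
code or agreed numbers:

import json
import time

# Hypothetical size tiers, one per framing above: "legacy" matches the
# existing 3000-item suite (2), "target" is a placeholder until design
# settles the 0.7 target user's data size (1), and "stress" pushes an
# order of magnitude higher toward the 1.0 large-data goal (3).
SIZE_TIERS = {"legacy": 3000, "target": 3000, "stress": 30000}

# Illustrative target time only; the real numbers are what we need
# from design.
TARGET_SECONDS = {"new_event": 0.5}

def time_operation(operation, repetitions=5):
    """Return the best wall-clock time over several runs."""
    best = float("inf")
    for _ in range(repetitions):
        start = time.time()
        operation()
        best = min(best, time.time() - start)
    return best

def check(name, seconds, baseline_file="baseline.json"):
    """Flag a miss against the target time (1) and a regression
    against the previous release's recorded time (2)."""
    target = TARGET_SECONDS.get(name)
    if target is not None and seconds > target:
        print("MISSED TARGET: %s took %.3fs (target %.3fs)"
              % (name, seconds, target))
    try:
        with open(baseline_file) as f:
            baseline = json.load(f)
    except IOError:
        return  # no recorded baseline yet
    old = baseline.get(name)
    if old is not None and seconds > old * 1.1:  # allow 10% noise
        print("REGRESSION: %s took %.3fs (last release: %.3fs)"
              % (name, seconds, old))

Each measured use case would then run once per tier in SIZE_TIERS,
with the results fed through check().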
I think we should try to pick a suite of tests and targets for 0.7
that cover all 3 of these needs.
Heikki, your list covers #2. Perhaps we could trim a few tests:
- New event creation (file menu) and (in place) could become one
test for both an empty repository and a 3000-item calendar (see the
sketch after this list)
- Cut the test that creates a new calendar on an empty repository
I don't think we need to keep thinking of that set of 9 tests as
our focus tests -- the tests we write to cover (1) should have that
role.
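Here is the kind of parametrized test that first trim suggests: one
"new event" timing run against both repository sizes. create_repository
and create_event below are stand-ins for whatever the harness actually
calls, not real Chandler APIs:

import time

def create_repository(items):
    # Stand-in: load or build a test repository holding `items` items.
    return list(range(items))

def create_event(repo):
    # Stand-in for the new-event operation being timed; the file-menu
    # and in-place variants could share this one test.
    repo.append(len(repo))

for size in (0, 3000):
    repo = create_repository(size)
    start = time.time()
    create_event(repo)
    print("new event, %d-item repository: %.5fs"
          % (size, time.time() - start))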
We could also proceed with adding a test or two that exercises a
bigger repository, some 10,000-item test. I'm not sure that a
10,000-item calendar test is appropriate, though. Perhaps we could
add the email test that Andi wrote (or a variant), or revive the
old RSS test.
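If we do add a 10,000-item email test, the fixture is cheap to
generate up front; a throwaway, purely illustrative sketch:

def generate_mbox(n, path="big_fixture.mbox"):
    # Write n minimal mbox-style messages so an import or indexing
    # test has data to chew on.
    with open(path, "w") as f:
        for i in range(n):
            f.write("From test%d@example.com Sat Feb  4 15:30:00 2006\n"
                    % i)
            f.write("Subject: message %d\n\nbody %d\n\n" % (i, i))

generate_mbox(10000)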
I'm not sure that I agree with your target time changes, but want
to mull this over before giving you a counter proposal.
Cheers,
Katie
Heikki Toivonen wrote:
Sheila Mooney wrote:
Heikki, is this meant to be only calendar-related workflows, or all
the performance goals for 0.7?
As I mentioned in the first message on Tuesday, we will be creating
new tests as well. So no, it is not the complete list of performance
goals. My purpose was to finalize the numbers for the existing tests,
but I see my wording was a little misleading.
A major theme for 0.7 is now the table and task management, and we
haven't really discussed what (if any) new use cases we should be
handling. Should we be trying to optimize for more than 3000
items? We had talked about that back in the fall when we put together
the 1.0 stickie plan. Also, I know we aren't intending to have a
"usable" dashboard for 0.7, but it might be worthwhile to at least
start tracking some table-related use cases such as searching and
sorting.
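Even something as rough as the sketch below would give us numbers to
start tracking; the row layout is invented for illustration and is not
what the dashboard table actually stores:

import random
import time

# 3000 invented rows standing in for dashboard table entries.
rows = [{"title": "item %04d" % random.randrange(10000),
         "done": i % 2 == 0}
        for i in range(3000)]

start = time.time()
rows.sort(key=lambda r: r["title"])
print("sort 3000 rows: %.4fs" % (time.time() - start))

start = time.time()
hits = [r for r in rows if "42" in r["title"]]
print("search 3000 rows (%d hits): %.4fs"
      % (len(hits), time.time() - start))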
Good, this is the kind of feedback I was hoping for.
And since we haven't discussed that much at all, I'm calling off the
Monday deadline.
I would prefer to keep the existing 3000-event tests we have because
they give us historical data on how we are doing, and create new
tests for other cases. However, there is a downside to too many tests
as well (the suite takes too long to run), so we need to strike a
balance.
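One way to strike that balance might be to tag each test with a tier,
so a quick subset runs on every checkin and the full suite, including
the large-repository tests, only runs nightly. The test names here are
made up:

TESTS = [
    ("new_event_empty", "quick"),
    ("new_event_3000", "quick"),
    ("email_import_10000", "nightly"),
]

def select(tier):
    # "nightly" runs everything; "quick" runs only the fast
    # per-checkin set.
    return [name for name, t in TESTS if tier == "nightly" or t == tier]

print(select("quick"))    # per-checkin subset
print(select("nightly"))  # full overnight run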
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "Dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/dev