On 5/15/07, Priscilla Chung <[EMAIL PROTECTED]> wrote:
> For 0.7, this may be ok. However, over the longer term there will be a need for people to have access to everything they have 'DONE'. I realize this may complicate the UI for 0.7, and I agree we should find a happy medium on how far back to go.
and, well, we might need to think harder about how to provide that access. let me try to explain the concerns that matthew and i have.

it's likely that a typical chandler user will, over time, develop a very large set of DONE items, a smaller set of LATER items, and a relatively small number of NOW items. matthew is concerned that the set of DONE + LATER + NOW items will be so large that it will be slow and inefficient to sort and page them in the browser. very important to avoid. i'm concerned that the same set will be so large that it will be slow and inefficient for the server to sort and page them, especially when you consider that the server will be handling requests from many users simultaneously. each client represents an additional cpu that the server doesn't have, and it makes sense to distribute as much work as possible to those cpus so the server stays dedicated to things that only it can do.

the best option for making us both happy is to constrain the data set that the server has to return to the client. yes, you could argue that's what paging is for. but paging across the "superset" (DONE + LATER + NOW) doesn't seem to make sense for usability, and anyway, paging requires sorting to be done first, so if you give paging to the server, you have to give it both, and we're trying to avoid that.

an alternative to "page the superset" is what we've suggested: define constraints for each subset that naturally limit the number of data items that come back from the server, sorted in "natural" order. natural order might be by triage status rank (that negative floating-point number) or by chronology (reversed for DONE items); i don't know if it matters.

let's say i have 2000 DONE items, 5 NOW items and 500 LATER items in my collection. using the "page the superset" method, the server would have to sort 2505 items and then slice out the requested page to return to the browser. with the "constrained subsets" method, the server would perform 3 queries and return, say, last week's DONE items (maybe 20?), the 5 NOW items, and next week's LATER items (5?). the client could sort these 30 items in a fraction of the time the server could sort 2505, and it wouldn't have to bother with slicing out a page at all, unless the numbers are a lot bigger than i posit here, which would be pretty surprising. and even then, it could page each subset individually, which opens up some interesting possibilities.

of course, we could always build in ways to expand the subset time-ranges or otherwise alter their constraints. taken to the logical extreme, we could even remove the constraints entirely, which would effectively push all of the sorting and paging onto the client. but 80% (probably more) of our users will never bother to do that, so only the power users would (potentially) be affected. that's way better, imo, than automatically forcing all of this work onto the server and decreasing its overall capacity and throughput.
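to make the "constrained subsets" idea concrete, here's a rough typescript sketch of what the client side might look like. everything in it is invented for illustration: the /api/items endpoint, the status/since/until query parameters, and the item fields (rank, date) are all assumptions, not the real server api.

```typescript
// hypothetical item shape; field names are invented for illustration
interface Item {
  uuid: string;
  title: string;
  triageStatus: 'NOW' | 'LATER' | 'DONE';
  rank: number;   // triage status rank: a negative float, lower sorts first
  date: number;   // epoch millis, used for chronological ordering
}

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// one constrained query per subset; the server only has to return the
// matching slice, never sort or page the whole superset
async function fetchSubset(
  status: Item['triageStatus'],
  params: Record<string, string>
): Promise<Item[]> {
  const query = new URLSearchParams({ status, ...params });
  const res = await fetch(`/api/items?${query}`); // made-up endpoint
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  return res.json();
}

// the client pulls three small, constrained subsets in parallel and does
// the sorting itself, keeping that cpu cost off the shared server
async function loadTriageView(now = Date.now()): Promise<Item[]> {
  const [done, current, later] = await Promise.all([
    fetchSubset('DONE',  { since: String(now - WEEK_MS), until: String(now) }),
    fetchSubset('NOW',   {}), // always a small set, no constraint needed
    fetchSubset('LATER', { since: String(now), until: String(now + WEEK_MS) }),
  ]);

  // "natural" order per subset: DONE reverse-chronological, NOW by triage
  // rank, LATER chronological; then concatenate for display
  done.sort((a, b) => b.date - a.date);
  current.sort((a, b) => a.rank - b.rank);
  later.sort((a, b) => a.date - b.date);
  return [...current, ...later, ...done];
}
```

note that in the 2000/5/500 example above, the browser only ever sees the ~30 items the constraints let through; widening the time-range constraints (or dropping them, for power users) changes only the query parameters, not the shape of the code.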
> One thing we could do is ask 'dogfooders' how far back their 'DONE' items go and whether they ever refer back to them. Second, how far back do users go in their e-mail to reference things?
yes please. i suspect many fewer people would be inconvenienced by my proposal than by the original idea. re email, i use gmail, so i just use search to find whatever i need. i certainly don't try to browse sequentially through millions of messages.
