On Tuesday, 02 May 2006 08:33, Kevin Dangoor wrote:

> This one looks like a reasonable implementation, though. One thing
> that can throw people off: the query is re-run every time. This is

I haven't looked at it, but I don't believe this is a problem.  Maybe for 
really complex queries, but then one would be using views and/or the 
equivalent of Oracle's materialized views (which can be built in PostgreSQL 
using triggers, for example).

> very different from a typical search engine scenario where the list of
> results is held on to and paging is done through that (ensuring a
> consistent view of the data). It depends on your data set (and how
> often it changes) whether or not that matters.

Hmmmm...  The problem, I believe, is that we don't keep the cursor open.  So 
on the first run we get the first "20" results; then we go off to have lunch, 
wash the car, etc., and when we come back and ask for the next "20" results, 
we get them from a "different database" than the one the first query ran 
against.

This is bad if one is using the results to make calculations -- bad design, 
IMHO, since those calculations should already be done automatically.  On the 
other hand, it is also good because it reflects changes in a dynamic 
environment.  Look at what happens in Gmail when you delete a message on your 
first page: a message from the second page moves up to the first, one from 
the third moves to the second, one from page n+1 moves to page n.  If a new 
message arrives, the last message on the first page is pushed to the second 
page and the same logic as above -- but reversed this time -- applies.  
Dynamically.
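The page-shifting behaviour above falls out naturally from LIMIT/OFFSET 
paging, since the query is re-run on every request.  A minimal sketch 
(hypothetical table, SQLite standing in for the real database):

```python
import sqlite3

# Hypothetical "messages" table; LIMIT/OFFSET paging re-runs the query
# on every request, so a deletion between requests shifts rows between
# pages -- the Gmail behaviour described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO messages (id) VALUES (?)",
                 [(i,) for i in range(1, 7)])

def page(conn, page_no, page_size=2):
    # The query runs afresh on each call; no cursor is held open.
    offset = (page_no - 1) * page_size
    rows = conn.execute(
        "SELECT id FROM messages ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset)).fetchall()
    return [r[0] for r in rows]

print(page(conn, 1))            # [1, 2]
print(page(conn, 2))            # [3, 4]

# Delete a message from the first page between two requests...
conn.execute("DELETE FROM messages WHERE id = 1")

# ...and a row from the old page 2 slides onto page 1.
print(page(conn, 1))            # [2, 3]
print(page(conn, 2))            # [4, 5]
```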

> This does also require 2 queries to retrieve the data on every hit.

This might be a problem on really heavily loaded database servers...  But I 
don't see how to do fewer than "x + 1" queries -- x being the number of pages 
-- if there's some transaction isolation in place solving the above problem, 
or really 2 queries per hit if we don't solve it (I don't see an easy way 
with an asynchronous connection, especially using a connection pool, but I 
haven't thought much about it).  We'll always need to retrieve the number of 
records the search returned, and then we need to show "n" records on screen, 
so there really are 2 queries here.  Retrieving everything is a no-no, 
because some people will have tables containing millions of rows -- a log 
watcher, machine statistics, experiment results, etc. -- and that would 
"kill" both the server and the client machine.
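The "2 queries per hit" are one COUNT(*) for the total and one LIMIT/OFFSET 
query for the visible page -- never a full fetch.  A sketch with a 
hypothetical "log" table (SQLite standing in for the real database):

```python
import sqlite3

# Hypothetical log-watcher table with many rows; only COUNT(*) and the
# current page ever travel over the wire.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO log (msg) VALUES (?)",
                 [("line %d" % i,) for i in range(100)])

def fetch_page(conn, page_no, page_size=20):
    # Query 1: how many records matched, for the pager widget.
    total = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
    # Query 2: only the "n" records shown on screen.
    rows = conn.execute(
        "SELECT id, msg FROM log ORDER BY id LIMIT ? OFFSET ?",
        (page_size, (page_no - 1) * page_size)).fetchall()
    return total, rows

total, rows = fetch_page(conn, 1)
print(total, len(rows))   # 100 20
```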

What we might go for, though, is providing, e.g., an SOResultSet to the 
datagrid *and* a record count for each page (the count being optional; if 
not provided, it defaults to the number of records in the result set).  This 
frees us from doing this logic, but it also "kills" some of the ease 
provided by a fully automatic datagrid...
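The proposed interface might look something like the sketch below -- names 
and signature are purely illustrative, not actual TurboGears or SQLObject 
API: the caller hands over a result set and, optionally, the total record 
count; when the count is omitted, it falls back to the size of the result 
set itself.

```python
# Hypothetical helper; "result_set" stands in for an SOResultSet and
# only needs len() and slicing here.
def paginate(result_set, page, page_size=20, record_count=None):
    if record_count is None:
        # Optional count not provided: use the result set's own size.
        record_count = len(result_set)
    start = (page - 1) * page_size
    return result_set[start:start + page_size], record_count

# Page 5 of 95 records at 20 per page: the last, partial page.
rows, total = paginate(list(range(95)), page=5)
print(len(rows), total)   # 15 95
```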

> All of that aside, this API does look friendly and complete enough
> that I think we can take it.

+1 for that.

-- 
Jorge Godoy      <[EMAIL PROTECTED]>


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"TurboGears Trunk" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/turbogears-trunk
-~----------~----~----~----~------~----~------~--~---