Lester Caine wrote: [snip]
Don't know what you are using ;)
Transferring 1000 records is always going to take time, when you only need 10 to be displayed.
[snip] I wasn't referring to transferring the record, only running the query. [snip]
[snip] Run the query without the LIMIT; this gives you the count - don't actually fetch the rows. Now run the same query with the LIMIT. If your database is worth anything (most are), it has this query cached, so the second run takes negligible extra time, and you don't have to spend time 'skipping' ahead X rows. If your database interface functions support 'skipping' ahead, use that instead.
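A minimal sketch of the two-query idea, using Python's stdlib sqlite3 and a made-up "items" table (all names here are illustrative). One caveat: sqlite3's API doesn't expose a row count without fetching, so the count query is written as COUNT(*) here; drivers that expose a num_rows on an executed statement could reuse the unlimited query directly without fetching.

```python
import sqlite3

# Hypothetical demo table; "items" and "name" are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item{i}",) for i in range(1000)])

page, per_page = 3, 10

# Query 1: same SELECT, no LIMIT -- used only for the count, no rows fetched.
total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]

# Query 2: same SELECT again, with LIMIT/OFFSET, to fetch just one page.
rows = conn.execute(
    "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
    (per_page, (page - 1) * per_page),
).fetchall()

print(total)      # 1000
print(len(rows))  # 10
```

The point being: only per_page rows ever cross the wire, while the count query lets you draw the full navigation bar.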
The trick with any transactional database is to maintain the most used counts in a second table, and manage those counts with triggers, so you only need a single record read to access them. The counts will always be valid for your view of the database then.
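The trigger-maintained count table described above can be sketched as follows, again with sqlite3 and invented table/trigger names ("row_counts", "items_ins", "items_del") purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);

-- Second table holding the pre-computed count; one row per tracked table.
CREATE TABLE row_counts (tbl TEXT PRIMARY KEY, n INTEGER NOT NULL);
INSERT INTO row_counts VALUES ('items', 0);

-- Triggers keep the count in step with every insert/delete,
-- so reading it later is a single-record lookup.
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    UPDATE row_counts SET n = n + 1 WHERE tbl = 'items';
END;

CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    UPDATE row_counts SET n = n - 1 WHERE tbl = 'items';
END;
""")

conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM items WHERE name = 'a'")

# Single-record read instead of a full COUNT(*) scan.
n = conn.execute("SELECT n FROM row_counts WHERE tbl = 'items'").fetchone()[0]
print(n)  # 2
```

Because the triggers fire inside the same transaction as the write, the stored count stays consistent with your view of the data, as noted above.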
This only works for known queries - anything that is filtered or otherwise limited won't benefit from it - and you still need to spend time maintaining the count. I'm speaking in a general, global, automated-statement-creation sense: you can't just wrap a COUNT(*) around any given SQL query, and in many cases, even when you can, it still ends up computing just as much information - either way the database calculates all the rows that would be returned.
[snip] Obviously, for page 1 of a paginated list, this performs worse than just running the single query; but if you get to page 99, you'll likely find it faster. Feel free to do your own tests - many factors can change these findings, and it's best to match them to your own scenario.
The ADOdb pager only needs to know how many pages to show in the navigation bar and how many records to download. In theory this can be very fast, and it only slows down where a database engine does not support a simple limited read. The pgsql driver in ADOdb supports LIMIT, so it's only the COUNT(*) calculation that needs replacing with a faster pre-calculated count.
I've been talking about using LIMIT the whole time; I'm not sure where you're going with this - I was merely further explaining what was likely obvious anyway. Naturally, when you pull in * ordered by an indexed column you gain speed with limits, but once you start putting conditions on things, the queries have to run across (many|all) the rows.
I try to speak in general terms on this list - not everyone is using ADOdb or MySQL (though I often slip into speaking about MySQL, since people either use it, are familiar with it, or, at the very least, any standard SQL engine will support the same features).
My statements were always related to using pagination in a general sense - e.g. a class you create for pagination that can cleanly paginate any manner of query. If you're dealing solely with hard-coded / hand-crafted queries, you can obviously build your pagination to fit.
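A generic pagination class of that sort might look like this sketch (Python/sqlite3; the class name, method names, and "items" table are all invented for illustration). It wraps an arbitrary SELECT as a subquery for the count - which works for plain SELECTs on engines that support derived tables, though, as noted earlier, the count can still cost about as much as running the query itself:

```python
import sqlite3

class Paginator:
    """Minimal sketch: paginate any plain SELECT via COUNT(*) + LIMIT/OFFSET."""

    def __init__(self, conn, query, params=(), per_page=10):
        self.conn = conn
        self.query = query
        self.params = params
        self.per_page = per_page

    def total_rows(self):
        # Generic but potentially expensive: wraps the full query in COUNT(*).
        wrapped = f"SELECT COUNT(*) FROM ({self.query})"
        return self.conn.execute(wrapped, self.params).fetchone()[0]

    def page(self, number):
        # Appends LIMIT/OFFSET to fetch only one page of rows.
        limited = f"{self.query} LIMIT ? OFFSET ?"
        offset = (number - 1) * self.per_page
        return self.conn.execute(
            limited, (*self.params, self.per_page, offset)
        ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item{i}",) for i in range(45)])

p = Paginator(conn, "SELECT id, name FROM items ORDER BY id", per_page=10)
print(p.total_rows())  # 45
print(len(p.page(5)))  # 5   (last, partial page)
```

For hand-crafted queries you'd skip the generic COUNT(*) wrapper and substitute a cheaper, query-specific count, which is exactly the trade-off discussed above.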
- Martin Norland, Sys Admin / Database / Web Developer, International Outreach x3257
The opinion(s) contained within this email do not necessarily represent those of St. Jude Children's Research Hospital.
-- PHP Database Mailing List (http://www.php.net/) To unsubscribe, visit: http://www.php.net/unsub.php