Alex Turner <[EMAIL PROTECTED]> writes:
> The problem with this approach is TTFB (Time To First Byte). The
> initial query is very slow, but additional requests are fast. In most
> situations we do not want the user to have to wait a disproportionate
> amount of time for the initial query. If this is the first time using
> the system, this will be the impression that will stick with them. I
> guess we could experiment and see how much extra time creating a cache
> table will take...
You could cheat and do queries with an offset of 0 directly, but also
start up a background job to fetch the complete results and cache them.
Queries with a non-zero offset would have to wait until the complete
cache is built. You have to be careful about people arriving from
bookmarks at non-zero offsets, and about people hitting reload before
the cache has finished being built.

As someone else suggested, you could look into other systems for storing
the cache. If you don't need to join against other database tables, and
you don't need the reliability of a database, then there are faster
solutions, such as memcached. (Even the problem of joining against
database tables is solvable; look up pgmemcached. No idea how it
performs, though.)

But I think you're running into a fundamental tension here. The feature
you're looking for, being able to jump around in an arbitrary
non-indexed query result set which can be arbitrarily large, requires a
lot of work. All you can do is shift around *when* that work is done.
There's no way to avoid doing the work entirely.

-- 
greg

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
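A minimal sketch of the "cheat" described above: serve the first page
straight from the source query for fast TTFB, start a background job
that materializes the full result set into a cache, and make non-zero
offsets block until the cache is ready. All names here (`PageServer`,
`PAGE_SIZE`) are hypothetical, and an in-memory SQLite database stands
in for PostgreSQL just to keep the example self-contained:

```python
import sqlite3
import threading

PAGE_SIZE = 3

class PageServer:
    def __init__(self, rows):
        # check_same_thread=False plus a lock lets the background
        # job share this connection with the request path.
        self.db = sqlite3.connect(":memory:", check_same_thread=False)
        self.db.execute("CREATE TABLE items (val INTEGER)")
        self.db.executemany("INSERT INTO items VALUES (?)",
                            [(r,) for r in rows])
        self.lock = threading.Lock()
        self.cache = None                 # full, ordered result set
        self.cache_ready = threading.Event()

    def _build_cache(self):
        # Background job: run the expensive query once, cache everything.
        with self.lock:
            rows = self.db.execute(
                "SELECT val FROM items ORDER BY val").fetchall()
        self.cache = [r[0] for r in rows]
        self.cache_ready.set()

    def get_page(self, offset):
        if offset == 0:
            # First page: query directly for fast time-to-first-byte,
            # and kick off the cache build in the background.
            threading.Thread(target=self._build_cache).start()
            with self.lock:
                rows = self.db.execute(
                    "SELECT val FROM items ORDER BY val LIMIT ?",
                    (PAGE_SIZE,)).fetchall()
            return [r[0] for r in rows]
        # Non-zero offsets (bookmarks, reloads) wait for the cache.
        self.cache_ready.wait()
        return self.cache[offset:offset + PAGE_SIZE]
```

Note the caveat from the text still applies: a request that arrives at a
non-zero offset before any offset-0 request has triggered the cache
build would wait indefinitely, so a real implementation would also start
the build (or fall back to a direct OFFSET query) in that path.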