On Fri, 31 Mar 2006 14:42:40 +0100
Jeff <[EMAIL PROTECTED]> wrote:

> -------- Original Message --------
> From: [EMAIL PROTECTED] (Tomas Zerolo)
> To: modperl@perl.apache.org
> Subject: Database transaction across multiple web requests
> Date: 31/3/2006 12:59
> 
> > (a) Create a database cursor and page through it
> > (b) Repeat the query (making sure there is a sort criterion) at each
> >     page request, starting at a variable offset and limiting the
> >     result set
> > (c) Do the whole query at once, putting the results in some kind
> >     of array.
> > 
> 
> MySQL's query cache was designed to do this for you.
> 
> ...[SNIP]...
>
> Obviously if the underlying data changes, the server re-executes the 
> query and you get what you asked for (so your pagination / position
> can change if records are deleted, added or sort values change).
> 
> Don't know if this helps you though, as you are not using MySQL.

  Yeah, MySQL's query cache isn't going to help him much with
  PostgreSQL! :)

  However, with PostgreSQL, if you use placeholders for your LIMIT
  and OFFSET values it will cache the query plan for you, provided
  you have recent versions of both PostgreSQL and DBD::Pg.
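  Under the hood this is PostgreSQL's server-side prepared statements:
  the plan is built once at PREPARE time and reused on every EXECUTE,
  which is what DBD::Pg arranges when you call prepare() with
  placeholders. A minimal sketch at the SQL level (table and column
  names here are made up for illustration):

  ```
  -- Plan is computed once, then reused for each page fetch.
  PREPARE page_query (int, int) AS
      SELECT id, title
        FROM articles
       ORDER BY id
       LIMIT $1 OFFSET $2;

  EXECUTE page_query (20, 0);    -- first page
  EXECUTE page_query (20, 20);   -- second page

  DEALLOCATE page_query;
  ```

  Note the usual OFFSET caveat still applies: the server must scan
  and discard the skipped rows, so plan caching saves planning time,
  not scanning time.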

  If your WHERE clause is *really* complex and your data doesn't
  change frequently, you should look into using a materialized view,
  which can really speed things up.
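  As a sketch of the idea (names are made up; modern PostgreSQL, 9.3
  and later, has materialized views built in, while older releases
  emulated them with a table refreshed by a trigger or cron job):

  ```
  -- Store the result of the expensive query once, then page against it.
  CREATE MATERIALIZED VIEW article_search AS
      SELECT a.id, a.title, count(c.id) AS comment_count
        FROM articles a
        LEFT JOIN comments c ON c.article_id = a.id
       GROUP BY a.id, a.title;

  CREATE INDEX ON article_search (title);

  -- Re-run the stored query whenever the underlying data changes:
  REFRESH MATERIALIZED VIEW article_search;
  ```

  Queries against the view then read precomputed rows instead of
  re-joining and re-aggregating on every page request.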

 ---------------------------------
   Frank Wiles <[EMAIL PROTECTED]>
   http://www.wiles.org
 ---------------------------------
