Chris,  People:

(Dropped SQL list because we're cross-posting unnecessarily)

> I am not sure I agree with you. I have done similar things with Oracle and
> found that the second query will execute much more quickly than the first.
> It could be made to work in at least two scenarios

Actually, PostgreSQL often DOES cache data, it just uses the kernel cache 
rather than an application-level cache built into Postgres, and it caches the 
underlying data, not the final query results.  Base data for query sets gets 
cached in RAM after a query, and the second query often *does* run much 
faster.

For example, I was running some queries against the TPC-R OSDL database, and 
the first time I ran the queries they took about 11 seconds each, the second 
time (for each query) it was about 0.5 seconds because the data hadn't 
changed and the underlying rowsets were in memory.

I think it's likely that your machine has *already* cached the data in memory, 
which is why you don't see improvement on the second run.  The slow execution 
time is the result of bad planner decisions, and others are helping you adjust 
for that.

Now, regarding caching final query results in memory: this seems to me like a 
lot of effort for very little return.   Doing so would require that all the 
underlying data stay the same, and on a complex query it would require an 
immense infrastructure of data-change tracking to verify that.

If you want a data snapshot, ignoring the possibility of changes, there are 
already ways to do this:
a) use a temp table;
b) use your middleware to cache the query results
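For option (b), the cache can live entirely in the middleware layer, outside 
the database.  Here is a minimal Python sketch of the idea; the 
QueryResultCache class, the TTL, and the run_query callback are all 
hypothetical illustrations for this email, not part of Postgres or any 
existing driver:

```python
import time

class QueryResultCache:
    """Illustrative middleware-side cache of final query results,
    keyed by query text, expired after a fixed TTL."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # query text -> (timestamp, rows)

    def get(self, query, run_query):
        """Return cached rows for `query`, calling run_query(query)
        only when the entry is missing or older than the TTL."""
        entry = self._store.get(query)
        now = time.time()
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]
        rows = run_query(query)
        self._store[query] = (now, rows)
        return rows


# Usage with a stand-in for a real database call:
calls = []

def fake_run(q):
    calls.append(q)
    return [(1, 'a'), (2, 'b')]

cache = QueryResultCache(ttl_seconds=60)
first = cache.get("SELECT * FROM t", fake_run)   # hits the "database"
second = cache.get("SELECT * FROM t", fake_run)  # served from the cache
```

Note that this simply ignores any changes to the underlying data until the 
TTL expires, which is exactly the snapshot trade-off described above.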

Now, if someone were to present us with an implementation which effectively 
built an automated form of option (b) above into an optional PG plug-in, I 
wouldn't vote against it.   But I couldn't see voting for putting it on the 
TODO list, either.

Josh Berkus
Aglio Database Solutions
San Francisco
