The idea is not to share it across multiple backends or to keep it in sync with the tables in the database. That is not the point.
My problem is that I have seen many performance-critical applications that send just a few complex queries to the server. The problem arises when many queries have a very high ratio of planner time to executor time (e.g. complex joins returning just a single value as the result).
These applications stay the same for a long time (maybe even years), so there is no need to worry about new tables and so forth; maybe there is not even a need to worry about new data. In these cases we could speed up the database significantly just by avoiding the planner:

An example:
I have a join across 10 tables + 2 subselects across 4 tables
on the machine I use for testing:
planner: 12 seconds
executor: 1 second

The application will stay the same forever.
It could be 10 times faster if there were a way to load the execution plan into the backend.
There is no way to use a persistent connection (many clients on different machines, dynamic IPs, etc.).
There is no way to end up with an "invalid" execution plan, because nothing in the database changes (no new tables, etc.).
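Within a single session, PREPARE/EXECUTE already separates the planning cost from execution; what is missing is a way to reuse the plan across sessions. A minimal sketch of the per-session case (table names, columns, and the query itself are hypothetical):

```sql
-- Plan once: parsing, rewriting, and planning all happen here.
PREPARE big_report(date) AS
    SELECT count(*)
    FROM t1 JOIN t2 USING (id)
    WHERE t1.created < $1;

-- Execute many times: only the executor runs; the expensive
-- planning step is not repeated within this session.
EXECUTE big_report('2002-01-01');
EXECUTE big_report('2002-06-01');
```

The prepared statement dies with the session, which is exactly why this does not help when persistent connections are impossible.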

Also: if people execute a prepared query and it fails, they will know why; queries fail when somebody drops a table even if those queries are not prepared.
A new feature like the one we are discussing might be used rarely but if people use it they will benefit A LOT.

If we had a simple ASCII interface to load the stuff into the planner people could save MANY cycles.
When talking about tuning it is nice to gain 10% or even 20% but in many cases it does not solve a problem - if a problem can be reduced by 90% it is a REAL gain.
Gaining 10% can be done by tweaking the database a little; gaining 1000% cannot, so it might be worth thinking about even if the feature is only used by 20% of the users out there. 20% of all PostgreSQL users is most likely more than 15,000 people.

Again: it is not meant to be an everyday solution. It is a solution for applications that stay the same for a very long time.


Tom Lane wrote:

Hans-Jürgen Schönig <[EMAIL PROTECTED]> writes:

I wonder if there is a way to store a parsed/rewritten/planned query in a table so that it can be loaded again.

The original version of the PREPARE patch used a shared-across-backends
cache for PREPAREd statements. We rejected that for a number of
reasons, one being the increased difficulty of keeping such a cache up
to date. I think actually storing the plans on disk would have all the
same problems, but worse.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
