Re: [HACKERS] PREPARE/EXECUTE across backends?

2003-10-02 Thread Karel Zak

On Wed, Oct 01, 2003 at 09:01:23PM -0400, Neil Conway wrote:
 On Wed, 2003-10-01 at 20:25, Jingren Zhou wrote:
  From the document, it seems that PREPARE/EXECUTE works only in the same
  session. I am wondering whether postgres can prepare a query (save the plan)
  for different backends.
 
 The decision to store prepared statements per-backend, rather than in
 shared memory, was made deliberately. In fact, an early version of the
 PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
 in shared memory. But I decided to remove this, because:
 [ reasons snipped ]

 That's all the reasons I can think of off the top of my head for doing
 things the way we do. However, I'm open to being convinced: if you think
 we should store prepared statements in shm, feel free to make a case for
 it.

 I think the current non-shared PREPARE/EXECUTE is the right solution. The
 shared version would require defining a new type of memory management
 routines fully compatible with the standard, abstract PostgreSQL mmgr
 tier. I worked on this because it was an interesting experiment, and now
 we know that writing something like this is possible :-)

 I think the final and right solution is persistent pre-forked backends
 that keep cached PREPARE/EXECUTE stuff (and a lot of other things) in
 their own memory. That is a nicer and simpler solution than using shared
 memory.
 
Karel

-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] PREPARE/EXECUTE across backends?

2003-10-01 Thread Neil Conway
On Wed, 2003-10-01 at 20:25, Jingren Zhou wrote:
 From the document, it seems that PREPARE/EXECUTE works only in the same
 session. I am wondering whether postgres can prepare a query (save the plan)
 for different backends.

The decision to store prepared statements per-backend, rather than in
shared memory, was made deliberately. In fact, an early version of the
PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
in shared memory. But I decided to remove this, because:

  - it is more complex

  - since shared memory must be allocated statically on postmaster
startup, it would make prepared statements more fragile: at some point
we would run out of room in shm, and need to either remove prepared
statements, or swap them out to disk

  - it would encourage poor application design, since it wouldn't be
trivial to tell whether a given prepared query has already been prepared
by a different backend, and what name it is using

  - the performance gains are not that dramatic: preparing a statement
once per active backend is not that expensive, since in most of the cases
where prepared statements are useful, the # of backends is far smaller
than the # of times you're executing a given prepared statement
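
For context, the per-session lifecycle being discussed is the plain SQL-level interface (the statement and table names below are illustrative, not from the thread):

```sql
-- Prepared once per session; the plan lives only in this backend's memory.
PREPARE fetch_order (integer) AS
    SELECT * FROM orders WHERE order_id = $1;

-- Cheap to run repeatedly within the same session.
EXECUTE fetch_order(42);
EXECUTE fetch_order(43);

-- The statement disappears automatically at disconnect,
-- or can be dropped explicitly:
DEALLOCATE fetch_order;
```

A second backend that issues EXECUTE fetch_order(...) without its own PREPARE gets an error, which is exactly the per-backend behavior the original question is about.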

That's all the reasons I can think of off the top of my head for doing
things the way we do. However, I'm open to being convinced: if you think
we should store prepared statements in shm, feel free to make a case for
it.

-Neil





Re: [HACKERS] PREPARE/EXECUTE across backends?

2003-10-01 Thread Kris Jurka


On Wed, 1 Oct 2003, Jingren Zhou wrote:

 Hi,

 From the document, it seems that PREPARE/EXECUTE works only in the same
 session. I am wondering whether postgres can prepare a query (save the plan)
 for different backends.

 I am working on a project which requires executing psql -c 'query' on the
 command line multiple times. Since performance is critical, it would be
 nice to prepare the same query first to avoid its being parsed/optimized
 each time. But psql opens a new backend each time, so it looks like
 PREPARE/EXECUTE doesn't work. Is there any workaround?

Your real overhead here isn't from having to prepare the query each time,
it's from having to start psql and open a new connection each time.
Perhaps you need to rethink your design and go with something that will
maintain a persistent connection.
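
A minimal sketch of Kris's point (queries and table names invented): instead of paying connection startup for every statement, batch the statements into one psql session so a single backend, and its prepared plan, is reused:

```shell
# Anti-pattern: one connection, and one fresh backend, per query.
psql -c "SELECT count(*) FROM orders WHERE region = 'EU'"
psql -c "SELECT count(*) FROM orders WHERE region = 'US'"

# Better: one connection, one backend, one prepared plan.
psql <<'SQL'
PREPARE cnt (text) AS SELECT count(*) FROM orders WHERE region = $1;
EXECUTE cnt('EU');
EXECUTE cnt('US');
SQL
```

This assumes a running server and default connection settings; the point is only the shape of the batching, not the specific queries.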

Kris Jurka





Re: [HACKERS] PREPARE/EXECUTE across backends?

2003-10-01 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 The decision to store prepared statements per-backend, rather than in
 shared memory, was made deliberately. In fact, an early version of the
 PREPARE/EXECUTE patch (written by Karel Zak) stored prepared statements
 in shared memory. But I decided to remove this, because:
 [ several good reasons ]

Another issue is that we currently don't have a mechanism for flushing
query plans when they become obsolete (eg, an index is added or
removed).  Locally-cached plans are relatively easy to refresh: just
start a fresh session.  A shared plan cache would retain bogus plans
forever, short of a postmaster restart.

Obviously we need a mechanism for detecting and handling cached-plan
invalidations, and I hope someone will get around to that soon.
But we *cannot* consider a shared plan cache until that mechanism
exists.

If I recall correctly, Karel's original shared plan cache also triggered
a lot of concern about contention for the shared data structure ...
I'm not convinced that it would be a big bottleneck, but there's
definitely an issue to think about there ...

regards, tom lane



Re: [HACKERS] PREPARE/EXECUTE across backends?

2003-10-01 Thread Neil Conway
On Wed, 2003-10-01 at 22:43, Tom Lane wrote:
 Another issue is that we currently don't have a mechanism for flushing
 query plans when they become obsolete (eg, an index is added or
 removed).  Locally-cached plans are relatively easy to refresh: just
 start a fresh session.  A shared plan cache would retain bogus plans
 forever, short of a postmaster restart.

Well, keep in mind we already have DEALLOCATE for removing prepared
statements, which would continue to be available if we switched to
storing prepared statements in shared memory. However, using DEALLOCATE
to get around invalid cached plans is obviously not a good solution.

 Obviously we need a mechanism for detecting and handling cached-plan
 invalidations, and I hope someone will get around to that soon.

Agreed.

 But we *cannot* consider a shared plan cache until that mechanism
 exists.

Given the presence of DEALLOCATE, I think this overstates the case
somewhat: longer-lived prepared statements stored in shared memory would,
of course, make handling invalidated plans more of an issue.

-Neil





Re: [HACKERS] PREPARE / EXECUTE

2002-10-28 Thread Hans-Jürgen Schönig



Zeugswetter Andreas SB SD wrote:

 The standard approach to such a scenario would imho be to write stored procedures
 for the complex queries (e.g. plpgsql) and use that from the client.
 Maybe even eliminate a few ping pongs between client and server.

 Andreas
 


Does it reduce the time taken by the planner?
Are server-side SQL functions optimized at runtime or at CREATE
FUNCTION time?

If the function is optimized at runtime, it is not a gain.

   Hans


--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at





Re: [HACKERS] PREPARE / EXECUTE

2002-10-24 Thread Rod Taylor
On Wed, 2002-10-23 at 10:39, Greg Copeland wrote:
 If you were using them that frequently, couldn't you just keep a
 persistent connection?  If it's not used that often, wouldn't the
 overhead of preparing the query following a new connection become noise?

Especially by the time you add in the dependency tracking (drop table,
query must go), and modifications to analyze to clear out the stored
list.

-- 
  Rod Taylor





Re: [HACKERS] PREPARE / EXECUTE

2002-10-24 Thread Karel Zak
On Wed, Oct 23, 2002 at 11:02:14AM -0400, Tom Lane wrote:
 Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
  I wonder if there is a way to store a parsed/rewritten/planned query in 
  a table so that it can be loaded again.
 
 The original version of the PREPARE patch used a shared-across-backends
 cache for PREPAREd statements.  We rejected that for a number of
 reasons, one being the increased difficulty of keeping such a cache up
 to date.  I think actually storing the plans on disk would have all the
 same problems, but worse.

 Right.

 There's a solution: persistent backends (for example, like classic
 Apache). That would solve the problem of the lifetime of all persistent
 caches. It's already in the TODO.

Karel




Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Greg Copeland
If you were using them that frequently, couldn't you just keep a
persistent connection?  If it's not used that often, wouldn't the
overhead of preparing the query following a new connection become noise?

Greg


On Wed, 2002-10-23 at 09:24, Hans-Jürgen Schönig wrote:
 First of all PREPARE/EXECUTE is a wonderful thing to speed up things 
 significantly.
 I wonder if there is a way to store a parsed/rewritten/planned query in 
 a table so that it can be loaded again.
 
 This might be useful when it comes to VERY complex queries (> 10 tables).
 In many applications the situation is like that:
 
 a. The user connects to the database.
 b. The user sends various different queries to the server (some might be 
 the same)
 c. The user disconnects.
 
 If there was a way to store execution plans in a table the user could 
 load the execution plans of the most time consuming stuff into the 
 backend without parsing and optimizing it every time he authenticates.
 
 Does it sound useful to anybody? Is it possible to do it or are there 
 some technical problems?
 
 Maybe this is worth thinking about.
 
 Hans
 






Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Tom Lane
Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
 I wonder if there is a way to store a parsed/rewritten/planned query in 
 a table so that it can be loaded again.

The original version of the PREPARE patch used a shared-across-backends
cache for PREPAREd statements.  We rejected that for a number of
reasons, one being the increased difficulty of keeping such a cache up
to date.  I think actually storing the plans on disk would have all the
same problems, but worse.

regards, tom lane




Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Bruno Wolff III
On Wed, Oct 23, 2002 at 18:04:01 +0200,
  Hans-Jürgen Schönig [EMAIL PROTECTED] wrote:
 
 An example:
 I have a join across 10 tables  + 2 subselects across 4 tables
 on the machine I use for testing:
planner: 12 seconds
executor: 1 second
 
 The application will stay the same forever.
 I could be 10 times faster if there was a way to load the execution plan 
 into the backend.

One option you have is to explicitly give the join order. You can look at
EXPLAIN output to see what order the joins are done in and then rewrite the
SQL to force them to be done in that order. This should keep things simple
for the planner.
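
A sketch of that rewrite (table names invented): turning an implicit join list into explicit JOIN syntax pins the join order in the query text, so the planner no longer searches join orderings:

```sql
-- Planner is free to search all join orders:
SELECT a.val
FROM a, b, c
WHERE a.id = b.a_id AND b.id = c.b_id;

-- Join order fixed by the query text: (a JOIN b) is built first,
-- then joined to c.
SELECT a.val
FROM a
JOIN b ON a.id = b.a_id
JOIN c ON b.id = c.b_id;
```

In the PostgreSQL versions discussed here, explicit JOIN syntax constrains the planner's join order, which is what makes this a workaround for long planning times.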




Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
   Hans-Jürgen Schönig [EMAIL PROTECTED] wrote:
 I have a join across 10 tables  + 2 subselects across 4 tables
 on the machine I use for testing:
 planner: 12 seconds
 executor: 1 second

 One option you have is to explicitly give the join order.

Yes, this is exactly the sort of situation where forcing the join order
is a big performance win.  See
http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/explicit-joins.html

regards, tom lane




Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
This is exactly what we do in the case of complex stuff. I know that it can
help to reduce the problem for the planner.
However: if you have explicit joins across 10 tables, the SQL statement
is not that readable any more, and it is still slower than a prepared
execution plan.

I guess it is worth thinking about prepared plans somewhere on disk.
Is there a way to transform ASCII -> plan?

   Hans





Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
The idea is not to have it across multiple backends and keep it in
sync with the tables in the database. That is not the point.
My problem is that I have seen many performance-critical applications
sending just a few complex queries to the server. The problem is: if you
have many queries where the ratio of planner time to executor time is
very high (eg. complex joins with just one value as the result).
These applications stay the same for a long time (maybe even years) and
so there is no need to worry about new tables and so forth - maybe there
is not even a need to worry about new data. In these cases we could
speed up the database significantly just by avoiding the planner:

An example:
I have a join across 10 tables  + 2 subselects across 4 tables
on the machine I use for testing:
   planner: 12 seconds
   executor: 1 second

The application will stay the same forever.
It could be 10 times faster if there were a way to load the execution plan
into the backend.
There is no way to use a persistent connection (many clients on
different machines, dynamic IPs, etc. ...)
There is no way to have an invalid execution plan because there are no
changes (new tables etc.) in the database.

Also: if people execute a prepared query and it fails, they will know why
- queries will fail if people drop a table even if those queries are not
prepared.
A new feature like the one we are discussing might be used rarely, but if
people use it they will benefit A LOT.

If we had a simple ASCII interface to load the stuff into the planner,
people could save MANY cycles.
When talking about tuning it is nice to gain 10% or even 20%, but in many
cases that does not solve a problem - if a problem can be reduced by 90%
it is a REAL gain.
Gaining 10% can be done by tweaking the database a little - gaining
1000% cannot, so it might be worth thinking about even if the
feature is only used by 20% of the users out there.  20% of all
postgres users is most likely more than 15,000 people.

Again: it is not supposed to be an everyday solution. It is a solution
for applications staying the same for a very long time.

   Hans




Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Greg Copeland
Could you use some form of connection proxy where the proxy is actually
keeping persistent connections but your application is making transient
connections to the proxy?  I believe this would result in the desired
performance boost and behavior.

Now, the next obvious question...anyone know of any proxy apps available
for postgresql?

Regards,

Greg





Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
Greg Copeland wrote:

 Now, the next obvious question...anyone know of any proxy apps available
 for postgresql?

There is one load-balancing package available which is based on the ACE
library.
Just have a look at:

http://freshmeat.net/projects/dbbalancer/

I haven't tested it yet.

I am now looking for a workaround - I am sure that there are many
workarounds for this issue (explicit joins, persistent connections, etc.
...).
I thought it might be useful to have something like a data type (or
maybe a binary field) used to store execution plans.

People could use this feature as some sort of server-side function or
so ...
It can be seen as some sort of optimized function in the backend which
can be loaded/executed more efficiently.

Maybe others would like to see that feature as well.

   Hans



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Zeugswetter Andreas SB SD

 The idea is not to have it across multiple backends and keep it in
 sync with the tables in the database. That is not the point.
 My problem is that I have seen many performance-critical applications
 sending just a few complex queries to the server. The problem is: if you
 have many queries where the ratio of planner time to executor time is
 very high (eg. complex joins with just one value as the result).

The standard approach to such a scenario would imho be to write stored procedures
for the complex queries (e.g. plpgsql) and use that from the client.
Maybe even eliminate a few ping pongs between client and server.

Andreas
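
A sketch of Andreas's suggestion (function and table names invented): wrap the complex query in a PL/pgSQL function, whose query plan is prepared on first execution and then cached for the life of the session:

```sql
CREATE FUNCTION order_total(integer) RETURNS numeric AS '
DECLARE
    result numeric;
BEGIN
    -- Planned on first call; the plan is then cached in this
    -- backend and reused by subsequent calls.
    SELECT INTO result sum(i.price * i.qty)
      FROM orders o JOIN order_items i ON i.order_id = o.id
     WHERE o.id = $1;
    RETURN result;
END;
' LANGUAGE 'plpgsql';

SELECT order_total(42);
```

Note the cache is still per-backend, so (as Neil points out below) this only saves planning work within a session, not across new connections.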




Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Neil Conway
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 The standard approach to such a scenario would imho be to write
 stored procedures for the complex queries (e.g. plpgsql) and use
 that from the client.  Maybe even eliminate a few ping pongs between
 client and server.

Since PL/PgSQL cached query plans are flushed when the backend exits,
how would this help?

Regarding the original suggestion of storing prepared plans on disk, I
agree with Tom -- it's basically the same idea as storing plans in
shared memory, which we previously considered (and Karel implemented),
but ultimately decided to remove. IMHO, the utility of this feature
doesn't justify the problems that would come with implementing it (see
the archives for the original implementation discussions).

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC

