----- Original Message -----
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Aaron Werman" <[EMAIL PROTECTED]>
Cc: "Iain" <[EMAIL PROTECTED]>; "Jim C. Nasby" <[EMAIL PROTECTED]>;
Sent: Tuesday, September 28, 2004 9:58 AM
Subject: Re: [PERFORM] Caching of Queries
> "Aaron Werman" <[EMAIL PROTECTED]> writes:
> > I imagine a design where a shared plan cache would consist of the plans,
> > indexed by a statement hash and again by dependent objects. A statement
> > to be planned would be hashed and matched to the cache. DDL would need to
> > synchronously destroy all dependent plans. If each plan maintains a
> > flag, changing the cache wouldn't have to block, so I don't see where
> > there would be contention.
> You have contention to access a shared data structure *at all* -- for
> instance readers must lock out writers. Or didn't you notice the self-
> contradictions in what you just said?
> Our current scalability problems dictate reducing such contention, not
> adding whole new sources of it.
You're right - that seems unclear. What I meant is that there can be a
global hash table that is never locked, and the hashes point to chains of
plans that are only locally locked for maintenance, such as GC and chaining
hash collisions. If maintenance were relatively rare and only local, my
assumption is that it wouldn't have global impact.
The nice thing about plan caching is that it can be sloppy, unlike the block
cache, because it is only an optimization tweak. So, for example, if the
plan has atomic reference times or counts there is no need to block, since
overwriting is not so bad. If the multiprocessing planner chains the same
plan twice, the second one would ultimately age out....
> regards, tom lane
---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster