We have seen similar issues when we have had massive load on our web
server. My determination was that simply the act of spawning and
stopping postgres sessions was very heavy on the box, and by
implementing connection pooling (sqlrelay), we got much higher
throughput, and better response on the server than we would get any
other way. 
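As a rough illustration of the idea behind pooling (not sqlrelay itself - all names here are illustrative, with sqlite3 standing in for a real PostgreSQL driver), a pool hands out pre-opened connections instead of spawning and tearing down a backend per request:

```python
import sqlite3  # stand-in for a real PostgreSQL driver in this sketch

class ConnectionPool:
    """Reuse a fixed set of open connections instead of spawning one per request."""
    def __init__(self, connect, size=5):
        # Pay the connection-setup cost once, up front.
        self._idle = [connect() for _ in range(size)]

    def acquire(self):
        # LIFO: hand back the most recently released connection.
        return self._idle.pop()

    def release(self, conn):
        # Return the connection for reuse; no teardown, no respawn.
        self._idle.append(conn)

pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), size=2)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
assert c1 is c2   # the same connection object is reused, not respawned
pool.release(c2)
```

A real pooler also has to handle blocking when all connections are busy and resetting session state between users; this sketch only shows why the spawn/stop cost disappears.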

-----Original Message-----
[mailto:[EMAIL PROTECTED] On Behalf Of Jason Coene
Sent: Thursday, September 23, 2004 10:53 AM
To: 'Mr Pink'; 'Scott Kirkwood'
Subject: Re: [PERFORM] Caching of Queries

I'm not an expert, but I've been hunting down a killer performance
problem for a while now.  It seems this may be the cause.

At peak load, our database slows to a trickle.  The CPU and disk
utilization are normal - 20-30% CPU used and disk performance good.

All of our "postgres" processes end up in the "semwai" state - seemingly
waiting on other queries to complete.  If the system isn't taxed in CPU
or disk, I have a good feeling that this may be the cause.  I didn't know
planning queries could create such a gridlock, but based on Mr Pink's
explanation, it sounds like a very real possibility.

We're running mostly SELECTs, and the number of locks on our "high traffic"
tables grows to the hundreds.  If it's not the SELECT locking (and we
don't get that many INSERT/UPDATEs on these tables), could the planner be
doing it?
At peak load (~ 1000 queries/sec on highest traffic table, all very
similar), the serialized queries pile up and essentially create a DoS on
the service - requiring a restart of the PG daemon.  Upon stop & start, it's
back to normal.

I've looked at PREPARE, but apparently it only lasts per-session -
worthless in our case (web based service, one connection per request).
Does this sound plausible?  Is there an alternative way to do this that I
don't know about?  Additionally, in our case, I personally don't see any
downside to caching and using the same query plan when the only thing
substituted are variables.  In fact, I'd imagine it would help
significantly in high-volume web applications.



> -----Original Message-----
> [EMAIL PROTECTED] On Behalf Of Mr Pink
> Sent: Thursday, September 23, 2004 11:29 AM
> To: Scott Kirkwood; [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Caching of Queries
> Not knowing anything about the internals of pg, I don't know how this
> relates, but in theory, query plan caching is not just about saving
> time re-planning queries, it's also about scalability.
> Optimizing queries requires shared locks on the database metadata,
> which, as I understand it, causes contention and serialization, which
> kills scalability.
> I read this thread from last to first, and I'm not sure if I missed
> something, but if pg isn't caching plans, then I would say plan
> caching should be a top priority for future enhancements. It needn't
> be complex either: if the SQL string is the same, and none of the
> tables involved in the query have changed (in structure), then re-use
> the cached plan. DDL and updated statistics would have to invalidate
> plans for affected tables.
> Preferably, it should work equally for prepared statements and those
> that aren't prepared. If you're not using prepare (and bind variables)
> though, your plan caching goes down the drain anyway...
> I don't think that re-optimizing based on values of bind variables is
> really needed. It seems like it could actually be counter-productive
> and difficult to assess its impact.
> That's the way I see it anyway.
> :)
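The invalidation rule described in that reply - reuse a plan when the SQL string matches and no involved table has changed structure, with DDL bumping a version - can be sketched like this (all names illustrative, not PostgreSQL internals):

```python
# Plan cache keyed by (SQL string, schema versions of the tables used).
# DDL on a table bumps its version, so stale plans simply stop matching.
schema_version = {"users": 0}
plan_cache = {}

def optimize(sql, tables):
    # Stands in for the real (expensive) optimizer; returns a fresh plan object.
    return {"sql": sql, "tables": list(tables)}

def get_plan(sql, tables):
    key = (sql, tuple(schema_version[t] for t in tables))
    if key not in plan_cache:
        plan_cache[key] = optimize(sql, tables)   # plan once per (sql, schema)
    return plan_cache[key]

def run_ddl(table):
    schema_version[table] += 1   # e.g. ALTER TABLE invalidates cached plans

p1 = get_plan("SELECT * FROM users WHERE id = $1", ["users"])
p2 = get_plan("SELECT * FROM users WHERE id = $1", ["users"])
assert p1 is p2                  # same SQL, same schema: plan reused

run_ddl("users")                 # schema changed
p3 = get_plan("SELECT * FROM users WHERE id = $1", ["users"])
assert p3 is not p1              # old key no longer matches; re-planned
```

Statistics updates would invalidate the same way (bump the version on ANALYZE); matching on the literal SQL string only works across backends if queries use bind variables rather than interpolated values, which is the point made above.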

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
