On 3/9/09 1:40 PM, "Oliver Jowett" <oli...@opencloud.com> wrote:

Scott Carey wrote:
>
>    1. And how do you do that from JDBC?  There is no standard concept of

I've suggested that as a protocol-level addition in the past, but it
would mean a new protocol version. The named vs. unnamed statement
behaviour was an attempt to crowbar it into the protocol without
requiring a version change. If it's really a planner behaviour thing,
maybe it does belong at the SET level, but I believe that there's
usually an aversion to having to SET anything per query to get
reasonable plans.

There's a strong aversion, but I find myself rewriting queries to get good 
plans anyway, which is a de-facto hint. It's mandatory in the land of partitioned 
tables and large aggregates, and much rarer elsewhere. I have a higher aversion 
to rewriting queries than to telling the planner to use more information, or 
providing it with more information.

>    2. How do you use those on a granularity other than global from jdbc?

prepareThreshold=N (as part of a connection URL),
org.postgresql.PGConnection.setPrepareThreshold() (connection-level
granularity), org.postgresql.PGStatement.setPrepareThreshold()
(statement-level granularity). See the driver docs.
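
For reference, a minimal sketch of those three granularities, assuming the stock 
PostgreSQL JDBC driver obtained straight from DriverManager (the URL, database, 
credentials and query below are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.postgresql.PGConnection;
import org.postgresql.PGStatement;

public class PrepareThresholdDemo {
    public static void main(String[] args) throws Exception {
        // 1. URL level: applies to every statement on connections made with this URL.
        //    A threshold of 0 means the driver never switches to a named
        //    server-side statement, i.e. it stays on the unnamed statement.
        String url = "jdbc:postgresql://localhost/mydb?prepareThreshold=0";
        Connection conn = DriverManager.getConnection(url, "user", "password");

        // 2. Connection level: the cast assumes an unpooled driver connection.
        ((PGConnection) conn).setPrepareThreshold(5);

        // 3. Statement level: overrides the connection default for this statement.
        PreparedStatement ps =
            conn.prepareStatement("SELECT * FROM orders WHERE id = ?");
        ((PGStatement) ps).setPrepareThreshold(0);

        ps.setInt(1, 42);
        ps.executeQuery().close();
        ps.close();
        conn.close();
    }
}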

I know I tried the connection URL setting once and it did not fix the performance 
problem, though I don't know whether that was user error. Without a way to trace 
what the query really was, confirm that the setting was taking effect, or 
otherwise see whether an unnamed prepared statement even fixed my problem, I had 
to resort to what would clearly fix it (there was only one day to fix it, and one 
proven way to do so). I would love to be able to try out an unnamed prepared 
statement in psql, to prove whether or not it solves the query planning issue. 
In the end, changing the code was simpler and probably less time consuming than 
every option other than the connection URL setting.
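
One rough way to check whether the setting is actually taking effect (an 
illustration only; the query and connection details are invented): protocol-level 
named statements show up in pg_prepared_statements for their own session, so 
querying that view over the same JDBC connection should list an entry (typically 
named something like S_1) only after the driver has switched away from the 
unnamed statement.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckServerPrepare {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");

        PreparedStatement ps =
            conn.prepareStatement("SELECT count(*) FROM orders WHERE region = ?");
        for (int i = 0; i < 10; i++) {   // enough reuses to cross the default threshold
            ps.setString(1, "west");
            ps.executeQuery().close();
        }

        // If the driver is still using the unnamed statement, nothing is listed here.
        Statement check = conn.createStatement();
        ResultSet rs = check.executeQuery(
            "SELECT name, statement FROM pg_prepared_statements");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " -> " + rs.getString(2));
        }
        rs.close();
        check.close();
        ps.close();
        conn.close();
    }
}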


>          ( - I tried setting max_prepared_transactions to 0 but this
>       didn't seem to work either, and it would be global if it did).

max_prepared_transactions is to do with two-phase commit, not prepared
statements.

Thanks! Good to know; the configuration documentation could be clearer... I got 
the two kinds of prepare confused.

> In the end, we had to write our own client side code to deal with sql
> injection safely and avoid jdbc prepared statements to get acceptable

It's unfortunate that you ended up doing this, because it >is< all
configurable on the JDBC side. Did you ask on pgsql-jdbc?

-O

I searched the archives and did find a reference to the connection URL setting; 
I recall trying it but not seeing the expected result. Rather than debugging, we 
decided to go with the solution that worked and be done with it. This was also 
when we were in production on 8.3.1 or 8.3.2 or so, so the bugs there might have 
caused some confusion in the rush to solve the issue.

I'm still not sure that unnamed prepared statements will help my case. If the 
driver is using unnamed prepared statements for the first 5 uses of a query and 
then naming it, I should see the first 5 uses significantly faster than those 
after. I'll keep an eye out for that in the places where we are still using 
prepared statements that can cause problems, and in the old log files. Until 
another issue comes up, there isn't sufficient motivation to fix what is no 
longer broken for us.
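
A rough way to look for that effect, assuming the default threshold of 5 and an 
invented query against a partitioned table; the expectation would be a jump in 
per-execution time once the driver switches to the named statement and its 
generic plan:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ThresholdTiming {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");

        PreparedStatement ps = conn.prepareStatement(
            "SELECT count(*) FROM measurements WHERE logdate >= ? AND logdate < ?");

        for (int i = 1; i <= 10; i++) {
            ps.setDate(1, java.sql.Date.valueOf("2009-01-01"));
            ps.setDate(2, java.sql.Date.valueOf("2009-02-01"));

            long start = System.nanoTime();
            ps.executeQuery().close();
            long elapsedMs = (System.nanoTime() - start) / 1000000L;

            // Executions before the threshold use the unnamed statement; a
            // visible slowdown afterwards would point at the generic plan.
            System.out.println("execution " + i + ": " + elapsedMs + " ms");
        }
        ps.close();
        conn.close();
    }
}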

Thanks for the good info on configuring unnamed prepared statements with the 
JDBC driver. That may come in very handy later.
