At some point in time, [EMAIL PROTECTED] (Mike Moran) wrote:

>Hi. I currently have a fairly large query which I have been optimizing
>for later use in a function. There are things in the query which I
>have been keeping constant whilst optimizing, but which are variables
>in the function. When I run this query as sql, with the `variables'
>constant, I get a runtime of about 3 or 4 seconds. However, when I
>place this same sql in an sql function, and then pass my constants
>from before in as arguments, I get a runtime of about 215 seconds.
>
>I am trying to understand how this could be. How does putting the sql
>in a function affect the query optimiser? Would putting it in as a
>plpgsql function help? How else can I retain the original speed?

My first guess would be that the data type of your function arguments doesn't
match the data type of the indexed columns they are compared against, so the
planner can't use the indexes. For instance,


create function foobar(text) returns setof foobar_table as '
select * from foobar_table where col_a = $1;
' language sql;

I may have some syntax wrong up there, but the idea is that you are passing in a
parameter of some datatype (text here) and then using it in a select against a
column which we will assume has some datatype other than "text". If the planner
cannot resolve that datatype conversion in a way that matches the index, you may
get a sequential scan instead.

To verify this, compare the plans from these two queries:

EXPLAIN
select * from foobar_table where
col_a=<THE_CONSTANT_VALUE>;

vs.

EXPLAIN
select * from foobar_table where
col_a=<THE_CONSTANT_VALUE>::<THE_PARAMETER_TYPE>;
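
If the first shows an index scan but the second falls back to a sequential
scan, you've found your 215 seconds. If your server is new enough to have
PREPARE (7.3 or later, I believe), you can also get the plan for a genuinely
parameterized query, which is closer to what the function sees; a sketch,
reusing the placeholder above:

PREPARE q(text) AS
select * from foobar_table where col_a = $1;
EXPLAIN EXECUTE q('<THE_CONSTANT_VALUE>');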



--
~~~~~~~~~~~~~~~~| Genius may have its limitations, but stupidity is not
Jeff Boes       | thus handicapped.
[EMAIL PROTECTED]   |        --Elbert Hubbard (1856-1915), American author


