Roman Konoval <> writes:
> I have a problem on 9.3.14 with a query that accesses table:

I think the root of the problem is your intermediate function:

> CREATE OR REPLACE FUNCTION public.get_doc_path(document_id character varying)
>  RETURNS ltree
>  LANGUAGE plpgsql
> AS $function$
> DECLARE
>     path ltree;
> BEGIN
>     select id_path into path from document_head where id = document_id;
>     RETURN path;
> END $function$

This is quite expensive, as it involves another table search, but the
planner doesn't know that, since you've not marked it as having higher
than normal cost.  The seqscan formulation of the query results in
evaluating this function afresh at most of the rows, whereas shoving it
into an uncorrelated sub-select causes it to be evaluated only once.
That, I think, and not the seqscan-vs-indexscan aspect, is what makes
the bitmap formulation go faster.  Certainly you'd not expect a bitmap
scan that has to hit most of the rows anyway to win over a seqscan.
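To illustrate the two formulations (the query shape and the document id
here are my guesses, since the original query wasn't quoted in full):

```sql
-- Per-row form: because get_doc_path() is a VOLATILE plpgsql function
-- by default, the executor re-evaluates it at each row the seqscan
-- visits, even though its argument is a constant:
SELECT * FROM document_head
WHERE id_path <@ get_doc_path('some-doc-id');

-- Uncorrelated sub-select form: the sub-select runs once and its
-- result is then used like a parameter in the <@ condition:
SELECT * FROM document_head
WHERE id_path <@ (SELECT get_doc_path('some-doc-id'));
```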

The fact that the planner goes for a bitmap scan in the second formulation
is an artifact of the fact that it doesn't try to pre-evaluate sub-selects
for selectivity estimation purposes, so you end up with a default estimate
that says that the <@ condition only selects a small fraction of the rows.
Not sure if we should try to change that or not.

I'd suggest setting the function's cost to 1000 or so and seeing if that
doesn't improve matters.
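Something along these lines (1000 is a starting point to experiment
with, not a measured figure):

```sql
-- Tell the planner this function is roughly 1000x as expensive as a
-- simple operator, so it prefers plans that call it fewer times:
ALTER FUNCTION public.get_doc_path(character varying) COST 1000;
```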

(BTW, what tipped me off to this was that the "buffers hit" count for
the seqscan node was so high, several times more than the actual size
of the table.  I couldn't account for that until I realized that the
function itself would be adding a few buffer hits per execution.)

                        regards, tom lane
