Gregory Stark <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> Applied along with some other hacking to reduce the costs of the
>> lower-level functions that this example shows to be inefficient.
>> They'd still be slow in large queries, whether CE applies or not.

> BIG difference. The case that caused swapping and took almost 15m to plan is
now down to 2.5s. The profile still looks a bit odd but I can't argue with
> the results.

I'm still feeling a bit annoyed with the behavior of the stats machinery
(pgstat_initstats and related macros).  Yesterday I fixed it so that
pgstat_initstats doesn't perform redundant searches of the tabstat
arrays, but there's still an issue, which is that any rel that's
heap_opened or index_opened within a transaction is going to get a
tabstat entry, whether any events are subsequently counted or not.
In typical scenarios I don't think this is a big deal, but in examples
such as yours we're going to be sending a whole lot of all-zero tabstat
messages; there'll be one for every heap or index that the planner even
momentarily considered.  That means more UDP traffic and more work for
the stats collector.  gprof won't show the resulting overhead since
it doesn't know anything about kernel-level overhead or activity in the
stats collector.  (Hm, might be able to measure it in oprofile.)

We could fix this by not doing pgstat_initstats at heap_open time,
but postponing it until something more interesting happens.  The trouble
is that that'd add at least a small amount of overhead at the places
where "something more interesting" is counted, since the pgstat macros
would have to check validity of the tabstat pointer.  The added overhead
should be only about one extra comparison, but maybe that's enough to
make someone object?

                        regards, tom lane
