Martijn van Oosterhout <email@example.com> writes:
> I note Tom made some changes to this patch after it went in. For the
> record, it was always my intention that samplecount count the number of
> _tuples_ returned while sampling, rather than the number of
> _iterations_. I'll admit the comment in the header was wrong.
> While my original patch had a small error in the case of multiple
> tuples returned, it would have been correctable by counting the actual
> number of samples. The way it is now, it will show a bias if the number
> of tuples returned per iteration increases after the first 50 sampled tuples.

How so? The number of tuples doesn't enter into it at all. What the
code is now assuming is that the time per node iteration is constant.
More importantly, it's subtracting off an overhead estimate that's
measured per iteration. In the math you had before, the overhead was
effectively assumed to be per tuple, which is clearly wrong.
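To make the distinction concrete, here is a minimal sketch of the per-iteration model described above. The function name, argument names, and the sample numbers are hypothetical illustrations, not the actual PostgreSQL instrumentation code:

```python
def estimate_node_time(sampled_time, sample_count, total_iterations,
                       overhead_per_sample):
    """Extrapolate total node time from a sample of timed iterations."""
    # Assumption: time per node iteration is roughly constant, so the
    # mean sampled iteration time extrapolates to all iterations.
    per_iteration = sampled_time / sample_count
    # The measurement overhead is incurred once per *sampled iteration*,
    # not once per returned tuple, so it is subtracted per iteration.
    true_per_iteration = per_iteration - overhead_per_sample
    return true_per_iteration * total_iterations

# 50 sampled iterations took 0.6 ms in total, with an estimated 5 us of
# timing overhead per sampled call; the node ran 1000 iterations overall:
total = estimate_node_time(0.0006, 50, 1000, 0.000005)  # -> 0.007 seconds
```

Subtracting the overhead per tuple instead would over-correct whenever an iteration returns more than one tuple, which is the error being pointed out.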
For nodes that return a variable number of tuples, it might be sensible
to presume that the node iteration time is roughly linear in the number
of tuples returned, but I find that debatable. In any case the sampling
overhead is certainly not dependent on how many tuples an iteration
returns.

This is all really moot at the moment, since we have only two kinds of
nodes: those that always return 1 tuple (until done) and those that
return all their tuples in a single iteration. If we ever get into
nodes that return varying numbers of tuples per iteration --- say,
exposing btree's page-at-a-time behavior at the plan node level ---
we'd have to rethink this. But AFAICS we'd need to count both tuples
and iterations to have a model that made any sense at all, so the
extra counter I added is needed anyway.
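If such variable-count nodes ever appear, having both counters would let the extrapolation split a fixed per-iteration cost from a per-tuple cost. A hypothetical sketch of what that model could look like (the cost split, function, and figures are assumptions for illustration, not existing code):

```python
def estimate_variable_node_time(sampled_time, sampled_iterations,
                                sampled_tuples, total_iterations,
                                total_tuples, fixed_cost):
    """Extrapolate node time when iterations return varying tuple counts.

    Models each iteration as: fixed_cost + per_tuple_cost * ntuples,
    which is why both an iteration counter and a tuple counter are needed.
    """
    # Attribute the fixed per-iteration cost first, then spread the
    # remaining sampled time over the sampled tuples.
    per_tuple = (sampled_time - fixed_cost * sampled_iterations) / sampled_tuples
    return fixed_cost * total_iterations + per_tuple * total_tuples

# 50 sampled iterations returning 200 tuples took 0.9 ms; assume 10 us of
# fixed cost per iteration; the node ran 1000 iterations / 4000 tuples:
total = estimate_variable_node_time(0.0009, 50, 200, 1000, 4000, 0.00001)
```

With only one of the two counters, either the fixed or the per-tuple term would have to be folded into the other, reintroducing the per-iteration-versus-per-tuple confusion discussed above.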
regards, tom lane
---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend