> But with all due respect to Joe, I think the reason that stuff got
> trimmed is that it didn't work very well. In most cases it's
> *hard* to write an estimator for an SRF. Let's see you produce
> one for dblink() for instance ...
Good one...
Well in some cases it'll be impossible, but suppose I
My solution would be a lot simpler, since we could simply populate
pg_proc.proestrows with 1000 by default if not changed by the DBA. In an
even better world, we could tie it to a table, saying that, for example,
proestrows = my_table*0.02.
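For what it's worth, later PostgreSQL releases (8.3 and up) ended up with something close to the simple form of this: CREATE FUNCTION accepts a ROWS clause giving the planner a fixed row estimate, with 1000 as the default for set-returning functions. A sketch, with hypothetical object names (get_recent_orders, orders, placed_at are invented for illustration):

```sql
-- Sketch (PostgreSQL 8.3+): attach a fixed row estimate to an SRF.
-- All names here are hypothetical.
CREATE FUNCTION get_recent_orders() RETURNS SETOF orders
    AS $$ SELECT * FROM orders ORDER BY placed_at DESC LIMIT 50 $$
    LANGUAGE sql
    ROWS 50;   -- planner uses 50 instead of the 1000-row SRF default
```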
What if the estimated row count is a function of a
2 things to point out from this last run:

50% of the time is taken scanning tblassociate:

  Seq Scan on tblassociate a  (cost=0.00..38388.79 rows=199922 width=53) (actual time=62.000..10589.000 rows=176431 loops=1)
    Filter: ((clientnum)::text = 'SAKS'::text)
If you had an index on
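Since the filter in that plan is on clientnum, an index on that column is the natural candidate. A sketch (the index name is invented); note that because the scan keeps 176431 of an estimated ~200000 rows, the predicate is not very selective, and the planner may reasonably stick with the sequential scan even with the index in place:

```sql
-- Hypothetical index on the filtered column; the name is invented.
CREATE INDEX idx_tblassociate_clientnum ON tblassociate (clientnum);

-- Then re-check whether the plan changes:
EXPLAIN ANALYZE SELECT count(*) FROM tblassociate a
WHERE a.clientnum = 'SAKS';
```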
On Sat, Apr 09, 2005 at 12:00:56AM -0400, Tom Lane wrote:
> Not too many releases ago, there were several columns in pg_proc that
> were intended to support estimation of the runtime cost and number of
> result rows of set-returning functions. I believe in fact that these
> were the remains of Joe Hellerstein's thesis on expensive-function
Jim C. Nasby [EMAIL PROTECTED] writes:
> On Sat, Apr 09, 2005 at 12:00:56AM -0400, Tom Lane wrote:

But with all due respect to Joe, I think the reason that stuff got
trimmed is that it didn't work very well. In most cases it's
*hard* to write an estimator for an SRF. Let's see you produce
one for dblink() for instance ...
Hello,
I'm just in the middle of performance tuning of our database running
on PostgreSQL, and I have several questions (I've searched the online
docs, but without success).
1) When I first use the EXPLAIN ANALYZE command, the time is much
larger than in the case of subsequent invocations of
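As a general note on (1): the usual cause is caching — the first run has to pull data pages from disk, while subsequent runs find them in shared_buffers or the OS page cache. A quick way to see the effect (my_table is a placeholder):

```sql
-- First run: pages are read from disk (cold cache), so the reported
-- "actual time" is high.
EXPLAIN ANALYZE SELECT count(*) FROM my_table;

-- Second run: the same pages are now cached in shared_buffers and/or
-- the OS page cache, so the reported time drops sharply.
EXPLAIN ANALYZE SELECT count(*) FROM my_table;
```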
Tom Lane wrote:
> Not too many releases ago, there were several columns in pg_proc that
> were intended to support estimation of the runtime cost and number of
> result rows of set-returning functions. I believe in fact that these
> were the remains of Joe Hellerstein's thesis on expensive-function
Tom Lane wrote:
> The larger point is that writing an estimator for an SRF is frequently a
> task about as difficult as writing the SRF itself
True, although I think this doesn't necessarily kill the idea. If
writing an estimator for a given SRF is too difficult, the user is no
worse off than they
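As a footnote from well after this thread: PostgreSQL 12 later added planner "support functions", which are the fully general form of the per-call estimator being debated here — a hook that can compute a row estimate from the actual call arguments. A DDL sketch; my_srf and my_srf_support are hypothetical names, and the support function itself must be written in C:

```sql
-- Sketch, PostgreSQL 12+.  Names are hypothetical.
CREATE FUNCTION my_srf(lim integer) RETURNS SETOF integer
    AS $$ SELECT generate_series(1, lim) $$
    LANGUAGE sql
    SUPPORT my_srf_support;  -- a C function handling SupportRequestRows,
                             -- so the planner can ask "how many rows does
                             -- my_srf(1000) return?" instead of falling
                             -- back on a fixed default estimate
```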