Re: [Fwd: Re: [PERFORM] Functionscan estimates]

2005-04-18 Thread elein
Hmmm. My brain is being jostled and I'm confusing Illustra-Postgres, Informix-Postgres, and PostgreSQL. Some things had functions and some things had constants, and I do not remember which products had which combination. But probably how they are in PostgreSQL, post-Hellerstein, is how I am

Re: [Fwd: Re: [PERFORM] Functionscan estimates]

2005-04-14 Thread elein
...how did they do it? Best Regards, Simon Riggs. Forwarded Message: From: Tom Lane [EMAIL PROTECTED]; To: Alvaro Herrera [EMAIL PROTECTED]; Cc: Josh Berkus josh@agliodbs.com, Michael Fuhr [EMAIL PROTECTED]; Subject: Re: [PERFORM] Functionscan estimates; Date: Sat, 09 Apr

Re: [Fwd: Re: [PERFORM] Functionscan estimates]

2005-04-14 Thread Alvaro Herrera
On Thu, Apr 14, 2005 at 10:39:03AM -0700, elein wrote: All functions could have a cost associated with them, set by the writer of the function, in order for the planner to reorder function calls. The Stonebraker airplane-altitude example was: select ... from ... where f(id) = 3 and
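The per-function cost knob described here did eventually land: since PostgreSQL 8.3, a COST attribute on CREATE/ALTER FUNCTION tells the planner how expensive a call is, so cheaper quals are evaluated first. A minimal sketch (the function body and table are hypothetical stand-ins):

```sql
-- Declare a predicate function as expensive; COST is measured in
-- units of cpu_operator_cost (the default function cost is 100).
CREATE FUNCTION f(id integer) RETURNS integer
    LANGUAGE sql IMMUTABLE
    COST 10000
    AS 'SELECT id * 2';   -- stand-in body; imagine heavy computation here

-- Given the high COST, the planner orders quals so the cheap
-- comparison runs first and f() is evaluated on fewer rows:
-- SELECT * FROM flights WHERE altitude > 10000 AND f(id) = 3;
```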

Re: [PERFORM] Functionscan estimates

2005-04-10 Thread Josh Berkus
People: (HACKERS: Please read this entire thread at http://archives.postgresql.org/pgsql-performance/2005-04/msg00179.php Sorry for crossing this over.) The larger point is that writing an estimator for an SRF is frequently a task about as difficult as writing the SRF itself True,

Re: [PERFORM] Functionscan estimates

2005-04-09 Thread PFC
But with all due respect to Joe, I think the reason that stuff got trimmed is that it didn't work very well. In most cases it's *hard* to write an estimator for a SRF. Let's see you produce one for dblink() for instance ... Good one... Well in some cases it'll be impossible, but suppose I

Re: [PERFORM] Functionscan estimates

2005-04-09 Thread PFC
My solution would be a lot simpler, since we could simply populate pg_proc.proestrows with 1000 by default if not changed by the DBA. In an even better world, we could tie it to a table, saying that, for example, proestrows = my_table*0.02. What if the estimated row is a function of a
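A dynamic, table-aware estimate of the kind asked about here did eventually become possible: PostgreSQL 12 added planner support functions, which an extension can attach to an SRF so the row estimate is computed at plan time rather than taken from a fixed number. The SQL wiring looks like this (names are hypothetical; the support function itself must be written in C and handle the SupportRequestRows node):

```sql
-- The support function is implemented in C; this is only its
-- SQL-level declaration (it takes and returns type internal):
CREATE FUNCTION my_srf_support(internal) RETURNS internal
    AS 'MODULE_PATHNAME', 'my_srf_support'
    LANGUAGE C STRICT;

-- Attach it to the SRF, so the planner consults it for a row
-- estimate instead of falling back to a constant:
CREATE FUNCTION my_srf(tbl regclass) RETURNS SETOF integer
    LANGUAGE sql STABLE
    SUPPORT my_srf_support
    AS 'SELECT 1';   -- stand-in body
```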

Re: [PERFORM] Functionscan estimates

2005-04-09 Thread Jim C. Nasby
On Sat, Apr 09, 2005 at 12:00:56AM -0400, Tom Lane wrote: Not too many releases ago, there were several columns in pg_proc that were intended to support estimation of the runtime cost and number of result rows of set-returning functions. I believe in fact that these were the remains of Joe

Re: [PERFORM] Functionscan estimates

2005-04-09 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes: On Sat, Apr 09, 2005 at 12:00:56AM -0400, Tom Lane wrote: But with all due respect to Joe, I think the reason that stuff got trimmed is that it didn't work very well. In most cases it's *hard* to write an estimator for a SRF. Let's see you produce one

Re: [PERFORM] Functionscan estimates

2005-04-09 Thread Neil Conway
Tom Lane wrote: Not too many releases ago, there were several columns in pg_proc that were intended to support estimation of the runtime cost and number of result rows of set-returning functions. I believe in fact that these were the remains of Joe Hellerstein's thesis on expensive-function

Re: [PERFORM] Functionscan estimates

2005-04-09 Thread Neil Conway
Tom Lane wrote: The larger point is that writing an estimator for an SRF is frequently a task about as difficult as writing the SRF itself True, although I think this doesn't necessarily kill the idea. If writing an estimator for a given SRF is too difficult, the user is no worse off than they

[PERFORM] Functionscan estimates

2005-04-08 Thread Josh Berkus
Folks, I'm wondering if it might be useful to be able to add estimated selectivity to a function definition for purposes of query estimation. Currently function scans automatically return a flat default 1000 estimated rows. It seems like the DBA ought to be able to ALTER FUNCTION and give
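The interface proposed here eventually appeared: PostgreSQL 8.3 added ROWS and COST attributes to CREATE FUNCTION and ALTER FUNCTION, letting the DBA override the flat 1000-row default. A minimal sketch (function, table, and column names are hypothetical):

```sql
-- Declare an SRF with an explicit row estimate instead of the
-- planner's flat 1000-row default (syntax added in PostgreSQL 8.3).
CREATE FUNCTION get_active_users()
    RETURNS SETOF integer
    LANGUAGE sql STABLE
    ROWS 50    -- expected result-set size
    AS 'SELECT user_id FROM users WHERE active';

-- Or adjust an existing function after the fact:
ALTER FUNCTION get_active_users() ROWS 50;
```

Note that ROWS accepts only a numeric constant; the "proestrows = my_table*0.02" table-fraction variant floated elsewhere in this thread was never implemented in that form.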

Re: [PERFORM] Functionscan estimates

2005-04-08 Thread Michael Fuhr
On Fri, Apr 08, 2005 at 03:15:50PM -0700, Josh Berkus wrote: I'm wondering if it might be useful to be able to add estimated selectivity to a function definition for purposes of query estimation. Currently function scans automatically return a flat default 1000 estimated rows. It seems

Re: [PERFORM] Functionscan estimates

2005-04-08 Thread Alvaro Herrera
On Fri, Apr 08, 2005 at 04:38:20PM -0600, Michael Fuhr wrote: On Fri, Apr 08, 2005 at 03:15:50PM -0700, Josh Berkus wrote: I'm wondering if it might be useful to be able to add estimated selectivity to a function definition for purposes of query estimation. Currently function scans

Re: [PERFORM] Functionscan estimates

2005-04-08 Thread Alvaro Herrera
On Fri, Apr 08, 2005 at 04:04:27PM -0700, Josh Berkus wrote: My solution would be a lot simpler, since we could simply populate pg_proc.proestrows with 1000 by default if not changed by the DBA. In an even better world, we could tie it to a table, saying that, for example, proestrows =

Re: [PERFORM] Functionscan estimates

2005-04-08 Thread Tom Lane
Not too many releases ago, there were several columns in pg_proc that were intended to support estimation of the runtime cost and number of result rows of set-returning functions. I believe in fact that these were the remains of Joe Hellerstein's thesis on expensive-function evaluation, and are