"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:

> Tom Lane wrote:
>> Gregory Stark <[EMAIL PROTECTED]> writes:
>>> There are downsides: 
>> Insurmountable ones at that.  This one already makes it a non-starter:
>>> a) the overhead of counting rows and loops is there for every query 
>>> execution,
>>> even if you don't do explain analyze.

Note that this doesn't include the gettimeofdays. It's just a couple of integer
increments and assignments per tuple.

>> and you are also engaging in a flight of fantasy about what the
>> client-side code might be able to handle.  Particularly if it's buried
>> inside, say, httpd or some huge Java app.  Yeah, you could possibly make
>> it work for the case that the problem query was manually executed in
>> psql, but that doesn't cover very much real-world territory.

> I think there's two different use cases here. The one that Greg's proposal
> would be good for is a GUI, like pgAdmin. It would be cool to see how a query
> progresses through the EXPLAIN tree when you run it from the query tool. That
> would be great for visualizing the executor; a great teaching tool.

It also means that if a query takes suspiciously long you don't have to run
explain in another session (possibly getting a different plan), and if it runs
so long that you can't wait for it to finish, you can still get an explain
analyze over at least partial data.

> Yeah, something like this would be better for monitoring a live system.
> The number of rows processed by execMain.c would only count the number of rows
> processed by the top node of the tree, right? For a query that for example
> performs a gigantic sort, that would be 0 until the sort is done, which is not
> good. It's hard to come up with a single counter that's representative :-(.

Alternatively, you could count the number of records that pass through
ExecProcNode. That would at least give you a holistic view of the query. I
don't see how you would know what the expected end-point would be, though.

I think a better way to get a real "percentage done" would be to add a method
to each node which estimates its percentage done based on the percentages done
its children report, its actual and expected row counts, and its costs.

So for example a nested loop would calculate P1 - (1-P2)/ER1, where P1 is the
percentage done of the first child, P2 is the percentage done of the second
child, and ER1 is the expected number of records from the first child. Hash
Join would calculate (P1*C1 + P2*C2)/(C1+C2), weighting each child's progress
by its estimated cost.

That could give a very good estimate of the percentage done, basically as good
as the estimated row counts themselves.

  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)