On Wed, Feb 10, 2010 at 12:32 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> The reason that EXPLAIN prints things the way it does is so that actual
> costs/times are comparable to estimated costs.
Oh, that was a thought I had along the way but forgot to mention in my
email: since the buffer usage isn't related to the cost, there isn't nearly
the same impetus to divide by loops, except to be consistent with the time.

Another point is that changing the actual times to report total times
doesn't make much sense either. Total time to produce the *first* record is
pretty meaningless, for example.

Perhaps instead of looking to change the "actual" times we should look at a
way to include the total time spent in each node. I had been experimenting
with using getrusage to get more profiling data. Here too it makes little
sense to divide by loops, since again it's all data that makes sense to
compare with outside sources and little sense to compare with the estimated
costs. Perhaps we could add the existing wall-clock time to this (after
pruning things like nivcsw and minflt etc. once we know what's not useful):

postgres=# explain (analyze,buffers,resource) select * from i;
                                                  QUERY PLAN
----------------------------------------------------------------------------------------------------------------
 Seq Scan on i  (cost=0.00..63344.86 rows=2399986 width=101) (actual time=0.104..4309.997 rows=2400000 loops=1)
   Buffers: shared hit=256kB read=307.1MB blocking=392kB
   Resource Usage: user time=656.042 system time=3252.197 read=2.859MB nvcsw=63 nivcsw=173 minflt=65
 Total runtime: 7881.809 ms

--
greg

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers