On 04/24/2017 08:52 PM, Andres Freund wrote:
On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
The explain analyze of the hash step of a hash join reports something like

   ->  Hash  (cost=458287.68..458287.68 rows=24995368 width=37) (actual
rows=24995353 loops=1)
         Buckets: 33554432  Batches: 1  Memory Usage: 2019630kB

Should the HashAggregate node also report on Buckets and Memory Usage?  I
would have found that useful several times.  Is there some reason this is
not wanted, or not possible?

I've wanted that too.  It's not impossible at all.

Why wouldn't that be possible? We probably can't use exactly the same approach as Hash, because hash joins use a custom hash table while hashagg uses dynahash IIRC. But why couldn't we measure the amount of memory by looking at the memory context, for example?


Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
