Peter Geoghegan <p...@bowt.ie> writes:
> I also suspect that if Laurent set work_mem and/or hash_mem_multiplier
> *extremely* aggressively, then eventually the hash agg would be
> in-memory. And without actually using all that much memory.

No, he already tried, upthread.  The trouble is that he's on a Windows
machine, where "long" is only 32 bits even in 64-bit builds, so
get_hash_mem is quasi-artificially constraining the product to 2GB.
And he needs it to be a bit more than that.  Whether the constraint is
hitting at the ngroups stage or is related to actual memory consumption
isn't that relevant.
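
(For readers following along: here is a minimal sketch of where the
ceiling comes from, paraphrasing the v13-era get_hash_mem() and the
MAX_KILOBYTES definition rather than quoting either verbatim.
SIZEOF_LONG/SIZEOF_SIZE_T are configure-time macros; the externs stand
in for the real GUC declarations.)

    #include <limits.h>

    /* paraphrased from guc.h: cap kilobyte-valued GUCs so that derived
     * byte counts still fit in a "long" */
    #if SIZEOF_SIZE_T > 4 && SIZEOF_LONG > 4
    #define MAX_KILOBYTES   INT_MAX            /* most 64-bit platforms */
    #else
    #define MAX_KILOBYTES   (INT_MAX / 1024)   /* Win64 LLP64: long is 32 bits */
    #endif

    extern int work_mem;                 /* GUC, in kilobytes */
    extern double hash_mem_multiplier;   /* GUC */

    int
    get_hash_mem(void)
    {
        double  hash_mem = (double) work_mem * hash_mem_multiplier;

        /* On Win64 this clamps the product to INT_MAX/1024 kB, i.e.
         * roughly 2GB, no matter how high the GUCs are set. */
        if (hash_mem < MAX_KILOBYTES)
            return (int) hash_mem;
        return MAX_KILOBYTES;
    }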

What I'm wondering about is whether it's worth putting in a solution
for this issue in isolation, or whether we ought to embark on the
long-ignored project of getting rid of the use of "long" for any
memory-size-related computations.  There would be no chance of
back-patching something like the latter into v13, though.
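
(For concreteness, the kind of mechanical change that project implies,
as a hypothetical before/after rather than an actual patch; the
function names are made up, and the typedef stands in for the int64
that c.h provides:)

    #include <stdint.h>
    typedef int64_t int64;      /* stand-in for PostgreSQL's c.h typedef */

    extern int work_mem;        /* GUC, in kilobytes */

    /* Before: "long" is 32 bits on Win64, so this overflows once
     * work_mem exceeds INT_MAX/1024 kilobytes (~2GB). */
    long
    work_mem_bytes_old(void)
    {
        return (long) work_mem * 1024L;
    }

    /* After: int64 is 64 bits on every platform, LLP64 included, so
     * kilobyte-valued GUCs can scale past 2GB when converted to bytes. */
    int64
    work_mem_bytes_new(void)
    {
        return (int64) work_mem * 1024;
    }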

                        regards, tom lane