On Mon, 2018-06-11 at 19:33 +0200, Tomas Vondra wrote:
> For example, if we hit the work_mem limit after processing 10% of the
> tuples, switching to sort would mean spilling and sorting 900GB of
> data. Or we might say - hmm, we're 10% through, so we expect to hit
> the limit 10x, so let's spill the hash table and then sort that,
> writing and sorting only 10GB of data. (Or merging it in some
> hash-based way, per Robert's earlier message.)
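
A back-of-envelope sketch of that arithmetic (the 1TB total input and
the 1GB-per-spill hash table are assumptions implied by the 900GB/10GB
figures in the quote, not anything taken from actual planner code):

#include <stdio.h>

/*
 * Illustrative comparison of the two spill strategies discussed
 * above.  All sizes are assumed example values, not measurements.
 */
int
main(void)
{
	double	total_input_gb = 1000.0;	/* assumed ~1TB of input */
	double	fraction_done = 0.10;		/* limit hit after 10% of tuples */
	double	hashtable_gb = 1.0;			/* assumed spilled hash table size */

	/* Strategy A: abandon hashing, spill and sort all remaining input. */
	double	sort_spill_gb = total_input_gb * (1.0 - fraction_done);

	/*
	 * Strategy B: spill the hash table each time the limit is hit.
	 * If 10% of the input filled it once, expect ~10 spills in total.
	 */
	double	expected_spills = 1.0 / fraction_done;
	double	hash_spill_gb = expected_spills * hashtable_gb;

	printf("switch to sort:   write+sort %.0f GB\n", sort_spill_gb);	/* 900 */
	printf("spill hash table: write+sort %.0f GB\n", hash_spill_gb);	/* 10 */
	return 0;
}

The gap between the two obviously depends on how much the aggregation
shrinks the input: with only a few tuples per group, the spilled hash
table approaches the input size and the advantage disappears.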

Your example depends on large groups and a high degree of group
clustering. That's fine, but it's a special case, and complexity does
have a cost, too.

Regards,
        Jeff Davis

