On Mon, 2006-01-16 at 20:02 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Sure hash table is dynamic, but we read all inner rows to create the
> > hash table (nodeHash) before we get the outer rows (nodeHJ).
> 
> But our idea of the number of batches needed can change during that
> process, resulting in some inner tuples being initially assigned to the
> wrong temp file.  This would also be true for hashagg.

So we correct that before we start reading the outer table.
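To make the re-routing concrete, here is a toy sketch (plain Python, nothing like the real nodeHash.c, and assuming batch assignment by hash modulo the batch count): when the batch count doubles mid-build, tuples already spilled under the old rule may belong to a different batch and must be moved before the outer scan starts.

```python
# Hedged sketch (not PostgreSQL's actual code): batch assignment by
# hash modulo nbatch, and what happens when nbatch doubles mid-build.

def batchno(h, nbatch):
    """Assign a tuple's hash value to a batch."""
    return h % nbatch

# Build phase starts with 2 batches.
nbatch = 2
hashes = [0, 1, 2, 3, 4, 5, 6, 7]
initial = {h: batchno(h, nbatch) for h in hashes}

# Memory fills up, so we double the batch count.  Tuples spilled
# under the old rule may now be sitting in the wrong temp file.
nbatch = 4
final = {h: batchno(h, nbatch) for h in hashes}

moved = [h for h in hashes if initial[h] != final[h]]
print(moved)  # -> [2, 3, 6, 7]: these tuples must be re-routed
```

With modulo assignment each old batch b splits cleanly into batches b and b + old_nbatch, which is why the correction can be done batch-by-batch before reading the outer table.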

> > Why would we continue to dynamically build the hash table after the
> > start of the outer scan?
> 
> The number of tuples written to a temp file might exceed what we want to
> hold in memory; we won't detect this until the batch is read back in,
> and in that case we have to split the batch at that time.  For hashagg
> this point would apply to the aggregate states not the input tuples, but
> it's still a live problem (especially if the aggregate states aren't
> fixed-size values ... consider a "concat" aggregate for instance).

OK, I see what you mean. Sounds like we need a new property for
aggregates, "sort insensitive", marking those whose result does not
depend on input ordering, since that case can be optimised much better
when using HashAgg. Since we know this applies to the common cases of
SUM, AVG etc., it will certainly help people.
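A toy illustration of the distinction (plain Python, with a made-up run_agg helper standing in for advancing an aggregate's transition state): SUM gives the same answer however the batches reorder the rows, while a "concat" aggregate does not.

```python
# Hedged sketch: an aggregate is "sort insensitive" when reordering
# its input cannot change the final transition state.

def run_agg(values, trans, init):
    """Feed values through a transition function, one row at a time."""
    state = init
    for v in values:
        state = trans(state, v)
    return state

vals = [3, 1, 4]
reordered = [4, 3, 1]

# SUM: same result whatever order the batches deliver the rows in.
assert run_agg(vals, lambda s, v: s + v, 0) == \
       run_agg(reordered, lambda s, v: s + v, 0)

# A "concat" aggregate is sort-sensitive: reordering changes the result.
print(run_agg(["a", "b", "c"], lambda s, v: s + v, ""))  # abc
print(run_agg(["c", "a", "b"], lambda s, v: s + v, ""))  # cab
```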

For sort-sensitive aggregates it sounds like we either:
1. Write to a single file, remembering the start offset of the first
row of each batch.
2. Write to multiple files, tagging each row with a globally
incrementing sequence id. Batches are then re-sorted on the sequence id
before processing.
3. Give up: delete the existing batches and restart the scan from the
beginning of the outer table.

Sounds like (1) is best, since the overflow just becomes a SortedAgg.
But all of them sound ugly.
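Of the three, (2) is the easiest to sketch concretely. A toy version (plain Python, not executor code; the round-robin batch assignment stands in for hashing):

```python
# Hedged sketch of option (2): every spilled row carries a globally
# incrementing sequence id; sorting a batch on that id restores the
# original input order before the sort-sensitive aggregate runs.

from itertools import count

seq = count()                      # global sequence id generator
nbatch = 2
batches = {b: [] for b in range(nbatch)}

rows = ["r0", "r1", "r2", "r3", "r4", "r5"]
for i, row in enumerate(rows):
    b = i % nbatch                 # stand-in for hash-based assignment
    batches[b].append((next(seq), row))

# Suppose batch 1 was itself re-split and its rows came back
# interleaved; sorting on the sequence id recovers input order.
batches[1].reverse()
batches[1].sort(key=lambda t: t[0])
print([row for _, row in batches[1]])  # -> ['r1', 'r3', 'r5']
```

Option (1) avoids the sort entirely because a single sequential spill file already preserves arrival order, which is presumably why it degrades gracefully into a SortedAgg.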

Best Regards, Simon Riggs

