kazuyukitanimura commented on code in PR #7400: URL: https://github.com/apache/arrow-datafusion/pull/7400#discussion_r1319744106
##########
datafusion/core/src/physical_plan/aggregates/row_hash.rs:
##########
@@ -120,6 +132,56 @@ use super::AggregateExec;
 /// hash table).
 ///
 /// [`group_values`]: Self::group_values
+///
+/// # Spilling
+///
+/// The sizes of group values and accumulators can become large. Before that causes
+/// an out-of-memory error, this hash aggregator spills the data to local disk in
+/// Arrow IPC format. For every input [`RecordBatch`], the memory manager checks
+/// whether the new input size fits within the configured memory limit. If it does
+/// not, the data is spilled, and the spilled data is later read back via a
+/// streaming merge sort. Because rows for the same group can end up in different
+/// spill files on disk, the merged data that is read back must be re-grouped.
+///
+/// ```text
+/// Partial Aggregation [batch_size = 2] (max memory = 3 rows)

Review Comment:
   Changed to spill only in the Final phase, and to emit output early in the Partial phase.
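To make the "stream-merge then re-group" step concrete, here is a minimal std-only Rust sketch (not DataFusion's actual implementation): spill files are modeled as sorted in-memory runs of hypothetical `(group_key, partial_count)` pairs, merged with a k-way heap merge, and adjacent equal keys are re-aggregated, since rows for the same group may sit in different spill files.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Merge several key-sorted spill "files" (modeled as in-memory runs of
/// (group_key, partial_count) pairs) with a k-way streaming merge, then
/// re-group adjacent equal keys by summing their partial counts.
fn merge_and_regroup(runs: Vec<Vec<(u64, u64)>>) -> Vec<(u64, u64)> {
    // Min-heap of (key, run_index, position_within_run).
    let mut heap = BinaryHeap::new();
    for (i, run) in runs.iter().enumerate() {
        if let Some(&(key, _)) = run.first() {
            heap.push(Reverse((key, i, 0usize)));
        }
    }
    let mut out: Vec<(u64, u64)> = Vec::new();
    while let Some(Reverse((key, run_idx, pos))) = heap.pop() {
        let (_, count) = runs[run_idx][pos];
        // Re-group: the merge emits equal keys adjacently, so partial
        // aggregates for the same group can be folded together here.
        match out.last_mut() {
            Some((last_key, acc)) if *last_key == key => *acc += count,
            _ => out.push((key, count)),
        }
        // Advance the run this row came from.
        if pos + 1 < runs[run_idx].len() {
            heap.push(Reverse((runs[run_idx][pos + 1].0, run_idx, pos + 1)));
        }
    }
    out
}

fn main() {
    // Two spill files both hold partial counts for group 2.
    let spilled = vec![vec![(1, 3), (2, 1)], vec![(2, 4), (5, 2)]];
    let merged = merge_and_regroup(spilled);
    assert_eq!(merged, vec![(1, 3), (2, 5), (5, 2)]);
    println!("{:?}", merged);
}
```

The real operator streams Arrow `RecordBatch`es through a sort-preserving merge rather than materializing vectors, but the shape of the problem is the same: the merge restores a global sort order, and a second grouping pass folds partial aggregates that were split across spill files.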
