kazuyukitanimura commented on code in PR #8004:
URL: https://github.com/apache/arrow-datafusion/pull/8004#discussion_r1378633704
##########
datafusion/physical-plan/src/aggregates/row_hash.rs:
##########
@@ -673,7 +673,16 @@ impl GroupedHashAggregateStream {
let spillfile =
self.runtime.disk_manager.create_tmp_file("HashAggSpill")?;
let mut writer = IPCWriter::new(spillfile.path(), &emit.schema())?;
// TODO: slice large `sorted` and write to multiple files in parallel
- writer.write(&sorted)?;
+ let mut offset = 0;
+ let total_rows = sorted.num_rows();
+
+ while offset < total_rows {
+ let length = std::cmp::min(total_rows - offset, self.batch_size);
+ let batch = sorted.slice(offset, length);
+ offset += batch.num_rows();
+ writer.write(&batch)?;
+ }
+
Review Comment:
Got it.
Would you mind explaining how reading smaller files reduces memory usage? I
thought we stream the batches when reading them back for merging. Also, while
the merge-sort is running, don't we need to keep all of the files open anyway?
I just want to make sure I understand the original issue statement in #8003.
Additionally, I think we have to create a new writer for each slice; otherwise
we keep appending to the same temp file and end up with the same file size,
don't we?
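To illustrate that second point, here is a rough sketch of what I mean by a new
writer per slice. This is only an illustration, not a concrete proposal: it
assumes the same bindings as the diff above (`self.runtime`, `self.batch_size`,
`sorted`, `emit`), and `spill_files` is a hypothetical local used just to hold
the resulting temp files.

```rust
// Sketch only: one temp file and one IPCWriter per slice, instead of
// appending every slice to the same file via a single writer.
let total_rows = sorted.num_rows();
let mut offset = 0;
let mut spill_files = Vec::new(); // hypothetical bookkeeping for this sketch

while offset < total_rows {
    let length = std::cmp::min(total_rows - offset, self.batch_size);
    let batch = sorted.slice(offset, length);
    offset += batch.num_rows();

    // New temp file and writer for each slice, so each spill file stays small.
    let spillfile = self.runtime.disk_manager.create_tmp_file("HashAggSpill")?;
    let mut writer = IPCWriter::new(spillfile.path(), &emit.schema())?;
    writer.write(&batch)?;
    writer.finish()?;

    spill_files.push(spillfile);
}
```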