alamb commented on code in PR #8004:
URL: https://github.com/apache/arrow-datafusion/pull/8004#discussion_r1377727526


##########
datafusion/physical-plan/src/aggregates/row_hash.rs:
##########
@@ -673,7 +673,18 @@ impl GroupedHashAggregateStream {
         let spillfile = self.runtime.disk_manager.create_tmp_file("HashAggSpill")?;
         let mut writer = IPCWriter::new(spillfile.path(), &emit.schema())?;
         // TODO: slice large `sorted` and write to multiple files in parallel
-        writer.write(&sorted)?;
+        let mut offset = 0;
+        let total_rows = sorted.num_rows();
+
+        while offset < total_rows {
+            // TODO: we could consider smaller batch size as there may be hundreds of batches

Review Comment:
   this comment seems backwards, shouldn't it be "consider *larger* batch size"?
   
   But in any event, hundreds of batches seems reasonable if they are all 8k rows apiece
   
   cc @kazuyukitanimura 
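
   For context, here is a minimal standalone sketch of the chunked-spill pattern under discussion. It writes against arrow-rs's `FileWriter` rather than DataFusion's `IPCWriter` wrapper, and the `spill_in_chunks` helper, the file path, and the chunk size are illustrative assumptions, not the PR's actual code:
   
   ```rust
   // Hypothetical sketch: write one large sorted RecordBatch to an IPC spill file
   // in fixed-size chunks. Uses arrow's FileWriter directly; DataFusion's IPCWriter
   // wraps a writer like this. Names, path, and chunk size are illustrative only.
   use std::fs::File;
   use std::sync::Arc;
   
   use arrow::array::{ArrayRef, Int64Array};
   use arrow::datatypes::{DataType, Field, Schema};
   use arrow::error::Result;
   use arrow::ipc::writer::FileWriter;
   use arrow::record_batch::RecordBatch;
   
   fn spill_in_chunks(batch: &RecordBatch, path: &str, chunk_rows: usize) -> Result<()> {
       let file = File::create(path)?;
       let mut writer = FileWriter::try_new(file, batch.schema().as_ref())?;
   
       let total_rows = batch.num_rows();
       let mut offset = 0;
       while offset < total_rows {
           let len = chunk_rows.min(total_rows - offset);
           // RecordBatch::slice is zero-copy: it only adjusts offsets/lengths into
           // the underlying buffers, so chunking does not duplicate the data.
           writer.write(&batch.slice(offset, len))?;
           offset += len;
       }
       writer.finish()
   }
   
   fn main() -> Result<()> {
       let schema = Arc::new(Schema::new(vec![Field::new("v", DataType::Int64, false)]));
       let column: ArrayRef = Arc::new(Int64Array::from_iter_values(0i64..1_000_000));
       let batch = RecordBatch::try_new(schema, vec![column])?;
       // 8k-row chunks, the ballpark mentioned above
       spill_in_chunks(&batch, "/tmp/hash_agg_spill.arrow", 8 * 1024)
   }
   ```
   
   Since the slices are zero-copy, the main cost of a smaller chunk size is the per-batch IPC framing overhead, which is why hundreds of 8k-row batches look acceptable here.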


