kazuyukitanimura commented on code in PR #8004:
URL: https://github.com/apache/arrow-datafusion/pull/8004#discussion_r1378185727


##########
datafusion/physical-plan/src/aggregates/mod.rs:
##########
@@ -2155,7 +2155,7 @@ mod tests {
         spill: bool,
     ) -> Result<()> {
         let task_ctx = if spill {
-            new_spill_ctx(2, 2812)
+            new_spill_ctx(2, 2886)

Review Comment:
   Is this related?



##########
datafusion/physical-plan/src/aggregates/row_hash.rs:
##########
@@ -673,7 +673,16 @@ impl GroupedHashAggregateStream {
         let spillfile = self.runtime.disk_manager.create_tmp_file("HashAggSpill")?;
         let mut writer = IPCWriter::new(spillfile.path(), &emit.schema())?;
         // TODO: slice large `sorted` and write to multiple files in parallel
-        writer.write(&sorted)?;
+        let mut offset = 0;
+        let total_rows = sorted.num_rows();
+
+        while offset < total_rows {
+            let length = std::cmp::min(total_rows - offset, self.batch_size);
+            let batch = sorted.slice(offset, length);
+            offset += batch.num_rows();
+            writer.write(&batch)?;
+        }
+

Review Comment:
   Is it possible to write in parallel, so that this becomes less blocking?
   
   An additional improvement would be chunking before sorting. I remember from the discussion that `sorted` keeps a copy in memory; we could slice `emit` before `sort_batch` and sort each slice right before writing.
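   
   A rough sketch of that "slice before sorting" idea (not the code in this PR; the function name is made up, and `sort_batch`, `IPCWriter`, `spill_expr`, and the import paths are assumed to match what `row_hash.rs` already uses):
   
   ```rust
   use arrow::record_batch::RecordBatch;
   use datafusion_common::Result;
   use datafusion_physical_expr::PhysicalSortExpr;
   
   use crate::common::IPCWriter; // datafusion/physical-plan/src/common.rs
   use crate::sorts::sort::sort_batch; // the same helper `spill` calls today
   
   /// Write `emit` to `writer` in `batch_size` slices, sorting each slice only
   /// right before it is written, so at most one slice-sized sorted copy is
   /// alive at a time instead of a fully sorted copy of `emit`.
   fn spill_in_sorted_slices(
       emit: &RecordBatch,
       writer: &mut IPCWriter,
       spill_expr: &[PhysicalSortExpr],
       batch_size: usize,
   ) -> Result<()> {
       let total_rows = emit.num_rows();
       let mut offset = 0;
       while offset < total_rows {
           let length = std::cmp::min(total_rows - offset, batch_size);
           // `slice` is zero-copy; only the sorted copy below allocates.
           let slice = emit.slice(offset, length);
           let sorted_slice = sort_batch(&slice, spill_expr, None)?;
           writer.write(&sorted_slice)?;
           offset += length;
       }
       Ok(())
   }
   ```
   
   Caveat: sorting slices independently only guarantees ordering within each slice, so the later merge of spilled data would have to treat each slice (or each file) as its own sorted run rather than expecting one globally sorted file. As for the parallel-write question, one option in that direction would be writing each slice to its own temp file on a separate blocking task, which is what the existing `TODO` comment seems to hint at.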


