2010YOUY01 commented on PR #19695:
URL: https://github.com/apache/datafusion/pull/19695#issuecomment-3722679298

   I have a question: suppose only 100MB of memory is left and a 1GB batch 
arrives at the `SortExec`; this PR makes it possible to sort that batch in 
memory and write it to a single spill file.
   
   Sorting it in memory and incrementally appending it to the spill file still 
needs extra memory, roughly the size of the sort columns in the original large 
batch, so in the worst case that is also around 1GB. That should not be allowed 
under the memory limit. Is this PR intentionally ignoring the limit and sorting 
and spilling anyway?
   
   I believe that, for internal operators, outputting batches of at most 
`batch_size` rows is a convention. This convention greatly simplifies operator 
implementation; otherwise, every operator would have to handle extremely large 
or extremely small input batches, which would make long-term maintenance very 
hard.
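   To make the convention concrete, here is a minimal sketch (the `Batch` type and `emit_in_chunks` function are hypothetical stand-ins, not DataFusion's API): an operator that ends up holding an oversized batch can re-slice it into `batch_size` chunks before emitting, the way Arrow's zero-copy slicing allows, so downstream operators never see a giant batch.
   
   ```rust
   /// Stand-in for an Arrow RecordBatch; only the row count matters here.
   struct Batch {
       num_rows: usize,
   }
   
   impl Batch {
       /// Slicing is zero-copy in Arrow; modeled here as a row-count range.
       fn slice(&self, _offset: usize, length: usize) -> Batch {
           Batch { num_rows: length }
       }
   }
   
   /// Split `batch` into chunks of at most `batch_size` rows.
   fn emit_in_chunks(batch: &Batch, batch_size: usize) -> Vec<Batch> {
       let mut out = Vec::new();
       let mut offset = 0;
       while offset < batch.num_rows {
           let len = batch_size.min(batch.num_rows - offset);
           out.push(batch.slice(offset, len));
           offset += len;
       }
       out
   }
   
   fn main() {
       // A 20_000-row batch emitted with the default batch_size = 8192.
       let chunks = emit_in_chunks(&Batch { num_rows: 20_000 }, 8192);
       let sizes: Vec<usize> = chunks.iter().map(|b| b.num_rows).collect();
       println!("{:?}", sizes); // [8192, 8192, 3616]
   }
   ```
   
   Because the slices are zero-copy views, chunked output costs almost nothing, while it spares every downstream operator from handling arbitrarily large batches.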
   
   The root cause of this issue, I think, is that `AggregateExec` does not 
respect this convention and can output batches much larger than `batch_size`. 
What do you think about moving this fix into `AggregateExec` instead, giving it 
internal spilling so that it never emits oversized batches? This seems like an 
issue that should be addressed outside `SortExec`.
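   Roughly what I have in mind, as a toy sketch (all names here are illustrative, not DataFusion's actual types; a row-count budget stands in for a real memory reservation): when the accumulated group state exceeds the budget, the aggregation spills its current state sorted by key, and at the end it drains its output in `batch_size`-row chunks rather than one giant batch.
   
   ```rust
   struct Agg {
       groups: Vec<(u64, i64)>,        // (group key, running sum)
       budget_rows: usize,             // stand-in for a memory reservation
       spills: Vec<Vec<(u64, i64)>>,   // stand-in for spill files
   }
   
   impl Agg {
       fn update(&mut self, key: u64, value: i64) {
           match self.groups.iter_mut().find(|(k, _)| *k == key) {
               Some((_, sum)) => *sum += value,
               None => {
                   if self.groups.len() >= self.budget_rows {
                       // Out of budget: spill current state, sorted by key,
                       // so the spill files can later be merge-sorted.
                       let mut spilled = std::mem::take(&mut self.groups);
                       spilled.sort_by_key(|(k, _)| *k);
                       self.spills.push(spilled);
                   }
                   self.groups.push((key, value));
               }
           }
       }
   
       /// Drain final state in chunks of at most `batch_size` groups.
       /// (A real implementation would also merge the partial aggregates
       /// in `self.spills` here; omitted to keep the sketch short.)
       fn drain(mut self, batch_size: usize) -> Vec<Vec<(u64, i64)>> {
           self.groups.sort_by_key(|(k, _)| *k);
           self.groups.chunks(batch_size).map(|c| c.to_vec()).collect()
       }
   }
   
   fn main() {
       let mut agg = Agg { groups: vec![], budget_rows: 2, spills: vec![] };
       for key in 0..5 {
           agg.update(key, 1);
       }
       println!("spill files: {}", agg.spills.len());
       println!("output batches: {:?}", agg.drain(8192));
   }
   ```
   
   With the spilling and the chunked drain living inside the aggregation, `SortExec` (and every other downstream operator) can keep assuming inputs of at most `batch_size` rows.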


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]