isidentical opened a new issue, #3596:
URL: https://github.com/apache/arrow-datafusion/issues/3596

   **Is your feature request related to a problem or challenge? Please describe 
what you are trying to do.**
   During sorting, when we receive a new record batch we reserve memory for it through the memory manager. This is done under the assumption that the result of the sort will be kept around until the end, so we track every allocation to avoid accidentally exceeding the memory budget. However, after https://github.com/apache/arrow-datafusion/pull/3510 this assumption no longer holds in all cases (in particular when a fetch limit is set on the sort, only the top `fetch` rows of each partial sort survive), so we may be over-allocating memory and constantly spilling for no good reason.
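   To make the accounting pattern concrete, here is a minimal self-contained sketch (the `Reservation` type and its methods are illustrative stand-ins, not the actual DataFusion memory-manager API): every incoming batch grows the reservation as if all of its rows will survive until the final merge, which is where the over-allocation comes from once a fetch limit is set.

```rust
struct Reservation {
    used: usize,
    budget: usize,
}

impl Reservation {
    fn try_grow(&mut self, bytes: usize) -> Result<(), String> {
        if self.used + bytes > self.budget {
            // In the real operator this is roughly where spilling would kick in.
            return Err(format!(
                "reserving {bytes} more bytes would exceed the budget ({} already used)",
                self.used
            ));
        }
        self.used += bytes;
        Ok(())
    }
}

// Called for every incoming batch: space is reserved as if the whole batch
// will survive until the final merge, even when only `fetch` rows will.
fn on_new_batch(reservation: &mut Reservation, batch_bytes: usize) -> Result<(), String> {
    reservation.try_grow(batch_bytes)
}

fn main() -> Result<(), String> {
    let mut reservation = Reservation { used: 0, budget: 1024 * 1024 };
    // Each 64 KiB batch grows the reservation, regardless of any limit.
    for _ in 0..8 {
        on_new_batch(&mut reservation, 64 * 1024)?;
    }
    println!("reserved: {} bytes", reservation.used);
    Ok(())
}
```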
   
   **Describe the solution you'd like**
   Avoid over-allocation by instructing the memory manager to shrink the reservation after each partial sort with a limit, so that only the memory actually retained (the truncated, top-`fetch` result) stays accounted for.
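
   A minimal sketch of the proposed behavior, again with hypothetical names rather than the real memory-manager API: once a partial sort with a limit has truncated the buffered data, the difference between what was reserved and what is actually retained is given back.

```rust
struct Reservation {
    used: usize,
}

impl Reservation {
    // Return previously reserved bytes to the memory manager.
    fn shrink(&mut self, bytes: usize) {
        self.used = self.used.saturating_sub(bytes);
    }
}

// Called after an in-memory (partial) sort when a fetch limit is set:
// `reserved_bytes` is what was accounted for the raw buffered input,
// `retained_bytes` is the size of the truncated, top-`fetch` result that
// actually survives until the final merge.
fn after_partial_sort_with_limit(
    reservation: &mut Reservation,
    reserved_bytes: usize,
    retained_bytes: usize,
) {
    // Give back the difference so later batches don't trigger spills
    // against memory that is no longer held.
    reservation.shrink(reserved_bytes.saturating_sub(retained_bytes));
}

fn main() {
    let mut reservation = Reservation { used: 512 * 1024 };
    // Suppose the top-`fetch` rows only occupy 32 KiB after truncation.
    after_partial_sort_with_limit(&mut reservation, 512 * 1024, 32 * 1024);
    println!("still reserved: {} bytes", reservation.used); // 32768
}
```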
   
   **Describe alternatives you've considered**
   Leaving it as is, which would mean a large number of unnecessary spills under a heavy data load with a fixed limit.
   
   **Additional context**
   Originally posted here: 
https://github.com/apache/arrow-datafusion/issues/3579#issuecomment-1255596028
   

