faucct opened a new pull request, #43525:
URL: https://github.com/apache/spark/pull/43525

   Right now, if the data is already sorted, the records are read from the 
partitions one by one, yet the iterator still walks the PriorityQueue up and 
down for every single record.
   The ideal fix would be to call 
`scala.collection.mutable.PriorityQueue#fixDown` after `.next()` instead of a 
dequeue followed by an enqueue, but that method is not accessible from outside 
the collection library.
   The behaviour does not change. I have not benchmarked it, but this should 
improve performance on average.
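   To illustrate the idea (this is a sketch with made-up names, not Spark's 
actual code): a k-way merge can keep the currently smallest buffered iterator 
outside the heap and only go through the heap when another iterator's head 
becomes smaller. On input that is already sorted per partition, the common 
case is then a cheap comparison instead of a dequeue+enqueue per record:

```scala
import scala.collection.mutable

// Illustrative sketch of the optimization described above. All names here
// (SortedMergeIterator, `current`, etc.) are hypothetical.
class SortedMergeIterator[T](sources: Seq[Iterator[T]])(implicit ord: Ordering[T])
    extends Iterator[T] {
  // PriorityQueue is a max-heap, so reverse the ordering to pop the
  // iterator with the smallest head element first.
  private val heap = mutable.PriorityQueue.empty[BufferedIterator[T]](
    Ordering.by[BufferedIterator[T], T](_.head)(ord.reverse))
  sources.map(_.buffered).filter(_.hasNext).foreach(it => heap.enqueue(it))

  // The iterator currently holding the globally smallest element, kept
  // outside the heap so consecutive reads from it skip the heap entirely.
  private var current: BufferedIterator[T] = _

  override def hasNext: Boolean =
    (current != null && current.hasNext) || heap.nonEmpty

  override def next(): T = {
    if (current == null || !current.hasNext) {
      // First call, or the current iterator is exhausted: take a new one.
      current = heap.dequeue()
    } else if (heap.nonEmpty && ord.gt(current.head, heap.head.head)) {
      // Another iterator now holds the smallest head: swap via the heap.
      // (This is the only case that pays the enqueue+dequeue cost.)
      heap.enqueue(current)
      current = heap.dequeue()
    }
    current.next()
  }
}
```

   With `fixDown` accessible, the swap branch could instead sift the updated 
head in place; the sketch above shows the workaround when it is not.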


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

