liupc commented on issue #23602: [SPARK-26674][CORE] Consolidate CompositeByteBuf when reading large frame
URL: https://github.com/apache/spark/pull/23602#issuecomment-459945164

Seems that we can gain huge memory savings with little time spent (at most ~500 ms for a 1 GB shuffle). This method has many advantages:
1. For small shuffle blocks, consolidation is never triggered because they don't reach the threshold, so it's good for small applications -- as fast as the current behavior.
2. For large shuffle blocks, consolidation saves a lot of memory, and in some cases it can avoid a direct-memory OOM because the ByteBufs are consolidated early, so it's also good for large applications -- memory is saved.
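The idea above can be sketched as follows. This is an illustrative sketch, not Spark's actual patch: it uses plain `java.nio.ByteBuffer` instead of Netty's `CompositeByteBuf`, and the class name `FrameAccumulator` and the threshold constant are hypothetical. The point it shows is that consolidation only kicks in once the buffered size crosses a threshold, so small frames pay nothing while large frames trade one copy for releasing many small components early.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch: accumulate incoming frame chunks; once the total buffered
// size crosses a threshold, merge them into one contiguous buffer so
// many small components are not retained for the whole read.
public class FrameAccumulator {
    // Hypothetical threshold; the real patch makes this configurable.
    static final int CONSOLIDATE_THRESHOLD = 64 * 1024;

    private final List<ByteBuffer> components = new ArrayList<>();
    private int bufferedBytes = 0;

    public void append(ByteBuffer chunk) {
        components.add(chunk);
        bufferedBytes += chunk.remaining();
        if (bufferedBytes >= CONSOLIDATE_THRESHOLD) {
            consolidate();
        }
    }

    // Copy all pending components into one contiguous buffer and drop
    // the small pieces, analogous to consolidating a CompositeByteBuf.
    private void consolidate() {
        ByteBuffer merged = ByteBuffer.allocate(bufferedBytes);
        for (ByteBuffer c : components) {
            merged.put(c);
        }
        merged.flip();
        components.clear();
        components.add(merged);
    }

    public int componentCount() { return components.size(); }

    public int bufferedBytes() { return bufferedBytes; }
}
```

For blocks smaller than the threshold, `append` never triggers `consolidate`, so the fast path is unchanged; for large blocks the component list is repeatedly collapsed, bounding the number of live fragments.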
