liupc commented on issue #23602: [SPARK-26674][CORE] Consolidate CompositeByteBuf when reading large frame
URL: https://github.com/apache/spark/pull/23602#issuecomment-459944058

> CompositeByteBuf frame = buffers.getFirst().alloc().compositeBuffer();

@srowen I just ran a benchmark test for the above code, and it is indeed rather slow. The test report is below:

```
// --- Test Reports for plan 2 ------
//
// [test consolidate 1000 buffers each with 1m, 50% used for 1 loop]
// Allocating 524288 bytes
// Time cost with 1 loop for consolidating: 5338 millis
//
// [test consolidate 1000 buffers each with 1m, 100% used for 1 loop]
// Allocating 1048576 bytes
// Time cost with 1 loop for consolidating: 10220 millis
//
// [test consolidate 1000 buffers each with 1m, 50% used for 10 loop]
// Allocating 524288 bytes
// Time cost with 10 loop for consolidating: 49249 millis
//
// [test consolidate 1000 buffers each with 1m, 100% used for 10 loop]
// Allocating 1048576 bytes
// Time cost with 10 loop for consolidating: 99247 millis
//
// [test consolidate 1000 buffers each with 1m, 50% used for 50 loop]
// Allocating 524288 bytes
// Time cost with 50 loop for consolidating: 249160 millis
// ...
```

... too slow
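For reference, a minimal sketch of this kind of benchmark against Netty's `CompositeByteBuf`. The actual test harness is not shown in the comment, so the class name, buffer sizes, and the use of `Unpooled` (rather than the `buffers.getFirst().alloc()` allocator from the quoted code) are assumptions for illustration:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

// Hypothetical benchmark class; not the harness used for the numbers above.
public class ConsolidateBenchmark {
  public static void main(String[] args) {
    final int numBuffers = 1000;
    final int bufSize = 1024 * 1024;   // 1 MiB per component, as in the report
    final double usedFraction = 0.5;   // 50% of each buffer holds readable data

    long start = System.currentTimeMillis();
    CompositeByteBuf frame = Unpooled.compositeBuffer(numBuffers);
    for (int i = 0; i < numBuffers; i++) {
      ByteBuf buf = Unpooled.buffer(bufSize);
      buf.writerIndex((int) (bufSize * usedFraction)); // mark bytes as readable
      frame.addComponent(true, buf); // true => advance the composite's writerIndex
    }
    // consolidate() merges all components into a single backing buffer,
    // which copies every component's readable bytes -- this copy is the
    // expensive step the benchmark is measuring.
    frame.consolidate();
    long elapsed = System.currentTimeMillis() - start;
    System.out.println("Consolidating " + numBuffers + " buffers took " + elapsed + " ms");
    frame.release();
  }
}
```

The roughly linear scaling in the report (about 5.3 s for 1 loop, 49 s for 10, 249 s for 50 at 50% usage) is consistent with each loop paying the full copy cost of `consolidate()` again.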
