dongjoon-hyun commented on pull request #34846: URL: https://github.com/apache/spark/pull/34846#issuecomment-1002354003
What do you mean? I agreed that this makes Spark utilize more memory without waste.

> that you don't think this causes more mem allocation?

Yes, indeed. I also agree with that. My point is that `G1HeapRegionSize - Platform.LONG_ARRAY_OFFSET` is not a silver bullet that guarantees an **always-win**.

> It seems like it should be a better idea in a lot of cases.

Let me ask you this way: do you think `spark.buffer.pageSize = 1MB (G1HeapRegionSize) - Platform.LONG_ARRAY_OFFSET` is always better than the other values in production?

> What's the perf regression you have in mind?
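To make the arithmetic under discussion concrete, here is a minimal sketch of why `pageSize = G1HeapRegionSize - Platform.LONG_ARRAY_OFFSET` keeps an on-heap page's backing `long[]` within a single G1 region. The constants are assumptions for illustration only (a 16-byte `long[]` header on a typical 64-bit HotSpot JVM, and `-XX:G1HeapRegionSize=1m`), not values taken from this PR:

```java
// Sketch: a page sized exactly to the G1 region overflows the region once the
// array header is added; subtracting the header keeps it inside one region.
public class PageSizeSketch {
    // Assumed long[] object header size on a 64-bit HotSpot JVM; Spark exposes
    // this as Platform.LONG_ARRAY_OFFSET.
    static final long LONG_ARRAY_OFFSET = 16;
    // Assumed region size, as if the JVM ran with -XX:G1HeapRegionSize=1m.
    static final long G1_REGION_SIZE = 1L << 20;

    // On-heap pages are backed by a long[]; total object size = header + data.
    static long objectSizeForPage(long pageSizeBytes) {
        return LONG_ARRAY_OFFSET + pageSizeBytes;
    }

    public static void main(String[] args) {
        long naive = G1_REGION_SIZE;                      // page == region size
        long tuned = G1_REGION_SIZE - LONG_ARRAY_OFFSET;  // the proposed value

        // The naive page spills past a single 1 MiB region...
        System.out.println(objectSizeForPage(naive) > G1_REGION_SIZE);
        // ...while the tuned page fits exactly within one region.
        System.out.println(objectSizeForPage(tuned) <= G1_REGION_SIZE);
    }
}
```

This illustrates the mechanism both sides agree on; the open question in the thread is whether that value is always the best choice, which the sketch does not settle.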
