srowen commented on a change in pull request #23278: [SPARK-24920][Core] Allow sharing Netty's memory pool allocators
URL: https://github.com/apache/spark/pull/23278#discussion_r242629020
##########
File path: common/network-common/src/main/java/org/apache/spark/network/util/NettyUtils.java
##########
@@ -36,6 +36,22 @@
 * Utilities for creating various Netty constructs based on whether we're using EPOLL or NIO.
 */
public class NettyUtils {
+
+  /**
+   * Specifies an upper bound on the number of Netty threads that Spark requires by default.
+   * In practice, only 2-4 cores should be required to transfer roughly 10 Gb/s, and each core
+   * that we use will have an initial overhead of roughly 32 MB of off-heap memory, which comes
+   * at a premium.
+   *
+   * Thus, this value should still retain maximum throughput and reduce wasted off-heap memory
+   * allocation. It can be overridden by setting the number of serverThreads and clientThreads
+   * manually in Spark's configuration.
+   */
+  private static int MAX_DEFAULT_NETTY_THREADS = 8;
+
+  private static PooledByteBufAllocator[] _sharedPooledByteBufAllocator =
Review comment:
I guess you could check `allowCache` twice to figure out which reference to check for null and which to update. OK, I don't feel strongly about it.
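
For context, a minimal sketch of what that two-reference alternative might look like. The class, field, and method names below are hypothetical, and the allocator is constructed directly only to keep the example self-contained; the PR itself builds allocators through `createPooledByteBufAllocator`:

```java
import io.netty.buffer.PooledByteBufAllocator;

/**
 * Illustrative sketch only, not the PR's final code: with two separate fields,
 * allowCache is consulted once to pick which reference to null-check and once
 * to pick which reference to update, as suggested above.
 */
public final class SharedAllocatorSketch {

  private static PooledByteBufAllocator sharedCachedAllocator;    // thread-local caches enabled
  private static PooledByteBufAllocator sharedUncachedAllocator;  // thread-local caches disabled

  public static synchronized PooledByteBufAllocator getShared(boolean preferDirect,
                                                              boolean allowCache) {
    // First use of allowCache: which reference do we check for null?
    PooledByteBufAllocator existing = allowCache ? sharedCachedAllocator : sharedUncachedAllocator;
    if (existing != null) {
      return existing;
    }
    // In NettyUtils this would delegate to createPooledByteBufAllocator(preferDirect, allowCache, numThreads);
    // a plain allocator is built here only to keep the sketch self-contained.
    PooledByteBufAllocator created = new PooledByteBufAllocator(preferDirect);
    // Second use of allowCache: which reference do we update?
    if (allowCache) {
      sharedCachedAllocator = created;
    } else {
      sharedUncachedAllocator = created;
    }
    return created;
  }

  private SharedAllocatorSketch() {}
}
```

Whether two fields or a two-element array is used, the lookup should stay synchronized (or otherwise guarded) so concurrent callers don't race to create duplicate shared allocators.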