attilapiros commented on a change in pull request #23278: [SPARK-24920][Core]
Allow sharing Netty's memory pool allocators
URL: https://github.com/apache/spark/pull/23278#discussion_r242555503
##########
File path:
common/network-common/src/main/java/org/apache/spark/network/util/NettyUtils.java
##########
@@ -95,6 +111,38 @@ public static String getRemoteAddress(Channel channel) {
return "<unknown remote>";
}
+ /**
+  * Returns the default number of threads for both the Netty client and server thread pools.
+  * If numUsableCores is 0, we will use Runtime to get an approximate number of available cores.
+  */
+ public static int defaultNumThreads(int numUsableCores) {
+   final int availableCores;
+   if (numUsableCores > 0) {
+     availableCores = numUsableCores;
+   } else {
+     availableCores = Runtime.getRuntime().availableProcessors();
+   }
+   return Math.min(availableCores, MAX_DEFAULT_NETTY_THREADS);
+ }
+
+ /**
+  * Returns the lazily created shared pooled ByteBuf allocator for the specified allowCache
+  * parameter value.
+  */
+ public static synchronized PooledByteBufAllocator getSharedPooledByteBufAllocator(
+     boolean allowDirectBufs,
Review comment:
It can be a problem: although it comes from a configuration (which is why I ignored it in the first place), its value can vary across the transport modules (like shuffle and rpc).
I have seen it used by the community (there are some Jira issues and GitHub comments where `preferDirectBufs=false` was used), so what about introducing a new configuration:
`spark.network.sharedByteBufAllocators.io.preferDirectBufs`?
Of course both parameters should also be documented in `configuration.md`, which I will fix, too.
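
To illustrate, here is a rough sketch of the shape I have in mind, assuming the shared allocators are keyed only by `allowCache` and `preferDirectBufs` is read once from the proposed `spark.network.sharedByteBufAllocators.io.preferDirectBufs` key instead of from each module's transport conf. The `SharedAllocators` holder, the `System.getProperty` lookup, and the `true` default are placeholders, not the final implementation; only `NettyUtils.createPooledByteBufAllocator` and the proposed config name come from this thread:

```java
import io.netty.buffer.PooledByteBufAllocator;
import org.apache.spark.network.util.NettyUtils;

final class SharedAllocators {
  // One lazily created shared allocator per allowCache value.
  private static PooledByteBufAllocator cachingAllocator = null;
  private static PooledByteBufAllocator nonCachingAllocator = null;

  // Placeholder lookup of the proposed dedicated config key; in the real change
  // this would come from SparkConf/TransportConf, not a system property.
  private static final boolean PREFER_DIRECT_BUFS = Boolean.parseBoolean(
      System.getProperty("spark.network.sharedByteBufAllocators.io.preferDirectBufs", "true"));

  static synchronized PooledByteBufAllocator get(boolean allowCache) {
    if (allowCache) {
      if (cachingAllocator == null) {
        // numCores = 0 lets NettyUtils fall back to the runtime's available processors.
        cachingAllocator = NettyUtils.createPooledByteBufAllocator(
            PREFER_DIRECT_BUFS, true /* allowCache */, 0 /* numCores */);
      }
      return cachingAllocator;
    } else {
      if (nonCachingAllocator == null) {
        nonCachingAllocator = NettyUtils.createPooledByteBufAllocator(
            PREFER_DIRECT_BUFS, false /* allowCache */, 0 /* numCores */);
      }
      return nonCachingAllocator;
    }
  }
}
```

This way every transport module gets the same allocator for a given `allowCache` value, regardless of what its own conf says about direct buffers.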