Victsm commented on a change in pull request #25907: [SPARK-29206][SHUFFLE] Make number of shuffle server threads a multiple of number of chunk fetch handler threads.
URL: https://github.com/apache/spark/pull/25907#discussion_r329308432
 
 

 ##########
 File path: common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
 ##########
 @@ -111,8 +111,30 @@ public int numConnectionsPerPeer() {
   /** Requested maximum length of the queue of incoming connections. Default is 64. */
   public int backLog() { return conf.getInt(SPARK_NETWORK_IO_BACKLOG_KEY, 64); }
 
-  /** Number of threads used in the server thread pool. Default to 0, which is 2x#cores. */
-  public int serverThreads() { return conf.getInt(SPARK_NETWORK_IO_SERVERTHREADS_KEY, 0); }
+  /**
+   * The configured ratio between number of server threads and number of chunk fetch handler
+   * threads. Default to 1, which sets the size of both thread pools to be equal. Number of
+   * server threads needs to be a multiple of this ratio. See SPARK-29206.
+   */
+  private int getChunkFetchHandlerThreadsRatio() {
+    return conf.getInt("spark.shuffle.server.chunkFetchHandlerThreadsRatio", 1);
+  }
+
+  /**
+   * Number of threads used in the server thread pool. Default to 0, which is 2x#cores.
+   * If spark.shuffle.server.chunkFetchHandlerThreadsRatio is configured, and the Netty server
+   * is for shuffle, then the actual # of server threads will round up to the nearest int that
+   * is a multiple of the configured ratio.
+   */
+  public int serverThreads() {
+    int configuredServerThreads = conf.getInt(SPARK_NETWORK_IO_SERVERTHREADS_KEY, 0);
+    if (this.getModuleName().equalsIgnoreCase("shuffle")) {
+      int chunkFetchHandlerThreadsRatio = getChunkFetchHandlerThreadsRatio();
+      return (int) Math.ceil(configuredServerThreads / (chunkFetchHandlerThreadsRatio * 1.0));

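 For illustration only, here is a minimal, standalone sketch of the rounding behavior the new javadoc describes: the configured server thread count is rounded up to the nearest multiple of the ratio. The class and method names below are hypothetical and not part of the patch.

 ```java
 // Hypothetical, standalone illustration of "round up to the nearest multiple
 // of the configured ratio" described in the javadoc above; not patch code.
 public final class RoundToMultipleSketch {

   // Round `threads` up to the nearest multiple of `ratio` (ratio >= 1).
   static int roundUpToMultiple(int threads, int ratio) {
     return (int) Math.ceil(threads / (ratio * 1.0)) * ratio;
   }

   public static void main(String[] args) {
     // e.g. 100 configured server threads with a ratio of 8 -> 104 threads.
     System.out.println(roundUpToMultiple(100, 8)); // prints 104
   }
 }
 ```

 Presumably the point of keeping the server thread count a multiple of the ratio is that the chunk fetch handler pool size (server threads divided by the ratio) then comes out as a whole number.
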
 Review comment:
   The reason for not applying spark.shuffle.server.chunkFetchHandlerThreadsRatio when the number of server threads is not explicitly configured is that the logic setting the number of server threads to 2 * cores lives inside Netty. We cannot guarantee that 2 * cores is divisible by the ratio (or multiplier, for a better name) unless we also override the number of server threads when it is not configured.
   
   I was previously thinking that the default behavior of 2 * cores server threads should be preserved, but I think you are right: if spark.shuffle.server.chunkFetchHandlerThreadsRatio is configured, it should be honored.
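   
   To make the divisibility concern concrete, here is an illustrative sketch (the numbers and class name are hypothetical, not taken from the patch): Netty's implicit default of 2 * cores need not be a multiple of the configured ratio, e.g. 16 threads with a ratio of 3.
   
   ```java
   // Illustrative only: why Netty's implicit default of 2 * cores may clash
   // with a configured chunkFetchHandlerThreadsRatio. Numbers are hypothetical.
   public final class DefaultServerThreadsSketch {
     public static void main(String[] args) {
       int cores = 8;                       // example machine
       int nettyDefaultThreads = 2 * cores; // 16, chosen inside Netty when 0 is passed
       int ratio = 3;                       // example chunkFetchHandlerThreadsRatio
   
       // 16 % 3 != 0, so honoring the ratio would require overriding the default
       // (e.g. rounding 16 up to 18) rather than passing 0 through to Netty.
       System.out.println(nettyDefaultThreads % ratio == 0
           ? "default already honors the ratio"
           : "default must be overridden to honor the ratio");
     }
   }
   ```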

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
