tgravescs commented on issue #22173: [SPARK-24355] Spark external shuffle 
server improvement to better handle block fetch requests.
URL: https://github.com/apache/spark/pull/22173#issuecomment-570600344
 
 
   Can you clarify? The default for 
spark.shuffle.server.chunkFetchHandlerThreadsPercent is 0, which should yield the 
same number of chunk-fetcher threads as before. It wasn't previously 
unlimited: Netty limits it to 2 * number of cores by default. 
(https://github.com/netty/netty/blob/9621a5b98120f9596b5d2a337330339dda199bde/transport/src/main/java/io/netty/channel/MultithreadEventLoopGroup.java#L40)
 
   
   Were you configuring Netty (or something else) to make it unlimited? Or 
perhaps Netty's default changed, or we missed something?
   
   The intention was no performance regression and the same default as the 
previous behavior. Note there was a follow-on to this PR to fix the default 
calculation properly:
   
https://github.com/apache/spark/blob/master/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java#L340
   It does introduce more threads overall, and the chunk-fetcher responses do 
have to go back through the event loop.
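   As a rough sketch of the default calculation described above (not the 
actual Spark code; the class and method names here are illustrative), the 
dedicated pool size works out to a percentage of the Netty server pool size, 
where an unset server pool falls back to Netty's 2 * cores default:

```java
// Hedged sketch of the chunk-fetch handler thread calculation; names are
// illustrative, not Spark's real API.
public class ChunkFetchThreads {

    // serverThreads:  configured server pool size, or 0 meaning "use Netty default"
    // availableCores: number of CPU cores visible to the JVM
    // percent:        value of spark.shuffle.server.chunkFetchHandlerThreadsPercent
    static int chunkFetchHandlerThreads(int serverThreads, int availableCores, int percent) {
        // A percent of 0 means no dedicated pool, matching the pre-patch behavior.
        if (percent == 0) {
            return 0;
        }
        // When the server pool size is unset, Netty's MultithreadEventLoopGroup
        // defaults to 2 * number of cores.
        int poolSize = serverThreads > 0 ? serverThreads : 2 * availableCores;
        // Round up so a small percentage still yields at least one thread.
        return (int) Math.ceil(poolSize * (percent / 100.0));
    }

    public static void main(String[] args) {
        // 8 cores, default pool size (16), 100 percent -> 16 dedicated threads
        System.out.println(chunkFetchHandlerThreads(0, 8, 100));
        // 8 cores, 0 percent -> feature effectively off
        System.out.println(chunkFetchHandlerThreads(0, 8, 0));
    }
}
```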
   
   Please explain more what you are seeing and what settings you are using.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
